With a wealth of experience in analytics and performance optimization, Milena Traikovich has a sharp eye for the strategic maneuvers platforms make in the ever-shifting creator economy. We sat down with her to dissect the recent discourse around AI-generated content, exploring the growing tension between platform-driven AI adoption and the intrinsic value of human creativity. Our conversation delves into the so-called “authenticity arms race,” the murky economics behind verification systems, and the enduring power of a human-originated idea in a world increasingly saturated with digital replicas.
The head of Instagram, Adam Mosseri, recently argued that AI is making authenticity “infinitely reproducible.” From your perspective, how does this technological shift truly affect the value of human creators, and what kind of sophisticated strategies should platforms adopt to genuinely champion original work beyond just slapping on an “AI-generated” label?
It’s a fascinating and, frankly, very convenient corporate narrative. On the surface, it’s true that AI can replicate the aesthetic of authenticity. It can add film grain, mimic a shaky hand, or generate a seemingly candid moment. But this completely misses the point of what makes a creator valuable. The value isn’t in the polish, or lack thereof; it’s in the unique perspective, the lived experience, and the genuine connection that forms the kernel of the content. A simple label is a lazy solution. Platforms that are serious about this need to evolve their metrics. Instead of just tracking views, they should develop systems that measure and reward deep engagement: things like the sentiment and length of comments, the rate at which content is shared into DMs for personal conversations, and the loyalty of a returning audience. It’s about algorithmically identifying and elevating the resonance of a human connection, a signal that AI-generated slop, no matter how well-produced, simply cannot fake at scale.
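To make that idea concrete, here is a minimal, purely illustrative sketch of how such a resonance metric might be weighted. The signal names, normalizations, and weights are assumptions chosen for this example; they do not reflect any published platform ranking logic.

```python
# Illustrative only: a toy "resonance score" built from the depth-of-engagement
# signals described above. Field names, normalizations, and weights are
# hypothetical, not any platform's actual ranking formula.
from dataclasses import dataclass

@dataclass
class EngagementSignals:
    avg_comment_sentiment: float   # -1.0 (negative) to 1.0 (positive)
    avg_comment_length: float      # average characters per comment
    dm_share_rate: float           # shares into DMs per 1,000 views
    returning_viewer_rate: float   # fraction of viewers who returned this week

def resonance_score(s: EngagementSignals) -> float:
    """Combine depth-of-connection signals; raw view count is deliberately absent."""
    comment_depth = max(s.avg_comment_sentiment, 0.0) * min(s.avg_comment_length / 200, 1.0)
    return round(
        0.40 * comment_depth
        + 0.35 * min(s.dm_share_rate / 50, 1.0)
        + 0.25 * s.returning_viewer_rate,
        3,
    )

# A small but loyal audience outscores shallow, drive-by engagement.
print(resonance_score(EngagementSignals(0.8, 160, 30, 0.60)))  # ~0.62
print(resonance_score(EngagementSignals(0.1, 20, 2, 0.05)))    # ~0.03
```

The design choice the sketch is meant to highlight is simply that depth signals (comment quality, DM shares, returning viewers) carry all the weight, while raw view counts do not appear at all.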
Mosseri pointed out that creators are using “rawness as proof” of their humanity, but he predicts AI will soon master this as well. Could you walk us through what this escalating “authenticity arms race” might look like in practice, and how a savvy creator might continuously evolve their approach to stay one step ahead?
This arms race is very real, and it’s a constant cat-and-mouse game. The first stage was the move away from hyper-polished, perfect content toward a more “raw” aesthetic. But as Mosseri notes, that’s just a visual style that AI can easily learn and replicate. The next stage, therefore, has to be about embedding hard-to-replicate proof of humanity directly into the content creation process. For instance, a creator could start incorporating more live, interactive elements, pulling real-time comments from their audience and responding to them on the fly within a recorded video. An AI can’t predict those unique, spontaneous interactions. Another step would be to build an entire content strategy around a deep, shared community culture, full of inside jokes, niche references, and a history that an AI model, trained on the vast, generic internet, simply wouldn’t have the context to understand or mimic effectively. The goal is to move beyond proving authenticity through aesthetics and instead prove it through interactive, time-stamped, and community-specific engagement.
The article expresses skepticism that new tools like Meta Verified are truly about protecting creators, suggesting they might be another way to monetize the concept of authenticity. What’s your analysis of this? How can a platform realistically demonstrate that its verification tools are designed for the community’s benefit rather than as just another revenue-generating tactic?
From where I sit, that skepticism is entirely justified. In platform strategy, when you see a problem being highlighted and a paid solution being offered in the same breath, you have to be critical. The danger of a system like Meta Verified is that it risks turning authenticity into a premium feature, a pay-to-play system where those who can afford it get their “humanity” prioritized in the feed. To build genuine trust, a platform must decouple verification from monetization. A robust, free verification process based on proving one’s identity and a history of original work would be a true community benefit. Furthermore, there has to be radical transparency. Platforms need to publish data showing how these systems work. Are verified human accounts actually being boosted because their content is original and resonant, or is it just because they paid a monthly fee? Without that data-backed proof, it will always feel less like a shield for creators and more like a tollbooth on the road to visibility.
A central argument in the piece is that the one thing AI can’t truly replicate is a human-originated idea and the community connection that flows from it. Could you elaborate on this and perhaps share an example of how this principle plays out in the real world, creating something AI “slop” could never touch?
Absolutely, this is the core of the matter. AI is a tool of derivation, not inception. It can synthesize, remix, and replicate based on existing data, but it cannot generate a truly novel concept born from unique human passion or experience. Think about a creator who decides, out of a personal love for local history, to start a channel dedicated to finding and restoring forgotten vintage farm equipment. That initial idea, the spark, comes from a deeply personal place. The content that follows, filled with the real-life frustration of a rusted bolt, the greasy satisfaction of a newly oiled part, and the emotional triumph of hearing an engine turn over for the first time in decades, is pure human experience. The community that rallies around that creator isn’t there because the videos have perfect lighting; they are there for the shared journey, the authenticity of the struggle, and the creator’s infectious passion. An AI could generate a script about this, but it could never originate the idea or live the process. That human-centered story is the ultimate differentiator.
What is your forecast for the future of authentic content on social media?
My forecast is that we are rapidly approaching a great content bifurcation. On one side, we will have a vast, ever-expanding ocean of AI-generated content—it will be easy, cheap, and often used as passive, disposable entertainment or background noise. On the other side, authentic, human-created content will become a premium, highly sought-after commodity. Audiences will become much more discerning, developing an innate sense for the soullessness of AI-generated “slop” versus the resonance of a true human story. The platforms that succeed in this new era won’t be the ones that can host the most content, but the ones that build the best systems for discovering, verifying, and rewarding genuine human creators. Authenticity will cease to be an aesthetic choice and will instead become a verifiable, and incredibly valuable, asset.