During a recent appearance on the Joe Rogan Experience, Mark Zuckerberg described a pivotal change he’s implementing for Meta: moving away from third-party fact-checkers and embracing a Community Notes system that echoes what X (formerly Twitter) has tried. Throughout the podcast, he stressed that traditional fact-checking—although well-intentioned—can inadvertently create distrust among users who feel their voices are overshadowed by “official” gatekeepers. In his view, collective user input might be the next logical step for more organic and transparent moderation on platforms like Facebook and Instagram.
Zuckerberg spoke candidly about user skepticism regarding fact-checkers. He explained how, in the current system, select organizations are granted significant authority over what is deemed “true” or “misleading.” This top-down structure, he believes, can fuel perceptions of bias. Users often see headlines about independent fact-checkers making decisions that seem out of touch with what they’ve personally observed or researched. By transitioning to something more user-driven, Zuckerberg hopes to tap into a broader spectrum of perspectives that might capture nuances any single authority would miss.
Yet, he’s not dismissing the challenges. Throughout the conversation, Joe Rogan pressed him on the risk that Community Notes could turn into a battleground of differing ideologies, brigading, and troll campaigns. Zuckerberg acknowledged those potential pitfalls but argued that a carefully designed, crowd-sourced model still offers benefits that may outweigh the risks. Speed is one of them. Third-party fact-checkers frequently need time to investigate a claim, and by the time they publish a verdict, a post could have already gone viral. In a Community Notes system, users can flag and annotate suspect content much faster, potentially containing the spread of misinformation before it spirals out of control.
Empowering the User Community
Zuckerberg emphasized how social media thrives when users feel they have a stake in shaping the platform. If people trust each other more than they trust an external authority, then a community-led approach could foster more meaningful engagement. He also noted that when everyone has the chance to weigh in, the moderation process ceases to be a black box. Instead of receiving a “false” label from an unknown entity, users can see fellow community members citing sources and explaining their reasoning. Over time, this transparency might rebuild trust, particularly among those who believe their voices have been marginalized.
That said, the Meta CEO didn’t completely dismiss the possibility of retaining some form of expert oversight. Much like a teacher who steps in when student debates get out of hand, a specialized internal team may still provide clarifications or issue final judgments in severe cases. This hybrid model—crowd-sourced moderation with a safety net—could offer users more autonomy while reassuring advertisers and regulators that Meta isn’t relinquishing all control.
Transparency, Speed, and the Changing Social Media Landscape
During the podcast, Zuckerberg described how the pace of online information flow has accelerated dramatically. Memes, short videos, and text-based posts can reach millions in a matter of hours. Fact-checking partnerships, while still valuable, often lag behind the real-time nature of viral content. By empowering the crowd to jump in immediately with contextual notes or corrections, Meta can create a community that’s not only consuming content but also actively shaping its credibility.
He added that this approach reflects broader changes in the media environment. Users are less willing to trust official labels if they believe those labels come from partisan or corporate interests. They want to see the proof for themselves, often turning to multiple sources or relying on personal networks. In short, the “classic era” of top-down fact-checking might be giving way to a new form of networked accountability, one in which millions of micro-interactions produce a consensus about what’s misleading and what’s accurate.
Addressing Free Speech Concerns
Zuckerberg and Rogan also touched on free speech, a topic that’s become inseparable from discussions about content moderation. Strictly speaking, the First Amendment limits only what the government can suppress; private platforms like Facebook are free to set their own rules. Even so, Zuckerberg has repeatedly stated that Meta’s policies are shaped by “First Amendment values,” aiming to avoid the undue silencing of legitimate voices.
By deploying a Community Notes system, Meta could uphold this free speech ethos in spirit, offering a space for users to engage each other rather than deferring to a single authority to deem something right or wrong. The hope is that such a framework fosters a marketplace of ideas, where poor arguments eventually get exposed through robust, crowd-driven debate instead of being hidden by a “fact-checked” label that doesn’t tell the whole story.
Still, Zuckerberg conceded the complexity of striking the right balance. If Community Notes are too permissive, hate speech or propaganda might slip through until it’s too late. If they’re too restrictive, Meta risks accusations of censorship. He suggested that the ultimate solution might be a fluid process in which the community’s input is continually refined by ongoing data, expert intervention when needed, and technological tools that detect patterns of abuse.
Advertisers’ Reaction to X
When the topic turned to advertisers, Rogan pointed out how X’s looser moderation policies led to a retreat by many major brands. Zuckerberg acknowledged that brand safety is a paramount concern for any platform reliant on advertising. In his view, if advertisers sense a flood of disinformation or hateful content, they may come to associate the platform with it and pull their budgets.
Zuckerberg admitted that rolling out Community Notes could prompt some apprehension among marketers who are used to the clear-cut approach of official fact-checking labels. It’s easy to show advertisers a partnership with respected organizations like PolitiFact or AFP. It’s harder to promise that millions of users, armed with their personal biases, will keep conversations civil and factual. Still, he argued that if Meta can demonstrate consistent success in weeding out the worst offenders, advertisers might warm up to the new model. The platform’s sheer scale gives him confidence that, with the right incentives, user-driven moderation can outperform smaller fact-checking teams in terms of speed and breadth.
Practical Ways to Roll Out Community Notes
Although Zuckerberg didn’t outline a detailed blueprint during the podcast, he did hint at the importance of gradual implementation. A pilot phase could help Meta identify how effectively groups of users catch and correct misinformation. Internal data on the efficacy of Community Notes—such as average response times, user satisfaction, and reduced virality of false stories—could guide subsequent expansions.
He also emphasized the need for a robust mechanism to keep bad actors from hijacking the system. Weighted credibility might be one solution. Under such a scheme, users who have proven trustworthy over time gain more influence, while new or suspicious accounts find it harder to manipulate the note-ranking process. Frequent reminders about the importance of sourcing, respectful engagement, and evidence-based arguments could further elevate the quality of notes.
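To make the idea concrete, here is a minimal sketch of what reputation-weighted note ranking could look like. Everything in it is hypothetical: the class names, the 90-day tenure ramp, the credibility floor, and the display thresholds are illustrative assumptions, not details Zuckerberg gave on the podcast or anything drawn from Meta’s or X’s actual systems.

```python
# A hypothetical sketch of reputation-weighted note ranking. None of these
# names, formulas, or thresholds come from Meta or X; they only illustrate
# the idea that established, reliable raters carry more weight than new or
# suspicious accounts.
from dataclasses import dataclass


@dataclass
class Rater:
    account_age_days: int
    agreed_with_consensus: int = 0  # past ratings that matched the final outcome
    finalized_ratings: int = 0      # past ratings whose outcome has been settled

    def credibility(self) -> float:
        """Weight in [0.1, 1.0] that grows with tenure and a proven track record."""
        tenure = min(self.account_age_days / 90, 1.0)  # full tenure after ~90 days
        if self.finalized_ratings == 0:
            track_record = 0.5  # neutral prior for accounts with no history
        else:
            track_record = self.agreed_with_consensus / self.finalized_ratings
        return max(0.1, tenure * track_record)  # floor keeps newcomers audible


@dataclass
class Note:
    text: str
    helpful_weight: float = 0.0
    unhelpful_weight: float = 0.0

    def record_rating(self, rater: Rater, helpful: bool) -> None:
        """Each vote counts in proportion to the rater's earned credibility."""
        weight = rater.credibility()
        if helpful:
            self.helpful_weight += weight
        else:
            self.unhelpful_weight += weight

    def score(self) -> float:
        total = self.helpful_weight + self.unhelpful_weight
        return self.helpful_weight / total if total else 0.0

    def should_display(self, min_weight: float = 5.0, min_score: float = 0.7) -> bool:
        """Show the note only once enough credible weight agrees it is helpful,
        so a burst of brand-new accounts cannot force or bury a note alone."""
        total = self.helpful_weight + self.unhelpful_weight
        return total >= min_weight and self.score() >= min_score


if __name__ == "__main__":
    note = Note("Missing context: the cited study was retracted.")
    veteran = Rater(account_age_days=400, agreed_with_consensus=80, finalized_ratings=100)
    newcomer = Rater(account_age_days=2)
    for _ in range(6):
        note.record_rating(veteran, helpful=True)    # six credible "helpful" votes
        note.record_rating(newcomer, helpful=False)  # six low-weight "unhelpful" votes
    print(f"score={note.score():.2f}, display={note.should_display()}")
```

In this toy version, six votes from a proven account decisively outweigh six votes from a two-day-old one, which is the basic property a manipulation-resistant scheme needs. A production system would require far more sophistication, such as the bridging-based approach X has described for its Community Notes, which prioritizes notes rated helpful by raters who usually disagree with each other.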
For advertisers, Zuckerberg mentioned providing transparency in how flagged content is managed. If advertisers see a strong, structured process—even one driven by users—they may remain confident that their brands won’t end up next to inflammatory or misleading content. Meta could even designate “brand-safe” zones for higher scrutiny, requiring additional layers of user consensus or extra review by Meta staff.
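Extending the sketch above, a stricter gate for ad-adjacent placements might look like the following. The thresholds and the escalation step are purely illustrative assumptions meant to show what “additional layers of user consensus or extra review” could mean in practice, not anything Meta has announced.

```python
# Hypothetical brand-safe gate, reusing the Note class from the sketch above.
# Strong credible consensus acts automatically; partial consensus adds the
# extra human-review layer. Thresholds are illustrative, not real Meta values.
def brand_safe_decision(note: Note) -> str:
    if note.should_display(min_weight=15.0, min_score=0.85):  # stricter bar
        return "demote_near_ads"    # clear consensus: keep flagged post away from ads
    if note.should_display():       # default bar (5.0 weight, 0.7 score)
        return "escalate_to_staff"  # some consensus: route to Meta reviewers
    return "no_action"              # insufficient signal either way
```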
A Critical Balancing Act
Throughout the podcast, Zuckerberg’s position boiled down to a balancing act between protecting free expression and maintaining a hospitable environment for advertisers. He championed the idea that removing third-party fact-checkers doesn’t mean an “anything goes” approach; instead, it means trusting users to help shape the narrative. By empowering millions of voices to note, flag, and discuss questionable content, Meta might create a more resilient information ecosystem—one that leverages collective intelligence instead of relying solely on external gatekeepers.
However, Rogan rightly challenged Zuckerberg on whether this form of democratized moderation can scale effectively. The Meta CEO didn’t dismiss the risk factors but reiterated his belief that the reward could be a healthier, more transparent, and ultimately more trustworthy platform. If Community Notes manage to reduce misinformation faster than third-party fact-checkers and encourage advertisers to stay on board, it could redefine how social media platforms handle content moderation.
The stakes are high. Meta, as one of the largest digital ecosystems on the planet, influences what billions of people see and share each day. If Zuckerberg’s approach succeeds, it might set a new standard for open yet responsible speech online. If it fails, advertisers could flee, misinformation could flourish, and the user experience might erode. The conversation with Joe Rogan offered a glimpse into Zuckerberg’s thinking: an embrace of collective effort tempered by careful oversight and continuous iteration. Whether or not this gamble pays off could shape not only Meta’s future but the broader evolution of social media in the years to come.