Meta · 2017 – 2023 · 5 years 9 months

Protecting two billion people from harm

For nearly six years, I led teams building and scaling the world's largest content moderation system: a combination of AI and human review that kept Facebook, Instagram, and WhatsApp safer for the people who depended on them.

2B+
users protected across Facebook, Instagram & WhatsApp
70%
faster response to emerging harm
50%
reduction in reviewer onboarding time
35K
person global workforce impacted

Content moderation at a scale no one had solved before

When I joined Meta's Central Integrity team, the scale of the problem was almost incomprehensible. Three platforms. Two billion users. Content in hundreds of languages, across every culture and context on earth. And a global policy team struggling to keep pace: new harms emerged constantly, updates took weeks to propagate across a 35,000-person review workforce, and moderators themselves were exposed to graphic content that took a real toll.

The moderation system was enormous, but it had grown organically, layered over years. It was brittle in places, slow to adapt, and difficult to improve. My job was to change that: not by redesigning everything at once, but by identifying the highest-leverage interventions and executing them with precision.

Making the system reliable enough to build on

As a Product Manager, I led a cross-functional platform team focused on reliability. The content moderation system classified billions of pieces of content, but error rates were high enough to matter at scale. We worked systematically to reduce them, improving the tooling developers used to label training data, tightening feedback loops between human review and model retraining, and making it easier to identify where the system was failing.

The result was a 20% reduction in error rates across the system, and, more importantly, a foundation of trust that let us move faster on everything that came next. We also paved the way for classifier-led review, which would prove critical as the volume of content continued to grow.

"The hard part wasn't building the technology. It was building systems that could keep up with the speed and scale of human behavior, and still protect the people on the other side of the screen."

Leading four teams to protect people from harm

As Product Lead, I took on a much larger scope: four cross-functional product teams, each focused on a different dimension of the moderation problem. Together, we worked to detect and remove harmful speech, graphic violence, and misinformation before it could spread, while also building better systems to protect the moderators doing the work.

One of the most painful realities of content moderation is that human reviewers are exposed to some of the worst content on the internet. We built a suite of features specifically designed to reduce that exposure, and cut the graphic content seen by reviewers by 30%.

At the same time, we tackled the policy update bottleneck that had slowed the entire system. When a new type of harm emerged, whether a new crisis or a new manipulation tactic, Meta needed to update its response globally and quickly. We rebuilt the workflow that let policy teams push changes across a 35,000-person workforce, reducing the time to update global policies by 70%.

Reviewer onboarding was another lever. New moderators needed to be trained quickly without compromising quality. We redesigned the training pipeline, cutting onboarding time by 50%, which meant more coverage, faster, when it mattered most.

Systems thinking at human scale

Working in content moderation taught me that the hardest product problems aren't purely technical. They're sociotechnical: they sit at the intersection of AI capability, human judgment, organizational process, and global policy. Getting them right means holding all of those in tension at once.

I also learned what it means to lead through complexity. Managing four teams across different problem spaces required a disciplined approach to prioritization, a clear sense of where I could add unique value, and constant investment in the people doing the work. I conducted over 200 product manager interviews during my time at Meta and saw each of my direct reports through promotions, because building the team is inseparable from building the product.

The outcome

In nearly six years at Meta, I helped build and scale the infrastructure that kept billions of people safer online. The work was hard, often invisible, and genuinely consequential. Harmful content that never appears in a feed, a moderator who goes home without nightmares, a policy that updates before a crisis compounds: that's what success looks like in this space. It doesn't make headlines, but it matters.