
She Joined Facebook to Fight Terror. Now She’s Convinced We Need to Fight Facebook.

For two years, Hannah Byrne was part of an invisible machine that determines what over 3 billion people around the world can say on the internet. From her perch within Meta’s Counterterrorism and Dangerous Organizations team, Byrne helped craft one of the most powerful and secretive censorship policies in internet history. Her work adhered to the basic tenet of content moderation: Online speech can cause offline harm. Stop the bad speech — or bad speakers — and you have perhaps saved a life.

In college and early in her career, Byrne had dedicated herself to the field of counterterrorism and its attempt to catalog, explain, and ultimately deter non-state political violence. She was most concerned with violent right-wing extremism: neo-Nazis infiltrating Western armies, Klansmen plotting on Facebook pages, and Trumpist militiamen marching on the Capitol.

In video meetings with her remote work colleagues and in the conference rooms of Menlo Park, California, with the MAGA riot of January 6 fresh in her mind, Byrne believed she was in the right place at the right time to make a difference.

And then Russia invaded Ukraine. A country of under 40 million found itself facing a full-scale assault by one of the largest militaries in the world. Standing between it and Russian invasion were the capable, battle-tested fighters of the Azov Battalion — a unit founded as the armed wing of a Ukrainian neo-Nazi movement. What followed not only shook apart Byrne’s plans for her own life, but also her belief in content moderation and counterterrorism.

Today, she is convinced her former employer cannot be trusted with power so vast, and that the systems she helped build should be dismantled. For the first time, Byrne shares her story with The Intercept, explaining why the public should be as disturbed by her work as she came to be.

Through a spokesperson, Meta told The Intercept that Byrne’s workplace concerns “do not match the reality” of how policy is enforced at the company.

Good Guys and Bad Guys

Byrne grew up in the small, predominantly white Boston suburb of Natick. She was 7 years old when the World Trade Center was destroyed and grew up steeped in a binary American history of good versus evil, hopeful she would always side neatly with the former.

School taught her that communism was bad, Martin Luther King Jr. ended American racism, and the United States had only ever been a force for peace. Byrne was determined after high school to work for the CIA in part because of reading about its origin story as the Nazi-fighting Office of Strategic Services during World War II. “I was a 9/11 kid with a poor education and a hero complex,” Byrne said.

And so Byrne joined the system, earning an undergraduate degree in political science at Johns Hopkins and then enrolling in a graduate research program in “terrorism and sub-state violence” at Georgetown University’s Center for Security Studies. Georgetown’s website highlights how many graduates from the Center go on to work at places like the Department of Defense, Department of State, Northrop Grumman — and Meta.

It was taken for granted that the program would groom graduates for the intelligence community, said Jacq Fulgham, who met Byrne at Georgetown. But even then, Fulgham remembers Byrne as a rare skeptic willing to question American imperialism: “Hannah always forced us to think about every topic and to think critically.”

Part of her required reading at Georgetown included “A Time to Attack: The Looming Iranian Nuclear Threat,” by former Defense Department official Matthew Kroenig. The book advocates for preemptive air war against Iran to end the country’s nuclear ambitions. Byrne was shocked that the premise of bombing a country of 90 million, presumably killing many innocent people, to achieve the ideological and political ends of the United States would be considered within the realm of educated debate and not an act of terrorism.

That’s because terrorism, her instructors insisted, was not something governments do. Part of terror’s malign character is its perpetration by “non-state actors”: thugs, radicals, militants, criminals, and assassins. Not presidents or generals. Unprovoked air war against Iran was within the realm of polite discussion, but there was never “the same sort of critical thinking to what forms of violence might be appropriate for Hamas” or other non-state groups, she recalls.

As part of her program at Georgetown, Byrne studied abroad in places where “non-state violence” was not a textbook topic but real life. Interviews with former IRA militants in Belfast, ex-FARC soldiers in Colombia, and Palestinians living under Israeli occupation complicated the terrorism binary. Rather than cartoon villains, Byrne met people who felt pushed to violence by the overwhelming reach and power of the United States and its allies. Wherever she went, Byrne said, she met people victimized, not protected, by her country. This was a history she had never been taught.

Despite feeling dismayed about the national security sector, Byrne still harbored a temptation to fix it from within. After receiving her master’s and entering a State Department-sponsored immersion language class in India, still hopeful for an eventual job at the CIA or National Security Agency, she got a job at the RAND Corporation as a defense analyst. “I hoped I’d be able to continue to learn and write about ‘terrorism,’ which I now knew to be ‘resistance movements,’ in an academic way,” Byrne said. Instead, her two years at RAND were focused on the traditional research the think tank is known for, contributing to titles like “Countering Violent Nonstate Actor Financing: Revenue Sources, Financing Strategies, and Tools of Disruption.”

“She was all in on a career in national security,” recalled a former RAND co-worker who spoke to The Intercept on the condition of anonymity. “She was earnest in the way a lot of inside-the-Beltway recent grads can be,” they added. “She still had a healthy amount of sarcasm. But I think over time that turned into cynicism.”

Unfulfilled at RAND, Byrne found what she thought could be a way to both do good and honor her burgeoning anti-imperial politics: fighting the enemy at home. She decided her next step would be a job that let her focus on the threat of white supremacists.

Facebook needed the help. A mob inflamed by white supremacist rhetoric had stormed the U.S. Capitol, and Facebook yet again found itself pilloried for providing an organizing tool for extremists. Byrne came away from job interviews with Facebook’s policy team convinced the company would let her fight a real danger in a way the federal national security establishment would not.

Instead, she would come to realize she had joined the national security state in microcosm.

Azov on the Whitelist

Byrne joined Meta in September 2021.

She and her team helped draft the rulebook that applies to the world’s most diabolical people and groups: the Ku Klux Klan, cartels, and of course, terrorists. Meta bans these so-called Dangerous Organizations and Individuals, or DOI, from using its platforms, but further prohibits its billions of users from engaging in “glorification,” “support,” or “representation” of anyone on the list.

Byrne’s job was not only to keep dangerous organizations off Meta properties, but also to prevent their message from spreading across the internet and spilling into the real world. The ambiguity and subjectivity inherent in these terms have made the DOI policy a perennial source of over-enforcement and controversy.

A full copy of the secret list obtained by The Intercept in 2021 showed it was disproportionately composed of Muslim, Arab, and Southeast Asian entities, hewing closely to the foreign policy crosshairs of the United States. Much of the list is copied directly from federal blacklists like the Treasury Department’s Specially Designated Global Terrorist roster.

A 2022 third-party audit commissioned by Meta found the company had violated the human rights of Palestinian users, in part, due to over-enforcement of the DOI policy. Meta’s in-house Oversight Board has repeatedly reversed content removed through the policy, and regularly asks the company to disclose the contents of the list and information about how it’s used.

Meta’s longtime justification of the Dangerous Organizations policy is that the company is legally obligated to censor certain kinds of speech around designated entities, or else risk violating the federal statute barring material support for terrorist groups, a view some national security scholars have vigorously rejected.



Top/Left: Hannah Byrne on a Meta-sponsored trip to Wales in 2022. Bottom/Right: Byrne speaking at the NOLA Freedom Forum in 2024, after leaving Meta.
Photo: Courtesy of Hannah Byrne

Byrne tried to focus on initiatives and targets that she could feel good about, like efforts to block violent white supremacists from using the company’s VR platform or running Facebook ads. At first she was pleased to see that Meta’s in-house list went further than the federal roster in designating white supremacist organizations like the Klan — or the Azov Battalion.

Still, Byrne had doubts about the model because of the clear intimacy between American state policy and Meta’s content moderation policy. Meta’s censorship systems are “basically an extension of the government,” Byrne said in an interview.

She was also unsure of whether Meta was up to the task of maintaining a privatized terror roster. “We had this huge problem where we had all of these groups and we didn’t really have … any sort of ongoing check or list of evidence of whether or not these groups were terrorists,” she said, a characterization the company rejected.

Byrne quickly found that the blacklist was flexible.


In February 2022, as Russia prepared its full-scale invasion of Ukraine, Byrne learned firsthand just how mercurial the corporate mirroring of State Department policy could be.

As an armed white supremacist group with credible allegations of human rights violations hanging over it, Azov had landed on the Dangerous Organizations list, which meant the unit’s members couldn’t use Meta platforms like Facebook, nor could any users of those platforms praise the unit’s deeds. But with Russian tanks and troops massing along the border, Ukraine’s well-trained Azov fighters became the vanguard of anti-Russian resistance, and their status as international pariahs a sudden liability for American geopolitics. Within weeks, Byrne found the moral universe around her inverted: The heavily armed hate group sanctioned by Congress since 2018 was now a band of freedom fighters resisting occupation, not terroristic racists.

As a Counterterrorism and Dangerous Organizations policy manager, Byrne’s entire job was to help form policies that would most effectively thwart groups like Azov. Then one day, this was no longer the case. “They’re no longer neo-Nazis,” Byrne recalls a policy manager explaining to her somewhat shocked team, a line that is now the official position of the White House.

Shortly after the delisting, The Intercept reported that Meta rules had been quickly altered to “allow praise of the Azov Battalion when explicitly and exclusively praising their role in defending Ukraine OR their role as part of the Ukraine’s National Guard.” Suddenly, billions of people were permitted to call the historically neo-Nazi Azov movement “real heroes,” according to policy language obtained by The Intercept at the time.

Byrne and other concerned colleagues were given an opportunity to dissent and muster evidence that Azov fighters had not in fact reformed. Byrne said that even after she and others gathered photographic evidence to the contrary, Meta responded that while Azov may have harbored Nazi sympathies in recent years, posts violating the company’s rules had sufficiently tapered off.

The odds felt stacked: While their bosses said they were free to make their case that the Battalion should remain blacklisted, they had to pull their evidence from Facebook — a platform that Azov fighters ostensibly weren’t allowed to use in the first place.

“Key to that assessment — which everyone at Facebook knew, but coming from the outside sounds ridiculous — is that we’re actually pretty bad at keeping content off the platform. Especially neo-Nazi content,” Byrne recalls. “So internally, it was like, ‘Oh, there should be lots of evidence online if they’re neo-Nazis because there’s so many neo-Nazis on our platform.’”

Though she was not privy to deliberations about the choice to delist the Azov Battalion, Byrne is adamant in her suspicion that it was done to support the U.S.-backed war effort. “I know the U.S. government is in constant contact with Facebook employees,” she said. “It is so clear that it was a political decision.” Byrne had taken this job to prevent militant racism from spilling over into offline violence. Now, her team was instead loosening its rules for an armed organization whose founder had once declared Ukraine’s destiny was to “lead the white races of the world in a final crusade … against Semite-led Untermenschen.”

It wasn’t just the shock of a reversal on the Azov Battalion, but the fact that it had happened so abruptly — Byrne estimates that it took no more than two weeks to exempt the group and allow praise of it once more.

She was aghast: “Of course, this is going to exacerbate white supremacist violence,” she recalls worrying. “This is going to make them look good. It’s going to make it easier to spread propaganda. Ultimately, I was afraid that it was going to directly contribute to violence.”

In its comments to The Intercept, Meta reiterated its belief that the Azov unit has meaningfully reformed and no longer meets its standards for designation.

Azov Regiment soldiers are seen during weapons training on June 28, 2022, in the Kharkiv region, Ukraine.
Photo: Paula Bronstein/Getty Images

Byrne recalled a similar frustration around Meta’s blacklisting of factions fighting the government of Syrian President Bashar al-Assad, but not the violent, repressive government itself. “[Assad] was gassing his civilians, and there were a couple Syrians at Facebook who were like, ‘Hey, why do we have this whole team called Dangerous Organizations and Individuals and they’re only censoring the Syrian resistance?’” Byrne realized there was no satisfying answer: National governments were just generally off-limits.

Meta confirmed to The Intercept that its definition of terrorism doesn’t apply to nation states, reflecting what it described as a legal and academic consensus that governments may legitimately use violence.

At the start of her job, Byrne was under the impression right-wing extremism was a top priority for the company. “But every time I need resources for neo-Nazi stuff … nothing seemed to happen.” The Azov exemption, by contrast, happened at lightning speed. Byrne recalls a similarly rapid engineering effort to tweak Meta’s machine learning-based content scanning system that would have normally removed the bulk of Azov-friendly posts. Not everyone’s algorithmic treatment is similarly prioritized: “It’s infuriating that so many Palestinians are still being taken down for false-positive ‘graphic violence’ violations because it’s obvious to me no one at Meta gives a shit,” Byrne said.

Meta pushed back on Byrne’s broader objections to the Dangerous Organizations policy. “This former employee’s claims do not match the reality of how our Dangerous Organizations policies actually work,” Meta spokesperson Ryan Daniels said in a statement. “These policies are some of the most comprehensive in the industry, and designed to stop those who seek to promote violence, hate and terrorism on our platforms, while at the same time ensuring free expression. We have a team of hundreds of people from different backgrounds working on these issues every day — with expertise ranging from law enforcement and national security to human rights, counterterrorism and academic studies. Our Dangerous Organizations policies are not static, we update them to reflect evolving factors and changing threat landscapes, and we apply them equally around the world while also complying with our legal obligations.”

Malicious Actors

But it wasn’t the Azov reversal that ended Byrne’s counterterror career.

In the wake of the attack on the Capitol, Meta had a problem: “It’s tough to profile or pinpoint the type of person that would be inclined to participate in January 6, which is true of most terrorist groups,” Byrne said. “It’s an ideology, it lives in your mind.”

But what if the company could prevent the next recruit for the Proud Boys, or Three Percenters, or even ISIS? “That was our task,” Byrne said. “Figure out where these groups are organizing, kind of nip it in the bud before they carry out any further real-world violence. We need to make sure they’re not in groups together, not friending each other, and not connecting with like-minded people.”

She was assigned to work on Meta’s Malicious Actor Framework, a system intended to span all its platforms and use “signals” to identify “malicious actors” who might be prone to “dangerous” behavior, Byrne said. The approach, she said, had been pioneered at Meta by the child safety team, which used automated alarms to alert the company when it seemed an adult might be attempting inappropriate contact with a child. That tactic had some success, but Byrne recalls it also mistakenly flagged people like coaches and teachers who had legitimate reason to interact with children.

Posting praise or admiring imagery of Osama bin Laden is relatively easy to catch and delete. But what about someone interested in his ideas? “The premise was that we need to target certain kinds of individuals who are likely to sympathize with terrorism,” Byrne said. There was just one problem, as Byrne puts it today: “What the fuck does it mean to be a sympathizer?”

In the field, this Obama-era framework of stopping radicalization before it takes root is known as Countering Violent Extremism, or CVE. It has been criticized as both pseudoscientific and ineffective, undermining the civil liberties of innocent people by placing them under suspicion for their own good. CVE programs generally “lack any scientific basis, are ineffective at reducing terrorism, and are overwhelmingly discriminatory in nature,” according to the Brennan Center for Justice.

Byrne said she had joined Meta at a time when the company was transitioning “from content-based detection to profile-based detection.” Screenshots of team presentations Byrne participated in show an interest in predicting dangerousness among users. One presentation expresses concern with Facebook’s transition to encrypted messaging, which would prevent authorities (and Meta itself) from eavesdropping on chats: “We will need to move our detection/enforcement/investigation signals more upstream to surfaces we do have insight into (eg., user’s behaviors on FB, past violations, social relationships, group metadata like description, image, title, etc) in order to flag areas of harm.”

Meta specifically wanted the ability to detect and deter “risky interactions” between “dangerous individuals” or “likely-malicious actors” and “victims” vulnerable to radicalization — without being able to read the messages these users were exchanging. The company hoped to use this capability, according to these meeting materials, to stop “malicious actors distributing propaganda,” for example. According to these screenshots, this would be accomplished using machine learning to recognize dangerous signals, like certain words in a user’s profile or whether they’d been a member of a banned group.
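
To make that concrete, here is a deliberately simplified, hypothetical sketch in Python of what profile-based scoring of this kind could look like. It is not Meta’s code: the signal names, weights, and threshold are invented for illustration, and a real system would presumably rely on a trained machine learning model rather than hand-set numbers.

# Hypothetical illustration only -- not Meta's code. The signals, weights, and
# threshold are invented to show the general shape of scoring a user by
# profile metadata ("signals") rather than by reading message content.
from dataclasses import dataclass

@dataclass
class ProfileSignals:
    past_violations: int           # prior takedowns under platform speech rules
    banned_group_memberships: int  # membership in groups removed under a DOI-style policy
    flagged_profile_keywords: int  # keyword hits in the bio or profile description
    flagged_friends: int           # connections who have themselves been flagged

WEIGHTS = {
    "past_violations": 0.40,
    "banned_group_memberships": 0.90,
    "flagged_profile_keywords": 0.25,
    "flagged_friends": 0.15,
}
RISK_THRESHOLD = 1.0  # arbitrary cutoff, for illustration only

def risk_score(s: ProfileSignals) -> float:
    # Weighted sum of profile-level signals; no private messages are read.
    return (WEIGHTS["past_violations"] * s.past_violations
            + WEIGHTS["banned_group_memberships"] * s.banned_group_memberships
            + WEIGHTS["flagged_profile_keywords"] * s.flagged_profile_keywords
            + WEIGHTS["flagged_friends"] * s.flagged_friends)

user = ProfileSignals(past_violations=2, banned_group_memberships=0,
                      flagged_profile_keywords=1, flagged_friends=4)
print(risk_score(user) >= RISK_THRESHOLD)  # True: this user would be flagged

Even in this toy version, the feedback loop Byrne describes below is visible: each flagged account raises the scores of the accounts connected to it, making them likelier to be flagged in turn.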

Byrne said the plan was to incorporate this policy into a companywide framework, but she departed Meta too soon to know what ultimately came of it.

Meta confirmed the existence of the malicious actor framework to The Intercept, explaining that it remains a work in progress, but disputed its predictive nature.

Byrne has no evidence that Meta was pursuing a system that would use overtly prejudiced criteria to determine who is a future threat, but feared that any predictive system would be based on thin evidence and unconsciously veer toward bias. Civil liberties scholars and counterterror experts have long warned that because terrorism is so extremely rare, any attempt to predict who will commit it is fatally flawed because there simply is not enough data. Such efforts often regress, wittingly or otherwise, into stereotypes.

“I brought it up in a couple meetings, including with my manager, but it wasn’t taken that seriously,” Byrne said.

Byrne recalls discussion of predicting such radicalism risk based on things like who your friends are, what’s on your profile, who sends you messages, and the extent to which you and your network have previously violated Meta’s speech rules. Given that enforcement of those rules has been shown to be biased along national or ethnic lines and plagued by technical errors, Byrne feared the worst for vulnerable users. “If you live in Palestine, all of your friends are Palestinians,” Byrne said. “They’re all getting flagged, and it’s like a self-licking ice cream cone.”

In the spring of 2022, investigators drawn from Meta’s internal Integrity, Investigations, and Intelligence team, known as i3, began analyzing the profiles of Facebook users who had run afoul of the Dangerous Organizations and Individuals policy, Byrne said. They were looking for shared traits that could be turned into general indicators of risk. “As someone who came from a professional research background, I can say it wasn’t a good research methodology,” Byrne said.

Part of her objection was pedigree: People just barely removed from the American government were able to determine what users could say online, whether or not those users lived in the United States. Many of these investigators, according to Byrne’s recollection and LinkedIn profiles of her former colleagues she shared with The Intercept, had arrived from positions at the Defense Department, the FBI, and U.S. intelligence agencies, institutions not known for an unbiased approach to counterterror.

Over hours of interviews, Byrne never badmouthed any of her former colleagues nor blamed them individually. Her criticism of Meta is systemic, the sort of structural ailment she had hoped to change from within. “It was people that I personally liked and trusted, and I trusted their values,” Byrne said of her former colleagues on Meta’s in-house intelligence team.

Byrne feared implementing a system so deeply rooted in inference could endanger the users she’d been hired to protect. She worried about systemic biases, such as “the fact that Arabic language just wasn’t really represented in our data set.”

She worried about generalizing about one strain of violent extremism and applying it to drastically different cultures, contexts, and ideologies: “We’re saying Hamas is the exact same thing as the KKK with absolutely no basis in logic or reason or history or research.” Byrne encountered similar definitional headaches around “misinformation” and “disinformation,” which she says her team studied as potential sources of terror sympathy and wanted to incorporate into the Malicious Actor Framework. But like terrorism itself, Byrne found these terms simply too blunt to be effective. “We’re taking some of the most complex, unrelated, geographically separated, just different fucking things, and we’re literally using this word terrorism, or misinformation, or disinformation, to treat them as a binary.”

Private Policy, Public Relations

Toward the end of her time at Meta, Byrne began to break down. The prospect of catching enemies of the state had energized her at first. Now she faced the grim, gradual realization that she wasn’t accomplishing the things she hoped she would. Her work wasn’t making Facebook safer, nor the people using it. Far from manning the barricades against extremism, Byrne quickly found herself just another millennial in a boring tech job.

But while she was planning the Malicious Actor Framework, these feelings of futility gave way to something worse: “I’m actually going to be an active participant in harm,” she recalls thinking. The speech of people she’d met in her studies abroad was exactly the kind her job might suppress. Finally, Byrne decided “it felt impossible to be a good actor within that system.”

Spiraling mental health struggles resulted in a leave of absence in the spring of 2023 and months of partial hospitalization. Away from her job, grappling with the nature of her work, Byrne realized she couldn’t go on. She returned at the end of the summer for a brief stretch before finally quitting on October 4. Her departure came just days before the world would be upended by events that would quickly implicate her former employer and highlight exactly why she fled from it.

For Byrne, watching the Israeli military hailed by her country’s leaders as it kills tens of thousands of civilians in the name of fighting terror exposes everything she believes is wrong and fraudulent about the counterterrorism industry. Meta’s Dangerous Organizations policy doesn’t take lives, but she sees it as rooted in that same conceptual injustice. “The same racist, bullshit dynamics of ‘terrorism’ were not only dictating who the U.S. was allowed to kill, they were dictating what the world was allowed to see, who in the world was allowed to speak, and what the world was allowed to say,” Byrne explained. “And the system works exactly as the U.S. law intends it to — to silence resistance to its violence.”

In conversations, what seems most galling to Byrne is the contrast between how malleable Meta’s Dangerous Organizations policy was for Ukraine and how draconian it has felt for those protesting the war in Gaza, or trying to document it happening around them. Following the Russian invasion of Ukraine, Meta not only moved swiftly to allow users to cheer on the Azov Battalion, but also loosened its rules around incitement, hate speech, and gory imagery so Ukrainian civilians could share images of the suffering around them and voice their fury against it. Byrne recalls seeing a video on Facebook of a Ukrainian woman giving an invading Russian soldier seeds, telling him to keep them in his pockets so they’d flower from his corpse on the battlefield. Were it a Palestinian woman taunting an Israeli soldier, Byrne said, “that would be taken down for terrorism so quickly.”

Today, Byrne remains conflicted about the very concept of content moderation. On the one hand, she acknowledges that violent groups can and do organize via platforms like Facebook — the problem that brought her to the company in the first place. And there are ways she believes Meta could easily improve, given its financial resources: more and better human moderators, more policy drafted by teams equipped to meet the contours of the hundreds of different countries where people use Facebook and Instagram.

While Byrne and her colleagues were supposed to be preventing harm from occurring in the world, they often felt like they were a janitorial crew responding to bad press. “An article would come out, all my team would share it, and then it would be like ‘Fix this thing’ all day. I’d be glued to the computer.” Byrne recalls “my boss’s boss or even Mark Zuckerberg just like searching things, and screenshotting them, and sending them to us, like ‘Why is this still up?’” She remembers her team, contrary to conventional wisdom about Big Tech, “expressing gratitude when there would be [media] leaks sometimes, because we’d all of a sudden get all of these resources and ability to change things.”

Militant neo-Nazi organizations represent a real, violent threat to the public, and they and other violent groups can and do organize on platforms like Facebook, she readily admits. Still, watching the way pro-Palestinian speech has been restricted by companies like Meta since October 7, while glorification of Israeli state violence flows unfettered, pushed her to speak out publicly about the company’s censorship apparatus.

In her life post-Meta, galvanized by the ongoing Israeli bombardment of Gaza, Byrne has become active in pro-Palestinian protest circles and outspoken in her criticism of her former employer’s role in suppressing speech about the war. In February, she gave a presentation on Meta’s censorship practices at the NOLA Freedom Forum, a New Orleans activist group, providing an insider’s advice on how to avoid getting banned on Instagram.

She’s still maddened by the establishment’s circular logic of terrorism, which casts non-state actors as terrorists while condoning the same behaviors from governments. “The scales of acceptable casualties are astronomically different when we’re talking about white, state-perpetrated violence versus brown and black non-state-perpetrated violence.”

Unlike past Big Tech dissidents like Frances Haugen, Byrne doesn’t think her former employer can be reformed with tweaks to its algorithms or greater transparency. Rather, she fundamentally objects to an American company policing speech — even in the name of safety — for so much of the planet.

So long as U.S. foreign policy and federal law brands certain acts of violence beyond the pale depending on politics and not harm — and so long as Meta believes itself beholden to those laws — Byrne believes the machine cannot be fixed. “You want military, Department of State, CIA people enforcing free speech? That is what is concerning about this.”

