It was somewhere between the calls to repeal the 19th Amendment and the declarations that I was a traitor who belonged in Guantanamo Bay that the trolls started to wear me down.

Several days before the onslaught began, I posted a dry Twitter video debunking a conspiratorial narrative that was gaining prominence among Trump supporters. The next week, while sitting in the waiting room of my doctor’s office, my iPhone grew hot as it processed a stream of tweets and direct messages telling me “Islam was right about women,” criticizing the size of my breasts, my chin dimple, and the symmetry of my face. According to the trolls, I was an “affluent white female liberal,” or “AWFL,” and part of a CIA psyop. The guest room that has served as my office since March, where I filmed the video, was actually a basement in Langley, they said. Next year, I would be “dealt with in the streets.” One tweet read chillingly: “I’d fix her.” When it’s happening to you, online abuse feels like a tornado of thousands of insects: swat at them and they only get angrier; struggle and you only kick up more dirt.

I sent hundreds of reports to Twitter during the weeks I was targeted, all in vain. How could the artificial intelligence assisting with content moderation understand that the pictures of empty egg cartons were not nudges to go to the grocery store, but taunts meant to remind me that, as one of my abusers put it, “you birth babies, we build bridges,” and that my birthing years were dwindling?

The abuse I experienced—and my near-total lack of recourse—is not unique. In fact, on the online misogyny scale, my experience wasn’t even particularly bad. I did not get any rape threats. Unlike more than 668,000 unwitting women, no one—to my knowledge, anyway—created deepfake pornography of me. I was not the subject of an involved sexualized disinformation campaign, the likes of which Vice President Kamala Harris and Representatives Alexandria Ocasio-Cortez and Ilhan Omar have endured.

But all of this is terrifyingly ubiquitous, and its impact on society is sprawling. Just before the United States swore in its first woman vice president, its first woman treasury secretary, its first woman director of national intelligence, and more women and women of color in Congress than ever before, those same women were being targeted with sex-based harassment meant to silence them. Over a two-month period in late 2020, I led a research team monitoring the social media mentions of 13 prominent politicians, including Harris, Ocasio-Cortez, and Omar. We found more than 336,000 instances of gendered and sexualized abuse posted by over 190,000 users. These widespread campaigns represent just a sliver of the abuse that women in public life deal with on a daily basis in the internet era.

Over half of the research subjects were also targeted with gendered and sexualized disinformation, a subset of online abuse that uses false or misleading sex-based narratives against women, often with some degree of coordination. These campaigns typically aim to deter women from participating in the public sphere. One such narrative suggested that several targets were secretly transgender. It implied not only that transgender individuals are inherently deceptive, but also that this deception is responsible for the power and influence that women like Harris, Ocasio-Cortez, or New Zealand Prime Minister Jacinda Ardern hold. Women of color were subjected to compounded attacks that play on two of America’s greatest weaknesses: its endemic racism and misogyny.

The social media platforms, for their part, have not built infrastructure that supports women enduring harassment and disinformation campaigns. Instead, they have created environments that cater to the needs and challenges of white, cisgender men. They may as well adopt my abusers’ refrain: “If you can’t stand the heat, get out of the kitchen.” Platforms like Facebook and Twitter force women to report individual instances of harassment and disinformation, only to have those reports denied or ignored, despite the very real harm the abuse inflicts on victims’ lives and reputations. While platforms have improved at detecting some blatant gendered abuse—think of the top five profanities related to female body parts—they have been caught flat-footed by the burgeoning malign creativity that abusers employ. Harassers recognize that certain words and phrases might trigger platforms’ detection mechanisms, so they use coded language; iterative, context-based visual and textual memes; and other tactics to avoid automated removal. The egg carton meme I received is just one example.
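The limits of that keyword-triggered approach are easy to see in miniature. The sketch below is purely illustrative, assuming a toy exact-match blocklist of my own invention; the word list, example posts, and function name are hypothetical and do not reflect any platform’s actual moderation system. An explicit slur trips the filter, while a coded slogan, a context-dependent taunt, and a visual meme all pass through untouched.

```python
# Illustrative sketch only: a toy exact-match keyword filter, showing why
# coded or context-dependent abuse evades this kind of detection.
# The blocklist, example posts, and names here are hypothetical.

BLOCKLIST = {"slur1", "slur2", "slur3"}  # stand-ins for explicit profanities

def naive_flag(post: str) -> bool:
    """Flag a post only if it contains an exact blocklisted word."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

posts = [
    "You are a slur1.",                     # explicit slur: flagged
    "Islam was right about women.",         # coded slogan: missed
    "You birth babies, we build bridges.",  # context-dependent taunt: missed
    "[image: empty egg carton]",            # visual meme: invisible to text matching
]

for post in posts:
    print(naive_flag(post), "->", post)
```

Catching the last three requires the context a human reader brings: who is being addressed, what the slogan signals, and what an image means in a given thread. That is precisely what keyword triggers, and the automated tools built on them, lack.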