According to Futurism, 27-year-old anti-AI activist Sam Kirchner has been missing for about two weeks and is now considered armed and dangerous by San Francisco police. Kirchner, who helped start the Stop AI group last year, allegedly threatened to buy high-powered weapons and kill OpenAI employees, a threat that prompted a lockdown at the company’s offices last month. The situation escalated after Kirchner assaulted the current leader of Stop AI, Matthew “Yakko” Hall, following a disagreement over protest messaging and an attempt to access the group’s funds. His West Oakland apartment was found empty, and police have issued an internal alert. While his former Stop AI colleagues believe he’s more of a danger to himself than to others, one of his last known statements was that the “nonviolence ship has sailed for me.”
When activism turns extreme
This is a pretty dark turn for a movement that’s supposed to be about non-violent protest. Kirchner’s story is a case study in how a genuine, albeit extreme, fear can curdle into something much more dangerous. He apparently saw AI as such an existential threat that conventional activism felt too slow to him. That’s a scary headspace to be in. And it’s not happening in a vacuum. The rhetoric around AI, especially from some of its own creators, is incredibly apocalyptic. When you have CEOs like Sam Altman and Dario Amodei talking about potential catastrophe, and books with titles like “If Anyone Builds It, Everyone Dies” hitting bestseller lists, you’re going to attract people who take that message to its logical, terrifying endpoint. The philosopher Émile P. Torres nailed it when he told The Atlantic that this mindset “does incline people toward thinking, Well, maybe any measure might be justifiable.”
The broader AI skeptic landscape
Stop AI is just one group in a weird ecosystem of organizations worried about artificial intelligence. Its stated goal is a “permanent global ban on the development of artificial superintelligence.” Then you have groups like Pause AI, which wants a moratorium until safety can be guaranteed. And on the far, far extreme end, there are the Zizians, an actual cult that grew out of rationalist circles and has been implicated in multiple killings. Kirchner’s meltdown shows how fluid the line can be between passionate activism and unhinged extremism. It’s one thing to protest outside an office; it’s another to allegedly threaten to shoot up the place. The scary part is that the core fear driving these groups isn’t considered totally fringe. It’s discussed seriously in boardrooms and on podcasts. That validation, even if it’s meant as a cautionary tale, can fuel someone who’s already on the edge.
Why the doomer narrative is a double-edged sword
Here’s the thing that tech leaders probably don’t fully grasp: when you constantly talk about how your product could end the world, you’re not just drumming up regulatory interest or marketing a “serious” brand. You’re painting a target on your own back. Sam Altman himself has warned that he expects “some really bad stuff to happen.” When you say that as a billionaire CEO, you sound like you’re playing with fire for profit. To someone who’s disenfranchised, angry, and truly believes the doom scenario, you become the villain in that story. You’re the unaccountable rich person dictating the future. The lockdown at OpenAI last month proves this isn’t a theoretical concern. It’s a direct, physical security risk that emerges from the very narrative these companies help propagate.
A cautionary tale with no easy answers
So where does this leave us? Kirchner is still missing. His friends are scared for him. The police are scared of him. And the entire debate about AI safety just got a lot more tense and real. It’s a tragic situation all around, and it highlights the human cost of these abstract, world-ending debates. For the tech industry, it’s a stark reminder that words have consequences, especially when you’re talking about the fate of humanity. And for the activist community, it’s a brutal lesson in the need to manage internal dissent and mental health crises before they spiral. Ultimately, this isn’t just a story about one missing activist. It’s a symptom of a society grappling with a technology it doesn’t understand, fueled by fear from the top down and the bottom up. That’s a volatile mix, and Sam Kirchner’s story might be the first violent flare-up, but it probably won’t be the last.
