Everyone’s arguing about which AI tool writes the best code or generates the best images. Meanwhile, the companies building these systems are racing to ship faster, raise more, and consolidate more power than in any tech cycle before this one. And a small group of researchers, journalists, and engineers have been trying to get your attention about why that should concern you.
Most people aren’t listening.
I’m not talking about AI taking your job. I’m talking about the people asking whether we even understand what we’re building, whether we can control it, and what happens when the people in charge care more about valuation than safety. These aren’t fringe voices. Some of them literally built the foundations of modern AI.
And they’re worried.
This post is my attempt to point you toward the people and organizations I think are worth your time. I don’t agree with every single take on this list. But these are the conversations that actually matter, and most people aren’t even aware they’re happening.
The race to the bottom
The AI industry has no meaningful oversight. No regulatory body audits these models before they ship. No accountability when something goes wrong. The companies building the most powerful systems anyone’s ever built are essentially self-governing, and they’re doing exactly what you’d expect self-governing corporations to do: whatever is fastest and most profitable.
Ronan Farrow and Andrew Marantz just published a piece in The New Yorker about Sam Altman, built on over a hundred interviews and internal documents. One board member is quoted as saying, “I don’t think Sam is the guy who should have his finger on the button.” It’s one company, but it’s not one company’s problem. The entire industry is in an arms race where safety is a line item and speed is the product. The people on this list have been saying this for years. Testifying before Congress. Writing open letters. Quitting their jobs to speak freely. And still, nothing has changed structurally.
That has to change.
The ones sounding the alarm
Geoffrey Hinton is often called the godfather of deep learning. He spent decades building the neural network architectures that power the AI systems we use today. Then he left Google specifically so he could speak freely about the risks. Yoshua Bengio, another of deep learning’s godfathers, has done the same. He’s publicly called for regulation and warned about existential risk. When two of the three people most responsible for modern AI are telling you something is wrong, that’s not alarmism. That’s a signal.
Eliezer Yudkowsky has been writing about AI alignment for over twenty years. Long before “alignment” was a word anyone in Silicon Valley bothered with. He’s polarizing. He’s uncompromising. He will test your patience. But he’s been more right about the trajectory of AI development than most of the people who dismissed him, and his writing on the alignment problem is some of the clearest thinking on the subject that exists.
Roman Yampolskiy is an AI safety researcher whose core argument is blunt: we have no proof that AI can be controlled, and we’re building it anyway. Not “we haven’t figured it out yet.” We have no proof it’s even possible. That distinction matters, and most people haven’t sat with it long enough.
Connor Leahy brings something different to this conversation: urgency without the academic distance. He founded Conjecture and now leads Control AI, and he’s one of the younger voices in this space pushing hard on governance and policy. He’s not waiting for consensus. He’s building the case for action now.
Timnit Gebru co-authored a research paper on the risks of large language models while at Google. Google fired her for it. Let that sit for a second. One of the companies building these systems fired a researcher for documenting the harms. She went on to found the DAIR Institute and has continued the work, but her story tells you everything you need to know about how seriously these companies take internal criticism.
The ones making it make sense
The researchers above are doing critical work, but let’s be honest, most people aren’t going to read a paper on AI containment theory. That’s where communicators come in.
Tristan Harris made tech ethics a mainstream conversation with The Social Dilemma. Through the Center for Humane Technology, he’s been one of the most effective people at translating abstract tech risks into something your parents can understand. His focus has shifted heavily toward AI, and nobody’s been better at framing these issues for a general audience.
Hannah Fry is a mathematician who communicates the implications of AI and algorithms without dumbing anything down. She holds complexity and clarity at the same time, which is harder than it sounds, and her work is a good starting point if the more technical voices feel overwhelming.
Karen Hao is a journalist who has done some of the best investigative reporting on AI’s real-world impact, from bias in facial recognition to the human labor behind “automated” systems. Her work at MIT Technology Review and The Atlantic has shown what happens when these systems meet actual people, and it’s not the clean story the press releases tell.
Where to start
If you’re reading this and thinking “okay, but what do I actually do,” here are some places to start:
Watch The AI Doc: Or How I Became an Apocaloptimist. Daniel Roher’s documentary came out earlier this year and features several of the people on this list. It’s not perfect, but it’ll get you up to speed faster than anything else here.
The Human Movement is focused on public awareness and grassroots mobilization. If you’ve never engaged with AI safety beyond the occasional headline, this is a good front door.
The Center for Human-Compatible AI (CHAI) is Stuart Russell’s research center at UC Berkeley, working on the technical side of building AI systems that are actually aligned with human values. Russell co-wrote (with Peter Norvig) the textbook on AI, the one used in most university courses, and this is where he’s putting his energy.
Control AI tackles policy and governance. If you want to understand what regulatory frameworks might actually look like and why they matter now, not later, start here.
This isn’t a doom post
I use AI every day. I build with it. I’m not anti-AI, and I’m not going to pretend otherwise.
But I am paying attention to the people who understand these systems at a fundamental level and are saying, clearly and repeatedly, that we need to slow down and think about what we’re doing. The fact that most people can’t name a single person on this list bothers me. These aren’t obscure academics yelling into the void. They’re some of the most accomplished people in AI, ethics, and tech journalism, and the companies building these systems are not listening to them.
The least you can do is start listening. And if you’re already listening and have others to add to the list, let me know!