Sam Altman made a striking claim in a January 2025 blog post. The OpenAI CEO wrote that his company is “now confident we know how to build AGI as we have traditionally understood it,” and that OpenAI was already shifting its ambitions beyond that, toward superintelligence. He predicted that 2025 could see the first AI agents materially join the workforce, changing how companies operate at scale. It was a confident, almost matter-of-fact declaration about what has long been considered one of the most consequential technological thresholds in human history.

Dario Amodei, CEO of rival AI lab Anthropic, broadly agrees with where things are headed. But speaking to Indian podcaster Nikhil Kamath, Amodei said the reaction, or lack of one, from governments and the wider public is deeply troubling. “It is surprising to me that we are, in my view, so close to these models reaching the level of human intelligence,” he said. “And yet there doesn’t seem to be a wider recognition in society of what’s about to happen.”
Dario Amodei compared the situation to a tsunami that nobody believes is real
Amodei didn’t mince words. He said society is effectively dismissing a threat it can already see coming. “It is as if this tsunami is coming at us, and you know it’s so close we can see it on the horizon, and yet people are coming up with these explanations for ‘oh, it’s not actually a tsunami—that’s just a trick of light.’” He said government hasn’t done enough to address the risks, and that an ideology pushing for maximum acceleration, without adequate caution, has taken hold in parts of the industry.

That last part is notable coming from Amodei. He is not a doomer by instinct. He wrote “Machines of Loving Grace” in 2024, a lengthy, optimistic essay imagining AI compressing decades of medical progress into years, eliminating diseases and extending lifespans. But he told Kamath that enthusiasm for the technology’s benefits hasn’t been matched by an “appropriate realization of risk,” and certainly not by meaningful action.
Sam Altman defines the goal; Amodei worries about what happens when we get there
Altman’s framing is largely forward-looking and bullish. In his “Reflections” post, he wrote that “superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.” He has also acknowledged AGI is a “sloppy term,” but his direction of travel is clear: OpenAI wants to build something smarter than humans, and it believes it’s close.

Amodei shares the timeline. He doesn’t share the calm. His concern, as expressed to Kamath, is that the window for society to prepare is shrinking fast, and that the people best positioned to sound the alarm are the same ones racing to build the thing.