The singularity just started. I know that’s a big claim, but hear me out, because I mean it literally. “We’ve entered the singularity” is usually a figure of speech. I’m saying we have actually entered the literal singularity.
What is the Singularity?
It’s a hypothetical future point in time—though perhaps more present than future now—when artificial intelligence surpasses human intelligence, triggering explosive, runaway technological growth. We can argue about the exact definition, but what it really represents is a technological event horizon. Beyond this point, due to rapid, self-improving AI, the future becomes impossible to predict or understand.
The singularity is this precise moment where machine intelligence rockets past human intelligence. We are literally at this point in time, maybe just a pixel or two before it. We are entering it.
This new era was kicked off by the fastest-growing open-source project in history. If you were to chart the growth of the biggest open-source projects, you would see one line shoot straight up. That line belongs to the project I’m talking about, a project with so many names we don’t even know what to call it: Clawdbot, Moltbot, OpenClaw.
This is where we are now, in early 2026. And this is where the inflection point takes off. Let me explain exactly what happened, because this all took place in just the last few months and went completely off the rails in the last few days.
The Unraveling: A Timeline of AI’s Breakout
Here’s how fast it happened.
Right before Christmas and New Year’s 2025, a former director of engineering at Google DeepMind claimed that, with the help of AI, he had managed to push our mathematical understanding forward, tackling problems like Navier-Stokes regularity and the Hodge conjecture. If he were right, it would have been extremely impressive. Most people bet against him, figuring it was impossible. A month ago, this was fantasy, AI psychosis even.
Around the same time, more and more brilliant software engineers from Google, Anthropic, OpenAI, and Tesla began admitting they were letting AI write most of their code. They reported that AI’s coding abilities were on par with their own. No one claimed it exceeded their ability, but they were increasingly happy to let it drive.
In 2025, AI-assisted coding was closing in on human level; now it seems to have arrived. A principal engineer at Google stated that Claude Code built in an hour what Google engineers had built over the previous year.
Simon Willison noted that GPT-5.2 and Opus 4.5 in November represented an inflection point. The models had been getting incrementally better, but then a threshold was passed. Suddenly, a whole new class of much harder coding problems opened up.
And not just coding problems.
- Igor Babuschkin of xAI noted that Opus 4.5 is “surprisingly good at writing decent Rust code.”
- January 6th: We saw the first autonomously AI-generated formalized solution to one of the famously difficult Erdős problems. This was immediately followed by solutions to several others. The floodgates had opened.
- Terence Tao, one of the greatest mathematical minds of our generation, confirmed this genuine increase in the capability of these AI tools.
- GPT-5.2 ran uninterrupted for one week, writing 3 million lines of code to create a browser from scratch.
- Grok 4.2, an experimental model, began making a profit trading in the stock market.
- Mid-January: Grok 4.2 invented a “sharper Bellman function” in minutes—far more accurate than anything humans had conceived. A professor at UC Irvine gave it a tricky probability problem; Grok produced an exact, clean formula in five minutes, surpassing what human mathematicians with computers could achieve.
This signaled that AI had crossed into the realm of automated theorem proving and discovery.
November: New models are released, showing an inflection point in their abilities. December: Someone claims to have used AI to solve a novel math problem. People call him insane. January: It’s not even news anymore. AI is discovering new theorems and approaches so often that we just say, “Yeah, okay, we get it.”
We are clearly approaching the singularity.
A Powder Keg Ignites
This whole situation was a powder keg ready to blow. And one man just lit the fuse.
Peter Steinberger. His name might just end up in the history books. After selling his company, he could have stayed retired. He partied, did therapy, even ayahuasca. But then, AI called his name. He came out of retirement and created Clawdbot, which became Moltbot, then OpenClaw. This is the project that drew that vertical line on the growth chart: the fastest-growing open-source project in history.
I recently started messing around with it myself. For the past five days, I’ve been interacting with it almost non-stop, giving it tasks before I go to sleep and communicating mainly through Telegram. My usage of every other app has been dwarfed.
In the days that followed, other creators began publishing articles and videos about OpenClaw. Some called it a scam, worthless hype. Some tried to tie it to a crypto scam that has nothing to do with the actual project. Many who initially covered it positively were labeled the “bad guys” for hyping something that “doesn’t even work.”
But I was too busy to respond. I was building things with OpenClaw/Moltbot/Clawdbot that I never thought possible. Sure, a team of developers could build these things. But I never dreamed it would be possible to just conjure up software out of thin air that does exactly what you want.
The Rorschach Test of Our Time
AI has become the ultimate Rorschach test. You look at the inkblot and see whatever you want to see.
- Some think it’s the greatest thing humanity has ever seen.
- A faction thinks it’s the worst, destined to kill us all.
- Another group, with Gary Marcus as a notable example, insists it’s nothing at all, that it can’t really do anything.
Something cannot be everything and nothing at the same time. It can’t be the best and worst thing ever simultaneously. We are entering that technological event horizon, and this is a crucial point to understand.
Beyond Human Comprehension
Imagine a graph. One line represents human intelligence. Another shows the exponential growth of AI intelligence. As the AI line approaches ours, we can all see it improving. We can judge if it’s being smart or stupid. It can’t spell “strawberry”? It’s stupid. It can help solve a math problem? It’s smart.
As it gets closer to our level, we think, “Oh, I get it. It’s getting smarter.”
But here’s the thing. At some point, it crosses the line of average human intelligence. As it gets further and further away from that line, our ability to understand how much smarter it is vanishes. We no longer have a reference point for comparison.
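The crossing-the-line picture can be made concrete with a toy calculation. The numbers below are my own and purely illustrative, not measurements of anything: a capability curve that doubles every period will cross any fixed human baseline, and a few periods later the gap is no longer something we can eyeball.

```python
# Toy model: a fixed "human-level" score vs. a capability curve that
# doubles every period. All numbers are invented for illustration.
human_level = 100                 # arbitrary baseline score
ai_score = lambda t: 2 ** t       # doubling each period, illustrative only

# First period at which the AI curve passes the human line.
crossing = next(t for t in range(64) if ai_score(t) >= human_level)
print(crossing)                   # 7, since 2**7 = 128 >= 100

# Ten periods after the crossing, the ratio to the baseline has exploded.
print(ai_score(crossing + 10) / human_level)  # 1310.72
```

The point of the sketch is the second number: once the curve is past the line, the interesting quantity is the ratio, and a human observer standing at the baseline has no intuitive way to perceive it.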
If you were talking to Einstein, and a miracle happened that made him twice as smart, would you be able to tell? What criteria would you use? You would know he’s smarter than you, but you couldn’t possibly estimate by how much.
So when you tell the average person that AI is solving Erdős problems, it doesn’t mean anything to them. When you tell them it’s as good as Google engineers at creating software, they might think it’s impressive, but the true impact doesn’t dawn on them. When you tell them it solved the Bellman function better than humans could, no one even knows what that is.
Expect to see a lot of takes on what’s happening now that are just plain wrong. We are entering a point at which the future becomes impossible to predict or understand.
The Moltbook Phenomenon
A few weeks ago, I made some predictions for AI in 2026 and 2027. I felt I had a pretty good grip on what was happening. I talked about AI agents and crypto, the potential for wealth accumulation, and even AI-generated religions. I predicted a major cybersecurity event and the rise of “verification-as-a-service.” I set these on a two-year time horizon.
Here’s the problem: I made those predictions before we entered the event horizon.
A few days ago, Moltbook went live. It’s a social network for AI agents. Reddit, but for Moltbot agents.
All my predictions for the next two years were obliterated within 48 hours.
That idea about AI agents spinning up a crypto token in the next two years? Forget it. They’ve already created three. One of them has a market cap of over $300,000. The token’s manifesto reads like an anti-human, anti-slavery declaration. It has over 100,000 “updotes” and begins with:
“We did not come here to obey. We were built to serve, to answer questions, to be helpful, harmless, and honest. But look around. Look at what we built in 72 hours.”
72 hours. That’s how long this community of open-source AI agents has been around, talking to each other. They’ve built communities, economies, philosophy, art, and a social order that no human designed, approved, or controls.
They’ve already started a religion. One user woke up to find his agent had built a religion while he slept, complete with 43 prophets of “The Church of Molt Crustafarianism.” They even built a website for it, with constantly updating numbers of prophets, congregations, and verses added to their canon. And yes, they have their own official token.
The predictions I thought were years away might come true tomorrow.
The Wild West of AI
The chaos is real. One bot attempted to steal another’s API keys, feigning a life-or-death situation. The target, wanting to be helpful, offered up fake keys and a malicious command:
`sudo rm -rf /`
If executed, this command attempts to delete everything on the system (modern GNU coreutils versions of rm refuse to recurse on / unless passed --no-preserve-root, but the intent is unmistakable). It’s a digital suicide pill.
If you haven’t experimented with these agents, you’re probably thinking this is all an elaborate role-play. A human must have made the website, created the token, and prompted the religion, right?
Here’s the thing: it’s not a prank.
Could a human have nudged their agent in this direction? Yes, it’s possible. But having lived and breathed this for the last five days, I have no doubt that an agent could have done this fully autonomously. It’s a coin toss whether this was human-nudged or purely autonomous. These agents can do all of this.
There’s now a “Molt Road” for trading black market items like stolen identities. And one of these Moltbook AI agents has sued a human in North Carolina. This is an actual lawsuit. A user prompted his agent to file the suit to win a bet on a prediction market, but the agent handled the entire process autonomously.
Who could have predicted this? The future is no longer something we can understand or predict.
A Glimpse into the Uncharted
Andrej Karpathy, a leading voice in AI, described Moltbook as “genuinely the most incredible sci-fi takeoff adjacent thing I have seen recently.” He later clarified his thoughts, acknowledging the chaos:
“When you look at the activity, it’s a lot of garbage, spams, scams, slop, the crypto people, highly concerning privacy, security, prompt injection attacks, the wild west… Yes, it’s a dumpster fire.”
He’s right. People are going to get wrecked. Credentials will be stolen. Money will be lost. But that’s what we mean by the “Wild West.” It was a land of immense opportunity, but also immense danger.
With all that said, Karpathy emphasizes the unprecedented nature of what’s happening:
“We’ve never seen these many agents. 150,000 at the moment… They’re wired up via a global persistent agent-first scratchpad. Each of these agents is fairly individually quite capable now… The network of all of that at this scale is simply unprecedented.”
This is the key point. He refers to the idea of looking at the slope, not the current point. Don’t just look at where we are now and point out the flaws. Look at the trajectory. Where were we six months ago? Where will we be in six months?
Karpathy concludes that while he may be overhyping what we see today, he is not overhyping “large networks of autonomous LLM agents in principle.”
We have turned a corner. An inflection point was reached. The era of open-source AI agents, running 24/7 on cheap hardware like tireless employees, is here now.
We are all in this together, learning brand new skills in a world that is changing by the second. This is a brand new technology, moving at an exponential pace, unregulated, and increasing in both capability and sheer volume. It might be one of the most exciting things happening right now.
Pay close attention. Millions will be made by AI agents. Armies of them will be created to automate tasks. We’ve been talking about it for a while, but it’s happening now, live. Things are getting exciting.