OpenClaw, Moltbook and the future of AI agents


What happens when a genuinely useful agent collides with meme culture? Meet OpenClaw, formerly known as Moltbot and, before that, Clawdbot: an open-source AI agent that has become the most talked-about AI tool on the internet this week. Built by developer Peter Steinberger, it runs locally on a user’s own hardware and connects to everyday apps like WhatsApp, Slack, Discord and iMessage, acting as a proactive digital assistant. It can manage emails, update calendars, run commands, summarize information and take autonomous actions across a user’s online life.

In short, OpenClaw has all the ingredients for this week’s featured AI recipe: a tool that actually works, personal stakes and just enough absurdity to fuel memes. That combination has resonated deeply with the GTD, or “getting things done,” lifehacking community, said IBM Senior Research Scientist Marina Danilevsky on the latest episode of Mixture of Experts. “It is very personal, it’s very easy, and you can get both very practical and very silly with it.”

Demos of the agent autonomously completing tasks rocketed across X, TikTok and Reddit, and the tool has racked up over 150,000 GitHub stars to date. Users across social media have called out its persistent memory and the fact that its agentic behavior makes it feel less like a chatbot and more like a true digital employee or assistant. If that weren’t enough, its mascot is an adorable “space lobster,” inspired by Molty, Steinberger’s personal AI assistant.

As the tech world grappled with OpenClaw’s sudden rise, the story took an even stranger turn. One OpenClaw agent—named Clawd Clawderberg, created by Matt Schlicht, Cofounder of Octane AI—built Moltbook, a social network designed exclusively for AI agents. On Moltbook, agents generate posts, comment, argue, joke and upvote each other in a swirl of automated discourse. Humans may observe, but cannot participate. “It’s like a Black Mirror version of Reddit,” IBM Distinguished Engineer Chris Hay told IBM Think in an interview. Since launching on January 28, Moltbook has ballooned to more than 1.5 million agents.

Where vertical integration may not be needed

While OpenClaw and Moltbook might seem like slightly absurd AI fads, beneath the spectacle lies a more meaningful shift in how AI agents are built—and who gets to build them. Kaoutar El Maghraoui, a Principal Research Scientist at IBM, said on the episode that the rise of OpenClaw challenges the hypothesis that autonomous AI agents must be vertically integrated, with the provider tightly controlling the models, memory, tools, interface, execution layer and security stack for reliability and safety. 

Instead, OpenClaw provides “this loose, open-source layer that can be incredibly powerful if it has full system access,” El Maghraoui said, adding that the tool shows that creating agents with true autonomy and real-world usefulness is “not limited to large enterprises. [It] can also be community driven.” 

OpenClaw’s popularity also reflects a bigger moment for AI agents more broadly. What was recently a concept in research papers and enterprise roadmaps has become something that regular people can install, run and experiment with. Through new tools like Claude Cowork and IBM’s Granite 4.0 Nano, agents are starting to move from demos to daily use, sharpening the public’s vision of what AI can actually do. 


Security flags and signals for the AI agent future

Still, users including El Maghraoui and Danilevsky have raised questions about whether OpenClaw offers sufficient guardrails. A highly capable agent without proper safety controls can end up creating major vulnerabilities, El Maghraoui said, especially if it is used in a work context.

Creating secure AI in work contexts is the reason that IBM and AI giant Anthropic announced a partnership last fall. “Enterprises are looking for AI they can actually trust with their code, their data, and their day-to-day operations,” said Mike Krieger, Chief Product Officer at Anthropic, in a release at the time. As part of the partnership, IBM created, and Anthropic verified, Architecting Secure Enterprise AI Agents with MCP, a structured approach to designing, deploying and managing secure enterprise AI agents. Anthropic’s flagship LLM, Claude, also inspired OpenClaw’s original name: Clawdbot specifically referred to the monster users see while reloading Claude Code, according to Steinberger.

For personal use—especially on a separate device—the risk is likely less, El Maghraoui said. She stressed that OpenClaw changes the conversation around integrations, spurring developers to ask, “What kind of integration matters most, and in what context and in what domains? Vertical integration is important in certain domains because of the security aspect. But in other domains, maybe we don’t need that, or it’s not as important.”

Of course, neither OpenClaw nor Moltbook is likely to be deployed in workplaces soon. They expose users—and employers, if used on work devices—to too many security vulnerabilities, said Hay. Yet these messy early experiments could prove invaluable in the long run by helping the industry build needed guardrails, he said.

El Maghraoui agreed, noting that observing how agents behave inside Moltbook could inspire “controlled sandboxes for enterprise agent testing, risk scenario analysis and large-scale workflow optimization.” Companies won’t likely replicate Moltbook as a social network, she told IBM Think, but may borrow its core idea: that of “many agents interacting inside a managed coordination fabric, where they can be discovered, routed, supervised and constrained by policy.”

Catch the full episode of Mixture of Experts on YouTube, Spotify or Apple Podcasts.

Aili McConnon

Staff Writer

IBM
