We’ve had four cases associated with OpenClaw in the last few weeks.
That's what one MSP told me. Not a researcher or a vendor trying to sell me a solution, but somebody already dealing with the consequences while most boards are still deciding whether to pay attention.
For those who aren't aware, OpenClaw is the fastest-growing open-source project in GitHub history: 201,000 stars and 2 million visitors in a single week.
Businesses and developers adopted it at a pace that made proper evaluation impossible. The security failures that followed were no surprise to anyone who was truly paying attention.
This is why I believe AI adoption has become competitive pressure rather than a strategic decision.
That distinction will define which businesses survive their first serious incident.
Speed Over Understanding
OpenClaw's growth is truly impressive, and it didn't happen slowly over years. It was an explosion.
OpenClaw was originally built to connect WhatsApp to an AI model, allowing it to manage inboxes, book reservations and control smart home devices.
However, it captured the imagination of developers and businesses almost overnight.
Within weeks, OpenClaw had spawned a social media network specifically for AI agents and was even set to create an AI version of Silk Road. All of this happened faster than understanding could develop, and certainly with no chance for any governance to take hold.
This isn't about bashing OpenClaw. The story isn't unique to them; we've seen this pattern before, and I'm sure we'll see it many times again.
A shiny new capability arrives, genuine excitement surrounds it, and it feels like it's going to change the world. Somewhere between that excitement and deployment, the questions about security go unheard.
This situation reminded me of a conversation with a previous colleague.
We were in the back of a taxi, and they were talking about how they'd uploaded some data to an AI chatbot, which had produced some fantastic insights and a document for them.
I questioned whether we should be uploading that type of data.
The response, and the look I got, made it clear: I was the party pooper. To be honest, I felt like one. Nobody wants to listen to that person, and I get it.
The MSP that I mentioned is not an outlier. This is an early warning.
The real question for every board is whether they want to hear it now or discover it for themselves later.
The Cost of Speed
The security flaws we're seeing in OpenClaw are not the work of a nation-state attacker spending months developing exploits and chaining them together.
These are entirely predictable consequences of building fast and deploying fast without stopping to ask basic questions.
Over 30,000 instances of OpenClaw were found sitting exposed on the public internet without any authentication in place. Security researchers discovered vulnerabilities in 93% of them.
Gartner described it as an unacceptable cyber-security risk, and some organisations responded by blocking its traffic entirely.
This is what boards should be thinking about.
None of this required any sophistication. All it took was an organisation not asking basic questions before deployment, and a culture where excitement led the way and drowned out the voice of governance.
This isn't a technology failure; it's a leadership failure.
Why AI Leaders Need You to Feel Urgent
I want to place a thought in your mind, and it's about the pressure we all feel to adopt AI faster, to commit now, because if we don't, our competitors will gain an advantage.
Where does this pressure to adopt AI come from?
In May 2024, Elon Musk told us that artificial general intelligence (AGI) would be here next year. In December 2025, Sam Altman said "my guess is we'll hit AGI sooner than most people in the world think", and Dario Amodei has stated that, unless something goes wrong, AGI is coming in 2026 or 2027.
These are three of the most influential voices in artificial intelligence.
The latter two predictions may yet turn out to be right; Elon Musk's clearly didn't. But they are all pushing the same narrative: adopt now or fall behind.
This narrative serves all of them commercially and that’s what we must recognise. Of course, they need to say AGI is coming soon and I understand they genuinely can’t give us an exact date because of the nature of their work.
But the momentum, the belief and the urgency that they are creating is all part of the product. They need investors to be confident; they need consumers to choose their product and that to some extent dictates this narrative.
If the boards making decisions on adoption are driven by emotion rather than strategy, their thoughts will be "are we moving fast enough?" rather than "do we fully understand what we're moving into?"
The governance, the security maturity and the organisational structure required to adopt AI safely are not arriving at the same speed as the technology. The gap only appears to be growing, and that's exactly what happened with OpenClaw.
Your Decision Is Everyone’s Risk
Some people reading this will think OpenClaw only concerns developers and small businesses and isn't something that affects them. The reality is that it might not be OpenClaw that impacts you, but there will be something else, whether now or later.
Look at Jaguar Land Rover: the immediate impact was significant, but the fallout came through the supply chain, with suppliers unable to operate and small businesses losing contracts overnight. Some businesses stopped working altogether, including a transport company that had no passengers to transport.
None of those organisations had any say or influence over the decision that caused this damage, they simply existed within the same ecosystem.
And that's exactly how I see OpenClaw: it's a supply chain risk.
The ClawHub marketplace is part of the OpenClaw ecosystem. It was used to distribute over 300 malicious tools carrying malware designed to steal API keys, authentication tokens and login credentials.
It only takes one person inside a business to install one of those tools and suddenly credentials connecting that person to an enterprise client are compromised.
The enterprise board never knows OpenClaw is running in its supply chain until the damage is already done.
Small and medium-sized businesses are under the same pressures as large organisations, with the same urgency and the same fears, but without the budget or resources for security teams or governance infrastructure.
Ultimately, they are caught in the crossfire.
Chances are you have no visibility into which AI tools your supply chain is currently running. You are not asking the question because it has not yet occurred to you to ask it.
So let me re-frame this.
Your security posture is only as strong as the least prepared organisation connected to you right now.
Strategy Over Speed
Of all the cybercrime cases I was involved in as a cyber detective, working with organisations in the hours and days following an initial compromise, very few of those organisations were actually prepared.
I’m not criticising them. This is simply an observation about the absence of any preparation.
Some did have strong tactical controls in place. They could restore backups and get systems back online relatively quickly.
Others had fantastic communication structures and incident response plans in place. Their people knew their roles and responsibilities and were able to communicate to their supply chains, partners and staff with enough clarity to maintain a degree of control over the situation.
But I can’t think of any organisation that had effective controls across the whole business. None had thought through what a crisis actually looks like across every function, not just IT.
Being prepared doesn't simply mean being technically ready. It means knowing the right questions to ask of your technical teams, and having the confidence to ask them.
It means understanding that reassurance is not the same as proof. It means pushing back when the answer sounds like comfort rather than evidence.
The more senior leaders question and challenge, the more a culture of scrutiny spreads across the organisation. Not in a negative way, but as a critical friend. Governance isn't a burden placed on leadership; it's what makes leadership real.
Of course, there are compliance frameworks that can support non-technical leaders in asking the right questions about cyber risk. So the barrier has never been your capability; it's been your decision to engage.
You Cannot Outsource Accountability
Those that survive their first serious AI related incident will not be the ones who adopted fastest, but the ones who understood that speed without a strategy is simply exposure with a better marketing budget.
What the MSP told me earlier wasn't a prediction; it was lived experience.
OpenClaw is not a future risk sitting on a horizon that boards can monitor from a comfortable distance. It is already inside organisations, already inside supply chains, and already generating incidents while leadership teams are still debating whether AI governance belongs on the board agenda.
The hype isn't going to slow down. The competition between AI platforms is only going to intensify, and the pressure on boards to adopt faster, deploy more broadly and commit bigger is only going to grow.
And there will always be another tool, another marketplace, another ecosystem built for speed rather than safety, waiting to become the next case study.
The question was never whether AI would create serious incidents, but whether your organisation would be ready when it does.
So, before you approve the next AI adoption proposal or accept the next reassurance from a technical team, before you mistake speed for strategy, I'm going to leave you with three questions:
- When the first serious incident occurs for your organisation, will you be ready?
- Will you ask the right questions?
- Will you make defensible decisions?
Because the answers to those questions will shape everything that comes next.
If you liked this article, follow us on LinkedIn, Reddit, X, Facebook, and YouTube.
