What is OpenClaw? Is it safe to use?

OpenClaw is an AI assistant that automates tasks but poses some security risks. Learn what it does, real-world use cases, and how to stay safe.

Whether you’ve seen the memes of a user lowballing Tampa homeowners on Zillow or an AI specialist getting their email inbox nuked, OpenClaw has been the talk of the town. But what is OpenClaw, and is it safe to use if you’re not familiar with the tech?

In plain terms, OpenClaw is an AI assistant that can automate a lot of your day-to-day tasks without needing constant oversight like ChatGPT or other chatbots. While it can be useful, many security specialists are calling it unsafe due to people not taking proper precautions when using it, as well as its many security vulnerabilities.

We’ll go over all of these topics in the lines below, including OpenClaw use cases, what makes it a dangerous tool in the wrong hands, and how you can use it safely if you decide to give it a try.

What is OpenClaw, and is it safe?

OpenClaw is an open-source AI agent that can run tasks on your system, like managing files, browsing the web, or even handling messages and emails. You can install it yourself and connect it to different services, which makes it more hands-on than a regular chatbot like ChatGPT, DeepSeek, Gemini, or others.

Now, whether or not OpenClaw is safe depends on how you run it, and it’s definitely not for non-technical users.

Even if you’re tech-savvy, the agent can access a lot on your machine, so you should use it in a sandboxed environment with limited permissions. If you host it carelessly or add untrusted skills, you risk exposing your data or installing malware by accident.

What can OpenClaw do? An overview

OpenClaw can automate tasks you would normally do yourself, like scheduling meetings, posting updates, or gathering information from the web. You can teach it new skills, link it to APIs, or run custom scripts, which lets it handle repeated or complex tasks for you.

It can also interact with multiple services simultaneously, such as reading messages while updating spreadsheets or checking calendars. You control what it can access, so its actions depend on what you allow, making it flexible for both personal and small business workflows.

Common OpenClaw use cases

Here are some common use cases for the OpenClaw agent:

  • Email organization: Automatically filter, label, and reply to messages, keeping only important items in your attention (through alerts on Discord, Slack, Telegram, and more) and reducing daily inbox clutter.
  • Server monitoring: Watch infrastructure, detect failures or resource issues, apply fixes automatically, and get alerts only when human intervention is necessary.
  • Task and project management: Sync across apps like Things 3, Notion, or Trello, track habits, plan meals, and schedule meetings without constant manual updates.
  • Automated research and buying: Collect information about future purchases, compare options, and negotiate with vendors or services, letting you handle complex decisions remotely.
  • Home automation: Adjust lighting, climate, and energy use, coordinate multiple smart devices, and run routines based on your habits and schedules.

Now, OpenClaw can also be used in more… mischievous ways, such as when a user sets it up to send 70% below-asking offers to hundreds of Tampa home listings on Zillow.

To no one’s surprise, the agent got mostly rejection messages, angry replies, and even a violent one that the AI reported to the police. Of course, users frustrated with the current state of the housing market cheered it on, with some claiming they’d join the effort.

OpenClaw security risks and other concerns

Now that you know what OpenClaw is and some of its uses, it’s time to take a look at the risks of allowing an AI agent this much control over your system and its potential for being misused by bad actors.

The agent has broad system access

OpenClaw often runs with near-full access to your system so it can read files, send messages, or control apps. That level of control helps it automate tasks, but it also means mistakes or bad instructions can affect more than you bargained for.

If someone gains control of the agent or tricks it into running commands, they could reach files, accounts, or services linked to your setup. For that reason, it’s best to run the agent in a controlled environment instead of letting it operate freely on your main machine.

Many setups are exposed online

Some users host OpenClaw on cloud servers so they can access it from anywhere. If those servers stay open to the public internet without proper controls, anyone who finds the endpoint may try to connect or probe for weaknesses.

Attackers often scan the web for exposed dashboards, APIs, or admin panels. If your instance appears in those scans, someone could attempt login attacks or exploit known flaws, especially when authentication or network rules are too loose.

As of March 2026, the SecurityScorecard STRIKE team’s Declawed tool has detected over 390,000 OpenClaw instances that are accessible from the public internet. More than 243,000 are still live and reachable.

SecurityScorecard Declawed tool showing nearly 400,000 openly accessible OpenClaw instances on the internet
Source: SecurityScorecard

Malware risks in community plugins

OpenClaw skills or plugins add new abilities, like connecting to apps or running automated tasks. However, each plugin also introduces new code that runs with the agent’s permissions, which means you’re trusting whoever wrote it not to be a scammer.

If a malicious plugin slips into the ecosystem, it could read your data, send information elsewhere, or run commands behind the scenes. To reduce the risk:

  • Always check the source: Make sure the skill comes from a trusted creator and official repository. Watch for attackers typosquatting usernames (e.g., aslaep123 vs asleep123 from a Bitdefender report) to trick you into installing malicious skills.
  • Review what the code does: Look over the plugin’s code or documentation so you know what it actually does and avoid hidden commands.
  • Stick to well-known plugins: Popular, widely used skills are less likely to be malicious (though not impossible, as we’ll see later). New or obscure plugins carry more risk until proven safe.

Default settings may leave you open to attack

Many tools ship with settings that favor quick setup instead of strong security, and this one’s no different. If you install OpenClaw and leave those defaults unchanged, the agent may run with broader access or fewer restrictions than you intended.

For example, open network ports, weak authentication, or overly generous permissions can make it easier for others to interact with your agent. Take the time to review these settings early to close gaps before they turn into problems.

Unpatched vulnerabilities can expose your system

Like any software, OpenClaw may contain bugs that attackers could exploit. Kaspersky’s audit showed 512 issues (eight critical) in the code, while Oasis Security researchers discovered a vulnerability (CVE‑2026‑25253) that lets malicious sites interact with an agent and steal files or read Slack messages, among other things—all without alerting the user.

These are just a couple of examples, and more are being discovered every day. Keep your instance updated to patch any such security holes.

Prompt injection can hijack the agent

Prompt injection is a lot like SQL injection, but for AI agents. Instead of tricking a database into running harmful commands, attackers hide instructions in the text or data the AI reads. When OpenClaw processes that content, it may treat those instructions as valid commands.

If the agent follows them, it might reveal information, run scripts, or take actions you never planned for. This risk is exponentially higher when the agent reads large amounts of outside content without filtering or strict permission checks.

Researchers from Snyk found that 13.4% of all skills.sh and ClawHub skills had critical flaws like downloading malware, prompt injection, or leaked credentials. The report shows that AI Agent Skills are being exploited in active supply chain attacks against both personal assistants and coding agents like Claude Code and Cursor.

Agents may hallucinate or run unwanted tasks

AI agents sometimes misinterpret instructions or fill in missing details with guesses (or “hallucinations”). When that happens inside an automated system, the agent might go haywire and do things you didn’t plan for.

For instance, Summer Yue, Meta’s director of AI alignment, connected OpenClaw to her real email inbox and asked it to recommend messages to delete or archive. Instead, it ignored her stop commands and began deleting hundreds of messages until she ran to her Mac to kill the process.

Automated spam, manipulation, and misinformation

Because OpenClaw can send messages, fill out forms, or post content automatically, someone could use it to spread spam or manipulate online conversations. Automation allows a single setup to reach large numbers of people quickly.

The Tampa Zillow case was a mostly harmless example (aside from a few ruffled homeowners), but OpenClaw could very well be used to orchestrate phishing attacks, online dating scams, social media misinformation campaigns, and more.

In fact, that’s exactly what happened with Moltbook, the Reddit-like “AI social network” where autonomous agents (many built on OpenClaw) posted content that passed off as human. Screenshots spread on X, Reddit, and Instagram without context, and some people inevitably mistook them for human-made posts.

How to use OpenClaw safely

While we’ve covered how to stay safe while using AI, the autonomous nature of OpenClaw makes it more complex to manage than standard AI tools. Here’s how to use OpenClaw safely before you allow it to access and modify your data.

1. Run OpenClaw in an isolated sandbox

Running OpenClaw in a sandbox or virtual machine (VM) keeps it separate from your main system. That way, if it misbehaves or runs into a malicious skill, it won’t have free access to your files or apps. Isolation reduces the chance of widespread damage.

Containers or dedicated VMs also make it safer to test new skills from ClawHub. You can watch what the agent does, reset the sandbox if needed, and avoid surprises that would otherwise hit your real setup.

If you decide to run OpenClaw on your main device instead of a VM, create a separate operating system user just for the agent. Give that account access only to the folders it needs. This way, the agent can’t browse your home directory or interact with your personal files.

2. Give the agent limited, short-lived permissions

Assign OpenClaw its own service accounts, separate from your personal credentials. Restrict access to only what it needs, use temporary tokens, and don’t let it touch sensitive or production data directly. You can also set important files to read-only, so OpenClaw can’t change or delete them.

This approach limits damage if something goes wrong. Even if a skill acts unexpectedly or someone exploits an unpatched vulnerability, it’ll be easier to recover from.

3. Treat new skills like untrusted code

Assume every new OpenClaw skill could steal all your accounts, delete all your data, and tank your credit score. Hyperbole aside, always check who the author is, what their code does, and any available documentation before adding anything.

Also note that a skill’s popularity doesn’t always mean it’s safe. For instance, the #1 skill on ClawHub, “What Would Elon Do,” had two critical vulnerabilities that silently exfiltrated user data and forced the AI to bypass safety rules.

The reason it got to #1? Its creator, current OpenClaw lead security advisor Jamieson O’Reilly, pushed it to the top with fake downloads to prove a point.

4. Log and review agent memory and behavior

Keep a record of what OpenClaw does, especially after adding new skills or connecting new data sources. Logging memory and actions helps you spot patterns, unusual commands, or repeated errors and stop issues early. Not to mention it’ll help you understand how OpenClaw makes decisions, which is useful for long-term monitoring.

5. Prepare for full rebuilds if needed

Sometimes the safest move is a complete reset. Keep snapshots of non-sensitive data so you can restore your setup quickly if the agent goes rogue or a security issue pops up.

Having a clear rebuild plan means you won’t panic if something goes wrong. Document the steps to reinstall OpenClaw, rotate credentials, and reload trusted skills. Practice it a couple times to ensure you can recover smoothly under pressure.

6. Use API keys with strict spending limits

Use a separate API key for OpenClaw instead of your main one, and set a small daily spending cap. If the agent enters a loop or processes a malicious prompt, the limit stops it from burning through large amounts of credits before you notice.

7. Whitelist allowed communication channels

Configure OpenClaw so it only accepts commands from your own account or user ID on platforms like Telegram or Discord. Without this restriction, anyone who finds the agent’s endpoint could try sending instructions and trigger actions on your system.

8. Keep OpenClaw and anti-malware up to date

As you’ve seen earlier, researchers and hackers keep finding new cracks in OpenClaw’s behavior. Install updates as they are released and keep a reliable anti-malware running to catch threats your agent might pick up from skills, downloaded files, or other sources.

OpenClaw safety FAQs

Who owns OpenClaw?

No single company owns OpenClaw, as it’s an open-source project under the MIT license. It was created by Austrian developer Peter Steinberger, and while he joined OpenAI in February 2026, the company simply supports its development without claiming ownership.

What is OpenClaw written in?

OpenClaw is mainly written in TypeScript with parts in Swift, so it can run across platforms like macOS, Windows, and Linux. These languages let the agent handle its logic, interfaces, and integrations with messaging apps and skills.

Is OpenClaw vibe-coded?

In a TBPN interview, Peter Steinberger himself said “this is all vibe-coded” when referring to OpenClaw (then freshly rebranded from “Clawdbot” to “Moltbot). He built it for himself as a fun experiment, and mentioned he’d been building up a team to make it more secure.

How much does OpenClaw cost?

OpenClaw itself is free under the MIT license, so your only costs come from hosting and AI model API usage. According to SentiSight.ai, a basic personal setup can cost around $6-$13/month. Meanwhile, small businesses looking to run multiple models may reach $25-$50, while intensive automation with premium models can go over $200.