Google Restricts Antigravity AI Access for OpenClaw Users Amid Usage Surge
Google has restricted access to its Antigravity AI coding platform for users routing Gemini tokens via the OpenClaw framework, citing a 'massive increase in malicious usage'. The move, which led to account suspensions for some AI Ultra subscribers, has sparked debate over terms of service and fair use in AI development.
Key Highlights
- Google restricted Antigravity AI access for OpenClaw users.
- Action taken due to 'malicious usage' and overwhelming compute load.
- Affected users were routing Gemini AI tokens through OpenClaw framework.
- Many developers surprised by suspensions without prior warning.
- Similar crackdown by Anthropic on third-party tool usage occurred earlier.
- Debate continues over Google's definition of 'malicious usage'.
Google has implemented restrictions on users accessing its AI coding platform, Antigravity, specifically targeting those who were routing Gemini AI tokens through the open-source agent framework, OpenClaw. This move, which began around February 22-23, 2026, has resulted in the suspension of accounts for some developers, including those subscribed to Google AI Ultra.
Varun Mohan, a Google DeepMind engineer and former Windsurf CEO, publicly addressed the situation, stating that Google had observed a "massive increase in malicious usage of the Antigravity backend" which significantly degraded the quality of service for legitimate users. Mohan explained that these users were exploiting Antigravity as a proxy for third-party platforms, overwhelming Google's compute resources. Google has clarified that while access has been blocked for Antigravity, other Google services were not affected, and the company is working on a path for a subset of users to regain access, acknowledging that some may not have been aware of terms of service violations.
However, the crackdown has been met with considerable criticism from the developer community. Many affected users, including those paying $250 per month for AI Ultra subscriptions, reported losing access without any prior warning or clear explanation. They argue that their usage of OpenClaw stayed within the limits they understood to apply and was not explicitly prohibited by Google's terms of service, and they question Google's characterization of their activities as "malicious." Some developers also expressed concern that account restrictions on AI products, which are integrated with the broader Google account infrastructure, could potentially impact access to services like Gmail and Workspace.
OpenClaw, developed by Peter Steinberger, is an autonomous AI personal assistant launched in November 2025. It allows developers to connect various AI models, including Google's Gemini and Anthropic's Claude, through alternative interfaces to execute tasks on their own machines. The framework gained rapid popularity due to its open-source nature and ability to automate complex workflows across multiple platforms. Steinberger, who joined OpenAI on February 14, 2026, criticized Google's actions as "draconian" and suggested he might remove support for Antigravity from OpenClaw.
This incident is not isolated within the AI industry. Just two days prior to Google's actions, Anthropic, another major AI model provider (maker of Claude), updated its terms of service to explicitly ban the use of OAuth tokens from Claude subscriptions in third-party tools, including OpenClaw. This indicates a growing trend among AI providers to tighten control over how their models are accessed and used, particularly when third-party wrappers or harnesses consume compute resources in ways not accounted for by subscription pricing.
Google Antigravity, introduced on November 18, 2025, alongside Gemini 3, is Google's agentic development platform, designed to enable AI agents to autonomously plan, execute, and verify complex coding tasks. It was launched in public preview, offering generous rate limits on Gemini 3 Pro for individuals. The core issue appears to stem from OpenClaw's token routing, which allowed users to effectively burn through more tokens than subscription pricing intended, overwhelming Antigravity's backend.
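To make the token-routing mechanism concrete, the sketch below shows the general proxy pattern at issue: a third-party tool reuses a subscription's OAuth credential to forward its own requests to a platform backend. This is a minimal, hypothetical illustration only; the endpoint URL, header names, and function names are assumptions for the example and do not come from OpenClaw's or Antigravity's actual code.

```python
# Hypothetical sketch of the token-routing proxy pattern described in
# the article. All names and endpoints here are illustrative, not the
# real OpenClaw or Antigravity API.

def route_request(client_payload: dict, oauth_token: str,
                  backend_url: str = "https://example.invalid/v1/generate") -> dict:
    """Package a third-party client's request so it is sent to the
    platform backend under a subscription-authenticated session.

    From the backend's perspective, traffic shaped this way is hard to
    distinguish from the official client's, which is how a wrapper can
    consume far more tokens than the subscription's pricing anticipated.
    """
    return {
        "url": backend_url,
        "headers": {
            # The subscription's OAuth token, reused outside the
            # official client -- the practice the new terms prohibit.
            "Authorization": f"Bearer {oauth_token}",
            "Content-Type": "application/json",
        },
        "body": client_payload,
    }
```

A provider enforcing its terms would look for exactly this signature: subscription credentials presented by clients whose request patterns and volumes do not match the official product.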
The broader implications of these restrictions highlight the ongoing challenges in defining fair use and managing resource consumption in the rapidly evolving landscape of AI development. While Google cites the need to ensure service quality for all users and uphold its terms of service, the lack of warning and clear communication has led to significant frustration and a feeling of distrust among the developer community. This event underscores the critical need for transparent policies and robust communication from AI platform providers as developers increasingly integrate AI agents into their workflows.
Frequently Asked Questions
What is Google Antigravity and OpenClaw?
Google Antigravity is an AI-powered integrated development environment (IDE) launched by Google for 'agent-first' software development, enabling AI agents to autonomously handle coding tasks. OpenClaw is an open-source autonomous AI personal assistant that allows users to connect to various AI models, including Google's Gemini, through third-party interfaces to automate tasks on their local machines.
Why did Google restrict access for OpenClaw users on Antigravity?
Google cited a 'massive increase in malicious usage' of the Antigravity backend by users routing Gemini tokens through OpenClaw. This activity overloaded Google's compute resources, degrading service quality for other users, and was deemed a violation of terms of service regarding product usage.
Were affected users given a warning before their accounts were restricted?
Many affected developers, including those paying for Google AI Ultra subscriptions, reported that their accounts were suspended without any prior warning or clear explanation from Google.
Is this a permanent ban, and will users get their access back?
A Google DeepMind spokesperson indicated that the move is an effort to bring usage in line with the platform's terms of service, not necessarily a permanent ban. Google stated that they are working on a path for a subset of users, who might not have been aware of the terms of service violations, to regain access.
Are other AI companies facing similar issues with third-party tools?
Yes. Just two days prior to Google's actions, Anthropic, another major AI model provider (maker of Claude), updated its terms of service to explicitly ban the use of OAuth tokens from Claude subscriptions in third-party tools, including OpenClaw, citing similar issues of unintended resource consumption.