Google Suspends Users of OpenClaw

Many users of OpenClaw, the open-source AI agent platform, have had their access to Google's Antigravity AI platform suspended, provoking criticism over Google's strict enforcement of its terms of service (ToS).
The action targets developers who were using OpenClaw's OAuth plugin to access subsidized Gemini model tokens, resulting in service degradation and backend traffic spikes. Launched in November 2025, OpenClaw quickly gained over 219,000 GitHub stars and allowed local AI agents to handle tasks like web browsing and email management. Users bypassed official channels to gain discounted access to high-end models like Gemini 2.5 Pro by authenticating through Google's developer-facing Gemini platform, Antigravity.
By using Antigravity infrastructure to power non-Antigravity products, this third-party integration broke Google's terms of service and produced "malicious usage" patterns that automated systems detected. Reports of the suspension wave first appeared on Google's AI Developer Forum in mid-February 2026: AI Ultra subscribers paying $249.99 a month experienced abrupt 403 errors and unannounced full account restrictions. Accusations of overreach followed, because affected users lost access not only to Antigravity and Gemini CLI but also, in some cases, to Gmail, Workspace, and even decades-old account histories.
Former Windsurf CEO and Google DeepMind product lead Varun Mohan publicly addressed the spike on X (formerly Twitter), saying it "tremendously degraded the quality of service for our users." In the face of capacity limitations, he stressed giving priority to "actual users" while providing a restricted reinstatement path for uninformed violators.
Citing token arbitrage and debugging problems caused by unusual traffic, the crackdown is reminiscent of Anthropic's recent ToS update that specifically forbids third-party OAuth in programs like OpenClaw. The fallout is exacerbated by cybersecurity experts pointing out OpenClaw's flaws, which include supply-chain attacks and over 21,000 exposed instances susceptible to infostealers targeting configuration files. China's industry ministry highlighted the double-edged sword of agentic AI, warning of the dangers posed by improperly configured systems that enable cyberattacks and data breaches.
OpenClaw creator Peter Steinberger, who recently joined OpenAI to develop personal agents, criticized Google's "draconian" bans and declared that Antigravity support would be discontinued. Competitive divisions are deepening as the community shifts to forks such as Nanobot and IronClaw, while OpenAI backs third-party harnesses.
This turf war forces developers to balance convenience against ToS and security risks, and it portends a move toward closed AI ecosystems. Such enforcement actions remind users that subsidized access often conceals backend costs, and that ban waves can wipe out productivity tools in an instant, especially as AI agents become more prevalent.