The End of Cheap AI Subscriptions: Why $20 Plans Are Unsustainable
Flat-rate AI subscription plans are becoming unsustainable under the weight of agentic tools; companies like GitHub and Anthropic are shifting to usage-based pricing.
The era of dirt-cheap AI subscriptions may be coming to an end. For months, users have enjoyed powerful AI features for as little as $20 per month, but the economics no longer add up. Major providers like GitHub, Anthropic, and OpenAI are rethinking their pricing models as agentic AI tools consume vast computing resources. Below, we answer key questions about this shift and what it means for consumers.
What exactly is the "$20 AI subscription era" and why is it ending?
Until recently, companies like OpenAI, Anthropic, and Google offered flat-rate plans—ChatGPT Plus at $20/month, Claude Pro at $20/month, and similar tiers—that gave users effectively unlimited access to powerful AI capabilities. These plans included agentic tools such as coding assistants, desktop automation, and multi-step agents that could build apps, design websites, or manipulate files from a single prompt. However, running these agentic systems costs far more than the simple chat interactions the plans were originally designed for. The compute required for agents to operate autonomously over many steps quickly makes flat-rate pricing untenable. Providers are now concluding that unlimited access for $20 cannot be sustained, and the industry is shifting from an all-you-can-eat buffet to a pay-per-plate model.

How have AI companies like GitHub, Anthropic, and OpenAI responded?
GitHub, owned by Microsoft, was the first to take decisive action: it switched all of its flat-rate Copilot plans to usage-based pricing, publicly calling the previous model “broken, busted, and unsustainable.” Anthropic has been more cautious but has hinted at changes: the company’s Head of Growth admitted that the Claude Pro and Max plans “weren’t built” for agentic tools like Claude Code and Claude Cowork—they were designed for chat alone. Anthropic is now testing the removal of Claude Code from its Pro plan and adjusting usage allowances to find an economically viable balance. Meanwhile, OpenAI’s Sam Altman has struck a defiant tone, challenging rivals to downgrade their plans. Still, industry observers expect ChatGPT Plus and Pro to eventually follow suit, since the cost of running advanced agents such as Codex cannot be covered by a flat $20 monthly fee.
What are "agentic" AI tools and why do they break the flat-rate model?
Agentic AI tools are systems that can autonomously carry out a series of tasks without constant human intervention. Examples include Claude Code (coding assistant), Claude Cowork (desktop automation), OpenAI Codex, and Google’s NotebookLM. Unlike simple chatbots that answer one query at a time, agents can parse complex goals, write code, edit files, browse the web, and iterate on results—all in the background. This level of autonomy requires continuous access to large language models (LLMs), often calling the AI backend hundreds of times per session, so the cumulative compute cost is an order of magnitude higher than for chat. Flat-rate plans, which assume moderate usage, cannot recover these costs when a single user’s agentic session consumes resources equivalent to thousands of chat queries. The economics of today’s flat fees are therefore fundamentally broken for this workload.
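The cost gap can be made concrete with a back-of-the-envelope calculation. The per-token price, call counts, and tokens-per-call figures below are illustrative assumptions, not any provider's published numbers; the point is only the shape of the arithmetic.

```python
# Back-of-the-envelope comparison: one chat turn vs. one agentic session.
# All prices and counts are illustrative assumptions, not real provider figures.

COST_PER_1K_TOKENS = 0.01  # assumed blended input/output price in USD


def session_cost(calls: int, avg_tokens_per_call: int) -> float:
    """Estimated LLM cost of a session that makes `calls` backend calls."""
    total_tokens = calls * avg_tokens_per_call
    return total_tokens / 1000 * COST_PER_1K_TOKENS


chat = session_cost(calls=1, avg_tokens_per_call=2_000)     # one Q&A turn
agent = session_cost(calls=300, avg_tokens_per_call=8_000)  # multi-step agent run

print(f"chat turn:     ${chat:.2f}")
print(f"agent session: ${agent:.2f}")
print(f"ratio:         {agent / chat:.0f}x")
```

Under these assumptions a single agent run costs over a thousand times more than a chat turn, which is why a plan priced for chat volume collapses once agents arrive.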
What specific changes have been announced or hinted at by these companies?
GitHub has already moved all Copilot plans to usage-based billing, charging per active user per month plus additional compute credits. Anthropic is testing two adjustments: first, removing Claude Code from the $20/month Pro plan and moving it to a separate pay-as-you-go system; second, tightening usage caps on both Pro ($20) and Max ($100/$200) plans. OpenAI has not announced changes yet, but internally they are reportedly evaluating tiered agentic add-ons and higher prices for heavy users. Google has similarly started limiting the usage of NotebookLM’s agentic features on its free and lower-tier plans. These changes aim to align cost with consumption, ensuring that only those who use agentic tools heavily pay more, while casual chat users can remain on affordable flat rates.

Is the $200/month ChatGPT Pro plan also at risk?
Yes, possibly even more so than the $20 plan. The $200/month ChatGPT Pro plan was introduced to offer unlimited access to OpenAI’s most advanced models and agentic capabilities. However, the compute costs for agents like Codex—especially when used for full-stack app development or prolonged research—can easily exceed the monthly fee for a single power user. If Anthropic and GitHub are any indication, OpenAI will likely have to either introduce usage caps, raise the price further, or switch to a token-based billing system for agentic features. The current $200 flat rate may become a relic as providers realize that even premium tiers are not immune to the cost explosion caused by autonomous agents.
What does the future of AI pricing look like for consumers?
Consumers should expect a hybrid pricing model: a low-cost base subscription for chat and light AI tasks (e.g., $15–$20/month), plus additional costs for agentic features based on compute consumption. This might take the form of “credits” or “tokens” that are depleted by agent actions. For example, a user could pay $20/month for 1 million tokens and purchase extra tokens as needed. Heavy users—developers, designers, researchers—will face higher bills, potentially $100–$500/month, while casual users may see little change. Some providers may offer “agentic bundles” (e.g., $50/month for coding agents). The key takeaway: the era of unlimited, cheap AI magic is over, but the benefits of agentic tools will remain accessible—just at a more realistic price.
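The hybrid model described above reduces to a simple billing formula: a flat base fee covering an included token allowance, plus a metered charge for anything beyond it. The fee, allowance, and overage rate below are hypothetical, chosen only to match the $20/month-for-1-million-tokens example in the text.

```python
# Sketch of a hybrid AI bill: flat base fee plus metered overage for agent usage.
# The fee, allowance, and rate are hypothetical, not any provider's real pricing.

BASE_FEE = 20.00             # assumed monthly subscription in USD
INCLUDED_TOKENS = 1_000_000  # tokens bundled with the base fee
OVERAGE_PER_1K = 0.02        # assumed price per 1,000 tokens beyond the allowance


def monthly_bill(tokens_used: int) -> float:
    """Base fee plus a metered charge for tokens beyond the included allowance."""
    overage_tokens = max(0, tokens_used - INCLUDED_TOKENS)
    return BASE_FEE + overage_tokens / 1000 * OVERAGE_PER_1K


print(monthly_bill(400_000))     # casual chat user: pays only the flat fee
print(monthly_bill(1_000_000))   # exactly the allowance: still the flat fee
print(monthly_bill(15_000_000))  # heavy agent user: flat fee plus overage
```

A casual user stays at the flat $20, while a heavy agent user at 15 million tokens lands at $300/month under these assumed rates, squarely in the $100–$500 range the article anticipates for power users.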