Anthropic
❗Article Status Notice: This Article is a stub
This article is underdeveloped and needs additional work to meet the wiki's Content Guidelines and be in line with our Mission Statement for comprehensive coverage of consumer protection issues.
Issues may include:
- This article needs to be expanded to provide meaningful information
- This article requires additional verifiable evidence to demonstrate systemic impact
- More documentation is needed to establish how this reflects broader consumer protection concerns
- The connection between individual incidents and company-wide practices needs to be better established
- The article is simply too short and lacks sufficient content
How you can help:
- Add documented examples with verifiable sources
- Provide evidence of similar incidents affecting other consumers
- Include relevant company policies or communications that demonstrate systemic practices
- Link to credible reporting that covers these issues
- Flesh out the article with relevant information
This notice will be removed once the article is sufficiently developed. Once you believe the article is ready to have its notice removed, please visit the Moderator's noticeboard, or the Discord (join here) and post to the #appeals channel, or mention its status on the article's talk page.
| Basic information | |
|---|---|
| Founded | 2021 |
| Legal Structure | Private |
| Industry | Artificial Intelligence |
| Also known as | |
| Official website | https://anthropic.com |
Anthropic PBC is a private, for-profit American artificial intelligence (AI) startup founded in 2021. It is best known for its Claude family of large language models (LLMs).
Consumer impact summary
Overview of concerns arising from the company's conduct towards users of its products:
- User Freedom
- User Privacy
- Business Model
- Market Control
Incidents
This is a list of all consumer-protection incidents this company is involved in. Any incidents not mentioned here can be found in the Anthropic category.
Claude Code HERMES.md billing flaw (2026)
- Main article: Anthropic Claude Code HERMES.md billing flaw
In April 2026, a technical flaw in Claude Code, triggered by the string "HERMES.md" in git commit messages, caused the tool to bypass users' subscription plans and bill them at pay-as-you-go API rates, charging one account over $200. Anthropic refused to issue a refund, categorizing the overcharge as a non-refundable technical error.
Pricing crackdown on third-party tool usage (2026)
On April 3, 2026, Boris Cherny, head of Claude Code, announced on Twitter (now X) that Claude subscriptions would "no longer support third-party tools", such as OpenClaw, because they put an "outsized strain" on Anthropic's systems. The change took effect on April 4; to use third-party tools, users must now either pay a fee separate from their subscription or use a separate Claude API key through Anthropic's developer platform. It has been speculated that the change was intended to keep Claude users away from competitors' tools, as OpenClaw is backed by OpenAI. [1][2][3]
Products
- Claude
- Claude Code
- Cowork
See also
References
- ↑ https://x.com/bcherny/status/2040206441756471399 - Archived
- ↑ Lee, Lloyd (3 Apr 2026). "Anthropic says Claude subscriptions will no longer support OpenClaw because it puts an 'outsized strain' on systems". Business Insider. Archived from the original on 2026-04-04. Retrieved 5 Apr 2026.
- ↑ Ha, Anthony (4 Apr 2026). "Anthropic says Claude Code subscribers will need to pay extra for OpenClaw usage". TechCrunch. Archived from the original on 2026-04-04. Retrieved 5 Apr 2026.