Home Wiki

OpenAI

View on consumerrights.wiki ↗

Work in progress
This article has been flagged for additional work. Treat its claims as provisional.
Stub
This article is a stub. The wiki community is still building it out.
Verification concerns
Editors have raised concerns about the verifiability of one or more claims.
Contents12
  1. Consumer-impact summary
  2. Incidents
  3. Web Crawlers ignoring robots.txt (2025)
  4. ChatGPT Atlas and prompt-injection vulnerability (2025)
  5. Funding of the Parents & Kids Safe AI Act and creation of a child safety organization (2026)
  6. Products
  7. DALL-E models:
  8. GPT models:
  9. Sora models:
  10. Misc.:
  11. See also
  12. References

Article Status Notice: This Article is a stub


This article is underdeveloped, and needs additional work to meet the wiki's Content Guidelines and be in line with our Mission Statement for comprehensive coverage of consumer protection issues. Learn more ▼

Issues may include:

  • This article needs to be expanded to provide meaningful information
  • This article requires additional verifiable evidence to demonstrate systemic impact
  • More documentation is needed to establish how this reflects broader consumer protection concerns
  • The connection between individual incidents and company-wide practices needs to be better established
  • The article is simply too short, and lacks sufficient content

How you can help:

  • Add documented examples with verifiable sources
  • Provide evidence of similar incidents affecting other consumers
  • Include relevant company policies or communications that demonstrate systemic practices
  • Link to credible reporting that covers these issues
  • Flesh out the article with relevant information

This notice will be removed once the article is sufficiently developed. Once you believe the article is ready to have its notice removed, please visit the Moderator's noticeboard, or the Discord (join here) and post to the #appeals channel, or mention its status on the article's talk page.

OpenAI
Basic information
Founded 2015
Legal Structure Private
Industry Artificial Intelligence, Technology
Also known as
Official website https://openai.com/

OpenAI[1] is an American Artificial intelligence (AI) focused company. Founded in December 2015, OpenAI is best known for their ChatGPT chatbot, also known for the Generative Pre-Trained Transformer (GPT) family of large language models, the DALL-E series of text-to-image models, and Sora, a text-to-video model. With a reported revenue of $10B in FY2025 [2] and approximately 5.5B visitors per month[3], OpenAI has positioned itself has a leader in the Generative AI industry.

Consumer-impact summary

Overview of concerns that arise from the conduct towards users of the product (if applicable):

  • User Freedom
  • User Privacy
  • Business Model
  • Market Control

Add your text below this box. Once this section is complete, delete this box by clicking on it and pressing backspace.


  • Misleading advertising. ChatGPT terms of service say it should not be used to make decisions about people. However their advertising claims it is "PhD level" and makes other claims that seem to imply it is reliable. Many people use ChatGPT as if its output were meaningful, reliable, or a substitute for interaction with a person.
  • Credits (money) expire automatically with no notification, and the credit balance interface makes this process confusing. You must maintain a positive account balance, and you are auto-billed a fixed amount if it goes negative. Accounts can be banned and credits confiscated for typing the wrong things in chat, with no recourse.
  • A mobile phone number in a friendly country is required to better track your identity.

This is what OpenAI says as part of their data usage policy:

We share content with a select group of trusted service providers that help us provide our services. We share the minimum amount of content we need in order to accomplish this purpose and our service providers are subject to strict confidentiality and security obligations. We do not use or share user content for marketing or advertising purposes.

Incidents

Add one-paragraph summaries of incidents below in sub-sections, which link to each incident's main article while linking to the main article and including a short summary. It is acceptable to create an incident summary before the main page for an incident has been created. To link to the page use the "Hatnote" or "Main" templates.

If the company has numerous incidents then format them in a table (see Amazon for an example).


Add your text below this box. Once this section is complete, delete this box by clicking on it and pressing backspace.


This is a list of all consumer-protection incidents this company is involved in. Any incidents not mentioned here can be found in the OpenAI category.

Web Crawlers ignoring robots.txt (2025)

In 2025, Jonathan Bailey from PlagiarismToday posted an article going into how ChatGPTs web crawlers were ignoring the sites Robots.txt file.[4] PlaigarismToday had blocked OpenAI's web crawlers in August of 2023, yet the latest ChatGPT model at the time provided data from articles that were posted the day before on the website, even though OpenAI wasn't supposed to be scraping those webpages. This can be problematic for smaller websites, due to OpenAI's aggressive approach to web crawling, with their crawlers reportedly in a single week sending in more than 29 thousand requests to a wiki known as The Cutting Room Floor.

ChatGPT Atlas and prompt-injection vulnerability (2025)

In 2025, Brave posted an article about vulnerabilities that have agentic web browsers, such as ChatGPT Atlas, that consists of adding hidden malicious prompts in files, text or another media. Those prompts, combined with weak safeguards of the AI agents, can make them to expose and leak sensitive data of the user.[5]

Funding of the Parents & Kids Safe AI Act and creation of a child safety organization (2026)

In January 2026, OpenAI partnered with Common Sense Media and funded approximately $10 million dollars to support the California Parents & Kids Safe AI Act bill. The bill proposes to add age verification, improvement of parental controls, and to prohibit targeted advertising to underage users. On January 8, OpenAI created an organization, named Parents & Kids Safe AI Coalition.

In March 2026, the OpenAI's organization sent emails to child safety coalitions, only mentioning it was sponsored by Common Sense Media. The purpose of these e-mails was to convince the child safety coalitions to support the bill.[6][7]

Products

DALL-E models:

GPT models:

Sora models:

Misc.:

See also

References