Home Wiki

Artificial intelligence

View on consumerrights.wiki ↗

Marked irrelevant
This article has been flagged as off-topic for the wiki.
Work in progress
This article has been flagged for additional work. Treat its claims as provisional.
Contents9
  1. Why is it a problem
  2. Unethical training of data
  3. Privacy concerns of AI
  4. Privacy concerns of online AI models
  5. Unethical maintenance of data centers
  6. Hidden directives
  7. Further reading
  8. External links
  9. References

⚠️ Article status notice: This Article's Relevance Is Under Review

This article has been flagged for questionable relevance. Its connection to the systemic consumer protection issues outlined in the Mission statement and Moderator Guidelines isn't clear.

Learn more ▼

If you believe this notice has been placed in error, or once you have made the required improvements, please visit the Moderators' noticeboard or the #appeals channel on our Discord server: Join Here.

Notice: This Article's Relevance Is Under Review

To justify the relevance of this article:

  • Provide evidence demonstrating how the issue reflects broader consumer exploitation (e.g., systemic patterns, recurring incidents, or related company policies).
  • Link the problem to modern forms of consumer protection concerns, such as privacy violations, barriers to repair, or ownership rights.
If you believe this notice has been placed in error, or once you have made the required improvements, please visit either the Moderator's noticeboard, or the #appeals channel on our Discord server: Join Here. There may be a discussion about this article on its talk page.

Article Status Notice: Inappropriate Tone/Word Usage

This article needs additional work to meet the wiki's Content Guidelines and be in line with our Mission Statement for comprehensive coverage of consumer protection issues. Specifically it uses wording throughout that is non-compliant with the Editorial guidelines of this wiki.

Learn more ▼

How You Can Help: If this is a non-Theme article (See: Article types):

  • Persuasive language should not be used in the Wiki's voice. Avoid loaded words, or the causing of unnecessary offense, wherever possible.
  • No direct attacks on named individuals or companies. Malice may be attributed to bad and proven offenders, but only through the use of quotation and citation - never in the Wiki's voice.

If this is a Theme article:

  • Where argumentation is used make sure it is clear and direct but not inflammatory. Avoid strong language, or causing unnecessary offense.
  • No direct attacks on named individuals or companies. Malice may be attributed to bad and proven offenders, in a formal and calm manner.

This notice will be removed once sufficient documentation has been added to establish the systemic nature of these issues. Once you believe the article is ready to have its notice removed, visit either the Moderator's noticeboard, or the Discord (join here) and post to the #appeals channel.

Artificial intelligence (AI) is a field of computer science that produces systems designed to solve problems that humans typically solve using intelligence. In the consumer and industry space, it is commonly referred to as chatbots or large language models (LLMs), which have been a main focus of industry since the November 2022 launch of OpenAI's ChatGPT, with tens of billions of dollars in funding allocated to producing more popular LLMs. This is also a significant focus on text-to-image models, which "draw" an image using a written prompt, and less commonly, text-to-video models, which extend the text-to-image concept across several smooth video frames.

AI is not a new concept; it has been of interest since the 1950s. AI is a catch-all term, encompassing many areas and techniques.

Generative artificial intelligence models are trained through vast amounts of existing human-generated content. LLMs gather statistics on word patterns, which allows the model to generate sequences of words that seem similar to what a person might have written. However, an LLM does not understand anything; they cannot reason. They generate randomly modulated pattern of tokens. In this way, they function similarly to autocomplete.

People reading sequences of tokens sometimes perceive things they think are true. Sequences that do not make sense to the reader, or that are false, are called hallucinations. LLMs are typically trained to produce output that is pleasing to people, exhibiting dark patterns. For example, they produce output which seems confidently written, use patterns which praise the user (sycophancy), and employ emotionally manipulative language.

People are accustomed to interacting with others, and many overestimate the abilities of things that exhibit complex, person-like patterns. Promoters of “AI” systems take advantage of this tendency, using suggestive names (like “reasoning” and “learning”) and grand claims (“PhD level”), which make it harder for people to understand these systems.

From November 2022 to 2025, venture capitalists and companies invested hundreds of billions of dollars into AI but received minimal returns. When companies seek returns, consumers can expect that products may be orphaned, services may be reduced, customer data may be sold or repurposed, costs may rise, and companies may reduce staff or fail. Historically, AI has had brief periods of intense hype, followed by disillusionment, and “AI winters.”[citation needed]

The current well-funded industry of artificial intelligence tools has led to the rampant and unethical use of content. Startups aiming to develop AI services have been rapidly scraping the internet for content to train future models, and members of the field are concerned that they are approaching the limit of publicly available content to train from.[1]

Why is it a problem

Unethical training of data

Further reading: Artificial intelligence/training

Users' work is sometimes silently used in training without their explicit consent, as was the case for Adobe's AI policy.

Privacy concerns of AI

AI can be and has been used to generate deepfakes of people with and without their consent. Deepfakes are media generated with the likeness of an individual. Deepfake media can range from harmless to harmful. The latter includes child pornography, revenge porn, blackmail, etc. Since the rampant rise of consumer AI, deepfakes have become even more prevalent, with some websites explicitly specializing in them.[citation needed]

Privacy concerns of online AI models

There are several concerns with using online AI models like ChatGPT, not only because they are proprietary, but also because there is no guarantee of where your data will be stored or used. Recent developments in local AI models offer an alternative to online AI models, which can be downloaded from platforms like HuggingFace and used offline. Common models to run include Llama (Meta), DeepSeek (DeepSeek), Phi (Microsoft), Mistral (Mistral AI), Gemma (Google).

In some cases, AI models can be hijacked for malicious purposes. Demonstrated with Comet (Perplexity), users can run arbitrary prompts to the browser's built-in AI assistant by hiding text in the HTML comments, non-visible webpage text, or simple comments on a webpage.[2] These arbitrary prompts can then be exploited to obtain sensitive information or gain unauthorized access to high-value accounts, such as those for banking or gaming libraries.[3] See Prompt injection.

Unethical maintenance of data centers

Due to heavy investments into and increased use of generative AI and LLMs, many data centers have been constructed to host LLMs. These data centers consume large amounts of power and water, in order to power and cool the computer systems running the models. Residents that live in cities where AI data centers have been constructed have complained of an increase in their electricity bills despite no change in their personal usage.[citation needed] According to a research video by Benn Jordan, these data centers (as well as fracking operations and natural occurrences) cause a high amount of sound pollution, which can cause various symptoms.[4]

Hidden directives

Most AI apps include an initial "root"/"system" prompt given to the AI, which is hidden from the user. Some corporations go to great lengths to keep those prompts hidden, and to avoid leaking it to the user. Some projects attempt to bring back transparency to these tools, in spite of the restrictions.[5]

Further reading

References

  1. Tremayne-Pengelly, Alexandra (16 Dec 2024). "Ilya Sutskever Warns A.I. Is Running Out of Data—Here's What Will Happen Next". Observer. Archived from the original on 26 Nov 2025.
  2. "Tweet from Brave". X (formerly Twitter). Aug 20, 2025. Archived from the original on 21 Mar 2026. Retrieved Aug 24, 2025.
  3. "Tweet from zack (in SF)". X (formerly Twitter). Aug 23, 2025. Archived from the original on 21 Mar 2026. Retrieved Aug 24, 2025.
  4. https://www.youtube.com/watch?v=_bP80DEAbuo (Archived)
  5. https://github.com/elder-plinius/CL4R1T4S