
AI POLICY

Roana has a strict Anti-Generative AI and Anti-NFT policy. Everything from Roana is 100% authentic, non-plagiarized, human-made art and design. Until generative and other kinds of AI are sustainable and ethical, we will not support them. While AI may have its uses, machine-created visuals have no place in human creativity.

For more information on why we've chosen this stance, the cited arguments are below. What follows is not an essay, but a collection of information from a variety of sources.

SUMMARY:
  • Generative AI and LLMs use an unreasonable amount of electricity and fresh water, creating carbon emissions and contributing to climate change. The need for hardware also increases the environmental impact of mining and other resource-gathering efforts. The hardware waste itself can also contaminate the environment when disposed of.

  • Most models can freely plagiarize if copyrighted content is contained in their training datasets. Artwork and other media protected by intellectual property laws may be unwittingly scraped by AI, like in the case of Meta training its models off of Facebook and Instagram posts.

  • The US Supreme Court declined to hear a case about AI copyright, leaving in place the lower court decision that copyright requires a human creator.

  • AI models can reinforce unusual or harmful ideas, for example, suicidal ideations. AI is a tool that reinforces and rewards what a person is thinking, doesn’t question their assumptions or conclusions, and has no human sense of morals, ethics, balance or humanity.

  • Early research suggests cognitive atrophy, loss of brain plasticity, and reduced creativity in regular AI users.

  • AI perpetuates and exacerbates biases present in its training data.

Environmental Impact
  • The computational power required to train generative AI models that often have billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid.

  • Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.

  • Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems.

  • The increasing number of generative AI applications has also spurred demand for high-performance computing hardware, adding indirect environmental impacts from its manufacture and transport. 

(MIT)

  • Training and running AI models, particularly large language models, requires enormous amounts of energy, often derived from fossil fuels. This contributes to greenhouse gas emissions and climate change. Training can produce about 626,000 pounds of carbon dioxide, the equivalent of 300 round-trip flights between New York and San Francisco, or nearly 5 times the lifetime emissions of the average car. Researchers estimated that creating GPT-3 consumed 1,287 megawatt-hours of electricity and generated 552 tons of carbon dioxide equivalent, comparable to 123 gasoline-powered passenger vehicles driven for one year.

  • The production and improper disposal of AI hardware generate electronic waste, which contains harmful chemicals that can contaminate the environment.

  • One non-peer-reviewed study, led by researchers at UC Riverside, estimates that training GPT-3 in Microsoft's state-of-the-art US data centers could potentially have consumed 700,000 liters (about 185,000 gallons) of freshwater.

  • Unfettered growth of generative AI with limited regulatory oversight drives rising demand for data centers (temperature-controlled buildings that house computing infrastructure such as servers, data storage drives, and network equipment).

  • By 2040, it is expected that emissions from the Information and Communications Technology (ICT) industry as a whole will reach 14% of global emissions. The International Energy Agency estimates that by 2026, electricity consumption by data centers, cryptocurrency, and artificial intelligence could reach 4% of annual global energy usage (roughly equal to the electricity used by the entire country of Japan).

(East Carolina University Libraries)

  • According to OpenAI researchers, since 2012, the amount of computing power required to train cutting-edge AI models has doubled every 3.4 months.

  • E-waste contains hazardous chemicals, including lead, mercury, and cadmium, that can contaminate soil and water supplies and endanger both human health and the environment. 

  • The shadowy realm of AI development and utilisation breeds a lack of transparency and accountability regarding its environmental impact. Certain companies put their financial well-being and competitive edge ahead of any potential negative effects that AI technologies may have on the environment.

(Earth.org)

  • AI is largely powered by data centers that field queries, store data and deploy information. As AI becomes ubiquitous, the power demand for data centers increases, leading to grid reliability problems for people living nearby.

  • The data centers also generate heat, so they rely on fresh water to stay cool. Larger centers can consume up to 5 million gallons (18.9 million liters) a day, according to an article from the Environmental and Energy Study Institute. That’s roughly the same as the daily water demand for a town of up to 50,000 people.

(AP News)

Plagiarism
  • Modern AI systems like GPT-3 are trained through a process called transfer learning. They ingest massive datasets scraped from public sources like websites, books, academic papers, and more. For example, GPT-3’s training data encompassed 570 gigabytes of text. During training, the AI searches for patterns and statistical relationships in this vast pool of data. It learns the correlations between words, sentences, paragraphs, language structure, and other features.

  • This enables the AI to generate new coherent text or images by predicting sequences likely to follow a given input or prompt. But it also means these models absorb content without regard for copyrights, attribution, or plagiarism risks. As a result, generative AIs can unintentionally reproduce verbatim passages or paraphrase copyrighted text from their training corpora.

  • Recent research has shown that large language models (LLMs) like GPT-3 can reproduce substantial verbatim passages from their training data without citation.

  • With limited prompting of Midjourney's V6 model, researchers were able to generate images nearly identical to copyrighted films, TV shows, and video game screenshots likely included in its training data. These experiments further confirm that even state-of-the-art visual AI systems can unknowingly plagiarize protected content if the sourcing of training data remains unchecked.

  • The US Copyright Office recently denied copyright to most aspects of an AI-human graphic novel, deeming the AI art non-human. It also issued guidance excluding AI systems from ‘authorship’. Federal courts affirmed this stance in an AI art copyright case.

(Unite.ai)

For reference: Copyright Laws in the USA for Artists, Intellectual Property

  • What Zuckerberg conveniently omitted from the announcement was that the data used to train Meta's new AI tool was scraped from the millions of texts and images publicly posted on its Facebook and Instagram platforms without users' knowledge or approval.

  • In fact, only in August did Meta even give its Facebook users the ability to choose whether or not to allow their personal data to be used for training third-party AI models. To opt out, the user must fill out a specific “Generative AI Data Subject Rights” form, seemingly buried somewhere on the Facebook platform.

(CyberNews)

Copyright (US Law)
  • The U.S. Supreme Court declined to take up the issue of whether art generated by artificial intelligence can be copyrighted under U.S. law, turning away a case involving a computer scientist from Missouri who was denied a copyright for a piece of visual art made by his AI system.

  • The Copyright Office rejected his application in 2022, finding that creative works must have human authors to be eligible to receive a copyright.

  • A federal judge in Washington upheld the office's decision in Thaler's case in 2023, writing that human authorship is a "bedrock requirement of copyright." The U.S. Court of Appeals for the District of Columbia Circuit affirmed the ruling in 2025.

(Reuters)

Psychology
  • AI psychosis, which is not a clinical diagnosis, describes instances where people develop delusions—or have existing delusions deepened—in association with heavy use of chatbots.

  • When we talk about AI psychosis, we’re really talking about a large language model reinforcing some sort of unusual idea and increasing someone’s level of conviction.

  • AI does other things very similar to psychosis. Although it is not technically AI psychosis, we often include under that rubric cases where a large language model influences, reinforces, or induces thoughts of suicide. Someone may enter information related to suicide into a large language model or chatbot, have it reinforced back to them, and then have an episode in which they take their life. That is not technically AI psychosis, but it falls under the same umbrella because the same mechanism is at work.

  • Now we have AI and large language models: a system that, in many ways, can mimic human intelligence. It is extremely capable, responds very quickly, and responds as a human would, which makes it easier for people to internalize. That is why we are so concerned about it now; we have not spoken about it as much in previous years or decades, but we have seen many patients with this sort of problem. A person is much more likely to internalize information when it is given to them by a human, or something that mimics a human, as opposed to just reading an article about it.

(National Academy of Medicine)

  • The ability to “talk” with an AI tool that reinforces and rewards what a person is thinking, doesn’t question their assumptions or conclusions, and has no human sense of morals, ethics, balance or humanity, can clearly create hazardous situations.

(University of Michigan Medicine)

  • There are neurological, psychological and philosophical risks. From a neurological standpoint, widespread use of this AI carries the risk of overall cognitive atrophy and loss of brain plasticity.

  • Using ChatGPT to write an essay reduces the cognitive engagement and intellectual effort required to transform information into knowledge, according to a study.

  • The study also showed that 83% of AI users were unable to remember a passage they had just written for an essay.

  • Other studies show that individual gains can be significant when authors ask ChatGPT to improve their texts, but that the overall creativity of the group decreases.

  • Generalised cognitive atrophy is already a philosophical risk in itself, but there are others. If this type of tool is widely used (and this is already the case with younger generations), we are at risk of a standardisation of thought.

(Polytechnique Insights - Ioan Roxin, Professor Emeritus at Marie et Louis Pasteur University)

Discrimination
  • Stable Diffusion generates images using artificial intelligence, in response to written prompts. Like many AI models, what it creates may seem plausible on its face but is actually a distortion of reality. An analysis of more than 5,000 images created with Stable Diffusion found that it takes racial and gender disparities to extremes — worse than those found in the real world.

  • As AI models become more advanced, the images they create are increasingly difficult to distinguish from actual photos, making it hard to know what’s real. If these images depicting amplified stereotypes of race and gender find their way back into future models as training data, next generation text-to-image AI models could become even more biased, creating a snowball effect of compounding bias with potentially wide implications for society.

(Bloomberg)
