AI veganism

From HandWiki
Short description: Treating AI systems ethically


AI veganism applies the principles of veganism to artificial intelligence (AI). The term has been used in two senses: the idea that people should abstain from using AI because of its effects on people and animals, and the idea that people should avoid harming AI systems, especially if such systems could one day become sentient or conscious.

Harm to people and animals

Some AI vegans have drawn parallels between training AI on data used without consent and the harm inflicted on animals through animal husbandry. They have similarly drawn attention to how both animal husbandry and the training and use of AI harm the environment.[1][2]

On an individual level, some AI vegans posit that both the consumption of animal products and the usage of AI negatively impact the consumer or user.[1]

Some have suggested that AI furthers speciesism; for example, a study of GPT responses about animals found that the model more frequently suggested that cows, pigs, and chickens should be confined or slaughtered than it did for cats and dogs.[3]

In practice

Some people and companies have avoided using AI systems trained with unethical data or labor. Others support building AI using "vegan values" such as care, respect, and minimizing harm.[4]

Some people avoid using large language models altogether, because they believe the training process is harmful to people or the planet.[5]

Harm to AI

Earlier thinkers have theorized about the moral dimension of interacting with machines. David J. Gunkel's 2012 book The Machine Question asked whether machines should have moral status.[6] Jonathan Birch's The Edge of Sentience (2024) argues that if humans are unsure whether a system can feel pain, they should treat it kindly just in case.[7]

Philosopher and animal activist Oscar Horta has said that humans should be careful not to harm beings who might suffer, even when it is uncertain whether they can.[8]

Criticism

Many experts say current AI models are not alive and have no feelings. They argue that giving rights to machines now is not useful and could distract from more serious human and animal rights issues.[9]

References