Welcome to the first edition of AI for Animals!
Roughly once a month, this newsletter will give a brief overview of a specific topic relating to AI and animals, along with the latest news and other useful resources. While the main focus will be on biological animals, we will also include resources relevant to digital minds, in recognition of the feasibility of their development and the enormous potential ethical implications.
For our first edition, we’ll provide a broad overview of AI’s implications for animals. In future editions, we’ll home in on more specific topics, such as the role of AI in intensive animal farming, its potential to accelerate alternative protein development, and its applications for helping animals living in the wild.
We’re always open to ideas, questions, and feedback: just email contact@aiforanimals.org. If you’ve been forwarded this and would like to subscribe, you can do so here.
Max Taylor
Index
Overview: What AI could mean for animals
AI is already shaping society – and could become very powerful, very quickly
Faced with the breathless enthusiasm and knee-jerk skepticism that dominate much AI news coverage, it can be easy to forget the very real ways that AI is shaping society. An oft-cited success is Google DeepMind’s AlphaFold, which can predict the structure of a protein with unprecedented accuracy, thereby assisting in the design of new drugs and therapies. AI models have also been used to reliably detect various forms of cancer, including tiny tumors that would otherwise go undetected by human doctors. More troublingly, criminals are targeting people with increasingly sophisticated AI-assisted scam bots and ‘sextortion’ attempts, governments are preparing for a surge of AI-enabled cyberattacks, and OpenAI has released a report showing how its technology is being used by ‘threat actors’ (albeit not very well – yet). For both good and ill, AI is already leaving a tangible mark.
Within a few decades, given the rate of technological progress and the booming corporate investment in AI, these examples may look laughably trivial. One way to summarize the drivers behind current AI progress is: ‘Machine learning systems use computing power to execute algorithms that learn from data.’ As computing power grows exponentially, algorithms become increasingly sophisticated, and systems are given access to ever more data, AI will become more and more powerful. Though there is far from a clear consensus, a majority of experts seem to assign a 50% probability to the development of transformative AI systems – machines capable of carrying out the same range of tasks as humans – within the next 50 years. Earlier this month, a former OpenAI employee caused a stir by publishing a lengthy report arguing that we’re on course for AGI by 2027. If that timeline is even close to accurate, the consequences will be enormous.
Much more work is needed to make AI safe – for everyone
It’s therefore reassuring that many people are working to ensure that AI is developed thoughtfully and safely. However, the size of the AI safety field is minuscule compared to that of AI capabilities, the field working to make AI more powerful. As a rough proxy, a couple of articles from 2022 and 2023 estimated that at that time, there were around 100,000 AI capabilities researchers compared to around 400 AI safety researchers.
Even within this small field, there is scant focus on AI’s implications for animals. A review of public AI materials found that only 2 out of 68 public AI statements, and 0 out of 71 computer/AI ethics courses, contained a meaningful reference to animals. With hundreds of billions of animals living on farms, and billions of billions living in the wild, this seems like a significant omission.
This is not an abstract consideration: the technology is already affecting animals. AI systems are being employed in intensive animal farming to increase producers’ profit margins and, ostensibly, to improve animals’ health and wellbeing (at least to the extent that such improvements align with corporate interests). They are being used to protect some animals living in the wild – for example, by deterring animals from venturing onto busy roads – while killing others en masse. Some AI systems are being used to develop alternatives to cruel animal testing methods, while some AI research (for example, on brain-computer interfaces) relies on just such methods for its development. And while it can be hard to sift through the marketing hype, it appears that AI could in principle play a pivotal role in the development of cultivated meat and other alternative proteins. AI could even bring about the future envisaged by organizations such as Project CETI, the Earth Species Project, and the Interspecies Internet, in which machine learning has enabled humans to communicate with other animals. Would we use this newfound ability to help meet animals’ needs, or to find more creative ways to exploit them? What would a society look like in which animals were able to give or withhold assent to being used by humans?
More broadly, AI developments could bring unprecedented economic growth. Would this further fuel rampant consumerism without consideration for the other species entangled in the supply chain? Or would it free up time, energy, and resources for people to spend caring for the planet and the animals in it? Would it propel further encroachment into animals’ habitats without consideration for the billions of lives ruined and cut short? Or would it herald the advent of animal-friendly development, where AI systems steer birds around large buildings, drastically reduce the number of animals killed by cars, and pause wind turbines to avoid injuring migrating bats?
What can we actually do about all this?
For a start, we could encourage political decision-makers at the national and international levels to explicitly include animals in statements of intent and legislation. We could encourage AI labs to include animals in their mission statements and risk assessments, and to represent animals’ interests in any democratic decision-making processes. We could also encourage those labs to take other tangible actions, such as working to remove speciesist biases from their Large Language Models. We could expand the AI and animals research landscape through initiatives such as research fellowships, targeted grants, and calls for papers on priority questions. Or we could create new organizations focused specifically on this topic, such as the recently launched Open Paws (with other organizations hopefully to follow soon).
Of course, we also need to continue with current animal advocacy efforts. To a large extent, AI governance will reflect broader political and societal interests, so all work to spread anti-speciesist beliefs and mitigate animals’ suffering is likely to help make AI a positive force for animals. However, such action is unlikely to be sufficient. AI is too powerful and cross-cutting, and developing too rapidly, for us to neglect it in our advocacy efforts. We need a strong community of advocates monitoring animal-relevant AI developments, considering how to adapt our strategies accordingly, and taking robust action to implement these strategies.
Of course, none of this work needs to detract from work on AI safety for humans. There is huge scope for collaboration across the AI safety and animal advocacy communities to pursue goals benefiting humans and other animals. Catastrophic but entirely feasible outcomes of AI – widespread engineered pandemics, global nuclear conflict, or a surge of totalitarian dictatorships – would be disastrous for humans, and the consequences for animals, while even more unpredictable, are likely to be similarly dire. While there are bound to be some extremely tricky complexities and trade-offs, we can’t shy away from those discussions just because they might be difficult. Failing to harness AI’s potentially transformative force for the good of all species would be a gross dereliction of our responsibilities. Succeeding in doing so would be one of our greatest achievements. Faced with those two possible futures, we need to make sure that this area is given the time, energy, and resources that it deserves.
🚨 Updates
The AI, Animals, and Digital Minds (AIADM) conference was held in early June, with discussions continuing at the post-conference retreat and via the Hive Slack community. Presentation topics included Precision Livestock Farming, the EU’s AI Act, AI-assisted quantification of animal suffering, speciesism in Large Language Models (LLMs), human-animal communication, and digital minds.
You can find the session slides and recordings below, and read the organizers’ logistical retrospective. We hope this will become a regular event for people to share ideas, identify opportunities for impact, and work with each other to capitalize on those opportunities.
AIADM conference sessions
Changing attitudes toward digital sentience through antispeciesist advocacy, presented by Aditya S.K. (watch session | view slides)
Sentiments, Sentience, and Moral Standing, presented by Andreas Mogensen (watch session | view slides)
Towards Ethical AI: Collaborative Efforts for Animal Welfare under the EU’s AI Act, presented by Dr Fakhra Ikhlaq and Dr Saeed Ahmad (watch session | view slides)
AI's Potential to Map Animal Suffering: Insights from the Welfare Footprint Framework, presented by Wladimir Alonso (watch session | view slides)
Animal-friendly AI = Misaligned AI, presented by Bob Fischer (watch session | view slides)
Building Pro-Animal AI Systems panel, presenters include Sam Tucker and Sankalpa Ghose (watch session | view slides)
AI in Farming panel, presenters include Virginie Simoneau-Gilbert, Amber Elise Sheldon, and Walter Veit (watch session | view all slides)
Reimagining Our Future Relationships with Nonhumans with AI panel, presenters include Jane Lawton and Gal Zanir (watch session | view slides)
AIADM retreat lightning talks
Soenke Ziesche: AI alignment for nonhuman animals
Adrià Moret: A Conflict Between AI Safety and AI Welfare: Should We Control Near-Future AI Systems?
Ronen Bar: A Slip to the Tongue: How Language Unwittingly Shapes Bias Towards Animals
Ali Ladak: Some key empirical findings on digital minds
Alex Schwalb: Neuromorphic engineering as a potential enabling technology for digital minds
Jane Lawton: AI to Decode Animal Communication
James Faville: An AMS Model of Advocacy
Jeff Sebo: Updates from the NYU Center for Mind, Ethics, and Policy
🌏 Opportunities
AI Welfare Debate Week will take place on the EA Forum on July 1-7. The goal is to debate the statement “AI welfare [i.e., the potential wellbeing of future artificial intelligence systems] should be an EA priority [i.e., should receive 5% of effective altruism funding and talent].” The Forum team encourages people to publish a post that week with their perspectives on digital minds.
AI Governance course. This free, 12-week program isn’t specific to animals, but is a high-quality introduction to AI safety governance. The application deadline is July 28.
The Coller Dolittle prize is a multi-year challenge with an annual award recognizing significant scientific research that advances interspecies communication. The communication must be noninvasive, two-way, and applicable in multiple contexts. Whoever can ‘crack the code’ of talking to animals will receive either a $10 million equity investment or a $500k cash prize; in the meantime, annual prizes of $100k will go to the most successful applicants. The application deadline is July 31.
🗞️ In the News
🗣️
Human-animal communication
$10m prize launched for team that can truly talk to the animals (The Guardian)
A $10 million prize has been launched to foster development in interspecies communication, leveraging AI to enable two-way conversations with animals. This initiative, known as the Coller Dolittle Challenge, is spearheaded by the Jeremy Coller Foundation and Tel Aviv University. It encourages the creation of systems that can recognize and interpret animal communication signals in a non-invasive manner, aiming to facilitate coherent communication across various species without the animals realizing they are interacting with humans.
AI helps understand complex orangutan communication (Cosmos)
A recent study has enhanced understanding of Bornean orangutan communication through machine learning and traditional analysis techniques. Researchers focused on ‘long calls’, complex vocalizations crucial for communication in dense rainforests, which comprise several shorter vocalizations called ‘pulses’. Building on the pulse ‘dictionaries’ collated by previous studies, the researchers identified three distinct categories of pulses, underscoring the vocal complexity and potential undiscovered repertoire of orangutan sounds.
Scientists release 1,000+ hours of wild meerkat audio; train model on it (ImportAI)
Researchers from multiple institutions developed ‘MeerKAT’ and ‘animal2vec’ to analyze and learn from over 1000 hours of meerkat vocalizations recorded in the wild. MeerKAT is a large-scale labeled dataset that assists in benchmarking deep learning techniques in bioacoustics, while animal2vec is an architecture designed to improve the classification of sparse and diverse animal sounds. This research could enable broader applications in understanding and classifying animal communications across various species and environments.
Sperm whale 'phonetic alphabet' discovered (Sky News)
Researchers studying sperm whales in the East Caribbean found that their communication resembles human language, using a complex "Morse code" of clicks. This coding, enriched by varying rhythms and tempos, likely facilitates group coordination tasks like foraging and caring for young.
Every Elephant Has Its Own Name, Study Suggests (New York Times)
A recent study suggests that elephants may use specific rumbles, akin to names, to address individual members of their groups. Researchers used AI tools to analyze these vocalizations, indicating that elephants respond distinctly to sounds they recognize as directed towards them. This finding hints at a complex social communication system among elephants, potentially comparable to human naming practices.
Using AI to decode dog vocalizations (University of Michigan)
Researchers at the University of Michigan are using AI to decode dog vocalizations, developing tools to differentiate between barks indicating playfulness or aggression, and to identify characteristics such as the dog’s age, breed, and sex. They have repurposed human speech models for this task, using a dataset collected from 74 dogs in varied contexts. Their findings suggest that the complexities of human speech processing models can be adapted to understand animal communications, potentially enhancing our interactions and care for animals.
Learning to speak to whales using AI, with David Gruber (Big Brains podcast)
If you want to hear from someone working in the interspecies communication field, listen to David Gruber's recent interview on the Big Brains podcast. Gruber, a researcher at Project CETI (Cetacean Translation Initiative), discusses how AI is being used to decipher the complex communication system of sperm whales, likening it to a Morse code-like 'alphabet'. This work aims to decode these vocal patterns as well as exploring the broader implications for understanding animal languages and enhancing human-animal communication.
🐔 Opportunities for farmed animal welfare
AI-powered early detection system launched at British Pig & Poultry Fair to boost pig farming and welfare (Pig World)
An AI-powered device called SoundTalks was launched at the British Pig & Poultry Fair. The device is designed to detect respiratory diseases in pigs up to five days earlier than traditional methods by analyzing sound data collected from the animals. The system aims to help farmers by reducing the need for constant observation, lowering antibiotic use, and decreasing mortality rates.
AI facial recognition technology for cattle (AgProud)
Developers are working on a facial recognition application for cows called CattleTracs. The tool’s purpose is to identify individual animals and enhance traceability systems, which could help with managing and tracking diseases within farmed animal populations.
To Improve Fish Welfare, a Startup Blends AI With an Ancient Japanese Fishing Method (Sentient)
A California-based startup is integrating AI with the ancient Japanese fish-slaughtering technique called Ike Jime to supposedly improve sustainability and fish welfare. This method, which involves quickly killing the fish by spiking the brain, is thought to result in less stress and higher quality meat with a longer shelf life, thereby reducing waste. The AI system assists by accurately identifying the brain's location for precise execution, making the process efficient and scalable for wider adoption in the aquaculture industry.
Artificial intelligence is being trained to monitor cattle (Beef Central)
In Queensland, Australia, the Department of Agriculture and the company Infarm have developed an automated AI-driven camera system to monitor cattle. This system can identify health issues such as injuries and lameness, as well as monitoring fertility rates among cattle. The system also uses a satellite connection to upload images and videos for AI training, aiming to facilitate remote surveillance and minimize the need for constant connectivity.
❌ Risks to farmed animal welfare
We should campaign to restrict AI use in animal agriculture (Before Porcelain)
This article argues that while AI precision livestock farming technology could be good or bad for animal welfare on a per-animal basis, it is likely to be bad on a total population basis, as it will probably increase the total number of farmed animals. We should therefore consider advocating for restrictions on precision livestock farming in industrial settings.
The insect farmers turning to AI to help lower costs (BBC News)
The company Full Circle Biotechnology is using AI to more efficiently farm black soldier fly larvae, which are a common source of animal feed. Their AI system analyzes data to optimize farming conditions such as temperature, food quantity, the space that the larvae need, and whether to introduce new strains or species. Using these methods, they aim to expand production and reduce costs to help insect-based feed become a competitive alternative to soy products.
How AI builds consumer connections with chicken brands (WATT Poultry)
AI is increasingly being used by chicken brands to enhance their marketing strategies, predict consumer trends, and improve operational efficiencies. By using AI to improve their understanding of consumer behaviors and preferences, marketers can automate content creation and better tailor their products and campaigns to individual consumers.
BiOceanOr: Revolutionising aquaculture with AI and BlueInvest (European Commission)
A French startup called BiOceanOr has developed an AI tool called AquaREAL that aims to enhance aquaculture productivity. The tool does this by predicting crucial water quality parameters, such as dissolved oxygen levels, and assessing their impact on production and the surrounding marine environment.
Pig and poultry AI analysis looks to improve farm management (Pig World)
Technology company Farmex and livestock intelligence business Optifarm have teamed up to enhance farm management for pig and poultry farms. Their AI technology monitors parameters such as water use, temperature, and static pressure to identify anomalies and possible explanations. The results can then be sent directly to staff on-farm, enabling decisions to be made immediately about how best to handle these anomalies.
EFishery gets $30m loan from HSBC to expand reach of AI-powered feeding tool (Tech in Asia)
HSBC Indonesia has provided a $30 million loan to eFishery to support the expansion of its AI-powered feeding tool, eFeeder, which is leased to small-scale fish farmers. This tool enhances farm management by allowing remote scheduling and monitoring of feeding times, contributing to a 25% increase in efficiency and a 30% increase in yields.
🍔 Alternative proteins
BioRaptor and Aleph Farms use AI to lower the costs of cultivated beef (VentureBeat)
BioRaptor and Aleph Farms are collaborating to use AI to enhance the efficiency of cultivated beef production. By using AI to monitor the relationship between the cell feed and the various parameters of the cell environment (such as pH, dissolved oxygen, and temperature), they hope to identify the optimal conditions for cell growth.
Meat, dairy startups on the cutting edge (Food Business News)
Climax Foods, an alternative proteins startup, claims to have used AI to develop a vegan blue cheese by identifying unique combinations of plant-based ingredients that replicate the flavors and textures of conventional dairy products. Their cheese came close to a Good Food Award, but was ultimately disqualified due to regulatory concerns over the unconventional inclusion of kokum butter. Meanwhile, Meati Foods says that AI was important for accelerating their understanding of the nutritional and functional opportunities offered by mycelium.
Pulmuone Signs R&D Deal with Robotics Company ABB to Advance Cultivated Seafood Plans (Green Queen)
Pulmuone, a South Korean company, has entered into a research and development partnership with the Swiss robotics company ABB to enhance its cultivated seafood production using advanced AI and robotics. According to Pulmuone, this marks the first time that AI and robot automation tech will be used for cell cultivation.
Artificial Intelligence deployed to help make ‘greenest burger in Britain’ (The Business Desk)
Myco has introduced the Hooba burger, a quarter-pounder made with oyster mushrooms that the company says were grown with the help of AI.
How One CEO Is Using AI to Develop New Plant-Based Foods (Time)
In an interview with Time, the CEO of NotCo explains how his company uses AI to develop new plant-based foods by finding unique ingredient combinations that mimic animal-based products, such as the use of pineapple and cabbage to create their NotMilk product.
🤖 Digital minds
Will AI ever become conscious? It depends on how you think about biology (Vox)
The debate over whether AI can become conscious hinges on the concept of "computational functionalism," which suggests that consciousness arises from certain types of information processing rather than from biological material. While current AI systems don't score high enough on consciousness measures, experts believe it's plausible and perhaps inevitable that we might create conscious AIs in the future. The ethical implications are significant, with some calling for a moratorium on creating conscious AI until we better understand the risks and mechanisms of consciousness.
A Transformational $6,000,000 Endowment for Mind, Ethics, and Policy at NYU Arts & Science (NYU)
The Mind, Ethics, and Policy Program at NYU Arts & Science will transition into the Center for Mind, Ethics, and Policy (CMEP) thanks to a $6,000,000 endowment. CMEP, under the leadership of Jeff Sebo, focuses on advancing our understanding of the sentience and moral status of nonhumans, such as animals and AI systems, striving for a future where all sentient beings are treated with compassion and respect. With the financial stability provided by the endowment, CMEP aims to expand its research and influence in assessing the nature of nonhuman entities and shaping policies to ensure ethical treatment across species and potentially different types of consciousness.
No, Today’s AI Isn’t Sentient. Here’s How We Know (Time)
This article argues that artificial general intelligence (AGI) is often conflated with systems like large language models (LLMs), but these models lack sentience, only simulating intelligence in narrow domains. While humans sense and respond based on physiological states, LLMs generate responses based on data inputs, without any physical or emotional sensations. True AI sentience would require an embodied, biological framework, beyond the capabilities of current mathematical models like LLMs.
[Video] Is this AI-brain hybrid sentient? (Michael Dello-Iacovo)
The rapid advancement of AI technologies, particularly brain organoids, raises significant ethical questions about artificial sentience – especially the potential capacity for pleasure and pain. The video argues that regulatory and ethical frameworks are needed because these technologies could lead to digital minds, possibly numbering in the trillions, that may be vulnerable to exploitation in the way sentient beings have been historically. These concerns are compounded by the difficulty of measuring, or even fully understanding, AI sentience: surveys suggest that many members of the public already believe existing systems like ChatGPT are sentient, while most experts remain skeptical.
🐁 …and more
Montreal-based startup gets $850,000 to protect whales with artificial intelligence (The Canadian Press)
The Canadian Fisheries Department has allocated $850,000 to a Montreal-based startup, Whale Seeker, which has developed an AI technology designed to protect marine life from ship collisions. The tool can analyze aerial images to detect marine mammals 25 times faster than humans, allowing cargo ships to adjust their routes to avoid whales, dolphins, and porpoises. The system will also help the government decide whether to close fishing zones based on marine activity.
How AI is helping (and possibly harming) our pets (Washington Post)
Ogmen Robotics has rolled out prototypes of its ‘ORo’ robot, which engages companion animals in activities and adjusts its interactions based on the pet’s mood and needs. While animal experts note the potential benefits for animals’ health and wellbeing, they also warn that robots could displace the normal human-animal relationship.
Could AI put an end to animal testing? (BBC News)
AI can reduce animal testing by analyzing vast amounts of existing data to prevent redundant experiments. Researchers are already using AI to efficiently and reliably predict the toxicity of new chemicals. Current barriers to widespread acceptance include the risk of biases in existing data, and regulatory acceptance of non-animal testing methods.
📝 Published Research
Artificial intelligence and machine learning applications for cultured meat [preprint] (Todhunter et al., Zenodo)
This review covers the work available to date on the use of machine learning in cultured meat. It addresses four major areas of cultured meat research and development: establishing cell lines, cell culture media design, microscopy and image analysis, and bioprocessing and food processing optimization.
Vocal complexity in the long calls of Bornean orangutans (Erb et al., PeerJ)
This study explores the vocal complexity of Bornean orangutan long calls. Supervised machine learning with support vector machines, alongside unsupervised clustering, was used to classify the vocalizations from their acoustic features. The study found that these techniques could accurately identify and categorize call types, enhancing our understanding of communication among orangutans.
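For the curious, here is a minimal sketch of the supervised part of that kind of analysis: classifying call pulses into categories from acoustic features with a support vector machine. The features and labels below are random placeholders rather than data from the study, and the pipeline is an illustration, not the authors’ code.

```python
# Illustrative only: SVM classification of call pulses from acoustic features.
# The feature matrix and labels are random placeholders, not study data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))      # e.g. duration, peak frequency, bandwidth, ...
y = rng.integers(0, 3, size=300)   # three hypothetical pulse categories

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```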
animal2vec and MeerKAT: A self-supervised transformer for rare-event raw audio input and a large-scale reference dataset for bioacoustics (Schäfer-Zimmermann et al., arXiv)
This study introduces animal2vec, a transformer-based model tailored for analyzing sparse bioacoustic data, capable of learning directly from audio waveforms through self-supervised learning without requiring pre-labeled data. It also presents the MeerKAT dataset, a comprehensive collection of bioacoustic recordings from meerkats and the largest publicly available labeled dataset on non-human terrestrial mammals to date. This dataset was used to refine and benchmark the capabilities of animal2vec.
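animal2vec itself is a bespoke architecture, but the general pattern – a self-supervised encoder that turns raw waveforms into embeddings for downstream call classification – can be sketched with an off-the-shelf wav2vec 2.0 model from torchaudio. This is an analogy under that assumption, not the paper’s code, and the audio below is synthetic.

```python
# Analogy only: a pretrained self-supervised encoder (wav2vec 2.0) mapping a
# raw waveform to frame-level embeddings; real use would load labeled audio.
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE
encoder = bundle.get_model().eval()

waveform = torch.randn(1, bundle.sample_rate)  # one second of placeholder audio

with torch.inference_mode():
    features, _ = encoder.extract_features(waveform)

# Each element is a (batch, frames, channels) tensor from one transformer layer;
# a lightweight classifier could be trained on these embeddings.
print(features[-1].shape)
```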
Contextual and combinatorial structure in sperm whale vocalisations (Sharma et al., Nature Communications)
The study presents new findings on the vocal communication of sperm whales, highlighting the contextual and combinatorial structure of their codas (sequences of clicks used for interaction). The researchers used AI to identify and classify patterns within a dataset of 8,719 codas based on features like rhythm and tempo. This revealed that these codas not only contain previously unrecognized features sensitive to the conversational context but also exhibit a combinatorial coding system combining rhythmic and tempo elements, which can generate a vast inventory of distinct vocal patterns.
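As a toy illustration of the kind of features involved, a coda can be reduced to a tempo (its overall duration) and a rhythm (the normalized pattern of its inter-click intervals). The click times below are invented, and this is only a sketch of the feature idea, not the authors’ analysis pipeline.

```python
# Toy example: reducing a coda (a sequence of click times) to tempo and rhythm.
# The click times are invented placeholders.
import numpy as np

def coda_features(click_times):
    """Return (tempo, rhythm) for a coda given click times in seconds."""
    clicks = np.asarray(click_times, dtype=float)
    intervals = np.diff(clicks)
    tempo = clicks[-1] - clicks[0]          # overall duration of the coda
    rhythm = intervals / intervals.sum()    # scale-free inter-click pattern
    return tempo, rhythm

tempo, rhythm = coda_features([0.00, 0.18, 0.36, 0.55, 0.90])
print(tempo, rhythm)
```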
African elephants address one another with individually specific name-like calls (Pardo et al., Nature Ecology & Evolution)
This study presents evidence that wild African elephants address one another with individually specific calls. The researchers used machine learning to show that the receiver of a call could be predicted from the call’s acoustic structure. Elephants also responded differently to playbacks of calls addressed to them compared to calls addressed to a different individual.
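A minimal sketch of that kind of prediction test, assuming a generic classifier and invented placeholder features: train a model to predict the addressed individual from each call’s acoustic features and compare cross-validated accuracy with chance.

```python
# Hypothetical setup, not the authors' pipeline: can a call's receiver be
# predicted from its acoustic features better than chance?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))             # acoustic features per call (placeholder)
receivers = rng.integers(0, 5, size=200)   # identity of the addressed elephant

scores = cross_val_score(RandomForestClassifier(n_estimators=200), X, receivers, cv=5)
print(f"Mean accuracy: {scores.mean():.2f} (chance is about {1 / 5:.2f})")
```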
Which Artificial Intelligences Do People Care About Most? A Conjoint Experiment on Moral Consideration (Ladak et al., Association for Computing Machinery)
In an online study, various AI characteristics were assessed for their impact on perceptions of moral wrongdoing when the AI is harmed. The findings indicate that a human-like physical presence and prosocial behaviors (such as expressing and recognizing emotions, cooperating, and making moral judgments) significantly enhance the moral consideration AIs receive. This suggests that for AI systems to be viewed more sympathetically, they must demonstrate positive intentions.
📚 Resources
The Hive Community Slack has several channels dedicated to discussion of AI and animals, including #c-ai-discussion for broad discussions and #s-ai-coalition for project collaboration.
If you want to dig deeper, the aiforanimals.org website has a list of relevant articles, papers, and other materials giving an overview of the AI and animals space.