Ever wonder what your furry friend is trying to say when they bark? Well, soon you might not have to wonder anymore! Thanks to advances in Artificial Intelligence (AI), understanding your dog’s bark could become a reality. Researchers are delving into the world of canine communication, using AI to analyse barks and decode their meanings. This breakthrough could revolutionise how we interact with our four-legged companions, bringing us closer to understanding their thoughts and feelings. Let’s dive into the details.
Researchers at the University of Michigan are exploring how AI can be used to build tools that distinguish whether a dog’s bark signals playfulness or aggression.
According to a post by the University of Michigan, these models can also extract additional details from animal vocalisations, including the animal’s age, breed, and sex.
Conducted in partnership with Mexico’s National Institute of Astrophysics, Optics and Electronics (INAOE) in Puebla, the study shows that AI models originally designed for human speech can serve as a foundation for training new systems tailored to animal communication.
Developing AI models to analyse animal vocalisations faces a significant hurdle: the scarcity of publicly available data. Unlike human speech, for which recordings are plentiful, animal vocalisations are difficult to gather at scale.
As a result of this data shortage, techniques for analysing dog vocalisations have proven difficult to develop, and the ones that do exist are limited by a lack of training material. To address this issue, researchers have repurposed an existing model originally designed for human speech analysis.
The researchers gathered recordings of barks from 74 dogs of different breeds, ages, and sexes, captured in a variety of situations. Humberto Pérez-Espinosa of INAOE led this data collection effort. University of Michigan researcher Abzaliev then used the recordings to fine-tune a machine-learning model, a computer program that finds patterns in large sets of data. The team chose a model called Wav2Vec2, which was originally trained on human speech.
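To make the idea of "fine-tuning a speech model on barks" concrete, here is a minimal sketch of how such a setup might look using the open-source Hugging Face transformers library. The checkpoint name, label set, file paths and hyperparameters are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch: fine-tuning a pretrained Wav2Vec2 checkpoint to classify
# dog barks by context (e.g. play vs. aggression). Labels, paths and settings
# below are placeholders for illustration only.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

LABELS = ["play", "aggression", "alert", "whine"]  # assumed example contexts

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=len(LABELS)
)

def load_clip(path: str) -> torch.Tensor:
    """Load one bark recording and resample it to the 16 kHz rate Wav2Vec2 expects."""
    waveform, sample_rate = torchaudio.load(path)
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)
    return waveform.mean(dim=0)  # collapse stereo to mono

def training_step(paths: list[str], labels: list[int],
                  optimizer: torch.optim.Optimizer) -> float:
    """Run one gradient step over a small batch of labelled bark clips."""
    inputs = feature_extractor(
        [load_clip(p).numpy() for p in paths],
        sampling_rate=16_000,
        padding=True,
        return_tensors="pt",
    )
    outputs = model(**inputs, labels=torch.tensor(labels))
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
# Example call, assuming a labelled clip exists on disk:
# loss = training_step(["bark_001.wav"], [LABELS.index("play")], optimizer)
```

The key point the sketch illustrates is reuse: the heavy lifting of learning general acoustic patterns was already done on human speech, and only a small classification layer plus some fine-tuning is needed to adapt the model to barks.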
Using this model, the researchers generated representations of the acoustic data collected from the dogs and interpreted them. They found that Wav2Vec2 not only performed well on four different classification tasks but also outperformed other models trained specifically on dog bark data, with accuracy reaching up to 70%.
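The "representations" mentioned here are fixed-length numerical embeddings of each bark. As a rough illustration of what that step could look like, the sketch below uses a Wav2Vec2 backbone purely as a feature extractor, averages its hidden states over time, and fits a simple classifier on top. The model name and the probing setup are assumptions for illustration, not the study's exact pipeline.

```python
# Hypothetical sketch: turning each bark clip into one embedding vector and
# probing what it encodes (context, breed, sex, ...) with a light classifier.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
backbone = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

def embed(waveform_16khz: np.ndarray) -> np.ndarray:
    """Return a single 768-dimensional embedding for one mono 16 kHz bark clip."""
    inputs = feature_extractor(waveform_16khz, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        hidden = backbone(**inputs).last_hidden_state  # shape (1, frames, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()       # average over time frames

# With embeddings and labels in hand, a simple classifier can test how much
# information the representations carry, e.g.:
# X_train = np.stack([embed(clip) for clip in train_clips])
# clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
# accuracy = clf.score(np.stack([embed(c) for c in test_clips]), test_labels)
```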
Understanding the nuances of dog vocalisations could greatly improve how humans interpret and respond to dogs’ emotional and physical needs, enhancing their care and helping to prevent potentially dangerous situations, the researchers said.