Google trains AI on 300M sounds to detect diseases via smartphone audio

Updated: Aug 30, 2024 03:57 PM EST

Exploring Bioacoustics

In a significant advance for artificial intelligence, Google is using a novel method that relies on audio signals to detect early signs of illness. Drawing on 300 million audio samples, including coughs, sniffles, and labored breathing, Google has trained its AI foundation model, HeAR (Health Acoustic Representations), to identify signs of diseases such as tuberculosis.

Google has partnered with Salcit Technologies, an AI startup focused on respiratory healthcare in India, to incorporate this technology into smartphones, potentially transforming healthcare access for high-risk communities in areas with limited resources.

What is Bioacoustics?

Bioacoustics, the field at the intersection of biology and acoustics, studies the sounds produced by humans and animals, and AI is increasingly used to extract vital health information from those sounds. Google has previously invested in startups that use AI to identify diseases by scent, part of its broader effort to digitize the human senses.

Resolving Health Issues

Tuberculosis causes nearly 4,500 deaths and roughly 30,000 new infections every day, which underscores the importance of early detection. Google's AI was trained on a massive dataset of 300 million audio clips, including coughs and breathing sounds gathered from publicly available sources around the world.

By analyzing subtle differences in cough patterns, the AI system can identify early signs of tuberculosis, facilitating timely intervention and treatment. This tool can be deployed in remote locations to screen for the disease, enhancing the accuracy of tuberculosis diagnosis and lung health assessments.
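The article does not describe HeAR's internals, but the general pattern behind acoustic screening tools of this kind can be sketched: a cough recording is converted into a fixed-length acoustic representation, which a lightweight classifier then scores against labeled examples. The sketch below is a hypothetical illustration of that pattern using librosa and scikit-learn; the file names, labels, and feature choices are placeholders and do not reflect Google's or Salcit's actual pipeline.

```python
# Illustrative sketch only (not Google's HeAR model or Salcit's Swaasa):
# summarize a cough recording as an acoustic feature vector, then score it
# with a small classifier trained on labeled clips. File paths and labels
# below are hypothetical placeholders.

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def cough_features(path: str, sr: int = 16000) -> np.ndarray:
    """Load an audio clip and summarize its log-mel spectrogram over time."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    # Mean and standard deviation across time yield a fixed-length vector.
    return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

# Hypothetical labeled training clips (1 = TB-positive cough, 0 = other).
train_paths = ["cough_001.wav", "cough_002.wav", "cough_003.wav"]
train_labels = [1, 0, 1]

X = np.stack([cough_features(p) for p in train_paths])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Score a new recording captured on a phone.
prob_tb = clf.predict_proba(cough_features("new_cough.wav").reshape(1, -1))[0, 1]
print(f"Estimated TB risk score: {prob_tb:.2f}")
```

In practice, a foundation model trained on hundreds of millions of clips would replace the hand-crafted spectrogram statistics with learned embeddings, and the classifier would be trained on far larger, clinically validated datasets.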

What is Swaasa?

Salcit is merging Google’s AI model with its machine learning technology, Swaasa, named after the Sanskrit word for breath. This collaboration is expected to significantly enhance the monitoring of respiratory health, especially in areas with limited access to healthcare professionals.

Conclusion

The use of AI to detect diseases through sound represents a significant breakthrough with the potential to revolutionize healthcare delivery. As AI models like HeAR become more advanced, they could expand beyond tuberculosis detection to identify other respiratory illnesses and cardiovascular conditions through sound analysis.

Developing such tools is crucial in a world where access to healthcare remains a challenge for millions. By building on smartphones' existing infrastructure, these AI-based solutions can be rapidly deployed in both urban and rural areas, making healthcare more inclusive and accessible.