The tool is essentially an app for a smart speaker or a smartphone that allows it to detect the signature sounds of cardiac arrest and call for help. (Credit: Sarah McQuate/University of Washington)

A new tool can monitor people for cardiac arrest while they sleep, without touching them. A skill for a smart speaker, such as Google Home or Amazon Alexa, or for a smartphone lets the device detect the gasping sound of agonal breathing and call for help.

Agonal breathing is present in about 50 percent of people who experience cardiac arrest, according to 911 call data, and patients who take agonal breaths often have a better chance of surviving.

The researchers gathered sounds of agonal breathing from real 911 calls to Seattle's Emergency Medical Services. Because cardiac arrest patients are often unconscious, bystanders recorded the agonal breathing sounds by holding their phones up to the patient's mouth so that the dispatcher could determine whether the patient needed immediate CPR. The team collected 162 calls between 2009 and 2017 and extracted 2.5 seconds of audio at the start of each agonal breath, for a total of 236 clips. The team captured the recordings on different smart devices — an Amazon Alexa, an iPhone 5s and a Samsung Galaxy S4 — and used various machine learning techniques to expand the dataset to 7,316 positive clips.
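The article doesn't spell out how 236 real clips became 7,316, but a common way to expand a small audio dataset is to generate label-preserving variants of each clip, for example by randomly rescaling, time-shifting, and adding low-level noise. A minimal sketch of that idea in Python, where `augment_clip` and its parameters are illustrative rather than the team's actual pipeline (note that 236 clips with 31 variants each yields exactly 7,316):

```python
import numpy as np

def augment_clip(clip, rng, noise_level=0.005):
    """Create a label-preserving variant of a short audio clip.

    Applies a random gain, a random circular time shift, and low-level
    Gaussian noise -- typical audio-augmentation moves; the study's exact
    transformations are not specified in the article.
    """
    gain = rng.uniform(0.8, 1.2)
    shift = rng.integers(0, len(clip))
    noisy = np.roll(clip * gain, shift) + rng.normal(0, noise_level, len(clip))
    return noisy.astype(np.float32)

# Stand-in for the 236 real 2.5-second clips (8 kHz assumed here).
rng = np.random.default_rng(0)
sample_rate = 8000
real_clips = [rng.standard_normal(int(2.5 * sample_rate)).astype(np.float32)
              for _ in range(236)]

# 31 variants per real clip: 236 * 31 = 7316 positive examples.
augmented = [augment_clip(c, rng) for c in real_clips for _ in range(31)]
print(len(augmented))  # 7316
```

Each variant keeps the same "agonal breathing" label as its source clip, which is what lets a small set of real recordings train a much larger model.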

For the negative dataset, the team used 83 hours of audio data collected during sleep studies, yielding 7,305 sound samples. These clips contained typical sounds that people make in their sleep, such as snoring or obstructive sleep apnea.

From these datasets, the team used machine learning to create a tool that could detect agonal breathing 97 percent of the time when the smart device was placed up to 6 meters away from a speaker generating the sounds.
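In deployment terms, a detector like this typically slides a short window over the incoming audio and runs the trained classifier on each window. A sketch of that loop, using a toy energy threshold as a stand-in for the team's model (which is not public) and assuming an 8 kHz stream:

```python
import numpy as np

def detect_events(audio, sample_rate, classify, window_s=2.5, hop_s=0.5):
    """Slide a 2.5-second window over an audio buffer and return the
    start times (in seconds) of windows the classifier flags.

    `classify` stands in for the trained agonal-breathing model.
    """
    win = int(window_s * sample_rate)
    hop = int(hop_s * sample_rate)
    hits = []
    for start in range(0, len(audio) - win + 1, hop):
        if classify(audio[start:start + win]):
            hits.append(start / sample_rate)
    return hits

# Toy stand-in classifier: flag windows dominated by high energy.
loud = lambda w: float(np.mean(w ** 2)) > 0.5

sr = 8000
audio = np.concatenate([np.zeros(5 * sr),
                        np.ones(3 * sr),   # a loud, gasp-like stretch
                        np.zeros(5 * sr)]).astype(np.float32)
print(detect_events(audio, sr, loud))  # [4.0, 4.5, 5.0, 5.5, 6.0, 6.5]
```

The 2.5-second window matches the clip length the classifier was trained on; the hop size trades detection latency against compute.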

Next, the team tested the algorithm to make sure it wouldn't accidentally classify a different type of breathing, like snoring, as agonal breathing.

For the sleep lab data, the algorithm incorrectly categorized a breathing sound as agonal breathing 0.14 percent of the time. The false positive rate was about 0.22 percent for a separate set of audio clips in which volunteers had recorded themselves sleeping in their own homes. But when the team had the tool classify something as agonal breathing only when it detected two distinct events at least 10 seconds apart, the false positive rate fell to 0 percent for both tests.
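That two-event rule is simple to express in code. A sketch, assuming the detector reports event timestamps in seconds (`confirm_agonal` is a hypothetical helper, not the team's implementation):

```python
def confirm_agonal(event_times, min_gap=10.0):
    """Raise an alarm only if two detected agonal-breathing events
    occur at least `min_gap` seconds apart.

    Requiring a second, well-separated event is what drove the
    reported false positive rate down to 0 percent.
    """
    for i, t in enumerate(event_times):
        for later in event_times[i + 1:]:
            if later - t >= min_gap:
                return True
    return False

print(confirm_agonal([12.0]))        # False: a single event never alarms
print(confirm_agonal([12.0, 14.5]))  # False: events only 2.5 s apart
print(confirm_agonal([12.0, 23.1]))  # True: events 11.1 s apart
```

The intuition is that a one-off noise (a snore, a cough) rarely repeats on a 10-second spacing, while genuine agonal breathing recurs.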

The team envisions the algorithm functioning as an app, or as a skill for Alexa, that runs passively on a smart speaker or smartphone while people sleep.