Amazon’s Alexa is now giving out health advice to UK NHS patients, following a partnership between the two organizations. It comes as Alexa and Google Assistant were used by the University of Washington to monitor for cardiac arrest in sleeping patients, and as Fitbit and Omron announced device advances – a flurry of activity in the connected healthcare sector.
Amazon’s NHS deal has garnered the most attention – a new partnership instigated by the Department of Health for use by NHS England. Alexa is essentially just scanning the NHS website, responding to user requests about symptoms and conditions, but this hasn’t stopped privacy advocates from raising concerns.
Big Brother Watch has already sounded the alarm, with director Silkie Carlo saying: “any public money spent on this awful plan rather than frontline services would be a breathtaking waste. Healthcare is made inaccessible when trust and privacy is stripped away, and that’s what this terrible plan will do. It’s a data protection disaster waiting to happen.”
Given this level of vitriol, it is difficult to pin down exactly what prompted the backlash, given that the current integration is little more than website-scraping with a chatbot interface. Amazon isn’t getting access to patient records, nor is it acting as a middleman that might end up handling very sensitive medical data. Within the current scope, it does little more than read the NHS’ pretty impressive web resources aloud – arguably a better choice than the myriad other sources it used to scrape, from an English perspective at least, because the NHS gets to control the advice.
The likes of Carlo might have started with good intentions, but they are collectively moving towards a point where they are operating on pure knee-jerk emotional reaction – triggered by keywords and brand associations rather than considered discourse. Sure, if it transpires that Amazon has been lying, that it is actually building health profiles on users and selling data to third parties, then by all means, pile on and flay them – but currently, we can’t find the justification for such hostility.
The NHS has set up a unit to promote the use of more digital technologies across its operations, called NHSX. AI-enabled scanning, as discussed in our Cleveland Clinic and Siemens piece, is on the cards, as are electronic prescriptions for medication.
Current public sentiment, based on our slightly depressing trawl through the comment sections, seems mixed. Many seem technophobic, while others still think chatbots peaked at the level of Siri’s debut, and quite a few hope it will reduce wait times in emergency departments. Most seem to be missing the point that this is little more than website-scraping, though.
Sentiment should rapidly change if such tools can prove their value to patients. Researchers from the University of Washington have created an application that uses Amazon Echo and Google Home smart speakers to monitor sleepers for signs of cardiac arrest. The tool listens for agonal breathing – the gasping of someone struggling for air – and can then call for help; after all, it’s not much use knowing you had a cardiac arrest in the night if you’ve already shuffled off this mortal coil.
The researchers note that over 500,000 people in the USA die annually from cardiac arrest, the symptoms of which include sudden unresponsiveness, gasping for air, and then the cessation of breathing. The bedroom is the most common location for cardiac arrest outside of hospitals, so having a system in place to listen for one of the tell-tale signs could improve the odds of survival.
The proof-of-concept, trained on audio from 911 calls, detected agonal breathing (gasping) 97% of the time. The gasping is triggered by low oxygen levels, and its guttural quality is apparently a very distinctive marker, which helps make it possible to spot. The false-positive rate is apparently 0.22%, meaning it should very rarely raise the alarm for someone not suffering agonal breathing, and the researchers tweaked the model to reach 0% by requiring at least two agonal events at least ten seconds apart.
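That two-event confirmation rule can be sketched in a few lines. This is an illustrative reconstruction, not the researchers’ code: the function name, the event-timestamp representation, and the idea of post-filtering classifier detections are all assumptions, with only the ten-second spacing taken from the study.

```python
from typing import List

CONFIRM_GAP_S = 10.0  # minimum spacing between two agonal events (from the paper)

def should_alarm(event_times: List[float]) -> bool:
    """Hypothetical post-filter: the classifier emits timestamps (seconds) of
    suspected agonal-breathing events; only raise the alarm once two events
    occur at least CONFIRM_GAP_S apart."""
    times = sorted(event_times)
    # If the earliest and latest events are far enough apart, some pair qualifies.
    return bool(times) and (times[-1] - times[0] >= CONFIRM_GAP_S)
```

A single spurious detection – a snore misclassified once – would therefore never trigger a call, which is how such a gate can trade a little latency for a near-zero false-positive rate.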
“A lot of people have smart speakers in their homes, and these devices have amazing capabilities that we can take advantage of,” said Shyam Gollakota, an associate professor in the UW’s Paul G Allen School of Computer Science & Engineering. “We envision a contactless system that works by continuously and passively monitoring the bedroom for an agonal breathing event, and alerts anyone nearby to come provide CPR. And then if there’s no response, the device can automatically call 911.”
As for the training data, 162 samples of agonal breathing, drawn from 911 calls to Seattle’s emergency line between 2009 and 2017, were recorded on a range of devices – specifically an Amazon Echo, an iPhone 5s, and a Samsung Galaxy S4 – creating a total of 7,316 positive clips for training the ML model. The negative data, used to teach the model what was not agonal breathing, comprised 83 hours of audio gathered in sleep studies, resulting in 7,305 sample clips of snoring, grunting, and other nocturnal noises.
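The resulting training set is close to balanced, which matters for a binary classifier of rare events. A minimal sketch of assembling and shuffling such a corpus, using only the clip counts reported above – the tuple stand-ins for audio clips and the labeling scheme are illustrative assumptions:

```python
import random

# Stand-ins for the actual audio clips; counts are from the reported study.
positives = [("agonal", i) for i in range(7316)]   # clips derived from 911 calls
negatives = [("normal", i) for i in range(7305)]   # clips from sleep-study audio

# Combine and shuffle into one roughly balanced training corpus.
dataset = positives + negatives
random.shuffle(dataset)

labels = [1 if kind == "agonal" else 0 for kind, _ in dataset]
print(len(dataset), sum(labels))  # prints: 14621 7316
```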
Sound Life Sciences, a spin-out from the University of Washington, will be the company that tries to monetize the system. The firm should be entering a lucrative market, although it is one that still needs to collectively work out its business models – complicated as they are by the interactions between patients, healthcare providers, insurance firms, and governments.
Two other announcements illustrate the developments in the sector. Omron Healthcare has just expanded the capabilities of its Alexa-enabled blood pressure cuff, adding a slew of new ways to query its personalized reports. Elsewhere, Fitbit and Cardiogram have partnered to combine Fitbit device data with Cardiogram’s heart-health tracking platform – the latest integration for the wearable maker, which is determined to expand into the healthcare marketplace.