While IBM is still struggling to convince the world that its Watson project has been a healthcare success, Siemens and the Cleveland Clinic have published research that illustrates how AI tools can be used to personalize radiation therapy for cancer patients.
Published in The Lancet Digital Health, the study shows how the research team built an AI framework from computed tomography (CT) scans and electronic health records (EHRs). They say that this framework is the first to use medical scans to inform radiation dosages, and that it will help move the field towards properly personalized treatments for specific illnesses.
Siemens is one of the biggest providers of healthcare equipment, so its progression into the software and services side of things is logical. It comes at the problem of getting AI into healthcare from a different angle than IBM, and arguably from a more stable foundation than Big Blue's software-and-consultancy approach.
In this, Siemens' progression is very similar to the one that many firms in other IoT markets are making, from equipment to services, and then likely on to equipment-as-a-service. Siemens is, of course, involved in the industrial and energy markets, where many of those transitions are rolling out. But while the IoT has largely proven itself capable, and is now getting on with the grind of actually doing business, AI still has to (largely) move past a whole lot of skepticism.
IBM hasn’t helped in that regard. It made a lot of noise about how much of a gamechanger Watson would be in the healthcare sector, and was so confident that it spent $2.6bn on Truven in 2016 and $1bn for Merge in 2015, as well as an undisclosed amount for Explorys. This trio of acquisitions, costing north of $4bn, was meant to propel Big Blue into the lead, but IBM has run into difficulties.
Things have been very quiet on the Watson healthcare front for about a year, following an investigation by Stat News that found Watson was giving incorrect diagnoses to cancer doctors. Leaked internal documents suggested that the Watson system had been trained on synthetic medical records rather than real patient records. That was a very big problem for IBM, which had said publicly that the training used real data. The leaked documents also quoted one doctor from Florida's Jupiter Hospital saying that "this product is a piece of shit," and that the hospital had "bought it for marketing and with the hopes that you would achieve the vision. We can't use it for most cases."
IBM’s Watson sounded like a great fit for hospital work, able to automate a lot of the image scanning and studying functions that doctors are currently required to do, and in turn freeing up time that those doctors could spend with patients. However, it appears that technology companies are running into problems surrounding privacy, and don’t have as much access to medical records and patients as they would like.
IBM laid off between 50% and 70% of its health division just a few months prior to the Stat News report. It hasn't commented on whether this was simply trimming the fat from a division bloated by the acquisitions, or a sign of a severe internal struggle to balance the books. Since these redundancies, IBM really has toned down the hype, and it is not alone in retreating; Qualcomm is another high-profile casualty of the tech industry's healthcare push.
Siemens, meanwhile, has been much more subdued on the hype-train front. The Cleveland Clinic notes that radiation therapy is currently delivered on a uniform basis, using a consistent dosage that apparently does not account for differences in individual tumor characteristics or the patient's circumstances.
The pair say that the new approach reduces the chance of treatment failure to less than 5%, down from a typical cumulative incidence of failure at three years of 13.5%. For some types of tumor, we were told, such as large squamous cell carcinomas, the failure rate can be as high as 25%, meaning the system can provide as much as a five-fold reduction.
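For readers keeping score, the reported figures can be sanity-checked with some quick arithmetic; the numbers below are those quoted in the article, and the "five-fold" claim follows from the squamous cell case:

```python
# Illustrative arithmetic for the reported failure-rate reductions.
# All percentages are taken from the study as reported; the 5% figure
# is treated as the ceiling under individualized dosing.

baseline_failure = 0.135      # typical 3-year cumulative incidence of failure
personalized_failure = 0.05   # reported upper bound with individualized dosing

overall_fold = baseline_failure / personalized_failure
print(f"Overall reduction: {overall_fold:.1f}x")        # roughly 2.7x

squamous_baseline = 0.25      # large squamous cell carcinomas
squamous_fold = squamous_baseline / personalized_failure
print(f"Squamous cell reduction: {squamous_fold:.0f}x")  # the quoted five-fold
```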
The AI framework was developed to tackle those variations and provide individualized radiation doses. It was built from the scans and records of 944 lung cancer patients: pre-treatment scans were fed into a deep-learning model, which analyzed each scan and condensed its predicted treatment outcomes into an image signature. That signature was then combined with the patient's EHR data, via some 'mathematical modeling,' to discern the clinical risks at play, which would then generate the required dose of radiation.
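The two-stage shape of that pipeline, scan to signature, then signature plus EHR to dose, can be sketched in a few lines. To be clear, everything below is a hypothetical illustration: the function names, the toy statistics standing in for the deep-learning model, and the risk and dose formulas are all placeholders, not the published method.

```python
# Hypothetical sketch of the two-stage pipeline described above. A model
# reduces a pre-treatment CT scan to an "image signature", which is then
# folded together with EHR variables in a simple risk model to suggest a
# dose. All formulas and weights here are illustrative assumptions.
import numpy as np

def image_signature(ct_scan: np.ndarray) -> np.ndarray:
    """Stand-in for the deep-learning model: reduce the scan volume to a
    fixed-length feature vector (the paper's 'image signature')."""
    flat = ct_scan.ravel().astype(float)
    # Crude pooled statistics in place of learned convolutional features.
    return np.array([flat.mean(), flat.std(), flat.max(), flat.min()])

def clinical_risk(signature: np.ndarray, ehr: dict) -> float:
    """Stand-in for the 'mathematical modeling' step: combine the image
    signature with EHR variables into a single risk score in [0, 1]."""
    score = 0.4 * signature[1] / (signature[2] - signature[3] + 1e-9)
    score += 0.02 * ehr.get("age", 60) / 60   # made-up clinical weightings
    score += 0.3 if ehr.get("smoker", False) else 0.0
    return float(np.clip(score, 0.0, 1.0))

def suggested_dose(risk: float, base_gy: float = 60.0, max_gy: float = 80.0) -> float:
    """Scale the dose between a base level and a ceiling according to risk."""
    return base_gy + risk * (max_gy - base_gy)

# Synthetic scan volume and patient record, purely for demonstration.
scan = np.random.default_rng(0).normal(size=(64, 64, 32))
risk = clinical_risk(image_signature(scan), {"age": 67, "smoker": True})
print(f"risk={risk:.2f}, dose={suggested_dose(risk):.1f} Gy")
```

The point of the structure, rather than the throwaway formulas, is that the image-derived signature and the EHR data enter as separate inputs to a final risk model, which is what lets the dose vary per patient.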
The researchers say that this hybrid approach is much better suited to smaller datasets, such as those found in hospitals, which are far smaller than the datasets used in marketing, e-commerce, or ridesharing modeling. While the 944 scans apparently make up one of the largest such datasets for lung cancer, the researchers say that other adopters will be able to tune the framework for their own data resources.
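The tuning idea is essentially transfer learning: keep a pretrained feature extractor fixed and refit only a small final model on the local data. The sketch below illustrates that pattern under stated assumptions; the extractor, the synthetic dataset, and the least-squares head are all placeholders and not the clinic's actual procedure.

```python
# Hedged illustration of re-tuning on a smaller local dataset: freeze a
# pretrained feature extractor and refit only a cheap linear head. The
# data and the extractor here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)

def pretrained_features(x: np.ndarray) -> np.ndarray:
    """Stand-in for the published model's frozen feature extractor
    (a fixed random projection, drawn once for this sketch)."""
    w = rng.standard_normal((x.shape[1], 8))
    return np.tanh(x @ w)

# A small local dataset, e.g. one hospital's own scans, already vectorized.
X_local = rng.standard_normal((120, 32))
y_local = (X_local[:, 0] > 0).astype(float)

# Refit only a linear head via least squares: cheap enough for small data.
F = np.c_[pretrained_features(X_local), np.ones(len(X_local))]
head, *_ = np.linalg.lstsq(F, y_local, rcond=None)
preds = (F @ head > 0.5).astype(float)
print(f"local accuracy: {(preds == y_local).mean():.2f}")
```

Because only the small head is re-estimated, a few hundred local samples can be enough, which is the property the researchers are pointing to when they contrast hospital-scale data with web-scale data.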
This should mean that the framework can be put to use on other forms of cancer, although that would require re-training the model on other datasets. There won't, it seems, be a single gargantuan framework covering all radiation therapy, but with enough specialized frameworks for specific cancers or families of cancers, there might be scope for combining them into a uniform model at some point down the line.
“The development and validation of this image-based, deep-learning framework is exciting because not only is it the first to use medical images to inform radiation dose prescriptions, but it also has the potential to directly impact patient care,” said Dr Mohamed Abazeed, a radiation oncologist at Cleveland Clinic’s Taussig Cancer Institute and a researcher at the Lerner Research Institute. “The framework can ultimately be used to deliver radiation therapy tailored to individual patients in everyday clinical practices.”