
5 April 2019

Google botches AI ethics board, DeepMind shows off medical prototype

While DeepMind once drew criticism for overreaching its agreement with the NHS, hoovering up more patient data than the government was expecting, it is now showing off the fruits of its labors – a prototype retinal scanner that should have generated a lot more buzz than it seems to have done. This is because Alphabet-sibling Google has managed to shoot itself in the foot, thanks to the controversial make-up of its new AI ethics panel.

The Advanced Technology External Advisory Council (ATEAC) has infuriated a sizable part of Google's workforce (1,796 Googlers so far), who have signed a letter calling for the removal of Kay Coles James, due to her anti-carbon-regulation, anti-immigrant, and anti-LGBTQ sentiment as president of a conservative think tank, as well as Dyan Gibbens, the CEO of Trumbull, a defense firm developing autonomous systems. One wonders if an AI system might have been able to spot this potential hurdle ahead of time, but there we are.

Announced at MIT’s EmTech Digital show, the ATEAC panel comprised eight members, who were meant to capture a diverse array of perspectives. With quarterly meetings and reports, it was meant to provide much-needed steering feedback. Instead, it has riled up a good chunk of the Google workforce, who have previously protested Google’s work with the Chinese government (Dragonfly) and its involvement in USAF (Project Maven) defense contracts – protests that seem to have shifted the needle inside Google.

Of course, there’s a counter-argument: booting out every panel member who doesn’t share the same opinion or outlook (groupthink, if you will) means the panel isn’t going to consider a diverse enough range of views to do anything particularly useful. This then opens the door to a debate on political allegiance, ethical stances, and moral outlook in determining the ideal panelist – which is both outside the scope of Riot’s writing and a conversation perhaps best had in a pub.

One member of ATEAC, professor Alessandro Acquisti, has already bailed, meaning Google is going to be looking for a replacement. If it caves to the demands of its staff, that might stretch to three replacements on an eight-person board – not a great look for a company that wants to portray itself as an industry leader.

So then, was the inclusion of these two meant to appease the US government, given that their public allegiances tie in somewhat with the current administration’s own sentiments? Perhaps, given James’ apparent ties to the Trump administration, but good luck getting Google leadership to admit as much. ATEAC was announced at pretty much the same time as Google CEO Sundar Pichai was meeting with President Trump, who noted shortly afterwards – via the official communications channel of the United States government, microblogging platform Twitter – that the “meeting ended very well.”

With this in mind, if the addition was meant to curry favor, does ousting the two from ATEAC even sound palatable to Google? Do the undersigned Googlers have enough clout to alter their leadership’s position? Are we perhaps reading far too much into this, and perhaps should fold away our tinfoil bonnets?

That remains to be seen, but this is another example of Google angering its workforce. The company that was all about “don’t be evil” now has a track record of shady practices, and given its monopolistic position – which we will liken to the Roman Empire – it seems that Google’s undoing will have to come from internal forces, rather than external competition (barbarians).

So then, DeepMind must be pretty miffed that this is the backdrop against which it has unveiled a rather cool new prototype. The device is claimed to take around 30 seconds to deliver a detailed diagnosis for a number of conditions affecting the eye, with the accuracy of a top specialist.

It generates an ‘urgency score,’ from which a hospital can decide whether a particular patient needs the attention of a specialist, who handles the care from that point on. This should free up specialist time for the highest-priority cases, where their skills are actually needed, rather than having them wade through low-priority cases too. In theory, you get more out of your specialists using such a system – which has scope for being expanded into other diagnostic areas.
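The triage logic described above can be sketched in a few lines. This is purely an illustrative assumption – the thresholds, labels, and score range here are hypothetical, and DeepMind has not published this as its actual decision logic – but it shows the basic idea of mapping a model's urgency score to a referral decision so that only high-priority cases reach a specialist.

```python
# Hypothetical urgency-score triage sketch. The 0.0-1.0 score range and the
# threshold values are illustrative assumptions, not DeepMind's real system.

def triage(urgency_score: float) -> str:
    """Map a model-produced urgency score to a referral decision."""
    if urgency_score >= 0.8:
        return "urgent"        # route straight to a specialist
    if urgency_score >= 0.4:
        return "semi-urgent"   # schedule a specialist review
    return "routine"           # standard follow-up, no specialist needed

# Scans could then be processed in batches, with specialists only
# seeing the cases flagged above the routine threshold.
scans = {"patient_a": 0.91, "patient_b": 0.35, "patient_c": 0.55}
queue = {pid: triage(score) for pid, score in scans.items()}
```

The design choice worth noting is that the machine only *ranks* cases; the referral decision and subsequent care remain with human clinicians, which is consistent with how such tools are positioned for regulators.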

It uses DeepMind’s AI and ML algorithms to look for diabetic retinopathy, glaucoma, and age-related macular degeneration. The research behind the system was published in Nature Medicine back in August, while the device itself was on show in London this past week. DeepMind has been working with Moorfields Eye Hospital in London on this project for the last three years.

So, the next step is to get the device production-ready and regulatory-compliant. The latter is usually much trickier than the former, and companies like DeepMind that want to jump into the healthcare sector face perhaps the most onerous and hostile regulatory environment of any industry.

But with good reason – you don’t want to be seen by a machine that misses important symptoms or orders that you be rushed into emergency surgery for a procedure you don’t actually need. Those sorts of decisions still need to be left up to humans, even if tools like this from DeepMind take over the bulk of the diagnostic imaging and testing.

In time, enough data will be collected that such decisions could be made by a machine, but that seems a long way away from being technically feasible – never mind something that would be accepted by government regulators.