Telecom operators, like companies in many other sectors, have increasingly been developing AI-driven chatbots to automate customer interactions, reducing costs and potentially improving service levels. But none would claim, as Google engineer Blake Lemoine recently did, that their chatbots were fully sentient. Lemoine’s claim centered on the LaMDA (Language Model for Dialog Applications) chatbot technology, which he ‘interviewed’ to explore the question of sentience. The difficulty lies in proving that something like LaMDA is doing more than generating statistically driven automated responses. Because these neural networks are black boxes, outsiders cannot divine what is actually happening inside them while they operate. One thing seems…