Stanford University researchers have shown that forcing a black-box AI system to comply with data protection and privacy regulations can be onerous, and in some cases impossible. As a recently published paper outlines, compliance often requires completely retraining the neural network model, which is both expensive and time-consuming. Consequently, the cost of complying with a regulation such as the GDPR could be ruinous for an AI system that has already been deployed. If the training data turned out to contain information the developer was later instructed to delete, such as faces, voices, personal data, or health records, compliance might require a complete overhaul, starting from scratch. The paper was authored by four Stanford academics, and…
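To see why deletion can force a full retrain, consider a minimal sketch in Python. The toy dataset, the `train` function, and the deletion workflow below are illustrative assumptions, not the Stanford paper's method: a trained model's parameters are derived from every training record at once, so the only sure way to purge one record's influence from a black-box model is to drop it from the data and train again.

```python
# Hypothetical illustration: honoring a data-deletion request by
# retraining from scratch. The "model" here is just a mean, standing
# in for a neural network whose weights entangle all training records.

def train(records):
    """'Train' a toy model: the mean of the numeric feature.
    A real neural network likewise absorbs every record into its
    weights in a way that cannot be surgically removed afterwards."""
    return sum(r["value"] for r in records) / len(records)

def delete_and_retrain(records, ids_to_delete):
    """Honor a deletion request the only way a black-box model allows:
    drop the flagged records, then retrain on what remains."""
    remaining = [r for r in records if r["id"] not in ids_to_delete]
    return remaining, train(remaining)

records = [
    {"id": 1, "value": 10.0},
    {"id": 2, "value": 70.0},  # this subject later requests deletion
    {"id": 3, "value": 40.0},
]

model = train(records)  # the model now "remembers" record 2: mean is 40.0
records, model = delete_and_retrain(records, {2})
print(model)  # prints 25.0 -- the model rebuilt without record 2
```

The point of the sketch is the cost structure: the deletion step itself is cheap, but it is useless without the retraining step, and for a production-scale neural network that second step repeats the entire original training expense.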