With the explosion of artificial intelligence offerings, guardrails have been rapidly outrun, raising concerns and prompting corporate pledges for safety in the field. Apple recently signed a voluntary pact first put forward by the White House in July 2023, joining the likes of OpenAI, Meta, Google, Microsoft, Amazon, Adobe and Nvidia, among others, bringing the total number of signatories to sixteen. The guidelines, which are not legally enforceable, call for companies to test their AI systems for potential risks, such as security vulnerabilities or discriminatory biases, and to release the results to government agencies. Apple plans to integrate OpenAI’s ChatGPT into the voice assistant of its latest generation of iPhones, and it hopes to alleviate uneasiness surrounding that move by joining the government-sponsored initiative. These concerns…