Apple has banned employees from using ChatGPT and other artificial intelligence tools over fears of revealing sensitive information, The Wall Street Journal reported.
According to an internal document seen by the outlet and people familiar with the matter, Apple has restricted the use of the prompt-driven chatbot along with Microsoft's GitHub Copilot, which uses AI to help automate the writing of software code.
The company fears the AI programs could leak Apple's confidential data, the sources said.
OpenAI, the maker of ChatGPT, stores chat history from interactions between the chatbot and users, both to train the system and improve its accuracy over time and to let OpenAI moderators review conversations for possible violations of the company's terms of service.
Related: Leaked Walmart memo warns against employees sharing company info with ChatGPT
While OpenAI released an option last month for users to turn off chat history, the new feature still allows OpenAI to monitor conversations for abuse, with conversations stored for up to 30 days before being permanently deleted.
An Apple spokesperson told the WSJ that employees who want to use ChatGPT should use the company's own internal AI tool instead.
Apple isn't the first major company to ban the use of ChatGPT. Earlier this year, JP Morgan Chase, Goldman Sachs, and Verizon barred employees from using the AI-powered chatbot over similar fears of data leakage.
Earlier this week, OpenAI CEO Sam Altman spoke before Congress about the urgent need for government regulation of AI development, calling it “crucial.”
Related: “If This Technology Goes Wrong, It Can Go Really Wrong”: OpenAI CEO Sam Altman Speaks to Lawmakers About AI Risks, Says Government Intervention Is “Crucial”