
Apple says it’s found a way to make its AI models better without training on its users’ data or even copying it from their iPhones and Macs. In a blog post first reported on by Bloomberg, the company outlined its plans to have devices compare a synthetic dataset to samples of recent emails or messages from users who have opted into its Device Analytics program.
Participating Apple devices will determine which synthetic inputs are closest to the real samples and relay the result by sending “only a signal indicating which of the variants is closest to the sampled data.” That way, according to Apple, it never accesses user content, and the content never leaves the device. Apple will then use the most frequently picked synthetic samples to improve its AI text outputs, such as email summaries.
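The selection step Apple describes can be sketched in a few lines. This is a hypothetical illustration, not Apple's implementation: it assumes the device has already computed numeric embeddings for the real sample and each synthetic variant, and it reports only the index of the closest variant, never the real sample itself.

```python
import math

def closest_variant(real_embedding, variant_embeddings):
    """Return the index of the synthetic variant most similar to the
    on-device real sample. Only this index would ever leave the device;
    the real embedding stays local."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    scores = [cosine(real_embedding, v) for v in variant_embeddings]
    return max(range(len(scores)), key=scores.__getitem__)

# Hypothetical embeddings: one real local sample, three synthetic variants.
real = [0.9, 0.1, 0.2]
variants = [[0.1, 0.8, 0.3], [0.85, 0.15, 0.25], [0.2, 0.2, 0.9]]
print(closest_variant(real, variants))  # → 1
```

Aggregated across many devices, the most frequently reported indices tell Apple which synthetic samples best resemble real usage, without any real text being transmitted.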
Currently, Apple trains its AI models on synthetic data only, potentially resulting in less helpful responses, according to Bloomberg’s Mark Gurman. Apple has struggled with the launch of its flagship Apple Intelligence features, as it pushed back the launch of some capabilities and replaced the head of its Siri team.
But now, Apple is trying to turn things around by introducing its new AI training system in a beta version of iOS and iPadOS 18.5 and macOS 15.5, according to Gurman.
Apple has been talking up its use of a method called differential privacy to keep user data private since at least 2016, with the launch of iOS 10, and has already used it to improve the AI-powered Genmoji feature. The same approach underpins the company’s new AI training plans: Apple says that introducing randomized information into a broader dataset will help prevent it from linking data to any one person.
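One classic way to add that randomized information is the randomized-response mechanism from the differential privacy literature. The sketch below is an assumption about how such noise could be layered onto the variant reports described above; Apple has not published its exact mechanism. Each device reports its true answer only with some probability, so no single report can be confidently tied to any one person, yet the true frequencies still emerge in aggregate.

```python
import math
import random

def privatized_report(true_index, num_variants, epsilon=1.0):
    """Randomized response: report the true closest-variant index with
    probability p, otherwise a uniformly random index. Smaller epsilon
    means more noise and stronger privacy for each individual report."""
    p = math.exp(epsilon) / (math.exp(epsilon) + num_variants - 1)
    if random.random() < p:
        return true_index
    return random.randrange(num_variants)

# Simulate many devices that all truly picked variant 2 of 4.
random.seed(0)
reports = [privatized_report(2, 4, epsilon=2.0) for _ in range(10_000)]
# Individually noisy, but variant 2 is still the clear majority answer.
print(max(set(reports), key=reports.count))  # → 2
```

The server sees only noisy indices; because the noise distribution is known, it can correct the aggregate counts statistically while learning nothing reliable about any individual device.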