Uncover the challenges and opportunities this alliance presents.
When OpenAI CEO Sam Altman attended Apple’s annual developer conference this week, he mingled with current and former Apple executives, including co-founder Steve Wozniak. Nearly an hour later, Apple announced a much-anticipated partnership with OpenAI to integrate ChatGPT technology into its devices later this year.
Despite his presence, Altman, who has become a prominent figure in generative AI since ChatGPT’s launch 18 months ago, was not featured in Apple’s formal presentation. He did not appear on stage or join Apple CEO Tim Cook and other executives in a private press event about privacy, security, and the partnership.
“I was not surprised Sam Altman did not appear on stage,” said Ben Wood, an analyst at market research firm CCS Insight, in an interview with CNN. “Apple needed to manage the message carefully. OpenAI is just a tool to address broader AI-powered inquiries that are not central to the Apple experience. Including him in the livestream would have caused unnecessary confusion.”
Earlier this week, Apple showcased several AI-powered features for the iPhone, iPad, and Mac, most of them powered by the company’s own technology, branded Apple Intelligence.
The company will incorporate OpenAI’s popular ChatGPT tool in a limited capacity, primarily when Siri requires additional assistance in responding to inquiries.
Inviting Altman but not having him appear publicly reflects Apple’s cautious approach to the partnership. OpenAI and other AI companies face ongoing concerns from researchers, industry experts, and government officials regarding misinformation, biases, copyright, privacy, security, and more. This deal comes as the industry rapidly evolves, with regulators, companies, and consumers still determining how to engage with the technology responsibly.
Apple hopes a significant push into AI will boost iPhone sales, which have stagnated for years without major upgrades. Consumers are taking longer to upgrade their devices amid an uncertain economic environment, particularly in China.
The company is also under regulatory scrutiny in Washington and had recently been overtaken by chipmaker Nvidia as the second-most valuable public company in the US. In the 60 hours after Apple’s Monday event, however, its stock (AAPL) surged as much as 10%, adding more than $300 billion to Apple’s market capitalization, putting it back ahead of Nvidia and in contention with Microsoft for the top spot.
This timing is particularly notable: Apple usually takes its time to research, develop, and perfect new technologies before integrating them into its products. However, the rapid adoption of generative AI worldwide is likely pushing the company to quickly incorporate the latest popular technology into its smartphones.
“Apple needed to present an AI narrative, and Apple Intelligence should help calm anxious investors and reassure them that Apple is staying competitive with its rivals,” Wood added. “The partnership with ChatGPT is a significant development that enhances Apple’s AI capabilities, and improvements like a much better Siri will be appreciated by users.”
Nevertheless, the partnership could expose Apple to some risk, since it does not control OpenAI’s models or how OpenAI handles user inputs. Aligning itself with a company and a technology that have not yet fully earned public trust could create challenges for Apple down the road.
OpenAI CEO Sam Altman, center, attends an Apple event in Cupertino, Calif., Monday, June 10, 2024.
A limited partnership
Although Apple has been developing its own AI program for years, partnering with OpenAI helps address competitive gaps.
When a user asks a question beyond Siri’s capabilities, ChatGPT can step in. In a demo following the keynote, Apple showed CNN how a user could upload a picture of vegetables at a farmer’s market and ask for dinner suggestions; Siri might suggest that ChatGPT is better suited to the question and prompt the user to consent before using the service.
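Apple has not published how that hand-off decision is made, but as described it amounts to a consent-gated fallback: try to answer on device, and only route the request to ChatGPT if the user explicitly agrees. The short Swift sketch below is a purely hypothetical illustration of that flow; every type and function in it (AssistantRequest, canHandleOnDevice, askChatGPT, and so on) is invented for the example and does not correspond to any real Apple or OpenAI interface.

```swift
import Foundation

// Hypothetical sketch of the consent-gated fallback described above.
// None of these types or functions are real Apple or OpenAI APIs;
// they are stand-ins for the flow shown in the demo.

struct AssistantRequest {
    let text: String
    let attachedImage: Data?   // e.g. a photo of vegetables at a farmer's market
}

enum AssistantResponse {
    case answeredOnDevice(String)
    case answeredByChatGPT(String)
    case declined
}

// Stand-in heuristic; the real routing logic is not public.
func canHandleOnDevice(_ request: AssistantRequest) -> Bool {
    request.attachedImage == nil && request.text.count < 80
}

// Placeholder for Apple's own on-device answer path.
func handleLocally(_ request: AssistantRequest) -> String {
    "On-device answer for: \(request.text)"
}

// Placeholder for the ChatGPT hand-off; per the reporting, the request
// is not tied to the user's Apple account.
func askChatGPT(_ request: AssistantRequest) -> String {
    "ChatGPT answer for: \(request.text)"
}

// Mirrors the per-request consent prompt Apple described.
func userConsentsToChatGPT(for request: AssistantRequest) -> Bool {
    print("Use ChatGPT to answer \"\(request.text)\"? (y/n)")
    return readLine()?.lowercased() == "y"
}

func respond(to request: AssistantRequest) -> AssistantResponse {
    if canHandleOnDevice(request) {
        return .answeredOnDevice(handleLocally(request))
    }
    guard userConsentsToChatGPT(for: request) else {
        return .declined
    }
    return .answeredByChatGPT(askChatGPT(request))
}

// Example: a question with an attached photo is routed to ChatGPT
// only if the user agrees.
let request = AssistantRequest(text: "What can I make for dinner with these?",
                               attachedImage: Data())
print(respond(to: request))
```

The point of the structure is simply that the third-party call sits behind an explicit, per-request consent check rather than happening automatically.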
Using ChatGPT as a complementary service could reduce risks for Apple. It’s also possible that Apple could partner with other AI companies in the future, such as Google’s Gemini or specialized providers in fields like healthcare.
“I think Apple will take a pragmatic approach to the OpenAI partnership,” Wood said. “If Apple finds that the relationship with OpenAI negatively impacts the overall user experience or creates security and data integrity issues, it might implement additional safeguards or seek other ways to deliver AI-powered content.”
A focus on privacy and security
Apple has emphasized its commitment to keeping user data private and secure with its proprietary technology, stating that most AI functions will be performed on the device itself, keeping inputs away from remote servers.
“As we build these incredible new capabilities, we want to ensure that the outcome reflects the principles at the core of our products. It has to be powerful enough to help with the things that matter most to you,” Cook said during the keynote. “It has to be intuitive and easy to use. It has to be deeply integrated into your product experiences.”
“And, of course, it has to be built with privacy, from the ground up,” he added.
Apple has stated it will not share any personal user information with OpenAI, meaning inquiries made through ChatGPT won’t be linked to an Apple user’s account. Additionally, Siri will prompt users for consent each time it wants to redirect a question to ChatGPT.
Wood believes the consent prompts and other safeguards that Apple is implementing “reflect its nervousness.”
Reece Hayden, a principal analyst at ABI Research, told CNN that Apple’s approach is smart because it provides customers with a choice regarding their data.
“By providing a phased approach that blends ChatGPT and native capabilities, users will worry less about the partnership,” he said. “Apple can also continue to highlight their own AI capabilities and mitigate some of the risks of being associated with OpenAI, which remains in a state of flux.”
Industry concerns
Companies like OpenAI have acknowledged the significant risks posed by AI, including manipulation and a potential loss of control that could threaten human existence. However, many experts, researchers, and AI employees believe more should be done to educate the public about these risks and the safeguards against them. Recently, a group of OpenAI insiders called for greater transparency about the risks of the technology they’re developing.
Therefore, it was not surprising when some industry observers, including Elon Musk, quickly reacted to Apple’s partnership with OpenAI.
In a post on X Monday, Musk stated he would ban Apple devices at his companies — including Tesla, SpaceX, and X — if Apple proceeded with its AI plans. He said integrating OpenAI at the operating system level would be “an unacceptable security violation.”
While employee use of AI models is a live debate across industries worldwide, Gartner analyst Annette Zimmermann said Musk’s reaction was somewhat misdirected, since the issue is not specific to iPhones.
“Any employee with a smartphone should follow company policies and avoid entering private information into the open domain of ChatGPT,” she said. “This is not specific to the iPhone or Tesla.”
Andrew Cornwall, a senior analyst at Forrester, told CNN he doubts that Apple users will become loyal to ChatGPT, as many people will not use the service unless Apple cannot provide a suitable response.
“When users do query ChatGPT, Apple will track the prompts and gather metrics to improve its own models,” he said. “Apple may switch providers or use more than one third party until it perfects its own model. At that point, Apple will shut the garden gate.”