Apple Wants AI to Run Natively on Its Devices Rather Than Through the Cloud

The latest findings from Apple’s research on running large language models on mobile devices are the clearest sign yet that the iPhone maker intends to compete with its Silicon Valley rivals in generative artificial intelligence.

In the paper, titled “LLM in a Flash,” the authors say their method addresses a “current computational bottleneck” and “paves the way for effective inference of LLMs on devices with limited memory.”
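The title refers to the paper’s core idea: keeping model weights in flash storage and streaming only the parameters needed at each step into the device’s limited DRAM. The snippet below is a loose, minimal sketch of that idea, not Apple’s implementation; the matrix shapes, the weights file, and the `predict_active_rows` stand-in are all hypothetical.

```python
import numpy as np

ROWS, COLS = 8192, 2048  # one oversized feed-forward weight matrix (hypothetical)

def make_fake_weights(path="weights.bin"):
    # Stand-in for a model stored on flash: random fp16 weights written to disk.
    np.random.randn(ROWS, COLS).astype(np.float16).tofile(path)
    return path

def predict_active_rows(x, k=256):
    # Stand-in for a learned sparsity predictor: a real system would cheaply
    # estimate which neurons will fire; here we just pick k rows pseudo-randomly.
    rng = np.random.default_rng(abs(hash(x.tobytes())) % (2**32))
    return np.sort(rng.choice(ROWS, size=k, replace=False))

def sparse_matvec_from_flash(path, x):
    # Memory-map the file so nothing is loaded into DRAM up front, then read
    # only the predicted-active rows: DRAM traffic is O(k*COLS), not O(ROWS*COLS).
    w = np.memmap(path, dtype=np.float16, mode="r", shape=(ROWS, COLS))
    rows = predict_active_rows(x)
    active = np.array(w[rows])   # only these rows are paged in from storage
    return rows, active @ x      # partial output for the active neurons only

path = make_fake_weights()
x = np.random.randn(COLS).astype(np.float16)
rows, y = sparse_matvec_from_flash(path, x)
print(f"read {len(rows)} of {ROWS} rows from flash; output shape {y.shape}")
```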

Inference is the process by which apps such as ChatGPT draw on vast data repositories known as large language models (LLMs) to answer user queries. Chatbots and LLMs are typically powered by big data centers with far more computing power than an iPhone.
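For readers unfamiliar with the term, the example below shows what inference looks like in practice using the open-source Hugging Face transformers library; the tiny GPT-2 model is chosen purely for illustration and is unrelated to Apple’s research.

```python
# Minimal inference example using the Hugging Face transformers library.
# GPT-2 is used only because it is small enough to run on a laptop.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Running large language models on a phone is hard because"
inputs = tokenizer(prompt, return_tensors="pt")

# Inference: the model generates a continuation one token at a time.
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```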

Although the paper was first published on December 12, it attracted wider attention after being highlighted late on Wednesday on Hugging Face, a popular site for AI researchers to showcase their work.

This is Apple’s second generative AI paper this month, following an effort to make image-generating models such as Stable Diffusion run on its bespoke chips.

Apple Wants AI to Run Directly on the iPhone

After the worst year in a decade for the smartphone market—shipments fell an estimated 5%, according to Counterpoint Research—device and chipmakers are pinning their hopes on new artificial intelligence technologies to help revitalize the industry.

Although Apple introduced the virtual assistant Siri back in 2011, the company has remained largely absent from the generative AI fervor that has swept Silicon Valley in the year since OpenAI released its groundbreaking chatbot, ChatGPT.

Despite bringing on board John Giannandrea, Google’s senior AI executive, in 2018, Apple was still seen by many in the AI world as falling behind its Big Tech competitors.

Apple’s research suggests the company will prioritize AI that runs directly on an iPhone, in contrast to Microsoft and Google, which have mostly concentrated on delivering generative AI services, such as chatbots, through their vast cloud computing platforms.

Apple’s competitors, such as Samsung, are preparing to introduce new “AI smartphones” in 2024. Counterpoint estimates that more than 100 million AI-focused smartphones will be sold in 2024, and that by 2027, 40% of all new devices will offer such capabilities.

Cristiano Amon, chief executive of Qualcomm, the world’s biggest mobile chipmaker, said adding AI to smartphones would give users a new experience and help reverse declining mobile sales.

“You’re going to see devices launch in early 2024 with a number of generative AI use cases,” he told the Financial Times in a recent interview. “As those things get scaled up, they start to make a meaningful change in the user experience and enable new innovation which has the potential to create a new upgrade cycle in smartphones.”

Devices will also be able to perform new kinds of photo editing, and more capable virtual assistants will be able to anticipate user actions such as sending texts and scheduling meetings. Google introduced its new Gemini LLM this month, a version of which will run “natively” on Pixel handsets.

Running the kind of complex AI models that power ChatGPT or Google’s Bard on a mobile device poses significant technical hurdles, because smartphones lack the massive computing resources and energy available in data centers.
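Some rough, back-of-the-envelope arithmetic illustrates the constraint; the parameter count and memory figures below are approximate and chosen for illustration only.

```python
# Rough, illustrative arithmetic; all figures are approximate.
params = 7e9          # a 7bn-parameter model, roughly the scale of Falcon 7B
bytes_per_param = 2   # 16-bit (fp16) weights
weights_gb = params * bytes_per_param / 1e9

phone_dram_gb = 8     # in the ballpark of a current flagship smartphone's RAM

print(f"fp16 weights alone: ~{weights_gb:.0f} GB")   # ~14 GB
print(f"flagship phone DRAM: ~{phone_dram_gb} GB")
# The weights alone exceed the phone's entire memory, before counting the OS,
# apps, and activations; hence the interest in streaming weights from flash.
```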

Solving this problem could also mean faster response times and the ability to use AI assistants offline. Answering queries on a person’s own device, without sending data to the cloud, would also strengthen privacy, a major point of differentiation for Apple in recent years.

“Our experiment is designed to optimize inference efficiency on personal devices,” the researchers wrote. Apple tested its approach on models including Falcon 7B, a smaller version of an open-source LLM created by Abu Dhabi’s Technology Innovation Institute.

Optimizing LLMs to run on battery-powered devices has become a growing focus for AI researchers. Academic papers do not reveal Apple’s future product plans, but they offer a rare glimpse into the company’s secretive research labs and its latest technical advances.

“Our work not only provides a solution to a current computational bottleneck but also sets a precedent for future research,” wrote Apple’s researchers in the conclusion to their paper.

“We believe as LLMs continue to grow in size and complexity, approaches like this work will be essential for harnessing their full potential in a wide range of devices and applications.”

