Since the launch of ChatGPT by OpenAI, the realm of generative AI has become a competitive arena for tech giants like Google, Microsoft, and Amazon.
Remarkable strides have been made in a short time, with these companies turning generative artificial intelligence (AI) into a mainstream force and driving innovation across the tech landscape.
While other tech giants have long been engaged in this fierce competition, Apple has primarily remained a spectator in this arena — despite integrating AI into some products, the company hasn’t prominently displayed developments in generative AI.
But there are signs this may all be about to change.
Key Takeaways
- Apple has been remarkably quiet on the AI front — but if you look around, you may find hints they are cooking something special at their headquarters in Cupertino, California.
- Apple is investing in or working on AI on many fronts, from frameworks and hardware acquisitions to in-house AI models and research into running AI natively on small devices.
- Apple’s AI model ‘Ajax’ and open-source framework MLX also point to an in-house desire to enter the generative AI world.
- Reports also suggest Apple is investing billions of dollars in Nvidia AI server hardware across 2024.
- Is this Apple’s familiar playbook: not first to market, but the best when it arrives?
From Deployment to Development
Back in June 2017, Apple unveiled the Core ML framework at its Worldwide Developers Conference (WWDC).
Core ML is designed to deploy pre-trained machine learning models into applications across Apple devices. This introduction, early in AI’s mainstream life, reflected Apple’s initial, largely passive approach to AI: deploying existing models rather than developing its own.
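To make that deployment-first posture concrete, here is a minimal sketch of the typical workflow using Apple’s coremltools Python package: a tiny PyTorch model stands in for any pre-trained network and is converted into a Core ML package that an app can bundle. The model, shapes, and file name below are hypothetical placeholders for illustration, not anything from Apple’s documentation.

```python
# A minimal sketch of the "deploy a pre-trained model with Core ML" workflow,
# assuming coremltools and torch are installed (pip install coremltools torch).
# The tiny model and file name here are hypothetical placeholders.
import coremltools as ct
import torch

# Stand-in for a real pre-trained network.
model = torch.nn.Linear(4, 2).eval()
example_input = torch.rand(1, 4)
traced = torch.jit.trace(model, example_input)

# Convert the traced model into a Core ML "ML Program" package.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example_input.shape)],
    convert_to="mlprogram",
)
mlmodel.save("TinyLinear.mlpackage")  # ready to drop into an Xcode project
```

Once saved, the .mlpackage can be loaded on-device through the Core ML APIs, which is exactly the deployment step the framework was built for.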
However, Apple’s recent introduction of MLX, an open-source machine learning framework that empowers developers to construct generative AI models on Apple Silicon, showcases its evolution from a passive user to an active developer.
As stated by Apple on GitHub, MLX takes inspiration from frameworks like PyTorch, JAX, and ArrayFire, and its distinctive feature is a unified memory model.
Arrays live in memory shared by the supported devices (currently the CPU and GPU), so operations can run on any of them without moving data, and the framework is accompanied by a library of pre-trained generative AI examples to speed up development.
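For a sense of what that unified memory model looks like in practice, here is a small sketch using MLX’s Python API (assuming the mlx package is installed on an Apple Silicon Mac); the shapes are arbitrary and chosen purely for illustration.

```python
# A minimal sketch of MLX's unified memory model: the same arrays can feed
# operations scheduled on either the CPU or the GPU, with no explicit copies.
import mlx.core as mx

a = mx.random.normal((1024, 1024))
b = mx.random.normal((1024, 1024))

# No .to(device) / transfer step is needed; the op just targets a device.
c_gpu = mx.matmul(a, b, stream=mx.gpu)  # scheduled on the GPU
c_cpu = mx.matmul(a, b, stream=mx.cpu)  # scheduled on the CPU

# MLX is lazy; eval() forces the computations to run.
mx.eval(c_gpu, c_cpu)
print(c_gpu.shape, c_cpu.shape)
```

The design choice to keep arrays in shared memory is what removes the host-to-device copy step familiar from other GPU frameworks.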
Empowering Generative AI on Everyday Devices
Frameworks are one thing, but Apple is also actively working to extend the advantages of generative AI to everyday Apple devices, with the commitment evident in recent developments.
Analyst Ming-Chi Kuo estimates that Apple has made substantial investments in AI servers, procuring 2,000–3,000 units in 2023 and planning to acquire an additional 18,000–20,000 units in 2024, constituting 5% of worldwide AI server shipments for the year.
The company is thought to be buying Nvidia’s HGX H100 8-GPU systems, which are designed specifically for generative AI training and inference. Kuo puts Apple’s spending on AI servers at a minimum of $620 million in 2023 and a projected $4.75 billion in 2024, suggesting Apple is positioning itself to play a significant role in the rapidly evolving generative AI landscape.
Additionally, Apple has been addressing the challenge of running large generative AI models on devices with limited memory, according to a research paper by Apple researchers (PDF). Their approach stores the model’s parameters in flash memory, which is plentiful on mobile devices, rather than in the device’s limited RAM, and loads them on demand during inference. Could this point toward a native AI model embedded in the iPhone?
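The core idea can be illustrated with a toy sketch (this is not Apple’s code, and the paper’s actual techniques, such as windowing and row-column bundling, are considerably more involved): keep a layer’s weights memory-mapped on flash storage and copy into RAM only the rows needed for the neurons that are actually active.

```python
# Toy illustration of serving weights from flash instead of RAM (not Apple's method).
# A layer's weight matrix stays memory-mapped on disk/flash; only the rows for the
# currently active neurons are copied into RAM for the forward pass.
import numpy as np

ROWS, COLS = 50_000, 1_024  # hypothetical layer dimensions

# In practice this file would be part of the model checkpoint stored on flash.
weights = np.memmap("layer0.bin", dtype=np.float16, mode="w+", shape=(ROWS, COLS))

def load_active_rows(active_idx: np.ndarray) -> np.ndarray:
    """Read only the selected rows from flash into an in-RAM array."""
    return np.asarray(weights[active_idx])  # fancy indexing copies just those rows

# Sparse activations: suppose ~2% of neurons fire for this token.
active = np.sort(np.random.choice(ROWS, size=ROWS // 50, replace=False))
hot = load_active_rows(active)
print(hot.shape)  # (1000, 1024) resident in RAM; everything else stays on flash
```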
Building Foundational Models
Moving on to models, Apple is actively constructing its own foundational generative AI. One notable model Apple has developed is called “Ajax.”
Reportedly, Ajax consists of 200 billion parameters and has demonstrated performance comparable to recent models from OpenAI. This move towards self-reliance aligns with Apple’s historical approach of maintaining an integrated technology stack.
Moreover, Apple is collaborating with external partners to advance open-source capabilities in generative AI. A noteworthy example is the multimodal AI model Ferret, developed with researchers from Cornell University.
Ferret can detect semantic objects and concepts within user-specified regions of an image and hold extended, multi-turn conversations with the user.
Potential Use Cases Across the Apple Ecosystem
As we explore recent developments from Apple, let’s envision the potential impact on their future products. Given the vast potential of generative AI, the list provided below is not exhaustive.
- Next-gen Siri: Siri, Apple’s widely recognized assistant, stands as a flagship product. Generative AI could enhance Siri’s capabilities, enabling it to understand complex queries, grasp user intent, and deliver more detailed, personalized responses on intricate topics in a natural, conversational manner. Generative AI could also introduce multimodal features, allowing users to interact with Siri through voice commands and images, making for a more inclusive and versatile experience. As Apple advances in this direction, the future suggests a smarter and more user-friendly Siri.
- Generative AI in Creative Tools: Apple has significant potential in integrating generative AI into its creative tools. For instance, imagine extending a photo seamlessly beyond its original bounds or effortlessly adding or removing specific people identified by name. In video editing, imagine generating an entirely new backing track for an iMovie project by describing it in a few words. The possibilities could also extend to personalized Memoji generation based on a 3D scan of your face, offering users a more immersive and customized creative experience. Apple’s foray into generative AI within creative tools opens doors to innovative and user-centric functionality.
- Developer and Customer Support: Apple is actively working on creating updated versions of tools such as Xcode and other programming aids, integrating generative AI to assist users in completing their code. On the customer support front, Apple is also in the process of crafting an AI-powered system for AppleCare employees. This system is designed to provide enhanced support, enabling AppleCare representatives to troubleshoot technical issues more effectively and offer customers comprehensive assistance in resolving their concerns.
- Convenience and productivity features: In the domain of convenience and productivity, Apple could integrate AI features into its iWork suite (Pages, Numbers, Keynote). Like Microsoft’s approach with Office, Apple could use AI-driven functionalities that go beyond static templates. Users could instruct the AI to create a one-page resume, generate a formatted cover letter from a short prompt, or build a presentation based on minimal input, such as key points, an audio recording, or photos.
The Bottom Line
In the competitive landscape of generative AI, Apple’s silent entry may well transform into a significant presence.
We have yet to see mobile use of AI take off beyond a handful of generative apps, tentative steps by Samsung, and mobile access to tools like ChatGPT.
Apple’s focus appears to be on everyday device integration, and if this is a problem they can solve within the next few years, we may see AI take another leap forward in everyday adoption.
This may hark back to the old days of Apple: not always first to market, but doing it best when they arrive.
References
- A Deep Dive into Apple’s Machine Learning Framework (Yufeng Chen on Medium)
- MLX on GitHub (Apple)
- Apple’s ‘Ferret’ is a new open-source machine learning model (Apple Insider)
- What is Apple GPT? ‘Ajax’, Release Date, Latest News And Features (Mark Ellis Reviews)
- LLM in a Flash: Efficient Large Language Model Inference with Limited Memory (arXiv, PDF)