Saturday, December 21, 2024

Meta in Partnership with Arm to Enhance Smartphones with Advanced AI

In Short:

Meta Connect 2024, held on Wednesday, introduced new AI features and wearable devices. Meta announced a partnership with Arm to create small language models (SLMs) for smartphones and other devices, enhancing on-device computing. These models aim to make AI more intuitive, enabling devices to perform tasks like making calls without user commands. Details on the SLMs remain limited.


Meta Connect 2024, the company’s annual developer conference, took place on Wednesday, showcasing a range of new artificial intelligence (AI) features and wearable devices. Meta also announced a strategic partnership with the chip designer Arm to develop specialized small language models (SLMs) aimed at enhancing the functionality of smartphones and other devices. The initiative is designed to leverage on-device and edge computing, enabling faster AI inference.

Advancements in AI Functionality

According to a report by CNET, Meta and Arm intend to create AI models capable of performing more advanced tasks directly on devices. For example, the AI could function as a virtual assistant, enabling tasks such as making phone calls or capturing photographs. Currently, AI tools are utilized for various functions, including image editing and email drafting; however, user interaction is typically required to initiate these tasks.

Enhancing User Interaction

During the event, Meta and Arm emphasized their goal to make AI models more intuitive and responsive, reducing the reliance on manual commands or interactions with the interface. A key approach to achieving this involves deploying AI models directly on devices or utilizing nearby servers for edge computing, a method already adopted by research institutions and large enterprises.

Size and Capability Considerations

Ragavan Srinivasan, Vice President of Product Management for Generative AI at Meta, noted that developing these new AI models presents a significant opportunity. However, the models must be compact to run effectively on smaller devices. While Meta has previously developed large language models (LLMs) with as many as 90 billion parameters, these are too large for fast processing on such hardware. The Llama 3.2 models, at 1 billion and 3 billion parameters respectively, are expected to be better suited to this purpose.
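The size gap matters because a model's weights must fit in a device's limited memory. A rough back-of-envelope sketch (not a figure from Meta or Arm; the byte-widths per parameter are standard values for 16-bit and 4-bit quantized weights) illustrates why a 1B- or 3B-parameter model is plausible on a smartphone while a 90B-parameter one is not:

```python
# Rough weight-storage estimates for the model sizes mentioned above.
# fp16 stores each parameter in 2 bytes; 4-bit quantization in 0.5 bytes.

def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the model weights, in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

models = {"1B": 1, "3B": 3, "90B": 90}

for name, size in models.items():
    fp16 = model_memory_gb(size, 2.0)   # 16-bit weights
    int4 = model_memory_gb(size, 0.5)   # 4-bit quantized weights
    print(f"{name}: ~{fp16:.1f} GB at fp16, ~{int4:.1f} GB at 4-bit")
```

By this estimate, the 1B model needs roughly 2 GB at fp16 (about 0.5 GB quantized), comfortably within a phone's RAM, whereas the 90B model needs around 180 GB at fp16, far beyond any mobile device. Actual runtime memory is higher once activations and caches are included, so these figures are a lower bound.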

Collaboration with Arm

Another critical aspect is equipping AI models with enhanced capabilities beyond simple text generation and computer vision. This is where Arm plays an essential role. The report indicates that Meta is collaborating closely with Arm to create processor-optimized AI models that can seamlessly adapt to the workflows of devices such as smartphones, tablets, and laptops. Currently, no additional details regarding the SLMs have been disclosed.
