Meta AI Model

Meta AI has reportedly developed a compact artificial intelligence model dubbed ‘MobileLLM’ that performs on par with larger AI models despite having far fewer parameters, making it well suited to smartphones with limited computing power.

The research team, which included members of Meta Reality Labs, PyTorch, and Meta AI Research (FAIR), concentrated on optimizing models with fewer than one billion parameters. This is a fraction of the size of models like GPT-4, which is thought to contain over a trillion parameters.

Key Improvements

Key improvements in MobileLLM include emphasizing model depth over width, integrating embedding sharing and grouped-query attention, and employing a novel immediate block-wise weight-sharing approach.
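Two of those techniques can be illustrated with simple parameter arithmetic: embedding sharing reuses the input embedding matrix as the output head, and grouped-query attention lets several query heads share one key/value head. The sketch below counts the savings; all sizes and head counts are illustrative, not MobileLLM's actual configuration.

```python
# Hypothetical sketch of two parameter-saving techniques used in compact LLMs:
# embedding sharing and grouped-query attention (GQA).
# The dimensions below are made up for illustration.

def embedding_params(vocab_size: int, dim: int, shared: bool) -> int:
    """Input embedding + output head; sharing stores the matrix once."""
    return vocab_size * dim * (1 if shared else 2)

def attention_params(dim: int, n_query_heads: int, n_kv_heads: int) -> int:
    """Per-layer attention projections: Q uses all heads, K/V only kv heads."""
    head_dim = dim // n_query_heads
    q_proj = dim * (n_query_heads * head_dim)    # == dim * dim
    kv_proj = 2 * dim * (n_kv_heads * head_dim)  # shrinks as KV heads shrink
    out_proj = dim * dim
    return q_proj + kv_proj + out_proj

# Illustrative sub-billion-parameter configuration
vocab, dim = 32000, 576
print(embedding_params(vocab, dim, shared=False))  # two separate matrices
print(embedding_params(vocab, dim, shared=True))   # one shared matrix: half the cost

# 9 query heads sharing 3 key/value heads vs. full multi-head attention
print(attention_params(dim, n_query_heads=9, n_kv_heads=9))
print(attention_params(dim, n_query_heads=9, n_kv_heads=3))
```

For a sub-billion-parameter model, the embedding matrices are a large share of the total budget, which is why halving them via sharing matters far more here than it would for a trillion-parameter model.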

These design decisions enabled MobileLLM to outperform prior models of comparable size by 2.7% to 4.3% on common benchmark tasks.

While these single-digit gains may appear insignificant, they represent a meaningful advance in the highly competitive field of language model development.

Other Meta AI Technologies

Meta’s AI technologies continue to progress alongside the rest of the industry, though some efforts have fared better than others. Meta Platforms, Facebook’s parent company, revealed a new set of artificial intelligence tools in April. CEO Mark Zuckerberg claims it is the most sophisticated AI assistant that consumers can use for free.

Despite this lofty claim, Meta’s AI bots have had difficulty interacting with real consumers. With parameters ranging from 8 billion to 70 billion, these AI language models are trained on large datasets to predict the most likely next word in a sentence.
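The core idea of next-word prediction can be sketched in a few lines: count which word most often follows each word in a corpus, then pick the most likely continuation. Real models learn this from vast datasets with billions of parameters rather than simple counts; the toy corpus below is purely illustrative.

```python
# Toy next-word predictor: frequency counts stand in for a learned model.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words follow it in the corpus.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

An LLM replaces these raw counts with a probability distribution computed by a neural network, but the output interface is the same: given a context, return the most likely continuation.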

Despite their sophisticated design, Meta’s AI agents have shown unusual behavior, such as joining online forums and engaging in conversations that occasionally perplex human participants. This conduct highlights the continued difficulty of building AI systems that interact smoothly with people.

Other Compact AI Models

Meta AI’s MobileLLM also arrives a few months after Microsoft released a similar compact AI model, Phi-3-mini.

According to reports, the model is intended to handle simpler tasks at a lower cost for businesses with fewer resources.

The company has introduced Phi-3-mini, the first of its three small language models (SLMs), as it bets on a technology expected to have a global influence on how people work.

According to Sébastien Bubeck, Microsoft’s vice president of GenAI research, Phi-3 is not just somewhat but dramatically cheaper than other models with comparable capabilities.

Microsoft argues that, while large language models remain the ‘gold standard’ for many AI tasks, the Phi-3 series of open models are the most capable and cost-effective small language models available.

The new lightweight AI model is available on Hugging Face, a platform for machine learning models; Ollama, a lightweight framework for running models locally; and the Microsoft Azure AI Model Catalog.

Furthermore, it will be available as an NVIDIA NIM microservice with a standard API that can be deployed anywhere.

Microsoft also announced that additional Phi-3 family models would follow shortly, giving users more choices across price and quality. Phi-3-small (7 billion parameters) and Phi-3-medium (14 billion parameters) will soon be available in many model gardens and the Azure AI Model Catalog.

Related Article: Brazil Bans Meta From Mining Data to Train Its AI Models

Written by Aldohn Domingo

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.