
Meta Unveils Advanced Multimodal AI Models and the Future of AI Innovation

Meta has officially introduced the latest generation of its Llama artificial intelligence (AI) models, with the Llama 4 series making waves in the tech community. Unveiled on April 4, the new models represent a significant step forward as Meta positions itself among the leaders in AI innovation.

The Llama 4 models, including Llama 4 Scout and Llama 4 Maverick, are touted as the first open-weight natively multimodal models, meaning they can process and interpret various types of media, not just text. The announcement also emphasized that the company is previewing Llama 4 Behemoth, positioned as “one of the smartest LLMs in the world and our most powerful yet to serve as a teacher for our new models.”

Meta’s CEO, Mark Zuckerberg, previously said the company would invest up to $65 billion in 2025 to bolster its AI capabilities. Such a substantial investment underscores the growing importance of AI within the company’s vision, which extends beyond social media into functional applications such as premium subscriptions for its AI assistant, Meta AI.

In parallel news, OpenAI has revealed plans to release an open-source version of one of its LLMs, its first such release in years. This strategic move responds to growing interest in open-source AI technologies and draws comparisons with Meta’s Llama models, which have been downloaded one billion times since their launch in 2023. As companies embrace both proprietary and open-source models, the landscape of AI development continues to evolve rapidly.