Sunday, October 20, 2024

Meta Introduces Advanced AI Models and Film Tools to Foster Innovation


In Short:

Meta, the parent company of Facebook, is launching new AI models, including a “Self-Taught Evaluator” that reduces human input in AI development. The company also introduced Meta Segment Anything 2.1 for image segmentation and the open-source Meta Spirit LM, which combines text and speech, and unveiled Movie Gen, an AI tool for creating videos and audio from text. The technology will be refined through collaborations with filmmakers and content creators.



Meta Unveils New AI Models

Meta, the parent company of Facebook, announced on Friday the release of a series of new artificial intelligence (AI) models from its research division. Among them is the “Self-Taught Evaluator,” a model designed to minimize human involvement in the AI development process. Meta’s Fundamental AI Research (FAIR) team introduced a range of models and tools intended to advance the company’s work toward advanced machine intelligence (AMI). Key releases include Meta Segment Anything Model (SAM) 2.1, an updated model for improved image segmentation, and Meta Spirit LM, a multimodal language model that integrates text and speech for more natural interactions. Notably, Meta Spirit LM is the company’s first open-source multimodal language model to freely combine text and speech.
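
For readers who want a sense of how SAM-family models are used in practice, here is a minimal point-prompted segmentation sketch. It assumes the SAM 2.1 checkpoints are accessed through the sam2 package’s SAM2ImagePredictor interface, following the pattern of Meta’s earlier SAM releases; the checkpoint name and exact call signatures are assumptions, not details from this article.

```python
# Hypothetical sketch: point-prompted segmentation with a SAM 2.x predictor.
# Package, checkpoint name, and signatures follow the pattern of Meta's
# earlier SAM releases and may differ in the actual SAM 2.1 distribution.
import numpy as np
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2.1-hiera-large")

image = np.array(Image.open("photo.jpg").convert("RGB"))
predictor.set_image(image)  # compute image embeddings once per image

# One foreground click at pixel (x=500, y=375); label 1 = foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks
)
best_mask = masks[scores.argmax()]  # boolean mask of the clicked object
```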

Additional Innovations

The advancements also feature Layer Skip, an end-to-end solution that speeds up large language model (LLM) generation by drafting tokens with a model’s early layers and verifying them with the remaining layers, and SALSA, code for benchmarking AI-based attacks used to validate the security of post-quantum cryptography standards. Meta further introduced Meta Open Materials 2024, a dataset curated for AI-driven materials discovery, along with Meta Lingua, a lightweight, self-contained codebase aimed at efficient AI model training.
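
The draft-and-verify idea behind Layer Skip can be illustrated with a toy loop: a cheap early-exit pass proposes a few tokens, and the full model accepts the longest prefix it agrees with. The sketch below is a deliberate simplification, not Meta’s implementation; real systems verify the whole draft in one batched forward pass and reuse the early layers’ computation, and both stand-in predictors here are hypothetical.

```python
# Toy sketch of self-speculative decoding, the idea behind Layer Skip.
# Not Meta's implementation: real systems verify a whole draft in one
# batched forward pass and share the early layers' KV cache.

def early_exit_next_token(context: list[int]) -> int:
    """Stand-in for a forward pass through only the first few layers."""
    return (sum(context) * 31 + 7) % 100

def full_model_next_token(context: list[int]) -> int:
    """Stand-in for a forward pass through all layers."""
    return (sum(context) * 31 + 7) % 100  # toy: always agrees with draft

def generate(context: list[int], n_tokens: int, draft_len: int = 4) -> list[int]:
    out = list(context)
    while len(out) - len(context) < n_tokens:
        # 1) Draft a short continuation with the cheap early-exit path.
        draft = []
        for _ in range(draft_len):
            draft.append(early_exit_next_token(out + draft))
        # 2) Verify with the full model: accept the longest agreeing
        #    prefix, correct the first mismatch, then repeat.
        accepted = []
        for tok in draft:
            expected = full_model_next_token(out + accepted)
            accepted.append(expected)
            if tok != expected:
                break  # rest of the draft is discarded
        out.extend(accepted)
    return out[len(context):len(context) + n_tokens]

print(generate([1, 2, 3], n_tokens=8))
```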

Meta Open Materials 2024 offers open-source models and data built from 100 million training examples, providing an accessible resource for the materials discovery and AI research communities.

The Self-Taught Evaluator offers a novel approach to generating synthetic preference data for training reward models, eliminating the need for human annotations. Meta’s researchers reportedly trained the evaluator exclusively on AI-generated data, removing human input from that phase entirely.
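
To make that loop concrete, here is a compact sketch of the iterative, label-free scheme the description above implies: the model builds its own preference pairs, judges them, keeps only judgments that prefer the intended response, and fine-tunes on those. Every class and helper below is a hypothetical stand-in, not Meta’s code.

```python
# Illustrative sketch of a self-taught evaluator loop (no human labels).
# All components are stubs; only the shape of the loop is the point.
from dataclasses import dataclass
import random

@dataclass
class Verdict:
    prefers_first: bool
    reasoning: str

class StubModel:
    """Stand-in for an LLM used both as generator and as judge."""
    def generate(self, instruction: str) -> str:
        return f"response({instruction})"

    def evaluate(self, instruction: str, a: str, b: str) -> Verdict:
        # Real systems sample a reasoning chain plus a verdict; we fake one.
        return Verdict(random.random() > 0.3, f"compared answers for: {instruction}")

def perturb(instruction: str) -> str:
    """Corrupt the instruction so its answer is likely worse for the
    original task, yielding a preference pair with no human label."""
    return instruction + " (but answer a subtly different question)"

def finetune(judge: StubModel, examples: list) -> StubModel:
    """Stand-in: fine-tune the judge on its own accepted judgments."""
    return judge

def self_taught_evaluator(seed: StubModel, instructions: list[str], n_iter: int = 3):
    judge = seed
    for _ in range(n_iter):
        accepted = []
        for instr in instructions:
            good = seed.generate(instr)          # response to the real task
            bad = seed.generate(perturb(instr))  # likely-worse contrast
            verdict = judge.evaluate(instr, good, bad)
            if verdict.prefers_first:            # keep only "correct" verdicts
                accepted.append((instr, good, bad, verdict.reasoning))
        judge = finetune(judge, accepted)        # iterate on its own outputs
    return judge

judge = self_taught_evaluator(StubModel(), ["summarize this article"])
```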

“As Mark Zuckerberg highlighted in a recent open letter, open-source AI ‘holds unprecedented potential to enhance human productivity, creativity, and quality of life,’ while also driving economic growth and advancing significant medical and scientific research,” stated Meta.

Launch of Meta Movie Gen

Earlier, on October 4, Meta launched Movie Gen, a suite of AI models capable of generating 1080p video and synchronized audio from simple text prompts. According to Meta, the models deliver high-definition video generation, personalized content, and precise editing, surpassing similar tools available in the industry. Although Movie Gen is still under development, Meta is collaborating with filmmakers to refine its capabilities, which may be used in social media and creative content production.

“Our initial generative AI initiatives began with the Make-A-Scene model series, allowing for the creation of images, audio, video, and 3D animations. Following that, diffusion models birthed a second wave, utilizing Llama Image foundation models to produce higher-quality images and videos and enabling effective image editing. Now, Movie Gen marks our third wave, integrating all these modalities and providing unprecedented fine-grained control for users,” commented Meta.

Movie Gen delivers four principal capabilities: video generation, personalized video creation, precise editing, and audio generation. Meta asserts that these models are developed using a combination of licensed and publicly available datasets.

Collaborating with Filmmakers

On October 17, Meta announced its collaboration with Blumhouse and other filmmakers through a pilot program to test Movie Gen prior to its public launch. Initial feedback indicates that this tool could assist creatives in rapidly exploring visual and audio ideas, although it is not intended as a substitute for traditional filmmaking. Meta plans to leverage insights from this pilot to refine Movie Gen ahead of its full-scale launch.

“While we don’t anticipate incorporating Movie Gen models into any public products until next year, Meta recognizes the importance of engaging in an open dialogue with the creative community to optimize the tool for creative applications and ensure responsible use,” stated Connor Hayes, VP of GenAI at Meta.

“These tools are geared to empower directors, and it is crucial to involve the creative industry during their development to ensure they are ideally suited for the tasks at hand,” added Jason Blum, founder and CEO of Blumhouse.

Meta intends to extend the pilot program for Movie Gen into 2025 to continue improving the models and user interfaces. The company also plans to collaborate with partners in the entertainment industry alongside digital-first content creators.

