Meta is working on a new AI model aimed at powering Facebook’s video recommendation engine across all its platforms, a company executive has revealed.
Tom Alison, the head of Facebook, said that Meta’s “technology roadmap that goes to 2026” includes the development of an AI recommendation model capable of powering both the company’s TikTok-like Reels short-video service and its more conventional, longer videos.
Speaking at Morgan Stanley’s tech conference in San Francisco, Alison said that until now Meta has generally used separate models for each of its products, such as Reels, Groups, and the core Facebook Feed, CNBC reports.
As Meta delves ambitiously into AI, the company has been spending billions of dollars on Nvidia graphics processing units (GPUs), which have emerged as the primary chips utilised by AI researchers. These GPUs are used in training large language models, like those powering OpenAI’s widely used ChatGPT chatbot and other generative AI models.
Alison outlined that the initial phase of Meta’s tech roadmap, termed “phase 1,” revolved around transitioning the company’s existing recommendation systems from traditional computer chips to GPUs. This strategic move aims to enhance the overall performance of Meta’s products.
Alison further highlighted that as interest in large language models (LLMs) surged last year, Meta executives were struck by how these big AI models could “handle lots of data and all kinds of very general-purpose types of activities like chatting.”
Recognising the potential, Meta envisioned a single giant recommendation model applicable across its products. By last year, the company had developed “this kind of new model architecture” and tested it on Reels.
This new “model architecture” contributed to Facebook achieving an “8% to 10% gain in Reels watch time” within the core Facebook app. Alison emphasised that this success demonstrated that the model was “learning from the data much more efficiently than the previous generation.”
Meta has now progressed to “phase 3” of its system re-architecture, focusing on validating the technology and extending it across multiple products.
“Instead of just powering Reels, we’re working on a project to power our entire video ecosystem with this single model, and then can we add our Feed recommendation product to also be served by this model,” Alison said. “If we get this right, not only will the recommendations be kind of more engaging and more relevant, but we think the responsiveness of them can improve as well.”
Explaining how it will work if successful, Alison said, “If you see something that you’re into in Reels, and then you go back to the Feed, we can kind of show you more similar content.”
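Meta has not published how this unified model works, but the behaviour Alison describes — engagement on one surface influencing recommendations on another — can be illustrated with a toy sketch. Everything below (the class, item representations, and scoring) is hypothetical and only demonstrates the general idea of one shared model serving both Reels and Feed:

```python
# Illustrative sketch only; this is NOT Meta's architecture.
# One shared model serves both Reels and Feed, so watching a Reel
# shifts the ranking of Feed candidates via a common user profile.
from collections import defaultdict

class UnifiedRecommender:
    def __init__(self, item_embeddings):
        # item_id -> topic-interest vector (hypothetical representation)
        self.item_embeddings = item_embeddings
        self.user_profile = defaultdict(float)  # shared across surfaces

    def record_engagement(self, item_id, weight=1.0):
        # Watching a Reel updates the SAME profile used to rank Feed.
        for topic, score in self.item_embeddings[item_id].items():
            self.user_profile[topic] += weight * score

    def rank(self, candidate_ids):
        # Score candidates from any surface against the shared profile.
        def score(item_id):
            emb = self.item_embeddings[item_id]
            return sum(self.user_profile[t] * s for t, s in emb.items())
        return sorted(candidate_ids, key=score, reverse=True)

items = {
    "reel_baking": {"baking": 1.0},
    "post_baking_tips": {"baking": 0.9},
    "post_sports_news": {"sports": 0.8},
}
rec = UnifiedRecommender(items)
rec.record_engagement("reel_baking")  # user watches a baking Reel
print(rec.rank(["post_sports_news", "post_baking_tips"]))
# -> ['post_baking_tips', 'post_sports_news']
```

In this toy version, the baking Reel the user just watched promotes the related baking post in Feed — the “show you more similar content” behaviour Alison describes, achieved simply by having both surfaces read and write one user representation instead of maintaining separate per-product models.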
Meta is exploring various generative AI projects, including the integration of more advanced chatting tools into its core Feed. For instance, if a user comes across a “recommended post about Taylor Swift,” they could potentially interact by simply clicking a button and asking Meta AI for further information about Taylor Swift.
Furthermore, Meta is experimenting with incorporating its AI chatting tool within Groups. This means that a member of a Facebook baking group, for example, could inquire about desserts and receive responses from a digital assistant.