Meta is enhancing its AI assistant by giving it the ability to have spoken conversations. At the Meta Connect event, CEO Mark Zuckerberg announced that the voice features will allow for “natural” conversations with the AI, powered by its Llama 3.2 model. Zuckerberg believes voice is a more natural interface than text and has the potential to become one of the most frequent ways people interact with AI.
Meta’s AI, which is integrated into Instagram, WhatsApp, Messenger, and Facebook, already has 500 million monthly users and is on track to become the most-used AI assistant by the end of this year. To make conversations more engaging, Meta has partnered with celebrities to embed their voices into the assistant, including Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key, and Kristen Bell. This is Meta’s second attempt at celebrity AI integration, after scrapping its “AI influencers” program last month.
Meta is also focusing on audio and video translation, with plans to introduce an audio translation tool for Instagram Reels and an auto-dubbing and lip-syncing feature that will make creators appear to be speaking in a different language. The company is currently testing these features on Instagram and Facebook for creators in Latin America and the US, with plans to expand to more creators and languages.
Meta AI is also getting an expanded set of photo capabilities, including the ability to generate content based on users’ interests or current trends; this AI-generated content will appear in Facebook and Instagram feeds. Additionally, Meta AI will be able to add hats, glasses, and other accessories to photos, making them more fun and engaging.
Overall, these new features aim to make Meta’s AI assistant more interactive and entertaining, and a more integral part of users’ social media experience.