Snapchat to Introduce AI-Powered Video Generation Tool for Creators, Improvements to My AI

Snapchat unveiled new artificial intelligence (AI) tools for users on Tuesday during its 6th annual Snap Partner Summit. The social media giant announced plans to launch an AI video generation tool for those with creator accounts, enabling them to create videos from text and image prompts. All AI-generated videos will feature a watermark to help distinguish them from authentic content.

Snapchat Launches AI Video Tool and Additional Features
In a press release, the company highlighted the new offerings, with the AI Video tool, called Snap AI Video, as a standout feature. This tool will be exclusive to Creators on the platform, who must have a public profile, actively post to their Stories and Spotlight, and maintain a substantial audience.

The feature functions like a typical AI video generator, allowing for video creation from text prompts, with plans to expand to image prompts soon. Currently, it is available in beta on the web for a select group of creators.

A company spokesperson informed TechCrunch that the AI feature utilizes Snap's proprietary foundational video models. Once widely rolled out, the company plans to implement icons and context cards to indicate when a Snap has been created using AI. A specific watermark will remain visible even if the content is downloaded or shared.

The spokesperson also mentioned that the video models have undergone extensive testing and safety evaluations to ensure they do not produce harmful content.

In addition, Snapchat has introduced a new AI Lens that lets users see an older version of themselves. Snapchat Memories, available to Snapchat+ subscribers, will now include AI-generated captions and Lenses. Furthermore, My AI, the company's built-in chatbot, will also receive enhancements and be capable of performing several new tasks.

Snapchat reports that users can now tackle more complex tasks with My AI, such as interpreting parking signs, translating foreign menus, and identifying unusual plants. Additionally, the company is collaborating with OpenAI to give developers access to multimodal large language models (LLMs), enabling them to create Lenses that can recognise objects and offer more contextual information.