Introducing VLOGGER: a leap into the future of audio-driven human video synthesis!

Google Research has just unveiled VLOGGER, a framework that brings a single still image to life as a dynamic video driven by audio input.

Imagine generating lifelike videos of people talking, complete with head motion, gaze, lip movement, and even upper-body gestures, all from a single photo and a piece of audio!

No need for person-specific training, face detection, or cropping. Google Research’s approach generates the entire image, capturing a broad spectrum of human communication scenarios.
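VLOGGER's code and weights have not been released, so the following is only a rough mental model: a minimal sketch of the kind of two-stage pipeline the announcement describes, where audio is first turned into per-frame motion and a single reference photo is then animated according to that motion. Every name here (AudioToMotionModel, MotionToVideoRenderer, synthesize_talking_video) is hypothetical, and the NumPy placeholders stand in for the actual generative networks; this is not VLOGGER's API.

```python
# Hypothetical sketch of an audio-driven talking-video pipeline.
# Nothing here is VLOGGER's actual code: the two "models" below are
# NumPy placeholders that only illustrate the data flow
# (audio -> per-frame motion parameters -> rendered video frames).

import numpy as np


class AudioToMotionModel:
    """Stands in for a network that maps an audio clip to per-frame
    motion parameters (e.g. head pose, gaze, lip and upper-body pose)."""

    def __init__(self, motion_dim: int = 64, fps: int = 25):
        self.motion_dim = motion_dim
        self.fps = fps

    def predict(self, audio: np.ndarray, sample_rate: int) -> np.ndarray:
        num_frames = int(len(audio) / sample_rate * self.fps)
        # Placeholder: random motion codes, one vector per video frame.
        return np.random.randn(num_frames, self.motion_dim)


class MotionToVideoRenderer:
    """Stands in for a generative model that renders video frames from
    a single reference image plus the predicted motion parameters."""

    def render(self, reference_image: np.ndarray, motion: np.ndarray) -> np.ndarray:
        h, w, c = reference_image.shape
        frames = np.repeat(reference_image[None], len(motion), axis=0).astype(np.float32)
        # Placeholder "animation": perturb each frame by its motion code's mean.
        frames += motion.mean(axis=1)[:, None, None, None]
        return np.clip(frames, 0, 255).astype(np.uint8)


def synthesize_talking_video(reference_image, audio, sample_rate):
    """Single photo + audio clip in, a stack of video frames out."""
    motion = AudioToMotionModel().predict(audio, sample_rate)
    return MotionToVideoRenderer().render(reference_image, motion)


if __name__ == "__main__":
    photo = np.random.randint(0, 255, size=(256, 256, 3), dtype=np.uint8)
    speech = np.random.randn(16000 * 2)  # 2 seconds of fake 16 kHz audio
    video = synthesize_talking_video(photo, speech, sample_rate=16000)
    print(video.shape)  # (frames, height, width, channels), e.g. (50, 256, 256, 3)
```

The sketch only captures the interface the announcement describes, a single photo plus audio producing full-frame video, and makes no claim about the internals of either stage.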

The result? Unparalleled photorealism and temporal coherence, plus full-body dynamics, including gestures.

The applications? Endless! From video editing to personalization, VLOGGER could transform content creation, online communication, education, and more. ✨

For a deeper dive, see Google Research's project page and the accompanying paper.

