{"id":4892,"date":"2024-04-21T09:40:00","date_gmt":"2024-04-21T09:40:00","guid":{"rendered":"https:\/\/aitesonics.com\/microsofts-ai-tool-can-turn-photos-into-realistic-videos-of-people-talking-and-singing-070052240\/"},"modified":"2024-04-21T09:40:00","modified_gmt":"2024-04-21T09:40:00","slug":"microsofts-ai-tool-can-turn-photos-into-realistic-videos-of-people-talking-and-singing-070052240","status":"publish","type":"post","link":"https:\/\/aitesonics.com\/microsofts-ai-tool-can-turn-photos-into-realistic-videos-of-people-talking-and-singing-070052240\/","title":{"rendered":"Microsoft's AI tool can turn photos into realistic videos of people talking and singing"},"content":{"rendered":"
Microsoft Research Asia has unveiled a new experimental AI tool called VASA-1 that can combine a still image of a person \u2014 or a drawing of one \u2014 with an existing audio file to create a lifelike talking face in real time. It can generate facial expressions and head motions for a still image, along with lip movements that match a speech or a song. The researchers uploaded a ton of examples to the project page, and the results look convincing enough to fool people into thinking they’re real.<\/p>\n While the lip and head motions in the examples can look a bit robotic and out of sync upon closer inspection, it’s clear that the technology could be misused to easily and quickly create deepfake videos of real people. The researchers themselves are aware of that potential and have decided not to release “an online demo, API, product, additional implementation details, or any related offerings” until they’re sure that their technology “will be used responsibly and in accordance with proper regulations.” They didn’t, however, say whether they’re planning to implement safeguards to prevent bad actors from using it for nefarious purposes, such as creating deepfake porn or running misinformation campaigns.<\/p>\n The researchers believe their technology has a ton of benefits despite its potential for misuse. They said it can be used to enhance educational equity, as well as to improve accessibility for those with communication challenges, perhaps by giving them access to an avatar that can communicate for them. It can also provide companionship and therapeutic support for those who need it, they said, suggesting that VASA-1 could be used in programs that offer access to AI characters people can talk to.<\/p>\n