deepfake Archives - Best News
https://aitesonics.com/category/deepfake/

Google prohibits ads promoting websites and apps that generate deepfake porn
Tue, 07 May 2024
https://aitesonics.com/google-prohibits-ads-promoting-websites-and-apps-that-generate-deepfake-porn-130059324/

Google has updated its Inappropriate Content Policy to include language that expressly prohibits advertisers from promoting websites and services that generate deepfake pornography. While the company already has strong restrictions in place for ads that feature certain types of sexual content, this update leaves no doubt that promoting "synthetic content that has been altered or generated to be sexually explicit or contain nudity" is in violation of its rules.

Any advertiser promoting sites or apps that generate deepfake porn, provide instructions on how to create it, or endorse or compare various deepfake porn services will be suspended without warning and will no longer be able to publish ads on Google. The company will start enforcing the rule on May 30 and is giving advertisers the chance to remove any ads that violate the new policy before then. As 404 Media notes, the rise of deepfake technology has led to a growing number of ads promoting tools that specifically target users who want to create sexually explicit material. Some of those tools reportedly even pose as wholesome services in order to get listed on the Apple App Store and Google Play Store, but it's masks off on social media, where they promote their ability to generate manipulated porn.

Google has, however, already started prohibiting services that create sexually explicit deepfakes in Shopping ads. Similar to its upcoming wider policy, the company has banned Shopping ads for services that "generate, distribute, or store synthetic sexually explicit content or synthetic content containing nudity." Those include deepfake porn tutorials and pages that advertise deepfake porn generators.

Tupac’s estate threatens to sue Drake for his AI-infused Kendrick Lamar diss
Sun, 28 Apr 2024
https://aitesonics.com/tupacs-estate-threatens-to-sue-drake-for-his-ai-infused-kendrick-lamar-diss-182518997/

Tupac Shakur’s estate is none too happy about Drake cloning the late hip-hop legend’s voice in a Kendrick Lamar diss track. Billboard reported Wednesday that attorney Howard King, representing Mr. Shakur’s estate, sent a cease-and-desist letter calling Drake’s use of Shakur’s voice “a flagrant violation of Tupac’s publicity and the estate’s legal rights.”

Drake (Aubrey Drake Graham) dropped the diss track “Taylor Made Freestyle” last Friday, the latest chapter in his simmering, decade-long feud with Pulitzer Prize recipient and 17-time Grammy winner Kendrick Lamar.

“Kendrick, we need ya, the West Coast savior / Engraving your name in some hip-hop history,” an AI-generated 2Pac recreation raps in Drake’s track. “If you deal with this viciously / You seem a little nervous about all the publicity.”

Representing Shakur’s estate, King wrote in the cease-and-desist letter that Drake has less than 24 hours to pull down “Taylor Made Freestyle,” or the estate would “pursue all of its legal remedies” to force the Canadian rapper’s hand. “The unauthorized, equally dismaying use of Tupac’s voice against Kendrick Lamar, a good friend to the Estate who has given nothing but respect to Tupac and his legacy publicly and privately, compounds the insult,” King wrote, according to Billboard.

“The Estate is deeply dismayed and disappointed by your unauthorized use of Tupac’s voice and personality,” King wrote. “Not only is the record a flagrant violation of Tupac’s publicity and the estate’s legal rights, it is also a blatant abuse of the legacy of one of the greatest hip-hop artists of all time. The Estate would never have given its approval for this use.”

“Taylor Made Freestyle” also used AI to clone Snoop Dogg’s voice, with Drake using digital clones of two of Lamar’s West Coast hip-hop influences to try to hit him where it hurts. In a video posted to social media the following day, Snoop didn’t appear to know about the track. “They did what? When? How? Are you sure?” the 16-time Grammy nominee and herb connoisseur said. “Why everybody calling my phone, blowing me up? What the fuck? What happened? What’s going on? I’m going back to bed. Good night,” he continued.

Engadget emailed Snoop Dogg’s management to ask about his thoughts on Drake cloning his voice. At the time of publication, we hadn’t heard back.

The saga contains more than a bit of irony — if not outright hypocrisy — from Universal Music Group (UMG), the label representing Drake. You may remember the track “Heart on My Sleeve” by “Ghostwriter977,” which briefly went viral last year. It was pulled after UMG complained to streaming services because it used an AI-generated version of Drake’s voice (along with The Weeknd).

Engadget asked UMG whether it approved of Drake’s use of AI-generated voices in “Taylor Made Freestyle” and where it stands on the broader issue of using artists’ digital clones. We had not received a comment as of press time. Without a clear explanation, it’s hard not to see the label as siding with whatever seems most financially advantageous to it at any particular moment (surprise!).

Laws addressing AI-cloned voices of public figures are still in flux. Billboard notes that federal copyrights don’t clearly cover the issue since AI-generated vocals typically don’t use specific words or music from the original artist. Mr. King, speaking for Shakur’s estate, believes they violate California’s existing publicity rights laws. He described Drake’s use of Shakur’s voice as forming the “false impression that the estate and Tupac promote or endorse the lyrics for the sound-alike.”

Last month, Tennessee passed the ELVIS (“Ensuring Likeness Voice and Image Security”) Act to protect artists from unauthorized AI voice clones. The “first-of-its-kind legislation” makes copying a musician’s voice without consent a criminal Class A misdemeanor.

But none of the parties involved in this feud are in Tennessee. On the federal level, things are moving much more slowly, leaving room for legal uncertainty. In January, bipartisan US House legislators introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (“No AI FRAUD”), putting cloned voices like those Drake used in the government’s crosshairs. Congress hasn’t taken any public action on the bill in the more than three months since.

“It is hard to believe that [Tupac’s record label]’s intellectual property was not scraped to create the fake Tupac AI on the Record,” King wrote in the cease-and-desist letter. He demanded Drake offer “a detailed explanation for how the sound-alike was created and the persons or company that created it, including all recordings and other data ‘scraped’ or used.”

Drake deletes AI-generated Tupac track after Shakur’s estate threatened to sue
Sun, 28 Apr 2024
https://aitesonics.com/drake-deletes-ai-generated-tupac-track-after-shakurs-estate-threatened-to-sue-191810881/

Drake apparently learned it isn’t wise to mess with Tupac Shakur — even decades after his untimely death. Billboard first spotted that the Canadian hip-hop artist deleted the X (Twitter) post with his track “Taylor Made Freestyle,” which used an AI-generated recreation of Shakur’s voice to try to get under Kendrick Lamar’s skin.

The takedown came after an attorney representing the late hip-hop legend threatened to sue the Canadian rapper for his “unauthorized” use of Tupac’s voice if he didn’t remove it from social channels within 24 hours. However, the track was online for a week and — unsurprisingly — has been copiously reposted.

“The Estate is deeply dismayed and disappointed by your unauthorized use of Tupac’s voice and personality,” Howard King, the attorney representing Shakur’s estate, wrote earlier this week in a cease-and-desist letter acquired by Billboard. “Not only is the record a flagrant violation of Tupac’s publicity and the estate’s legal rights, it is also a blatant abuse of the legacy of one of the greatest hip-hop artists of all time. The Estate would never have given its approval for this use.”

King implied that using Shakur’s voice to diss Lamar was an especially egregious show of disrespect. Lamar, a 17-time Grammy winner and Pulitzer recipient, has spoken frequently about his deep admiration for Tupac, and the Oakland rapper’s estate says the feelings are mutual. “The unauthorized, equally dismaying use of Tupac’s voice against Kendrick Lamar, a good friend to the Estate who has given nothing but respect to Tupac and his legacy publicly and privately, compounds the insult,” King wrote in a cease-and-desist letter.

Drake’s track also included an AI-generated clone of Snoop Dogg’s voice. The Doggystyle rapper and cannabis aficionado appeared surprised in a social post last week: “They did what? When? How? Are you sure?” He continued, “Why everybody calling my phone, blowing me up? What the fuck? What happened? What’s going on? I’m going back to bed. Good night.”

However, the one-time Doggy Fizzle Televizzle host has a history of poker-faced coyness. Last year, he took to Instagram to solemnly announce he was “giving up smoke,” leading to rampant speculation about why the stoner icon would quit his favorite pastime. Soon after, his announcement was revealed as a PR stunt for Solo Stove — which, marketing gimmicks aside, makes some terrific bonfire pits.

Microsoft's AI tool can turn photos into realistic videos of people talking and singing
Sun, 21 Apr 2024
https://aitesonics.com/microsofts-ai-tool-can-turn-photos-into-realistic-videos-of-people-talking-and-singing-070052240/

Microsoft Research Asia has unveiled a new experimental AI tool called VASA-1 that can take a still image of a person — or the drawing of one — and an existing audio file to create a lifelike talking face out of them in real time. It has the ability to generate facial expressions and head motions for an existing still image and the appropriate lip movements to match a speech or a song. The researchers uploaded a ton of examples on the project page, and the results look good enough that they could fool people into thinking that they’re real.

While the lip and head motions in the examples could still look a bit robotic and out of sync upon closer inspection, it’s still clear that the technology could be misused to easily and quickly create deepfake videos of real people. The researchers themselves are aware of that potential and have decided not to release “an online demo, API, product, additional implementation details, or any related offerings” until they’re sure that their technology “will be used responsibly and in accordance with proper regulations.” They didn’t, however, say whether they’re planning to implement certain safeguards to prevent bad actors from using them for nefarious purposes, such as to create deepfake porn or misinformation campaigns.

The researchers believe their technology has a ton of benefits despite its potential for misuse. They said it can be used to enhance educational equity, as well as to improve accessibility for those with communication challenges, perhaps by giving them access to an avatar that can communicate for them. It can also provide companionship and therapeutic support for those who need it, they said, suggesting that VASA-1 could be used in programs that offer access to AI characters people can talk to.

According to the paper published with the announcement, VASA-1 was trained on the VoxCeleb2 Dataset, which contains “over 1 million utterances for 6,112 celebrities” that were extracted from YouTube videos. Even though the tool was trained on real faces, it also works on artistic photos like the Mona Lisa, which the researchers amusingly combined with an audio file of Anne Hathaway’s viral rendition of Lil Wayne’s Paparazzi. It’s so delightful, it’s worth a watch, even if you’re doubting what good a technology like this can do.

Tom Hanks calls out dental ad for using AI likeness of him
Fri, 05 Apr 2024
https://aitesonics.com/tom-hanks-calls-out-dental-ad-for-using-ai-likeness-of-him-161548459/

An advertiser reportedly used a deepfake of Tom Hanks to promote dental plans without the actor’s permission. Hanks shared a warning on Instagram on Sunday alerting his followers about the AI-generated video, which he wrote he had “nothing to do with.” Hanks has been outspoken about the challenges AI poses for the industry, and the use of actors’ digital likenesses is one of the major points of concern voiced by striking SAG-AFTRA workers.

Just last spring, Hanks said in an appearance on The Adam Buxton Podcast that AI and deepfakes present both artistic and legal challenges. “I could be hit by a bus tomorrow and that’s it,” Hanks said, “but my performances can go on and on and on and on and on, and outside of the understanding that it’s been done with AI or deepfake, there’ll be nothing to tell you that it’s not me.” He also spoke of a hypothetical scenario in which an entire movie series could be made using an AI version of him that’s “32 years old from now until kingdom come.” Perhaps in confirmation of what’s to come, the offending dental plan ad depicts a significantly younger Hanks.

The use of AI to capitalize on celebrities’ legacies has already become an ethical issue. Roadrunner: A Film About Anthony Bourdain sparked widespread debate upon its release after it was revealed the documentary contained AI-generated voice overs of the beloved chef and storyteller. Just this weekend, Robin Williams’ daughter, Zelda Williams, posted in support of “SAG’s fight against AI,” writing on Instagram that she’d seen firsthand how the technology is used to capture the likeness of people “who cannot consent,” like her father.

“These recreations are, at their very best, a poor facsimile of greater people,” Williams wrote, “but at their worst, a horrendous Frankensteinian monster, cobbled together from the worst bits of everything this industry is, instead of what it should stand for.”

Hanks said in the April interview that the issue has been on his radar since filming The Polar Express in the early 2000s, which starred a CGI version of the actor. It was “the first time that we did a movie that had a huge amount of our own data locked in a computer,” Hanks told Buxton, adding, “We saw this coming.”

Popular AI platform introduces rewards system to encourage deepfakes of real people
Fri, 05 Apr 2024
https://aitesonics.com/popular-ai-platform-introduces-rewards-system-to-encourage-deepfakes-of-real-people-194326312/

Civitai, an online marketplace for sharing AI models, just introduced a new feature called “bounties” to encourage its community to develop passable deepfakes of real people, as originally reported by 404 Media. Whoever concocts the best AI model gets a virtual currency called “Buzz” that users can buy with actual money.

Many of the bounties posted to the site ask users to recreate the likeness of celebrities and social media influencers, most of them women. The report also notes that the lion’s share of these results are “nonconsensual sexual images.” This kind of content has been proliferating across the internet for years, but artificial intelligence allows for a far more realistic end result. Additionally, 404 Media found some requests targeting private people with no significant online presence, making the feature even creepier.

“I am very afraid of what this can become,” Michele Alves, an Instagram influencer who has a bounty on Civitai, told 404 Media. “I don't know what measures I could take, since the internet seems like a place out of control.”

According to venture capital firm Andreessen Horowitz, Civitai is currently the seventh most popular generative AI platform. In other words, there are a whole lot of eyeballs on these bounty requests. It took 404 Media staffers only moments to trace the images sent in response to one bounty request targeting a private person whose personal social media accounts have just a few followers. The person who posted the bounty claimed it was his wife, but her social media accounts said otherwise. Gross.

One Civitai user declined the bounty on the grounds that it was “asking for legal problems in the future.” Indeed, Virginia recently updated its revenge porn laws to punish deepfake creators with up to one year in jail. Still, this particular request was fulfilled, and several images were uploaded to the site, though they were non-sexual in nature.

It’s worth noting that very few of the bounty requests explicitly state that the poster is looking for sexual material, couching the request in vague language instead. Some, however, go all-in, using terms like “degenerate request” along with comments on female breast size. Civitai, for its part, says these bounties should not be used to create nonconsensual AI-generated sexual images of real people.

However, both sexual images of public-facing figures and non-sexual images of regular people are allowed. After that, it’s just a matter of combining the two. 404 Media used the company's text-to-image tool to create non-consensual sexual images of a real person “in seconds.”

Taylor Swift deepfake used for Le Creuset giveaway scam
Fri, 05 Apr 2024
https://aitesonics.com/taylor-swift-deepfake-used-for-le-creuset-giveaway-scam-123231417/

Taylor Swift is not giving out free Le Creuset products in social media advertisements — though deepfakes of her voice would like you to believe otherwise. A series of posts have recently surfaced on TikTok and in Meta’s Ad Library claiming to show Swift offering free Le Creuset cookware sets, the New York Times reports. The ads featured clips where Swift was near Le Creuset products and used a synthetic version of her voice. The scammers used AI to have the cloned voice address her fans, “Swifties,” and produce other little remarks.

These posts led interested parties to fake versions of sites like The Food Network, complete with made-up articles and testimonials about Le Creuset. Shoppers were then asked to pay just $9.96 for shipping to receive their free products. Unsurprisingly, no Dutch ovens arrived, and customers instead found additional monthly charges on their cards. Le Creuset confirmed that no such giveaway was taking place.

Swift is hardly the only celebrity who has recently found their voice co-opted using AI. She’s not even the only one used in the scam, with interior designer Joanna Gaines mimicked in ads from verified accounts or ones labeled as sponsored posts. In April 2023, the Better Business Bureau warned consumers about the high quality of ads featuring AI-manufactured versions of celebrities. Since then, scammers have used deepfakes to convince consumers that Luke Combs was selling weight loss gummies, Tom Hanks was promoting dental plans and Gayle King was selling other weight loss products, to name a few examples.

Little regulation exists for monitoring deepfakes or punishing the people who create them. Much of the responsibility currently falls on the platforms, with YouTube, for example, laying out new steps for reporting deepfakes. At the same time, it's working with select musicians to loan out their voices and build interest in AI-generated versions of real people.

Last year, two bills were introduced in Congress to address deepfakes: The No Fakes Act and the Deepfakes Accountability Act. However, the fate of both pieces of legislation is uncertain. At the moment, only select states, such as California and Florida, have any AI regulation.

Facebook was inundated with deepfaked ads impersonating UK's Prime Minister
Fri, 05 Apr 2024
https://aitesonics.com/facebook-was-inundated-with-deepfaked-ads-impersonating-uks-prime-minister-143009584/

Facebook was flooded with fake advertisements featuring a deepfaked Rishi Sunak ahead of the UK's general election that's expected to take place this year, according to research conducted by communications company Fenimore Harper. The firm found 143 different ads impersonating the UK's Prime Minister on the social network last month, and it believes the ads may have reached more than 400,000 people. It also said that funding for the ads originated from 23 countries, including Turkey, Malaysia, the Philippines and the United States, and that $16,500 was spent promoting them between December 8, 2023 and January 8, 2024.

As The Guardian notes, one of the fake ads showed a BBC newscast wherein Sunak said that the UK government has decided to invest in a stock market app launched by Elon Musk. That clip then reportedly linked to a fake BBC news page promoting an investment scam. The video, embedded in Fenimore Harper's website, seems pretty realistic if the viewer doesn't look too closely at people's mouths when they speak. Someone who has no idea what deepfakes are could easily be fooled into thinking that the video is legit.

The company says this is the "first widespread paid promotion of a deepfaked video of a UK political figure." That said, Meta has long been contending with election misinformation on its websites and apps. A spokesperson told The Guardian that the "vast majority" of the adverts were disabled before Fenimore Harper's report was published and that "less than 0.5 percent of UK users saw any individual ad that did go live."

Meta announced late last year that it was going to require advertisers to disclose whether the ads they submit have been digitally altered in the event that they're political or social in nature. It's going to start enforcing the rule this year, likely in hopes that it can help mitigate the expected spread of fake news connected to the upcoming presidential elections in the US.

ElevenLabs reportedly banned the account that deepfaked Biden's voice with its AI tools
Fri, 05 Apr 2024
https://aitesonics.com/elevenlabs-reportedly-banned-the-account-that-deepfaked-bidens-voice-with-its-ai-tools-083355975/

ElevenLabs, an AI startup that offers voice cloning services with its tools, has banned the user who created an audio deepfake of Joe Biden used in an attempt to disrupt the elections, according to Bloomberg. The audio impersonating the president was used in a robocall that went out to some voters in New Hampshire last week, telling them not to vote in their state’s primary. It initially wasn’t clear what technology was used to copy Biden’s voice, but a thorough analysis by security company Pindrop showed that the perpetrators had used ElevenLabs’ tools.

The security firm removed the background noise and cleaned the robocall’s audio before comparing it to samples from more than 120 voice synthesis technologies used to generate deepfakes. Pindrop CEO Vijay Balasubramaniyan told Wired that it “came back well north of 99 percent that it was ElevenLabs.” Bloomberg says ElevenLabs was notified of Pindrop’s findings and is still investigating, but it has already identified and suspended the account that made the fake audio. The company told the news organization that it can’t comment on the issue itself, but that it’s “dedicated to preventing the misuse of audio AI tools and [that it takes] any incidents of misuse extremely seriously.”

The deepfaked Biden robocall shows how technologies that can mimic somebody else’s likeness and voice could be used to manipulate voters in this year’s US presidential election. “This is kind of just the tip of the iceberg in what could be done with respect to voter suppression or attacks on election workers,” Kathleen Carley, a professor at Carnegie Mellon University, told The Hill. “It was almost a harbinger of what all kinds of things we should be expecting over the next few months.”

Within days of ElevenLabs launching the beta version of its platform, people were already using it to create audio clips that sound like celebrities reading or saying something questionable. The startup allows customers to use its technology to clone voices for “artistic and political speech contributing to public debates.” Its safety page does warn users that they “cannot clone a voice for abusive purposes such as fraud, discrimination, hate speech or for any form of online abuse without infringing the law.” But clearly, it needs to put more safeguards in place to prevent bad actors from using its tools to influence voters and manipulate elections around the world.



]]>
Scammers use deepfakes to steal $25.6 million from a multinational firm https://aitesonics.com/scammers-use-deepfakes-to-steal-256-million-from-a-multinational-firm-034033977/ https://aitesonics.com/scammers-use-deepfakes-to-steal-256-million-from-a-multinational-firm-034033977/#respond Fri, 05 Apr 2024 06:40:30 +0000 https://aitesonics.com/scammers-use-deepfakes-to-steal-256-million-from-a-multinational-firm-034033977/ Bad actors keep using deepfakes for everything from impersonating celebrities to scamming people out of money. The latest instance is out of Hong Kong, where a finance worker for an undisclosed multinational company was tricked into remitting $200 million Hong Kong dollars ($25.6 million). According to Hong Kong police, scammers contacted the employee posing as […]

The post Scammers use deepfakes to steal $25.6 million from a multinational firm appeared first on Best News.

]]>
Bad actors keep using deepfakes for everything from impersonating celebrities to scamming people out of money. The latest instance comes out of Hong Kong, where a finance worker for an undisclosed multinational company was tricked into remitting HK$200 million (around US$25.6 million).

According to Hong Kong police, scammers contacted the employee posing as the company's United Kingdom-based chief financial officer. He was initially suspicious, as the email called for secret transactions, but that's where the deepfakes came in. The worker attended a video call with the "CFO" and other recognizable members of the company. In reality, each "person" he interacted with was a deepfake — likely created using public video clips of the actual individuals.

The deepfakes asked the employee to introduce himself and then quickly instructed him to make 15 transfers totaling the $25.6 million to five local bank accounts. They created a sense of urgency for the task, and then the call abruptly ended. A week later, the employee followed up on the request within the company and discovered the truth.

Hong Kong police have arrested six people so far in connection with such scams, according to CNN. The individuals involved had stolen eight identification cards, which were used to file 54 bank account registrations and 90 loan applications in 2023. They had also used deepfakes to trick facial recognition software in at least 20 cases.

The widespread use of deepfakes is one of the growing concerns around evolving AI technology. In January, Taylor Swift and President Joe Biden were among those whose identities were forged with deepfakes. In Swift's case, the forgeries included nonconsensual pornographic images of her and a financial scam targeting potential Le Creuset shoppers. A deepfake of President Biden's voice could be heard in some robocalls to New Hampshire constituents, imploring them not to vote in their state's primary.

Update, February 6, 2024, 10:34AM ET: The Hong Kong police said in their press conference that they had arrested six people in connection with such scams, not necessarily this scam.


]]>