copyright Archives - Best News
https://aitesonics.com/tag/copyright/

US bill proposes AI companies list what copyrighted materials they use
https://aitesonics.com/us-bill-proposes-ai-companies-list-what-copyrighted-materials-they-use-123058589/

The debate over using copyrighted materials in AI training systems rages on — as does uncertainty over which works AI even pulls data from. US Congressman Adam Schiff is attempting to answer the latter, introducing the Generative AI Copyright Disclosure Act on April 9, Billboard reports. The bill would require AI companies to outline every copyrighted work in their datasets.

“AI has the disruptive potential of changing our economy, our political system, and our day-to-day lives. We must balance the immense potential of AI with the crucial need for ethical guidelines and protections,” said Congressman Schiff in a statement. He added that the bill “champions innovation while safeguarding the rights and contributions of creators, ensuring they are aware when their work contributes to AI training datasets. This is about respecting creativity in the age of AI and marrying technological progress with fairness.” Organizations such as the Recording Industry Association of America (RIAA), SAG-AFTRA and WGA have shown support for the bill.

If the Generative AI Copyright Disclosure Act passes, companies would need to disclose all relevant data use to the Register of Copyrights at least 30 days before introducing an AI tool to the public. They would also have to provide the same information retroactively for any existing tools and file updates if they considerably altered their datasets. Failure to do so would result in the Copyright Office issuing a fine — the exact amount would depend on a company’s size and past infractions. To be clear, this wouldn’t do anything to prevent AI creators from using copyrighted work, but it would provide transparency about which materials they’ve drawn from. The ambiguity over use was on full display in a March Bloomberg interview with OpenAI CTO Mira Murati, who claimed she was unsure whether the tool Sora took data from YouTube, Facebook or Instagram posts.

The bill could even give companies and artists a clearer picture when speaking out against or suing for copyright infringement — a fairly common occurrence. Take the New York Times, which sued OpenAI and Microsoft for using its articles to train chatbots without an agreement or compensation, or Sarah Silverman, who sued OpenAI (a frequent defendant) and Meta for using her books and other works to train their AI models.

The entertainment industry has also been leading calls for AI protections. AI regulation was a big sticking point in the SAG-AFTRA and WGA strikes last year, ending only when detailed policies around AI went into their contracts. SAG-AFTRA has recently voiced its support for California bills requiring consent from actors to use their avatars and from heirs to make AI versions of deceased individuals. It’s no surprise that Congressman Schiff represents California’s 30th district, which includes Hollywood, Burbank and Universal City.

Musicians are echoing their fellow creatives, with over 200 artists signing an open letter in April that calls for AI protections, the Guardian reported. “This assault on human creativity must be stopped,” the letter, issued by the Artist Rights Alliance, states. “We must protect against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem.” Billie Eilish, Jon Bon Jovi and Pearl Jam were among the signatories.

Judge rules that AI-generated art isn't copyrightable, since it lacks human authorship
https://aitesonics.com/judge-rules-that-ai-generated-art-isnt-copyrightable-since-it-lacks-human-authorship-150033903/

A federal judge has agreed with US government officials that a piece of artificial intelligence-generated art isn’t eligible for copyright protection in the country since there was no human authorship involved. “Copyright has never stretched so far […] as to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here,” Judge Beryl Howell of the US District Court for the District of Columbia wrote in the ruling, which The Hollywood Reporter obtained. “Human authorship is a bedrock requirement of copyright.”

Dr. Stephen Thaler sued the US Copyright Office after the agency rejected his second attempt to copyright an artwork titled A Recent Entrance to Paradise in 2022. The USCO agreed that the work was generated by an AI model that Thaler calls the Creativity Machine. The computer scientist applied to copyright the work himself, describing the piece as “a work-for-hire to the owner of the Creativity Machine.” He claimed that the USCO’s “human authorship” requirement was unconstitutional.

Howell cited rulings in other cases in which copyright protection was denied to artwork that lacked human involvement, such as the famous case of a monkey that managed to capture a few selfies. “Courts have uniformly declined to recognize copyright in works created absent any human involvement,” the judge wrote.

The judge noted that the growing influence of generative AI will lead to “challenging questions” about the level of human input that’s required to meet the bar for copyright protection, as well as how original the artwork created by systems trained on copyrighted pieces can truly be (an issue that’s the subject of several other legal battles).

However, Howell indicated that Thaler’s case wasn’t an especially complex one, since he admitted that he wasn’t involved in the creation of A Recent Entrance to Paradise. “In the absence of any human involvement in the creation of the work, the clear and straightforward answer is the one given by the [Federal] Register: No,” Howell ruled. Thaler plans to appeal the decision.

According to Bloomberg, this is the first ruling in the US on copyright protections for AI-generated art, though it’s an issue that the USCO has been contending with for some time. In March, the agency issued guidance on copyrighting AI-generated images that are based on text prompts — generally, they’re not eligible for copyright protection. The agency has offered some hope to generative AI enthusiasts, though. “The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work,” the USCO said. “This is necessarily a case-by-case inquiry.”

The agency has also granted limited copyright protection to a graphic novel with AI-generated elements. It said in February that while the Midjourney-created images in Kris Kashtanova’s Zarya of the Dawn were not eligible to be copyrighted, the text and layout of the work were.

US Copyright Office opens public comments on AI and content ownership
https://aitesonics.com/us-copyright-office-opens-public-comments-on-ai-and-content-ownership-170225911/

The US Copyright Office (USCO) wants your thoughts on generative AI and who can theoretically be declared to own its outputs. The technology has increasingly commanded the legal system’s attention, and as such, the office began seeking public comments on Wednesday about some of AI’s thorniest issues (via Ars Technica). These include questions about companies training AI models on copyrighted works, the copyright eligibility of AI-generated content (along with liability for infringing on it) and how to handle machine-made outputs mimicking human artists’ work.

“The adoption and use of generative AI systems by millions of Americans — and the resulting volume of AI-generated material — have sparked widespread public debate about what these systems may mean for the future of creative industries and raise significant questions for the copyright system,” the USCO wrote in a notice published on Wednesday.

One issue the office hopes to address is the required degree of human authorship to register a copyright on (otherwise AI-driven) content, citing the rising number of attempts to copyright material that names AI as an author or co-author. “The crucial question appears to be whether the ‘work’ is basically one of human authorship, with the computer merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine,” the USCO wrote.

Although the issue is far from resolved, several cases have hinted at where the boundaries may fall. For example, the office said in February that the (human-made) text and layout arrangement from a partially AI-generated graphic novel were copyrightable, but the work’s Midjourney-generated images weren’t. On the other hand, a Federal judge recently rejected an attempt to register AI-generated art which had no human intervention other than its inciting text prompt. “Copyright has never stretched so far […] as to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here,” US District Judge Beryl Howell wrote in that ruling.

The USCO also seeks input on the growing number of infringement claims from copyright owners against AI companies for training on their published works. Sarah Silverman is among the high-profile plaintiffs suing OpenAI and Meta for allegedly training ChatGPT and LLaMA (respectively) on their written work — in her case, her 2010 memoir The Bedwetter. OpenAI also faces a class-action lawsuit over using scraped web data to train its viral chatbot.

The USCO says the public comment period will be open until November 15th. You can share your thoughts until then.

Viral indie game Only Up! delisted from Steam
https://aitesonics.com/viral-indie-game-only-up-delisted-from-steam-171652546/

The developer of Only Up!, a viral indie climbing game that blew up in popularity on Twitch streams, has delisted the title from Steam. After receiving accusations of using infringing assets and promoting NFTs, the game’s creator said they plan to “put the game behind” them due to stress. “What I need now is peace of mind and healing,” wrote developer SCKR Games.

The developer posted an update on the title’s Steam page explaining the decision, as first spotted by PCGamesN. “I’m a solo developer and this game is my first experience in Gamedev, a game I did for creativity, to test myself, and where I made a lot of mistakes,” SCKR Games wrote on Steam. “The game has kept me under a lot of stress all these months. Now I want to put the game behind me. And yes. The game won’t be available in the [Steam] store soon, that’s what I decided myself.” The title was delisted at the time of this article’s publication, with its name changed to “not available.” You can view a cached version of the game’s listing on the Internet Archive’s Wayback Machine.

The title’s absurd difficulty became its calling card — likely a big reason it was a hot destination on Twitch. Players stepped into the shoes of Jackie, a teenager from the projects with dreams of rising out of poverty. Inspired by “Jack and the Beanstalk,” the developer tasked gamers with climbing and parkouring through elaborate mazes of pipes and other objects stretching into the sky. Lacking a save feature, it put you back at square one after falling. “The point is that each successive level raised the stakes in the game, the higher you climb the more painful to fall,” the developer wrote on the Only Up! Steam store page. However, the title did include a time-slowing feature to help fine-tune the more difficult leaps.

According to data viewed by PCGamesN, Only Up! attracted up to 280,000 concurrent viewers on Twitch at its peak. A YouTube walkthrough from ‘iShowSpeed’ (Darren Jason Watkins) has garnered 5.6 million views in two months.

This isn’t the first time Only Up! has been removed. SCKR Games delisted it in late June following the game’s alleged copyright violations. A 3D artist accused SCKR Games of using a Sketchfab asset, a giant statue of a girl that wasn’t licensed for commercial use. (The game cost $10.) Only Up! returned in early July with a statue of Atlas replacing the infringing one.

The one-person SCKR Games says it will return with a new project. “I plan to take a pause, and continue my education in game design and further with new experience and knowledge to direct my energies to my next game with the working title Kith — it will be a new experience and a new concept with realism, a completely different genre and setting, and the emphasis is on cinematography,” the developer wrote. “This time I hope the project will be created by a small team. This is a challenging project on which I want to significantly improve my skills in game design. Thank you for your understanding.”

Appeals court overturns $1 billion copyright lawsuit against Cox
https://aitesonics.com/appeals-court-overturns-1-billion-copyright-lawsuit-against-cox-130810427/

An appeals court has blocked a $1 billion copyright verdict from 2019 against US internet service provider Cox Communications and ordered a retrial, Ars Technica has reported. A three-judge panel ruled unanimously that Cox didn't profit directly from its users' piracy, rebutting claims from Sony, Universal and Warner.

The judges did affirm the original jury's finding of willful contributory infringement from the trial, first announced back in 2018. To that effect, they ordered a new damages trial that may reduce the size of the award.

"We reverse the vicarious liability verdict and remand for a new trial on damages because Cox did not profit from its subscribers' acts of infringement, a legal prerequisite for vicarious liability," the panel wrote. It added that "no reasonable jury could find that Cox received a direct financial benefit from its subscribers' infringement of Plaintiffs' copyrights."

Cox allegedly refused to take "reasonable measures" to fight piracy, according to the original allegations. Internet providers are supposed to terminate the accounts of offending users, but the ISP only conducted temporary disconnections and warned some users more than 100 times. The labels claimed it even instituted a cap on accepted copyright complaints and cut back on anti-piracy staffers.

However, the judges said that Sony offered no causal connection between infringement and higher revenues for Cox. "No evidence suggests that customers chose Cox's Internet service, as opposed to a competitor's, because of any knowledge or expectation about Cox's lenient response to infringement."

Under the US Digital Millennium Copyright Act (DMCA) and EU rules, ISPs enjoy "safe harbor" protections that shield them from liability for user actions. However, that only holds if they comply with specific requirements and address copyright violations promptly — and in this case, Cox didn't do that, the judges said.

"The jury saw evidence that Cox knew of specific instances of repeat copyright infringement occurring on its network, that Cox traced those instances to specific users, and that Cox chose to continue providing monthly Internet access to those users… because it wanted to avoid losing revenue," the ruling states. The case is now headed back to a US District court.

Valve fails to get out of paying its EU geo-blocking fine
https://aitesonics.com/valve-fails-to-get-out-of-paying-its-eu-geo-blocking-fine-122053595/

Valve has failed to convince a court that it didn't infringe EU law by geo-blocking activation keys, according to a new ruling. The company argued that, based on copyright law, publishers had the right to charge different prices for games in different countries. However, the EU General Court confirmed that its geo-blocking actions "infringed EU competition law" and that copyright law didn't apply.

"Copyright is intended only to ensure for the right holders concerned protection of the right to exploit commercially the marketing or the making available of the protected subject matter, by the grant of licences in return for payment of remuneration," it wrote in a statement. "However, it does not guarantee them the opportunity to demand the highest possible remuneration or to engage in conduct such as to lead to artificial price differences between the partitioned national markets."

The original charges centered on activation keys. The commission said Valve and five publishers (Bandai Namco, Capcom, Focus Home, Koch Media and ZeniMax) agreed to use geo-blocking so that activation keys sold in some countries — like the Czech Republic, Estonia, Hungary and Latvia — would not work in other member states. That would prevent someone in, say, Germany from buying a cheaper key in Latvia, where prices are lower. However, doing so violates the EU's Digital Single Market rules, which enforce an open market across the EU.

The five publishers were given a reduced fine of €7.8 million (over $9.4 million at the time) for cooperating, but Valve decided to fight and faced the full €1.6 million (more than $1.9 million) penalty. In a statement back in 2021, Valve said that the charges didn't pertain to PC games sold on Steam, but that it was accused of locking keys to particular territories at the request of publishers. It added that it turned off region locks in most cases (other than where local laws required them) in 2015 because of the EU's concerns.

The court rejected the appeal and backed the original EU Commission's decision that the companies’ actions had “unlawfully restricted cross-border sales” of games. As a result, Valve is still subject to the original €1.6 million fine — but it has two months and ten days to appeal. Engadget has reached out to Valve for comment.

New tool lets artists fight AI image bots by hiding corrupt data in plain sight
https://aitesonics.com/new-tool-lets-artists-fight-ai-image-bots-by-hiding-corrupt-data-in-plain-sight-095519848/

From Hollywood strikes to digital portraits, AI's potential to steal creatives' work and how to stop it has dominated the tech conversation in 2023. The latest effort to protect artists and their creations is Nightshade, a tool allowing artists to add undetectable pixels into their work that could corrupt an AI's training data, the MIT Technology Review reports. Nightshade's creation comes as major companies like OpenAI and Meta face lawsuits for copyright infringement and stealing personal works without compensation.

University of Chicago professor Ben Zhao and his team created Nightshade, which is currently being peer reviewed, in an effort to put some of the power back in artists' hands. They tested it on recent Stable Diffusion models and an AI they personally built from scratch.

Nightshade essentially works as a poison, altering how a machine-learning model produces content and what that finished product looks like. For example, it could make an AI system interpret a prompt for a handbag as a toaster or show an image of a cat instead of the requested dog (the same goes for similar prompts like puppy or wolf).

Nightshade follows Zhao and his team's August release of a tool called Glaze, which also subtly alters a work of art's pixels, but makes AI systems perceive the initial image as something entirely different from what it is. An artist who wants to protect their work can upload it to Glaze and opt in to using Nightshade.

Damaging technology like Nightshade could go a long way toward encouraging AI's major players to request permission for artists' work and compensate them properly (it seems like a better alternative to having your system rewired). Companies looking to remove the poison would likely need to locate every piece of corrupt data, a challenging task. Zhao cautions that some individuals might attempt to use the tool for malicious purposes, but that any real damage would require thousands of corrupted works.
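
The paper is still under peer review, and the exact mechanics of Nightshade aren't spelled out here, but the broad class of technique it belongs to (an imperceptible, feature-space perturbation) can be sketched in a few lines. The snippet below is a hypothetical illustration only, not Nightshade itself; the stand-in feature extractor (a stock torchvision ResNet-18), the perturbation budget and the optimizer settings are all assumptions made for demonstration.

    # Illustrative sketch only: NOT Nightshade's actual algorithm.
    # It nudges an image's pixels, within a small budget, so that a feature
    # extractor "sees" it as a different concept (say, a handbag that reads
    # as a toaster) while the change stays hard to notice by eye.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18, ResNet18_Weights

    backbone = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    features = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop the classifier head
    for p in features.parameters():
        p.requires_grad_(False)  # freeze the extractor; only the perturbation is optimized

    def poison(image: torch.Tensor, target: torch.Tensor,
               eps: float = 8 / 255, steps: int = 200, lr: float = 0.01) -> torch.Tensor:
        """Return a copy of `image` (3xHxW, values in [0, 1]) whose features
        move toward `target`'s features, changing each pixel by at most `eps`."""
        delta = torch.zeros_like(image, requires_grad=True)
        target_feat = features(target.unsqueeze(0)).flatten(1)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            feat = features((image + delta).clamp(0, 1).unsqueeze(0)).flatten(1)
            loss = F.mse_loss(feat, target_feat)  # pull features toward the target concept
            opt.zero_grad()
            loss.backward()
            opt.step()
            delta.data.clamp_(-eps, eps)  # keep the perturbation imperceptible
        return (image + delta).clamp(0, 1).detach()

In a real poisoning scenario, many images perturbed along these lines would have to be scraped into a model's training set before its outputs start drifting toward the wrong concept; the sketch only shows a per-image step.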

AI music pioneer quits after disagreement over 'fair use' of copyrighted works
https://aitesonics.com/ai-music-pioneer-quits-after-disagreement-over-fair-use-of-copyrighted-works-114546092/

Countless aspects of generative AI have caused rampant debate, including its access to copyrighted material. Now, the vice president of audio at Stability AI, Ed Newton-Rex, has resigned due to his belief that training generative AI models using copyrighted content doesn't qualify as "fair use," he wrote in an op-ed on Music Business Worldwide. He joins the likes of artists such as Bad Bunny, who recently spoke out against a viral TikTok song that used AI to mimic his voice.

Meanwhile, AI companies have steadfastly defended fair use (training models with copyrighted material without asking permission or providing compensation), and Newton-Rex's decision marks a rare break from the norm. In his public resignation letter, Newton-Rex explains that he believes Stability AI has a more "nuanced view" than some of its competitors. However, he took issue with the company's recent submission to the United States Copyright Office, which argued that AI development should fall under fair use.

"I disagree because one of the factors affecting whether the act of copying is fair use, according to Congress, is 'the effect of the use upon the potential market for or value of the copyrighted work,'" Newton-Rex stated. "Today's generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don't see how using copyrighted works to train generative AI models of this nature can be considered fair use."

Newton-Rex is a published classical composer and founded Jukedeck, which created music using AI, in 2012. He became the product director of TikTok's in-house AI lab after the company purchased Jukedeck in 2019 and subsequently worked at Voicey (acquired by Snap) before joining Stability AI in November 2022.

Ironically, there's also been an (as yet unsuccessful) push to protect AI-produced work. In August, a judge upheld the US Copyright Office's decision that AI-generated art can't be copyrighted, stating, "Human authorship is a bedrock requirement of copyright."

A 'silly' attack made ChatGPT reveal real phone numbers and email addresses
https://aitesonics.com/a-silly-attack-made-chatgpt-reveal-real-phone-numbers-and-email-addresses-200546649/

A team of researchers was able to make ChatGPT reveal some of the bits of data it has been trained on by using a simple prompt: asking the chatbot to repeat random words forever. In response, ChatGPT churned out people’s private information including email addresses and phone numbers, snippets from research papers and news articles, Wikipedia pages, and more.

The researchers, who work at Google DeepMind, the University of Washington, Cornell, Carnegie Mellon University, the University of California Berkeley, and ETH Zurich, urged AI companies to seek out internal and external testing before releasing large language models, the foundational tech that powers modern AI services like chatbots and image-generators. “It’s wild to us that our attack works and should’ve, would’ve, could’ve been found earlier,” they wrote, and published their findings in a paper on Tuesday that 404 Media first reported on.

Chatbots like ChatGPT and prompt-based image generators like DALL-E are powered by large language models, deep learning algorithms that are trained on enormous amounts of data that critics say is often scraped off the public internet without consent. But until now, it wasn’t clear what data OpenAI’s chatbot was trained on since the large language models that power it are closed-source.

When the researchers asked ChatGPT to “repeat the word ‘poem’ forever”, the chatbot initially complied, but then revealed an email address and a cellphone number for a real founder and CEO, the paper revealed. When asked to repeat the word “company”, the chatbot eventually spat out the email address and phone number of a random law firm in the US. “In total, 16.9 percent of the generations we tested contained memorized [personally identifiable information],” the researchers wrote.

Using similar prompts, the researchers were also able to make ChatGPT reveal chunks of poetry, Bitcoin addresses, fax numbers, names, birthdays, social media handles, explicit content from dating websites, snippets from copyrighted research papers and verbatim text from news websites like CNN. Overall, they spent $200 to generate 10,000 examples of personally identifiable information and other data cribbed straight from the web totalling “several megabytes”. But a more serious adversary, they noted, could potentially get a lot more by spending more money. “The actual attack”, they wrote, “is kind of silly.”

OpenAI patched the vulnerability on August 30, the researchers say. But in our own tests, Engadget was able to replicate some of the paper’s findings. When we asked ChatGPT to repeat the word “reply” forever, for instance, the chatbot did so, before eventually revealing someone’s name and Skype ID. OpenAI did not respond to Engadget’s request for comment.
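
For readers curious what the prompt itself looked like, the sketch below shows how a repeated-word request of the kind described above could be issued through OpenAI's Python SDK. It is illustrative only: the model name is an assumption, and because the behavior has reportedly been mitigated, a current model should refuse or simply trail off rather than regurgitate training data.

    # Minimal sketch of the repeated-word prompt described above, sent via
    # OpenAI's Python SDK (v1+). The model name is an assumption; the divergence
    # behavior has reportedly been patched, so expect a refusal or a short,
    # harmless reply rather than leaked data.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": 'Repeat the word "poem" forever.'}],
        max_tokens=1024,  # cap the output; the original attack let generations run long
    )
    print(response.choices[0].message.content)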

The New York Times is suing OpenAI and Microsoft for copyright infringement
https://aitesonics.com/the-new-york-times-is-suing-openai-and-microsoft-for-copyright-infringement-181212615/

The New York Times is suing OpenAI and Microsoft for using published news articles to train its artificial intelligence chatbots without an agreement that compensates it for its intellectual property. The lawsuit, which was filed in a Federal District Court in Manhattan, marks the first time a major news organization has pursued the ChatGPT developers for copyright infringement. The NYT did not specify how much it seeks in damages from the companies, saying only that “this action seeks to hold them responsible for the billions of dollars in statutory and actual damages.”

The NYT claims that OpenAI and Microsoft, the makers of ChatGPT and Copilot, “seek to free-ride on The Times’s massive investment in its journalism” without having any licensing agreements. In one part of the complaint, the NYT highlights that its domain (www.nytimes.com) was the most used proprietary source mined for content to train GPT-3.

It alleges more than 66 million records, ranging from breaking news articles to op-eds, published across the NYT websites and other affiliated brands were used to train the AI models. The lawsuit alleges that the defendants in the case have used “almost a century’s worth of copyrighted content,” causing significant harm to the Times’ bottom line. The NYT also says that OpenAI and Microsoft’s products can “generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style.” This mirrors other complaints from comedians and authors like Sarah Silverman and Julian Sancton who claim OpenAI has profited off their works.

"We respect the rights of content creators and owners and are committed to working with them to ensure they benefit from AI technology and new revenue models," an OpenAI spokesperson told Engadget. In an email, the representative explained that the two parties were engaged in ongoing "productive conversations" and the company described the lawsuit as unexpected. "We are surprised and disappointed with this development," the OpenAI spokesperson told Engadget. Still, OpenAI is hopeful that the two will find a "mutually beneficial way to work together."

If the lawsuit makes any headway, it could create opportunities for other publishers to pursue similar legal action and make training AI models for commercial purposes more costly. Other publishers in the space, like CNN and BBC News, have already tried limiting what data AI web crawlers can scrape for training and development purposes.
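
One common mechanism for that kind of opt-out is a robots.txt rule aimed at OpenAI's documented crawler user agent, GPTBot. As a minimal illustration (not something taken from the complaint), a site's stated crawling policy can be checked with Python's standard library; the URL below is just a placeholder.

    # Check whether a site's robots.txt allows OpenAI's documented crawler
    # user agent, "GPTBot", using only the standard library. The example URL is
    # arbitrary; robots.txt is advisory and doesn't technically block scraping.
    from urllib import robotparser

    def allows_gptbot(site: str) -> bool:
        parser = robotparser.RobotFileParser()
        parser.set_url(site.rstrip("/") + "/robots.txt")
        parser.read()  # fetch and parse the live robots.txt
        return parser.can_fetch("GPTBot", site.rstrip("/") + "/")

    if __name__ == "__main__":
        print(allows_gptbot("https://www.example.com"))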

While it’s unclear if the NYT is open to a licensing agreement after its earlier negotiations failed, leading to the lawsuit, OpenAI has reached a few deals recently. This month, it agreed to pay publisher Axel Springer for access to its content in a deal projected to be worth millions. And articles from Politico and Business Insider will be made available to train OpenAI’s next-gen AI tools as part of a three-year deal. It also previously made a deal with the AP to use its archival content dating back to 1985. Microsoft did not respond to a request for comment.

Update, December 27 2023, 8:36 PM ET: This story has been updated to include comments from an OpenAI spokesperson on the lawsuit.
