regulation Archives - Best News
https://aitesonics.com/tag/regulation/

Hitting the Books: Why AI needs regulation and how we can do it
https://aitesonics.com/hitting-the-books-containing-big-tech-tom-kemp-it-rev-ai-regulation-143014628/
Tue, 16 Apr 2024 04:14:28 +0000

The post Hitting the Books: Why AI needs regulation and how we can do it appeared first on Best News.

The burgeoning AI industry has barrelled clean past the “move fast” portion of its development, right into the part where we “break things” — like society! Since the release of ChatGPT last November, generative AI systems have taken the digital world by storm, finding use in everything from machine coding and industrial applications to game design and virtual entertainment. It’s also quickly been adopted for illicit purposes like scaling spam email operations and creating deepfakes.

That’s one technological genie we’re never getting back in its bottle so we’d better get working on regulating it, argues Silicon Valley–based author, entrepreneur, investor, and policy advisor, Tom Kemp, in his new book, Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy. In the excerpt below, Kemp explains what form that regulation might take and what its enforcement would mean for consumers.

Excerpt from Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy (IT Rev, August 22, 2023), by Tom Kemp.


Road map to contain AI

Pandora in the Greek myth brought powerful gifts but also unleashed mighty plagues and evils. So likewise with AI, we need to harness its benefits but keep the potential harms that AI can cause to humans inside the proverbial Pandora’s box.

When Dr. Timnit Gebru, founder of the Distributed Artificial Intelligence Research Institute (DAIR), was asked by the New York Times regarding how to confront AI bias, she answered in part with this: “We need to have principles and standards, and governing bodies, and people voting on things and algorithms being checked, something similar to the FDA [Food and Drug Administration]. So, for me, it’s not as simple as creating a more diverse data set, and things are fixed.”

She’s right. First and foremost, we need regulation. AI is a new game, and it needs rules and referees. She suggested we need an FDA equivalent for AI. In effect, both the AAA and ADPPA call for the FTC to act in that role, but instead of drug submissions and approval being handled by the FDA, Big Tech and others should send their AI impact assessments to the FTC for AI systems. These assessments would be for AI systems in high-impact areas such as housing, employment, and credit, helping us better address digital redlining. Thus, these bills foster needed accountability and transparency for consumers.

In the fall of 2022, the Biden Administration’s Office of Science and Technology Policy (OSTP) even proposed a “Blueprint for an AI Bill of Rights.” Protections include the right to “know that an automated system is being used and understand how and why it contributes to outcomes that impact you.” This is a great idea and could be incorporated into the rulemaking responsibilities that the FTC would have if the AAA or ADPPA passed. The point is that AI should not be a complete black box to consumers, and consumers should have rights to know and object—much like they should have with collecting and processing their personal data. Furthermore, consumers should have a right of private action if AI-based systems harm them. And websites with a significant amount of AI-generated text and images should have the equivalent of a food nutrition label to let us know what AI-generated content is versus human generated.

We also need AI certifications. For instance, the finance industry has accredited certified public accountants (CPAs) and certified financial audits and statements, so we should have the equivalent for AI. And we need codes of conduct in the use of AI as well as industry standards. For example, the International Organization for Standardization (ISO) publishes quality management standards that organizations can adhere to for cybersecurity, food safety, and so on. Fortunately, a working group with ISO has begun developing a new standard for AI risk management. And in another positive development, the National Institute of Standards and Technology (NIST) released its initial framework for AI risk management in January 2023.

We must remind companies to have more diverse and inclusive design teams building AI. As Olga Russakovsky, assistant professor in the Department of Computer Science at Princeton University, said: “There are a lot of opportunities to diversify this pool [of people building AI systems], and as diversity grows, the AI systems themselves will become less biased.”

As regulators and lawmakers delve into antitrust issues concerning Big Tech firms, AI should not be overlooked. To paraphrase Wayne Gretzky, regulators need to skate where the puck is going, not where it has been. AI is where the puck is going in technology. Therefore, acquisitions of AI companies by Big Tech companies should be more closely scrutinized. In addition, the government should consider mandating open intellectual property for AI. For example, this could be modeled on the 1956 federal consent decree with Bell that required Bell to license all its patents royalty-free to other businesses. This led to incredible innovations such as the transistor, the solar cell, and the laser. It is not healthy for our economy to have the future of technology concentrated in a few firms’ hands.

Finally, our society and economy need to better prepare ourselves for the impact of AI on displacing workers through automation. Yes, we need to prepare our citizens with better education and training for new jobs in an AI world. But we need to be smart about this, as we can’t say let’s retrain everyone to be software developers, because only some have that skill or interest. Note also that AI is increasingly being built to automate the development of software programs, so even knowing what software skills should be taught in an AI world is critical. As economist Joseph E. Stiglitz pointed out, we have had problems managing smaller-scale changes in tech and globalization that have led to polarization and a weakening of our democracy, and AI’s changes are more profound. Thus, we must prepare ourselves for that and ensure that AI is a net positive for society.

Given that Big Tech is leading the charge on AI, ensuring its effects are positive should start with them. AI is incredibly powerful, and Big Tech is “all-in” with AI, but AI is fraught with risks if bias is introduced or if it’s built to exploit. And as I documented, Big Tech has had issues with its use of AI. This means that not only are the depth and breadth of the collection of our sensitive data a threat, but how Big Tech uses AI to process this data and to make automated decisions is also threatening.

Thus, in the same way we need to contain digital surveillance, we must also ensure Big Tech is not opening Pandora’s box with AI.

This article contains affiliate links; if you click such a link and make a purchase, we may earn a commission.

Synchron's BCI implants may help paralyzed patients reconnect with the world
https://aitesonics.com/bci-implant-severe-paralysis-synchron-medicine-stroke-160012833/
Tue, 16 Apr 2024 04:13:56 +0000

The post Synchron's BCI implants may help paralyzed patients reconnect with the world appeared first on Best News.

Dr. Tom Oxley visibly stiffens at the prospect of using brain-computer interface technology for something as gauche as augmenting able-bodied humans. “We’re not building a BCI to control Spotify or to watch Netflix,” the CEO of medical device startup Synchron tersely told Engadget over a video call last week.

“There’s all this hype and excitement about BCI, about where it might go,” Oxley continued. “But the reality is, what’s it gonna do for patients? We describe this problem for patients, not around wanting to super-augment their brain or body, but wanting to restore the fundamental agency and autonomy that [able-bodied people] take for granted.”

Around 31,000 Americans currently live with amyotrophic lateral sclerosis (ALS), with another 5,000 diagnosed every year. Nearly 300,000 Americans suffer from spinal cord paralysis, and approximately 18,000 more join those ranks annually. Thousands more are paralyzed by stroke and accident, losing their ability to see, hear or feel the world around them. And with the lack of motor control in their extremities, these Americans can also lose access to a critical component of modern life: their smartphone.

“[A smartphone] creates our independence and our autonomy,” Oxley said. “It’s communicating to each other, text messaging, emailing. It’s controlling the lights in your house, doing your banking, doing your shopping, all those things.”

“If you can control your phone again,” he said, “you can restore those elements of your lifestyle.”

So while Elon Musk promises a fantastical cyberpunk future where everybody knows Kung Fu and can upload their consciousness to the cloud on a whim, startups like Synchron (as well as Medtronic, Blackrock Neurotech, BrainGate and Precision Neuroscience) and countless academic research teams are working to put this transformative medical technology into clinical practice, reliably and ethically.

The best way to a man’s mind is through his jugular vein

Brooklyn-based Synchron made history in 2022 when it became the first company to successfully implant a BCI into a human patient as part of its pioneering COMMAND study performed in partnership with Mount Sinai Hospital. To date, the medical community has generally had just two options in capturing the myriad electrical signals that our brains produce: low-fidelity but non-invasive EEG wave caps, or high-fidelity Utah Array neural probes that require open-brain surgery to install.

Synchron’s Stentrode device provides a third: it is surgically guided up through a patient’s jugular vein to rest within a large blood vessel near their motor cortex, where its integrated array of sensors yields better-fidelity signal than an EEG cap without the messy implantation or eventual performance drop-off of probe arrays.

“We’re not putting penetrative electronics into the brain and so the surgical procedure itself is minimally invasive,” Dr. David Putrino, Director of Rehabilitation Innovation for the Mount Sinai Health System, explained to Engadget. “The second piece of it is, you’re not asking a neurologist to learn anything new … They know how to place stents, and you’re really asking to place a stent in a big vessel — it’s not a hard task.”

“These types of vascular surgeries in the brain are commonly performed,” said Dr. Zoran Nenadić, William J. Link Chair and Professor of Biomedical Engineering at the University of California, Irvine. “I think they’re clever using this route to deliver these implants into the human brain, which otherwise is an invasive surgery.”

Though the Stentrode’s signal quality is not quite on par with a probe array, it doesn’t suffer the signal degradation that arrays do. Quite the opposite, in fact. “When you use penetrative electrodes and you put them in the brain,” Putrino said, “gliosis forms around the electrodes and impedances change, signal quality goes down, you lose certain electrodes. In this case, as the electrode vascularizes into the blood vessel, it actually stabilizes and improves the recording over time.”

A device for those silent moments of terror

“We’re finally, actually, paying attention to a subset of individuals with disabilities who previously have not had technology available that gives them digital autonomy,” Putrino said. He points out that for many severely paralyzed people, folks who can perhaps wiggle a finger or toe, or who can use eye tracking technology, the communication devices at their disposal are situational at best. Alert buttons can shift out of reach, and eye tracking systems are largely stationary tools, unusable in cars.

“We communicate with these folks on a regular basis, and [these are] the fears that are brought up that this technology can help with,” Putrino recalled. “It is exactly in these silent moments, where it’s like, the eye tracking has been put away for the night and then you start to choke, how do you call someone in? Your call button or your communication device is pushed to the side and you see the nurse starting to prepare the wrong medication for you. How do you alert them? These moments happen often in a disabled person’s life and we don’t have an answer for these things.”

With a BCI, he continued, locked-in patients are no longer isolated. They can simply wake their digital device from sleep mode and use it to alert caregivers. ”This thing works outside, it works in different light settings, it works regardless of whether you’re laying flat on your back or sitting up in your chair,” Putrino said. “Versatile, continuous digital control is the goal.”

Reaching that goal is still at least half a decade away. “Our goal over the next five years is to get market approval, and then we’ll be ready to scale up at that point,” Oxley said. The rate of that scaling will depend on the company’s access to cath labs. These are facilities found in both primary and secondary level hospitals, so there are thousands of them around the country, Oxley said, far more than the handful of primary level hospitals that are equipped to handle open-brain BCI implantation surgeries.

A show of hands for another hole in your head

In 2021, Synchron conducted its SWITCH safety study for the Stentrode device itself, implanting it in four ALS patients and monitoring their health over the course of the next year. The study found the device to be “safe, with no serious adverse events that led to disability or death,” according to a 2022 press release. The Stentrode “stayed in place for all four patients and the blood vessel in which the device was implanted remained open.”

Buoyed by that success, Synchron launched its headline-grabbing COMMAND study last year, which uses the company’s entire brain.io system in six patients to help them communicate digitally. “We’re really trying to show that this thing improves quality of life and improves agency of the individual,” Putrino said. The team had initially expected the recruitment process, through which candidate patients are screened, to take five full years to complete.

Dr. Putrino was not prepared for the outpouring of interest, especially given the permanent nature of these tests and quality of life that patients might expect to have once they’re in. “Many of our patients have end-stage ALS, so being part of a trial is a non-trivial decision,” Putrino said. “That’s like, do you want to spend what maybe some of the last years of your life with researchers as opposed to with family members?”

“Is that a choice you want to make for folks who are considering the trial who have a spinal cord injury?” asked Putrino, as those folks are also eligible for implantation. “We have very candid conversations with them around, look, this is a gen one device,” he warns. “Do you want to wait for gen five because you don’t have a short life expectancy, you could live another 30 years. This is a permanent implant.”

Still, the public interest in Synchron’s BCI work has led to such a glut of interested patients that the team was able to perform its implantation surgery on the sixth and final patient of the study in early August — nearly 18 months ahead of schedule. The team will need to continue the study for at least another year (to meet minimum safety standards, as in the previous SWITCH study) but has already gotten permission from the NIH to extend its observation portion to the full original five years. This will give Synchron significantly more data to work with in the future, Putrino explained.

How we can avoid another Argus II SNAFU

Our Geordi LaForge visor future seemed a veritable lock in 2013, when Second Sight Medical Products received an FDA Humanitarian Use Device designation for its Argus II retinal prosthesis, two years after it received commercial clearance in Europe. The medical device, designed to restore at least rudimentary functional vision to people suffering profound vision loss from retinitis pigmentosa, was implanted in the patient’s retina and converted digital video signals it received from an external, glasses-mounted camera into the analog electrical impulses that the brain can comprehend — effectively bypassing the diseased portions of the patient’s ocular system.

With the technical blessing of the FDA in hand (Humanitarian Use cases are not subject to nearly the same scrutiny as full FDA approval), Second Sight filed for an IPO in 2013 and was listed on NASDAQ the following year. Seven years after filing, the company went belly up in 2020, declared itself out of business and wished the best of luck to the suckers who spent $150k to get its hardware hardwired into their skulls.

“Once you’re in that [Humanitarian Use] category, it’s kind of hard to go back and do all of the studies that are necessary to get the traditional FDA approvals to move forward,” Dr. An Do, Assistant Professor in the Department of Neurology at University of California, Irvine, told Engadget. “I think the other issue is that these are orphan diseases. There’s a very small group of people that they’re catering to.”

As IEEE Spectrum rightfully points out, one loose wire, one degraded connection or faulty lead, and these patients can potentially re-lose what little sight they had regained. There’s also the chance that the implant, without regular upkeep, eventually causes an infection or interferes with other medical procedures, requiring a costly, invasive surgery to remove.

“I am constantly concerned about this,” Putrino admitted. “This is a question that keeps me up at night. I think that, obviously, we need to make sure that companies can in good faith proceed to the next stage of their work as a company before they begin any clinical trials.”

He also calls on the FDA to expand its evaluations of BCI companies to potentially include examining the applicant’s ongoing financial stability. “I think that this is definitely a consideration that we need to think about because we don’t want to implant patients and then have them just lose this technology.”

“We always talk to our patients as we’re recruiting them about the fact that this is a permanent implant,” Putrino continued. “We make a commitment to them that they can always come to us for device related questions, even outside the scope of the clinical trial.”

But Putrino admits that even with the best intentions, companies simply cannot guarantee their customers of continued commercial success. “I don’t really know how we safeguard against the complete failure of a company,” he said. “This is just one of the risks that people are going to take coming in. It’s a complex issue and it’s one I worry about because we’re right here on the bleeding edge and it’s unclear if we have good answers to this once the technology goes beyond clinical trials.”

Luckily, the FDA does. As one agency official explained to Engadget, “the FDA’s decisions are intended to be patient-centric with the health and safety of device users as our highest priority.” Should a company go under, file for bankruptcy or otherwise be unable to provide the services it previously sold, in addition to potentially being ordered by the court to continue care for its existing patients, “the FDA may also take steps to protect patients in these circumstances. For example, the FDA may communicate to the public, recommendations for actions that health care providers and patients should take.”

The FDA official also notes that the evaluation process itself involves establishing whether an applicant “demonstrates reasonable assurance of safety and effectiveness of the device when used as intended in its environment of use for its expected life … FDA requirements apply to devices regardless of a firm’s decision to stop selling and distributing the device.”

The Synchron Switch BCI, for its part, is made from biologically inert materials that will eventually be reabsorbed into the body, “so even if Synchron disappeared tomorrow, the Switch BCI is designed to safely remain in the patient’s body indefinitely,” Oxley said. “The BCI runs on a software platform that is designed for stability and independent use, so patients can use the platform without our direct involvement.”

However, this approach “is not sufficient,” argued a 2021 op-ed in the AMA Journal of Ethics, which held that, given BCIs’ potential influence on individuals and society, “the nature of what is safe and effective and the balance between risk and benefit require special consideration.” “The line between therapy and enhancement for BCIs is difficult to draw precisely. Therapeutic devices function to correct or compensate for some disease state, thereby restoring one to ‘normality’ or the standard species-typical form.” But what, and more importantly who, gets to define normality? How far below the mean IQ can you get before forcibly raising your score through BCI implantation is deemed worthwhile to society?

The op-ed’s authors concede that “While BCIs raise multiple ethical concerns, such as how to define personhood, respect for autonomy, and adequacy of informed consent, not all ethical issues justifiably form the basis of government regulation.” The FDA’s job is to test devices for safety and efficacy, not equality, after all. As such the authors instead argue that, “a new committee or regulatory body with humanistic aims, including the concerns of both individuals and society, ought to be legislated at the federal level in order to assist in regulating the nature, scope, and use of these devices.”

This article contains affiliate links; if you click such a link and make a purchase, we may earn a commission.

The FCC will vote to restore net neutrality later this month
https://aitesonics.com/the-fcc-will-vote-to-restore-net-neutrality-later-this-month-161813609/
Sat, 13 Apr 2024 11:21:05 +0000

The post The FCC will vote to restore net neutrality later this month appeared first on Best News.

The Federal Communications Commission (FCC) plans to vote to restore net neutrality later this month. With Democrats finally holding an FCC majority in the final year of President Biden’s first term, the agency can fulfill a 2021 executive order from the President and bring back the Obama-era rules that the Trump administration’s FCC gutted in 2017.

The FCC plans to hold the vote during a meeting on April 25. Net neutrality treats broadband services as an essential resource under Title II of the Communications Act, giving the FCC greater authority to regulate the industry. It lets the agency prevent ISPs from anti-consumer behavior like unfair pricing, blocking or throttling content and providing pay-to-play “fast lanes” to internet access.

Democrats had to wait three years to enact Biden’s 2021 executive order to reinstate the net neutrality rules passed in 2015 by President Obama’s FCC. The stalled confirmation process of Gigi Sohn, Biden’s nominee for telecommunications regulator, played no small part in that delay. She withdrew her nomination in March 2023 following what she called “unrelenting, dishonest and cruel attacks.”

Republicans (and Democratic Senator Joe Manchin) opposed her confirmation through a lengthy 16-month process. During that period, telecom lobbying dollars flowed freely and Republicans cited past Sohn tweets critical of Fox News, along with vocal opposition from law enforcement, as justification for blocking the confirmation. Democrats finally regained an FCC majority with the swearing-in of Anna Gomez in late September, near the end of Biden’s third year in office.

“The pandemic proved once and for all that broadband is essential,” FCC Chairwoman Rosenworcel wrote in a press release. “After the prior administration abdicated authority over broadband services, the FCC has been handcuffed from acting to fully secure broadband networks, protect consumer data, and ensure the internet remains fast, open, and fair. A return to the FCC’s overwhelmingly popular and court-approved standard of net neutrality will allow the agency to serve once again as a strong consumer advocate of an open internet.”

US bill proposes AI companies list what copyrighted materials they use
https://aitesonics.com/us-bill-proposes-ai-companies-list-what-copyrighted-materials-they-use-123058589/
Sat, 13 Apr 2024 11:17:05 +0000

The post US bill proposes AI companies list what copyrighted materials they use appeared first on Best News.

The debate over using copyrighted materials in AI training systems rages on — as does uncertainty over which works AI even pulls data from. US Congressman Adam Schiff is attempting to answer the latter, introducing the Generative AI Copyright Disclosure Act on April 9, Billboard reports. The bill would require AI companies to outline every copyrighted work in their datasets.

“AI has the disruptive potential of changing our economy, our political system, and our day-to-day lives. We must balance the immense potential of AI with the crucial need for ethical guidelines and protections,” said Congressman Schiff in a statement. He added that the bill “champions innovation while safeguarding the rights and contributions of creators, ensuring they are aware when their work contributes to AI training datasets. This is about respecting creativity in the age of AI and marrying technological progress with fairness.” Organizations such as the Recording Industry Association of America (RIAA), SAG-AFTRA and WGA have shown support for the bill.

If the Generative AI Copyright Disclosure Act passes, companies would need to file all relevant data-use information with the Register of Copyrights at least 30 days before introducing the AI tool to the public. They would also have to provide the same information retroactively for any existing tools and make updates if they considerably altered datasets. Failure to do so would result in the Copyright Office issuing a fine; the exact amount would depend on a company’s size and past infractions. To be clear, this wouldn’t do anything to prevent AI creators from using copyrighted work, but it would provide transparency on which materials they’ve taken from. The ambiguity over use was on full display in a March Bloomberg interview with OpenAI CTO Mira Murati, who claimed she was unsure whether the tool Sora took data from YouTube, Facebook or Instagram posts.

The bill could even give companies and artists a clearer picture when speaking out against or suing for copyright infringement — a fairly common occurrence. Take the New York Times, which sued OpenAI and Microsoft for using its articles to train chatbots without an agreement or compensation, or Sarah Silverman, who sued OpenAI (a frequent defendant) and Meta for using her books and other works to train their AI models.

The entertainment industry has also been leading calls for AI protections. AI regulation was a big sticking point in the SAG-AFTRA and WGA strikes last year, ending only when detailed policies around AI went into their contracts. SAG-AFTRA has recently voiced its support for California bills requiring consent from actors to use their avatars and from heirs to make AI versions of deceased individuals. It’s no surprise that Congressman Schiff represents California’s 30th district, which includes Hollywood, Burbank and Universal City.

Musicians are echoing their fellow creatives, with over 200 artists signing an open letter in April that calls for AI protections, the Guardian reported. “This assault on human creativity must be stopped,” the letter, issued by the Artist Rights Alliance, states. “We must protect against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem.” Billie Eilish, Jon Bon Jovi and Pearl Jam were among the signatories.

The post US bill proposes AI companies list what copyrighted materials they use appeared first on Best News.

FAA grounds Starship until SpaceX takes 63 'corrective actions'
Sat, 13 Apr 2024 10:09:13 +0000

SpaceX's latest Starship test launch was its last for the foreseeable future. The FAA announced Friday that it has closed its investigation into April's mishap, but that the company will not be allowed to resume test launches until it addresses a list of 63 "corrective actions" for its launch system.

"The vehicle’s structural margins appear to be better than we expected," SpaceX CEO Elon Musk joked with reporters in the wake of the late April test launch. Per a report from the US Fish and Wildlife Service, however, the failed launch resulted in a 385-acre debris field that saw concrete chunks flung more than 2,600 feet from the launchpad, a 3.5-acre wildfire and "a plume cloud of pulverized concrete that deposited material up to 6.5 miles northwest of the pad site.”

"Corrective actions include redesigns of vehicle hardware to prevent leaks and fires, redesign of the launch pad to increase its robustness, incorporation of additional reviews in the design process, additional analysis and testing of safety critical systems and components including the Autonomous Flight Safety System, and the application of additional change control practices," the FAA release reads. Furthermore, the FAA says that SpaceX will have to not only complete that list but also apply for and receive a modification to its existing license "that addresses all safety, environmental and other applicable regulatory requirements prior to the next Starship launch." In short, SpaceX has reached the "finding out" part.

SpaceX released a blog post shortly after the FAA's announcement was made public, obliquely addressing the issue. "Starship’s first flight test provided numerous lessons learned," the post reads, crediting its "rapid iterative development approach" with both helping develop all of SpaceX's vehicles to this point and "directly contributing to several upgrades being made to both the vehicle and ground infrastructure."

The company admitted that its Autonomous Flight Safety System (AFSS), which is designed to self-destruct a rocket when it goes off its flightpath but before it hits the ground, suffered "an unexpected delay" — that lasted 40 seconds. SpaceX did not elaborate on what cause, if any, it found for the fault but has reportedly since "enhanced and requalified the AFSS to improve system reliability."

"SpaceX is also implementing a full suite of system performance upgrades unrelated to any issues observed during the first flight test," the blog reads. Those improvements include a new hot-stage separation system which will more effectively decouple the first and second stages, a new electronic "Thrust Vector Control (TVC) system" for its Raptor engines, and "significant upgrades" to the orbital launch mount and pad system which just so happened to have failed in the first test but is, again, completely unrelated to this upgrade. Whether those improvements overlap with the 63 corrective actions the FAA is imposing could not be confirmed at the time of publication, as the FAA had not publicly released them.

The post FAA grounds Starship until SpaceX takes 63 'corrective actions' appeared first on Best News.

AI tech leaders make all the right noises at cozy closed-door Senate meeting
Sat, 13 Apr 2024 10:00:04 +0000

The CEOs of leading AI companies — including Meta's Mark Zuckerberg, Microsoft's Satya Nadella, Alphabet's Sundar Pichai, Tesla's Elon Musk and OpenAI's Sam Altman — appeared before Congress once again on Wednesday. But instead of the normal bombast and soapboxing we see during public hearings about the dangers of unfettered AI development, this conversation reportedly took on far more muted tones.

In all, more than 20 tech and civil society leaders spoke with lawmakers at Wednesday's meeting, organized by Senate Majority Leader Chuck Schumer, to discuss how AI development should be regulated moving forward. Senators Martin Heinrich (D-NM), Todd Young (R-IN) and Mike Rounds (R-SD) were also in attendance and are reportedly working with the majority leader to draft additional proposals.

The word of the day: consensus. “First, I asked everyone in the room, ‘Is government needed to play a role in regulating AI?’ and every single person raised their hands even though they had diverse views,” Schumer told reporters Wednesday.

But as Bloomberg reports, "areas of disagreement were apparent throughout the morning session" with Zuckerberg, Altman and Bill Gates all differing on the risks posed by open-source AI (three guesses as to where old Monopoly Bill came down on that issue). True to form, Elon Musk got into it with "Berkeley researcher Deb Raji for appearing to downplay concerns about AI-powered self-driving cars, according to one of the people in the room," Bloomberg reports.

“Some people mentioned licensing and testing and other ways of regulation … there were various suggestions as to how to do it, but no consensus emerged yet,” Schumer said following the event.

“That’s probably the worst wedding to try to do seating for,” Humane Intelligence CEO Rumman Chowdhury said of the event as an attendee. She also noted that Elon Musk and Mark Zuckerberg did not interact and sat at opposite ends of the room-width table — presumably to keep the two bloodthirsty cagefighting CEOs from throwing down and Royal Rumbling the esteemed proceedings.

The meeting participants generally agreed that the federal government needs to “help to deal with what we call transformational innovation,” one unnamed participant suggested. That could entail creating a $32 billion fund that would assist with “the kind of stuff that maximizes the benefits of AI,” Schumer told reporters.

Following the seven-hour event, Facebook released Mark Zuckerberg's official remarks. They cover the company's long-standing talking points about developing and rolling out the technology "in a responsible manner," coordinating its efforts with civil society leaders (instead of say, allegedly fomenting genocide like that one time in Myanmar) and ensuring "that America continue to lead in this area and define the technical standard that the world uses."

In a departure from his rhetoric in recent years warning of perceived growing threats from China, Zuckerberg pointed to a new boogieman: "the next leading open source model … out of Abu Dhabi." This appears to have been a thinly-veiled reference to the UAE's recent entrance into AI development.

Elon Musk, famed libertarian and bloodsworn enemy of the FTC, warned reporters corralled outside of the hearing about the "civilizational risk" posed by AI. He wants a Federal Department of AI to help regulate the industry. He reportedly envisions it operating similarly to the FAA or SEC (two more agencies Musk has been variously scolded by) but did not elaborate beyond that. “I think this meeting could go down in history as important to the future of civilization,” he told reporters.

The post AI tech leaders make all the right noises at cozy closed-door Senate meeting appeared first on Best News.

FTC starts claims process for Fortnite players tricked into making unwanted purchases
Fri, 05 Apr 2024 09:03:08 +0000

As part of a $520 million settlement with the Federal Trade Commission, Epic Games will be forced to provide refunds to Fortnite players who were allegedly tricked into making unintended purchases on the platform. About $245 million has been specifically earmarked for these refunds. The regulator has started notifying more than 37 million people via email that they may be eligible for compensation.

The entire process may take one month to complete. The FTC says customers who believe they were impacted will have until January 17, 2024, to submit a claim, which they can do directly on the FTC’s website. The FTC notes that this is one of the largest refunds in a gaming-related case to date.

The FTC previously claimed that Epic Games used deceptive tactics to get Fortnite players to make unintended in-game purchases. As part of a complaint first announced by the FTC in December of last year, the agency says the video game-making company made it easy for underage players to rack up charges “without parental consent” and also “locked the accounts of consumers” that disputed unauthorized charges. Because Epic Games violated the Children’s Online Privacy Protection Act (COPPA), it was ordered to pay $275 million in addition to the consumer refunds.

The post FTC starts claims process for Fortnite players tricked into making unwanted purchases appeared first on Best News.

The US Senate and Silicon Valley reconvene for a second AI Insight Forum
Fri, 05 Apr 2024 08:21:32 +0000

Senator Charles Schumer (D-NY) once again played host to Silicon Valley’s AI leaders on Tuesday as the US Senate reconvened its AI Insight Forum for a second time. On the guest list this go around: manifesto enthusiast Marc Andreessen and venture capitalist John Doerr, as well as Max Tegmark of the Future of Life Institute and NAACP CEO Derrick Johnson. On the agenda: “the transformational innovation that pushes the boundaries of medicine, energy, and science, and the sustainable innovation necessary to drive advancements in security, accountability, and transparency in AI,” according to a release from Sen. Schumer’s office.

Upon exiting the meeting Tuesday, Schumer told the assembled press, "it is clear that American leadership on AI can’t be done on the cheap. Almost all of the experts in today’s Forum called for robust, sustained federal investment in private and public sectors to achieve our goals of American-led transformative and sustainable innovation in AI."

Per estimates from the National Security Commission on Artificial Intelligence, paying for that could cost around $32 billion a year. However, Schumer believes that those funding challenges can be addressed by "leveraging the private sector by employing new and innovative funding mechanisms – like the Grand Challenges prize idea."

"We must prioritize transformational innovation, to help create new vistas, unlock new cures, improve education, reinforce national security, protect the global food supply, and more," Schumer remarked. But in doing so, he added, the country must act sustainably in order to minimize harms to workers, civil society and the environment. "We need to strike a balance between transformational and sustainable innovation," Schumer said. "Finding this balance will be key to our success."

Senators Brian Schatz (D-HI) and John Kennedy (R-LA) also got in on the proposed regulatory action Tuesday, introducing legislation that would provide more transparency on AI-generated content by requiring clear labeling and disclosures. Such technology could resemble the Content Credentials tag that the C2PA and CAI industry advocacy groups are developing.

"Our bill is simple," Senator Schatz said in a press statement. "If any content is made by artificial intelligence, it should be labeled so that people are aware and aren’t fooled or scammed.”

The Schatz-Kennedy AI Labeling Act, as they're calling it, would require generative AI system developers to clearly and conspicuously disclose AI-generated content to users. Those developers, and their licensees, would also have to take "reasonable steps" to prevent "systematic publication of content without disclosures." The bill would also establish a working group to create non-binding technical standards to help social media platforms automatically identify such content.

“It puts the onus where it belongs: on the companies and not the consumers,” Schatz said on the Senate floor Tuesday. “Labels will help people to be informed. They will also help companies using AI to build trust in their content.”

Tuesday’s meeting follows the recent introduction of new AI legislation, dubbed the Artificial Intelligence Advancement Act of 2023 (S. 3050). Senators Martin Heinrich (D-NM), Mike Rounds (R-SD), Charles Schumer (D-NY) and Todd Young (R-IN) all co-sponsored the bill. The bill proposes AI bug bounty programs and would require a vulnerability analysis study for AI-enabled military applications. Its passage into law would also launch a report into AI regulation in the financial services industry (which the head of the SEC had recently been lamenting) as well as a second report on data sharing and coordination.

“It’s frankly a hard challenge,” SEC Chairman Gary Gensler told The Financial Times recently, speaking on the challenges the financial industry faces in AI adoption and regulation. “It’s a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it’s just in the nature of what we do.”

"Working people are fighting back against artificial intelligence and other technology used to eliminate workers or undermine and exploit us," AFL-CIO President Liz Shuler said at the conclusion of Tuesday's forum. "If we fail to involve workers and unions across the entire innovation process, AI will curtail our rights, threaten good jobs and undermine our democracy. But the responsible adoption of AI, properly regulated, has the potential to create opportunity, improve working conditions and build prosperity."

The forums are part of Senator Schumer’s SAFE Innovation Framework, which his office debuted in June. “The US must lead in innovation and write the rules of the road on AI and not let adversaries like the Chinese Communist Party craft the standards for a technology set to become as transformative as electricity,” the program announcement reads.

While Andreessen calls for AI advancement at any cost and Tegmark continues to advocate for a developmental “time out,” rank and file AI industry workers are also fighting to make their voices heard ahead of the forum. On Monday, a group of employees from two dozen leading AI firms published an open letter to Senator Schumer, demanding Congress take action to safeguard their livelihoods from the “dystopian future” that Andreessen’s screed, for example, would require.

“Establishing robust protections related to workplace technology and rebalancing power between workers and employers could reorient the economy and tech innovation toward more equitable and sustainable outcomes,” the letter authors argue.

Senator Ed Markey (D-MA) and Representative Pramila Jayapal (WA-07) had, the previous month, called on leading AI companies to “answer for the working conditions of their data workers, laborers who are often paid low wages and provided no benefits but keep AI products online.”

"We covered a lot of good ground today, and I think we’ll all be walking out of the room with a deeper understanding of how to approach American-led AI innovation," Schumer said Tuesday. "We’ll continue this conversation in weeks and months to come – in more forums like this and committee hearings in Congress – as we work to develop comprehensive, bipartisan AI legislation."

The post The US Senate and Silicon Valley reconvene for a second AI Insight Forum appeared first on Best News.

The White House will reportedly reveal a ‘sweeping’ AI executive order on October 30
Fri, 05 Apr 2024 08:21:04 +0000

The Biden Administration is reportedly set to unveil a broad executive order on artificial intelligence next week. According to The Washington Post, the White House’s “sweeping order” would use the federal government’s purchasing power to enforce requirements on AI models before government agencies can use them. The order is reportedly scheduled for Monday, October 30, two days before an international AI Safety Summit in the UK.

The order will allegedly require advanced AI models to undergo a series of assessments before federal agencies can adopt them. In addition, it would ease immigration for highly skilled workers, which was heavily restricted during the Trump administration. Federal agencies, including the Defense Department, Energy Department and intelligence branches, would also have to assess how they might incorporate AI into their work. The report notes that the analyses would emphasize strengthening the nation’s cyber defenses.

On Tuesday evening, the White House reportedly sent invitations for a “Safe, Secure, and Trustworthy Artificial Intelligence” event for Monday, October 30, hosted by President Biden. The Washington Post indicates that the executive order isn’t finalized, and details could still change.

Meanwhile, European officials are working on AI regulations across the Atlantic, aiming for a finalized package by the end of the year. The US Congress is also in the earlier stages of drafting AI regulations. Senator Charles Schumer (D-NY) hosted AI leaders on Tuesday at the second AI Insight Forum.

AI regulation is currently one of the most buzzed-about topics in the tech world. Generative AI has rapidly advanced in the last two years as image generators like Midjourney and DALL-E 3 emerged, producing convincing photos that could be disseminated for disinformation and propaganda (as some political campaigns have already done). Meanwhile, OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Bard and other advanced large language model (LLM) chatbots have arguably sparked even more concern, allowing anyone to compose fairly convincing text passages while answering questions that may or may not be truthful. There are even AI models for cloning celebrities’ voices.

In addition to misinformation and its potential impact on elections, generative AI also sparks worries about the job market, especially for artists, graphic designers, developers and writers. Several high-profile media outlets, most infamously CNET, have been caught using AI to compose entire error-ridden articles with only the thinnest of disclosures.

The post The White House will reportedly reveal a ‘sweeping’ AI executive order on October 30 appeared first on Best News.

Sweeping White House executive order takes aim at AI's toughest challenges
Fri, 05 Apr 2024 08:17:04 +0000

The Biden Administration unveiled its ambitious next steps in addressing and regulating artificial intelligence development on Monday. Its expansive new executive order (EO) seeks to establish further protections for the public as well as improve best practices for federal agencies and their contractors.

“The President several months ago directed his team to pull every lever,” a senior administration official told reporters on a recent press call. “That’s what this order does, bringing the power of the federal government to bear in a wide range of areas to manage AI’s risk and harness its benefits … It stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world and like all executive orders, this one has the force of law.”

These actions will be introduced over the next year with smaller safety and security changes happening in around 90 days and with more involved reporting and data transparency schemes requiring nine to 12 months to fully deploy. The administration is also creating an “AI council,” chaired by White House Deputy Chief of Staff Bruce Reed, who will meet with federal agency heads to ensure that the actions are being executed on schedule.

Public safety

“In response to the President’s leadership on the subject, 15 major American technology companies have begun their voluntary commitments to ensure that AI technology is safe, secure and trustworthy before releasing it to the public,” the senior administration official said. “That is not enough.”

The EO directs the establishment of new standards for AI safety and security, including reporting requirements for developers whose foundation models might impact national or economic security. Those requirements will also apply in developing AI tools to autonomously implement security fixes on critical software infrastructure.

By leveraging the Defense Production Act, this EO will “require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests,” per a White House press release. That information must be shared prior to the model being made available to the public, which could help reduce the rate at which companies unleash half-baked and potentially deadly machine learning products.

In addition to the sharing of red team test results, the EO also requires disclosure of the system’s training runs (essentially, its iterative development history). “What that does is that creates a space prior to the release… to verify that the system is safe and secure,” officials said.

Administration officials were quick to point out that this reporting requirement will not impact any AI models currently available on the market, nor will it impact independent or small- to medium-size AI companies moving forward, as the threshold for enforcement is quite high. It’s geared specifically for the next generation of AI systems that the likes of Google, Meta and OpenAI are already working on, with enforcement kicking in for models trained using more than 10^26 floating-point operations, a threshold currently beyond the scale of any existing AI model. “This is not going to catch AI systems trained by graduate students, or even professors,” the administration official said.

What’s more, the EO will encourage the Departments of Energy and Homeland Security to address AI threats “to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks,” per the release. “Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.” In short, any developers found in violation of the EO can likely expect a prompt and unpleasant visit from the DoE, FDA, EPA or other applicable regulatory agency, regardless of their AI model’s age or processing speed.

In an effort to proactively address the decrepit state of America’s digital infrastructure, the order also seeks to establish a cybersecurity program, based loosely on the administration’s existing AI Cyber Challenge, to develop AI tools that can autonomously root out and shore up security vulnerabilities in critical software infrastructure. It remains to be seen whether those systems will be able to address the concerns about misbehaving models that SEC head Gary Gensler recently raised.

AI watermarking and cryptographic validation

We’re already seeing the normalization of deepfake trickery and AI-empowered disinformation on the campaign trail. So, the White House is taking steps to ensure that the public can trust the text, audio and video content that it publishes on its official channels. The public must be able to easily validate whether the content they see is AI-generated or not, argued White House officials on the press call.

The Department of Commerce is in charge of that effort and is expected to work closely with existing industry advocacy groups like the C2PA and its sister organization, the CAI, to develop and implement a watermarking system for federal agencies. “We aim to support and facilitate and help standardize that work [by the C2PA],” administration officials said. “We see ourselves as plugging into that ecosystem.”

Officials further explained that the government is supporting the underlying technical standards and practices that will lead to digital watermarking’s wider adoption — similar to the work it did around developing the HTTPS ecosystem and in getting both developers and the public on board with it. This will help federal officials achieve their other goal of ensuring that the government’s official messaging can be relied upon.

Civil rights and consumer protections

The first Blueprint for an AI Bill of Rights that the White House released last October directed agencies to “combat algorithmic discrimination while enforcing existing authorities to protect people’s rights and safety,” the administration official said. “But there’s more to do.”

The new EO will require guidance be extended to “landlords, federal benefits programs and federal contractors” to prevent AI systems from exacerbating discrimination within their spheres of influence. It will also direct the Department of Justice to develop best practices for investigating and prosecuting civil rights violations related to AI, as well as, according to the announcement, “the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.”

Additionally, the EO calls for prioritizing federal support to accelerate development of privacy-preserving techniques that would enable future large language models to be trained on large datasets without the current risk of leaking personal details that those datasets might contain. These solutions could include “cryptographic tools that preserve individuals’ privacy,” developed with assistance from the Research Coordination Network and National Science Foundation. The executive order also reiterates its calls for bipartisan legislation from Congress addressing the broader privacy issues that AI systems present for consumers.

In terms of healthcare, the EO states that the Department of Health and Human Services will establish a safety program that tracks and remedies unsafe, AI-based medical practices. Educators will also see support from the federal government in using AI-based educational tools like personalized chatbot tutoring.

Worker protections

The Biden administration concedes that while the AI revolution is a decided boon for business, its capabilities make it a threat to worker security through job displacement and intrusive workplace surveillance. The EO seeks to address these issues with “the development of principles and employer best practices that mitigate the harms and maximize the benefit of AI for workers,” an administration official said. “We encourage federal agencies to adopt these guidelines in the administration of their programs.”

The EO will also direct the Department of Labor and the Council of Economic Advisers to study both how AI might impact the labor market and how the federal government might better support workers “facing labor disruption” moving forward. Administration officials also pointed to the potential benefits that AI might bring to the federal bureaucracy, including cutting costs and increasing cybersecurity efficacy. “There’s a lot of opportunity here, but we have to ensure the responsible government development and deployment of AI,” an administration official said.

To that end, the administration is launching a new federal jobs portal, AI.gov, on Monday, which will offer information and guidance on available fellowship programs for people looking for work with the federal government. “We’re trying to get more AI talent across the board,” an administration official said. “Programs like the US Digital Service, the Presidential Innovation Fellowship and USAJobs — doing as much as we can to get talent in the door.” The White House is also looking to expand existing immigration rules to streamline visa criteria, interviews and reviews for people trying to move to and work in the US in these advanced industries.

The White House reportedly did not brief the industry on this particular swath of radical policy changes, though administration officials did note that they had already been collaborating extensively with AI companies on many of these issues. The Senate held its second AI Insight Forum event last week on Capitol Hill, while Vice President Kamala Harris is scheduled to speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak on Tuesday.

At an event hosted by The Washington Post on Thursday, Senate Majority Leader Chuck Schumer (D-NY) was already arguing that the executive order did not go far enough and could not be considered an effective replacement for congressional action, which to date, has been slow in coming.

“There’s probably a limit to what you can do by executive order,” Schumer told WaPo. “They [the Biden administration] are concerned, and they’re doing a lot regulatorily, but everyone admits the only real answer is legislative.”

The post Sweeping White House executive order takes aim at AI's toughest challenges appeared first on Best News.
