The use of facial recognition technology, a form of biometric artificial intelligence, is growing across the U.S. The systems offer an efficient security tool that can identify people by measuring their facial features, but the technology has drawn notable criticism.
Police departments, the health care industry, and companies looking to fight back against cyber fraud have rolled out the technology in recent years to bolster security measures. The tech is far from new, with its roots stretching back to the mid-1960s, when researchers in Palo Alto pioneered training computers to recognize faces, and has exploded in use since around 2010.
Today, machine learning algorithms – a subset of artificial intelligence that uses data and algorithms to mimic how humans learn – have fine-tuned the technology. The software can map facial features in a photo or video, cross-analyze whether two images show the same person, or even pick an individual out of a crowd, Amazon Web Services explains.
“Machines use computer vision to identify people, places, and things in images with accuracy at or above human levels and with much greater speed and efficiency. Using complex artificial intelligence (AI) technology, computer vision automates extraction, analysis, classification, and understanding of useful information from image data,” according to the Amazon subsidiary.
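In practice, the cross-analysis step typically reduces each face to a numeric embedding and compares embeddings with a similarity score. A minimal sketch of that comparison, with made-up four-dimensional embeddings and an illustrative threshold standing in for a real model's output:

```python
import math

# Hypothetical 4-dimensional face embeddings; real systems use
# hundreds of dimensions produced by a trained neural network.
known_face = [0.61, 0.22, 0.54, 0.53]
probe_face = [0.60, 0.24, 0.55, 0.52]

def cosine_similarity(a, b):
    """Score how closely two face embeddings align; 1.0 is identical."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A tuned threshold decides whether two images count as the same person.
THRESHOLD = 0.95
score = cosine_similarity(known_face, probe_face)
print(f"similarity={score:.3f}, match={score >= THRESHOLD}")
```

The threshold is the sensitive part: set it too low and the system produces false matches of the kind critics describe below; set it too high and it fails to recognize genuine matches.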
Output of an Artificial Intelligence system from Google Vision, performing Facial Recognition on a photograph of a man, with facial features identified and facial bounding boxes present, San Ramon, California, Nov. 22, 2019. (Smith Collection/Gado/Getty Images)
The tech is being deployed to fight fraud: some companies have users verify their identity with their face, ATMs use it to authenticate customers, and doctors use it to access patient records. A New York City supermarket has rolled out the tech to catch shoplifters, and Madison Square Garden Entertainment has used the recognition software to identify and eject event-goers from venues such as Radio City Music Hall and Madison Square Garden.
For police departments, the use of the tech is widespread, with the CEO of facial recognition firm Clearview AI telling the BBC last month that police departments in the U.S. have used its software nearly 1 million times.
A man uses an iris recognition scanner. (Ian Waldie/Getty Images)
Facial recognition technology, however, has come under scrutiny from some local leaders and civil liberties groups amid accusations that it violates people’s privacy and civil liberties.
In Anchorage, Alaska, just this week, local leaders passed a measure restricting the use of facial recognition in the city, citing the need to protect privacy and prevent the technology from being misused.
Police departments’ use of the software has especially faced condemnation, as a handful of people across the country report they were mistakenly arrested due to the technology.
Robert Williams, for example, spent 30 hours in jail back in 2020 after Michigan police allegedly ran a blurry photo of a suspect who stole watches from a store through facial recognition software and determined Williams was behind the crime.
“The day I was arrested, I had no idea it was facial recognition,” Williams told Newsweek this month. “I was arrested for no reason.”
His case was ultimately dismissed, but he and the ACLU are suing the department over the arrest.
Such instances of mistaken identity and arrest have played out a handful of times, including in November, when Georgia man Randall Reid was arrested on theft warrants out of Louisiana despite saying he had never visited the state, Newsweek reported.
“Police reliance on flawed face recognition technology has resulted in repeated arrests of people for crimes they had absolutely nothing to do with. This technology makes us less secure, not more,” Nathan Freed Wessler, deputy director of ACLU’s Speech, Privacy, and Technology Project, told Fox News Digital of the tech.
“Local lawmakers from Maine to Alaska have already hit the brakes on this dangerous technology by putting it off limits to police. The time for additional cities and states to take action to prevent government abuse is now,” Freed Wessler added.
Overseas, the European Parliament called for a ban in 2021 on police use of facial recognition in public spaces, noting that while the tech could help keep residents safe, it risked their rights to privacy and freedom of movement.
“AI applications may offer great opportunities in the field of law enforcement … thereby contributing to the safety and security of EU citizens, while at the same time they may entail significant risks for the fundamental rights of people,” the European legislative body said.
A man in a mask attends a protest against the use of police facial recognition cameras at the Cardiff City Stadium for the Cardiff City v Swansea City Championship match on Jan. 12, 2020 in Cardiff, Wales. (Photo by Matthew Horwood/Getty Images)
Meanwhile, the artificial intelligence community has made strides in recent months on building more powerful systems across the board.
Large language models – deep learning systems trained on copious amounts of text – have become wildly popular since OpenAI’s release of ChatGPT last year. The chatbot can mimic human conversation based on the prompts it is given and can execute various tasks, such as writing short stories, composing emails, answering questions and even coming up with recipes.
For facial recognition specifically, market forecasts suggest use of the technology will only grow. A study published last month predicts that facial recognition market revenue will increase from $5.1 billion in 2022 to $19.3 billion in 2032, according to Markets.us.
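For context, the two forecast figures imply a compound annual growth rate of roughly 14 percent over the 10-year window; the revenue numbers are the report's, the arithmetic is a straightforward check:

```python
# Implied compound annual growth rate (CAGR) from the forecast figures.
revenue_2022 = 5.1   # billions USD
revenue_2032 = 19.3  # billions USD
years = 10

cagr = (revenue_2032 / revenue_2022) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 14% per year
```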