
A US Agency Rejected Face Recognition--and Landed in Big Trouble

Mar 2023



In June 2021, Dave Zvenyach, director of a group tasked with improving digital access to US government services, sent a Slack message to his team. He'd decided that Login.gov, which provides a secure way to access dozens of government apps and websites, wouldn't use selfies and face recognition to verify the identity of people creating new accounts. "The benefits of liveness/selfie do not outweigh any discriminatory impact," he wrote, referring to the process of asking users to upload a selfie and photo of their ID so that algorithms can compare the two.

Zvenyach's rejection of face recognition, detailed in a report this month by the Office of the Inspector General of the General Services Administration, the agency that houses Login.gov, saw a government official draw a line in the sand to protect citizens from discrimination by algorithms. Face recognition technology has become more accurate, but many systems have been found to work less reliably for women with dark skin, people who identify as Asian, or people with a nonbinary gender identity.

Yet Zvenyach's pronouncement also put Login.gov and US agencies using the service at odds with federal security guidelines. For access to some sensitive data or services, they require that a person's identity be confirmed against a government ID, either in person or remotely using a biometric such as fingerprint or face recognition.

The inspector general's report finds that the GSA misled 22 agencies paying for use of Login.gov by claiming its service was fully compliant with National Institute of Standards and Technology requirements when it was not. An official from one federal agency told OIG investigators that Login.gov not complying with the standard left their agency at greater risk of fraud. Zvenyach did not respond to questions from WIRED about the report.

Though Zvenyach left the GSA in September 2022, and a new Login.gov director was appointed that same month, spokesperson Channing Grate says that the service will continue to avoid use of face recognition "until we have confidence that it can be deployed equitably and without causing harm to vulnerable populations." That leaves Login.gov out of compliance with NIST requirements, although the standard is being revised, and a new draft calls for an alternative to face recognition to be offered.

The allegations of misconduct at the GSA come at a time of renewed scrutiny of US government use of face recognition for administrative purposes. Migrants at the US-Mexico border have complained that a new app from the Department of Homeland Security, which uses selfies and face recognition to speed up asylum applications, functions poorly for people with dark skin. Civil liberties groups have long argued that the human rights threats posed by face recognition outweigh the benefits of its use.

The report from the GSA's inspector general says that Zvenyach notified other agencies relying on Login.gov that its lack of face recognition put them out of compliance with NIST requirements in early 2022, after a WIRED article drew attention to Login.gov's face recognition policy.

In January that year, an Internal Revenue Service contract for online account verification with startup ID.me, which uses selfies and face recognition to verify new accounts, triggered public backlash over discrimination and privacy concerns. A WIRED story on the NIST standard driving use of the technology referred to Login.gov documentation that said it sometimes asked users to upload selfies for checking against an ID.

The GSA informed WIRED after publication that Login.gov's documentation was inaccurate and Login.gov did not use face recognition, and the article was updated. The OIG report says that a few days later, in early February, seven months after his internal message on face recognition, Zvenyach wrote to federal agencies that were using Login.gov to inform them that it was not in fact compliant with NIST requirements, due to his group's stance on face recognition.
"We have made the decision not to use facial recognition, liveness detection, or any other emerging technology in connection with government benefits and services until rigorous review has given us confidence that we can do so equitably and without causing harm to vulnerable populations," he wrote. The report says that Zvenyach later told investigators he had no knowledge of NIST requirements but that Login.gov leaders knew they were out of compliance as early as 2020.

Those NIST requirements, aimed at curbing identity fraud, attempt to solve a tricky problem. When a person accesses a government service, the agency needs to check who they are, a process known as proofing. In person, you can just pull out an identification card for verification, but online it's more difficult. For sensitive data or access, the NIST's digital identity standards call for remote digital proofing, which uses face recognition to compare a smartphone selfie with a photo on an ID card, and also liveness detection, which analyzes an image to detect whether it contains a real live human or is fake.
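In rough terms, the automated selfie-to-ID comparison at the heart of remote proofing boils down to extracting a numerical representation of each face and checking whether the two are close enough. The sketch below illustrates that idea using the open-source face_recognition Python library; it is a conceptual illustration only, not the pipeline Login.gov, ID.me, or any federal vendor actually runs, and the 0.6 distance threshold and file names are arbitrary assumptions.

# A minimal sketch of the selfie-to-ID comparison step in remote proofing,
# using the open-source face_recognition library (pip install face_recognition).
# Illustration only; this is not the system Login.gov or ID.me run, and the
# 0.6 threshold is an arbitrary example value.
import face_recognition

def selfie_matches_id(selfie_path, id_photo_path, threshold=0.6):
    """Return True if the face in the selfie appears to match the ID photo."""
    selfie = face_recognition.load_image_file(selfie_path)
    id_photo = face_recognition.load_image_file(id_photo_path)

    selfie_faces = face_recognition.face_encodings(selfie)
    id_faces = face_recognition.face_encodings(id_photo)

    # If no face is detected in either image, the automated check cannot
    # proceed; in practice this is where a fallback such as human review
    # or a video call would take over.
    if not selfie_faces or not id_faces:
        return False

    # Lower distance means more similar faces. Where the threshold sits is a
    # policy choice that trades false rejections against false acceptances,
    # and error rates are not uniform across demographic groups.
    distance = face_recognition.face_distance([id_faces[0]], selfie_faces[0])[0]
    return distance < threshold

if __name__ == "__main__":
    # Hypothetical file names, for illustration.
    print(selfie_matches_id("selfie.jpg", "drivers_license.jpg"))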

Rebecca Williams, a member of the American Civil Liberties Union's Surveillance Resistance Lab, previously worked at the White House's Office of Management and Budget. In that role she researched government work on modernizing digital identity, frequently met with Login.gov staff, and also heard complaints about the service. "Of the laundry list of things that Login.gov is doing that I might complain about, having somebody refuse to incorporate biometrics is not one of them," she says.

Both the IRS face recognition scandal last year and the new report on Login.gov this month, Williams says, underscore the need for conversations that include citizens and lawmakers about the kinds of identity verification they're comfortable with and whether people want a digital form of identification at all. Williams says that should mean no use of biometrics like face recognition and never sharing biometric data collected by a federal agency with a law enforcement agency.

After controversy over its ID.me contract, the IRS allowed people to opt to have their identity confirmed via video call with an agent instead of by face recognition. ID.me says people can also take a photo ID to any of 650 retail locations in the US, a small number in a large country.

Harvard University professor Jim Waldo says there are places in the US where people already identify themselves that can be used in place of remote face recognition for some swaths of the population. He supports a federated approach to proofing so people can show up at a US Postal Service branch office to verify their identity. GSA has worked on a pilot program with USPS for in-person identity checks.

For the past 15 years, Waldo has challenged students in a class he teaches about privacy to design a digital identity system that can verify a person is who they claim to be. He's noticed that most students generally start out thinking that requiring a digital ID for everybody is a good idea but become less confident it can work as they talk through the details.

Checking identity at scale with automation inevitably leads to problems for some, because technologies like face recognition are statistical, Waldo says. Those failures lead to suspicion about the pattern of errors, because "nobody actually believes this stuff is going to be fair or non-discriminatory," he says. "It's a trust issue, not a technology issue."

The NIST is in the process of revising its digital identity guidelines. A draft calls for offering an alternative to face recognition. It also adds a requirement to evaluate biometric technologies for performance across demographic groups on an ongoing basis. The NIST, which regularly tests commercial face recognition algorithms, has found many have problems identifying certain groups of people.

Not all federal agencies agreed with mandating face recognition: In comments submitted during the revision process in 2020, the Social Security Administration urged alternatives to face recognition, citing "privacy, usability, and policy concerns" alongside concerns that discrimination would fall heaviest on people of color.

Ryan Galluzzo, the lead on the NIST's digital identity program, says the revision has a focus on expanding choices for federal agencies and people signing in to government apps and websites. He calls face recognition a "socially sensitive technology."

"While it has valid applications to identity proofing use cases, we are also very interested in ways to provide individuals and organizations with innovative and responsible options that can bring similar convenience and security at higher assurance levels."

Precisely how the US government should treat face recognition has been an issue of increasing debate. Earlier this month, a slate of Democratic lawmakers in both houses of Congress introduced a bill that would place a moratorium on use of face recognition by federal agencies, although the proposal is unlikely to succeed.

Federal agencies have also come under pressure from the White House to weigh the potential discriminatory impacts of algorithms. An AI Bill of Rights released by the White House Office of Science and Technology Policy in October says people have a right to live lives free from ineffective algorithms. An executive order on racial equity signed by President Biden last month says government agencies should be "protecting the public from algorithmic discrimination."
