12:04 GMT +3, 21 August 2019
    The FBI headquarters building in Washington, DC.

    Despite Years of Warnings, FBI Hasn’t Adopted Facial Recognition Safeguards

    © AFP 2019 / YURI GRIPAS

It’s been three years since the Government Accountability Office (GAO), a US government watchdog, blasted the FBI over the way its facial recognition technology handled privacy and accuracy issues, but the bureau has implemented none of the proposed changes.

    An activist tells Sputnik the FBI's stalling makes it "abundantly clear" how little it thinks of civil liberties.

    The federal law enforcement agency actually agreed, at least in part, with three of the GAO's six recommendations in its May 2016 report about the FBI's Next Generation Identification-Interstate Photo System (NGI-IPS). However, a new report by the watchdog issued last week found that the FBI hasn't implemented any of its advice, which concerned adherence to privacy and accuracy standards set by the Department of Justice.

The auditors recommended the FBI test the system's accuracy and effectiveness at least once a year, in line with DOJ policy, but the bureau insisted it didn't need to, since no complaints had arisen about either. GAO also said the bureau should test its false positive rate, improve the transparency of its facial recognition system operations, and clarify how it determines whether the privacy rights of individuals are respected by the system — none of which the FBI has done.

    Finally, the watchdog agency told the FBI it should hold the different facial recognition systems in use by other state and federal law enforcement agencies to the same standards as its own, something it has also failed to do.

    "Until FBI officials can assure themselves that the data they receive from external partners are reasonably accurate and reliable, it is unclear whether such agreements are beneficial to the FBI, whether the investment of public resources is justified, and whether photos of innocent people are unnecessarily included as investigative leads," the GAO said.

    "By addressing these issues, DOJ would have reasonable assurance that their [facial recognition] technology provides accurate information that helps enhance, rather than hinder, criminal investigations," Gretta Goodwin, GAO's director of justice and law enforcement issues, told Nextgov for a Thursday article. "Even more, DOJ would help ensure that it is sufficiently protecting the privacy and civil liberties of US citizens."

    However, rather than address the problems with its facial recognition systems, the FBI has decided to multiply them by testing Amazon's Rekognition — an unproven software being marketed to law enforcement agencies over the protests of even Amazon's own employees, Sputnik reported.

    The bureau has also been testing out the National Institute of Standards and Technology's (NIST) tattoo image-matching system, a project it had supported for four years, despite it having only a 67.9 percent accuracy rate, including false positives, Sputnik reported in January 2019. The system, dubbed Tatt-E, or Tattoo Recognition Technology Evaluation, works similarly to Rekognition, drawing on a large database of images to match them via AI.

    "It is extremely concerning for our privacy and civil liberties that the FBI is deploying and expanding the use of face recognition technology without implementing even the minimum, baseline privacy and accuracy safeguards," Professor Bryan Ford, who leads the Decentralized/Distributed Systems lab at the Swiss Federal Institute of Technology in Lausanne, Switzerland (EPFL), told Sputnik Friday. "Misuse of surveillance technologies such as face recognition without sufficient validation, transparency, and oversight presents huge threats both to individuals and to our society."

    "Without careful evaluation of accuracy factors such as false positive rate (FPR), for example, face recognition false positives could start to become a major factor in putting a lot of innocent people in jail, just as we have historically seen in the misuse and overreliance of fingerprint forensics and various forms of 'expert' testimony," he said.

    "Without sufficient transparency and controls over how the facial data of innocent people whose images are ingested for incidental reasons [is used], there are also huge risks of supplemental use and misuse of these datasets for purposes other than catching known criminals," Ford said. "And as we saw when the US Office of Personnel Management leaked to foreign hackers the highly sensitive personnel files of most Americans with a security clearance, no sensitive database like this can be considered sufficiently well-protected to be 'safe': a giant facial recognition database is just another, equally attractive target to hackers, abusers, and resourceful foreign spy agencies."

    Ford said facial recognition has the "classic problem that all biometrics have": you can't ever change your biometric image. "Those who are unlucky enough to look like someone on one of the FBI's watch lists, according to whatever non-transparent algorithm they use, may be in for regular harassment from law enforcement, harassment for the rest of their lives."

    "Still another important risk comes from the fact that a lot of the recent face recognition and other biometric technologies use machine learning, and that in turn brings the risk that the machine learning or AI algorithms can 'learn' biases from human biases subtly embedded in the data sets the algorithms are trained on. For example, a facial recognition scheme might end up having a much higher false positive rate on certain classes of people, e.g., being more likely to falsely report a match against a known suspect when used on people of a certain race or physique, just because the datasets the algorithm is trained on already exhibited such a bias and no one happened to notice," Ford told Sputnik.

    "If they're not even bothering to do general, baseline evaluations of false positive rate over a broad population, then the risk is huge that their algorithms might have major biases when used on specific populations, and without transparency and extensive analysis and testing it's very likely no one within the FBI or elsewhere will even notice until perhaps decades have passed and many innocent people are in jail," he said.
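The kind of baseline evaluation Ford describes is straightforward to define. As an illustration only — the data below is invented, and this is not the FBI's or any vendor's actual system — a minimal Python sketch computing a matcher's false positive rate overall and per demographic group:

```python
# Illustrative sketch with made-up data: computes the false positive rate
# (FPR) of a hypothetical face matcher, overall and broken out by group.
# FPR = false positives / (false positives + true negatives), i.e. how
# often a non-match is wrongly reported as a match.

def false_positive_rate(records):
    fp = sum(1 for r in records if r["predicted_match"] and not r["true_match"])
    tn = sum(1 for r in records if not r["predicted_match"] and not r["true_match"])
    return fp / (fp + tn) if (fp + tn) else 0.0

# Hypothetical evaluation records: each is one probe against the gallery,
# with the matcher's verdict and the ground truth. All are true non-matches.
records = [
    {"group": "A", "predicted_match": True,  "true_match": False},
    {"group": "A", "predicted_match": False, "true_match": False},
    {"group": "A", "predicted_match": False, "true_match": False},
    {"group": "A", "predicted_match": False, "true_match": False},
    {"group": "B", "predicted_match": True,  "true_match": False},
    {"group": "B", "predicted_match": True,  "true_match": False},
    {"group": "B", "predicted_match": False, "true_match": False},
    {"group": "B", "predicted_match": False, "true_match": False},
]

overall = false_positive_rate(records)
by_group = {
    g: false_positive_rate([r for r in records if r["group"] == g])
    for g in {r["group"] for r in records}
}
print(overall)   # 0.375
print(by_group)  # group B's FPR (0.5) is double group A's (0.25)
```

An aggregate number can look acceptable while hiding exactly the per-population disparity Ford warns about — here the overall FPR masks a rate for group B twice that of group A, which is why auditors ask for both measurements.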

    "Facial recognition technology has advanced dramatically, becoming much more widespread, inexpensive and somewhat more accurate," Sue Udry, a peace and social justice activist and executive director of Defending Rights and Dissent, told Sputnik Friday. "It may be a powerful tool for law enforcement, but the implications for privacy and liberty are overwhelming. The GAO makes it abundantly clear that protecting our privacy and civil liberties is of no concern to the FBI."

    "It's time for Congress to take more decisive action and ensure oversight of the way the FBI is making use of this very powerful technology," she said.
