A new Privacy Impact Assessment details how the Department of Homeland Security’s Immigration and Customs Enforcement agency uses facial recognition and what protections it plans to put in place to prevent abuse.
The assessment, signed by DHS Chief Privacy Officer Dena Kozanas and ICE Privacy Officer Jordan Holz, lays out more than a dozen potential privacy risks associated with the agency’s use of and access to numerous databases and algorithms to identify travelers or suspects. Those risks include the possibility that ICE could abuse those services or use them outside of their intended scope, that the agency might submit or rely upon low-quality images that have been found to impair accurate identification, that it might rely on inaccurate information contained in third-party databases, and that it could mishandle data, leading to a breach or compromise of personally identifiable information by hackers.
The document makes clear just how much information and data are within the program’s reach. DHS has two systems, the Automated Biometric Identification System (IDENT) and the Homeland Advanced Recognition Technology (HART), which store and process digital fingerprints, facial scans and iris scans along with biographical information for identified individuals.
Moreover, the office that stores those images (the Office of Biometric Identity Management) is also in the process of connecting to the FBI’s primary identity management system, the Department of Defense’s Automated Biometric Identification System, the Department of State’s Consolidated Consular Database, databases compiled by state and local law enforcement organizations, region-specific intelligence fusion centers and databases maintained by commercial vendors.
Each system has its own database of images, but many also track and collect other biometrics and information about individuals. Often DHS can also access that information, and agencies like the FBI can retain probe photos sent by ICE and later use them for other investigative purposes.
The report also notes that ICE investigators can run images through facial recognition systems that haven’t been approved for agency-wide use by the central Homeland Security Investigations Operational Systems Development and Management unit (OSDM) in the event of “exigent circumstances.”
One privacy risk cited in the assessment is the potential to use images for purposes other than those for which they were initially collected. That risk is mitigated, according to ICE, by deleting images from facial recognition systems that were not vetted prior to use.
The assessment also notes the risk of abuse of facial recognition systems by employees and contractors. Training programs and rules of behavior are being developed by Homeland Security Investigations, ICE’s privacy office and DHS’ Science and Technology Directorate. Supervisors will periodically audit each employee’s use of facial recognition services to ensure compliance, and ICE Privacy will only approve commercial vendors who provide auditing capabilities for their own systems.
To guard against data breaches, HSI will only submit “the minimum amount of information necessary for the [service] to run a biometric query,” such as the probe photo, the case agent’s name and the legal violation being investigated. If a breach occurs “the information lost by the FRS will be minimal and out of context,” the report claims. Another DHS agency, Customs and Border Protection, saw tens of thousands of photos from its facial recognition program stolen last year when hackers compromised a subcontractor who had been storing and retaining the images without permission.
The use of facial recognition systems by DHS under the Trump administration has come under scrutiny as tech experts have raised concerns about the technology’s limitations and activists have complained about a lack of transparency from ICE regarding how it uses the technology and about its potential to facilitate widespread targeting of Latinos, Muslims and other vulnerable populations.
In line with previous assessments from the National Institute of Standards and Technology, the privacy report also makes clear that numerous factors impact the accuracy of the many algorithms relied on by DHS, including lighting, photo quality, camera quality, distance or angle of the subject, facial expressions, aging and accessories like glasses, hats or facial hair.
Dr. Nicol Turner Lee, a fellow at the Center for Technology Innovation at the Brookings Institution who studies algorithmic integrity, said some of the guardrails outlined in the assessment — like emphasizing trainings and accountability measures — are a step in the right direction. However, she said the agency’s continued reliance on open-source image collection and coordination with other major databases still leaves significant concerns around accuracy, privacy and civil liberties.
“I think what they’re doing [here] is good but we still have a host of other challenges to address and remedy for the full-scale deployment of facial recognition,” Lee said in a phone interview. “We still need a better accounting of the types of training data that is being used, we still need a conversation on the technical specifications and its ability to fairly identify – particularly — people of color that are not sufficiently found in certain facial recognition systems.”
Lee also said there remain concerns about biases embedded in facial recognition systems and, “within the context of ICE, the likelihood of certain populations being more violently subjected to this over-profiling and overrepresentation in certain databases.”