Tag Archives: Privacy

DHS Facial Recognition Privacy Risk Assessment Report

Image: FCW

FCW

A new Privacy Impact Assessment details how the Department of Homeland Security’s Immigration and Customs Enforcement agency uses facial recognition and what protections it plans to put in place to prevent abuse.

_____________________________________________________________________________

“The assessment, signed by DHS Chief Privacy Officer Dena Kozanas and ICE Privacy Officer Jordan Holz, lays out more than a dozen potential privacy risks associated with the agency’s use of and access to numerous databases and algorithms to identify travelers or suspects. Those risks include the possibility that ICE could abuse those services or use them outside their intended scope; that the agency might submit or rely upon low-quality images, which have been shown to undermine accurate identification; that it might rely on inaccurate information contained in third-party databases; and that it could mishandle data, leading to a breach or compromise of personally identifiable information by hackers.

The document makes clear just how much information and data are within the program’s reach. DHS has two systems, the Automated Biometric Identification System (IDENT) and the Homeland Advanced Recognition Technology (HART), which store and process digital fingerprints, facial scans and iris scans along with biographical information for identified individuals.

However, the office that stores those images (the Office of Biometric Identity Management) is also in the process of connecting to the FBI’s primary identity management system, the Department of Defense’s Automated Biometric Identification System, the Department of State’s Consolidated Consular Database, databases compiled by state and local law enforcement organizations, region-specific intelligence fusion centers and databases maintained by commercial vendors.

Each system has its own database of images, but many also track and collect other biometrics and information about individuals. DHS can often access that information as well, and agencies like the FBI can retain probe photos sent by ICE and later use them for other investigative purposes.

The report also notes that ICE investigators can run images through facial recognition systems that haven’t been approved for agency-wide use by the central Homeland Security Investigations Operational Systems Development and Management unit (OSDM) in the event of “exigent circumstances.”

One privacy risk cited in the assessment is the potential to use images for purposes other than those for which they were initially collected. That risk is mitigated, according to ICE, by deleting images from facial recognition systems that were not vetted prior to use.

The assessment also notes the risk of abuse of facial recognition systems by employees and contractors. Training programs and rules of behavior are being developed by Homeland Security Investigations, ICE’s privacy office and DHS’ Science and Technology Directorate. Supervisors will periodically audit each employee’s use of facial recognition services to ensure compliance, and ICE Privacy will only approve commercial vendors who provide auditing capabilities for their own systems.

To guard against data breaches, HSI will only submit “the minimum amount of information necessary for the [service] to run a biometric query,” such as the probe photo, the case agent’s name and the legal violation being investigated. If a breach occurs, “the information lost by the FRS will be minimal and out of context,” the report claims. Another DHS agency, Customs and Border Protection, saw tens of thousands of photos from its facial recognition program stolen last year when hackers compromised a subcontractor that had been retaining the images without permission.
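
To make the report’s data-minimization point concrete, here is a rough sketch of a query limited to the fields the report names. The field names and values are illustrative assumptions, not ICE’s actual facial recognition service (FRS) interface.

```python
# Illustrative only: a biometric query carrying just the fields the
# assessment says HSI will submit. Names and values are hypothetical.
minimal_frs_query = {
    "probe_photo": "case_1234_probe.jpg",  # the image to be matched
    "case_agent": "J. Doe",                # hypothetical agent name
    "legal_violation": "8 U.S.C. 1325",    # hypothetical statute at issue
}
# Deliberately omitted: subject biographics, case narratives, or other PII
# that would give context to a stolen probe photo if the vendor is breached.
```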

The use of facial recognition systems by DHS under the Trump administration has come under scrutiny as tech experts have fretted over the technical limitations and activists have complained about a lack of transparency from ICE regarding how it uses the technology and the potential to facilitate widespread targeting of Latinos, Muslims and other vulnerable populations.

In line with previous assessments from the National Institute of Standards and Technology, the privacy report also makes clear that numerous factors impact the accuracy of the many algorithms relied on by DHS, including lighting, photo quality, camera quality, distance or angle of the subject, facial expressions, aging and accessories like glasses, hats or facial hair.

Dr. Nicol Turner Lee, a fellow at the Center for Technology Innovation at the Brookings Institution who studies algorithmic integrity, said some of the guardrails outlined in the assessment — like emphasizing trainings and accountability measures — are a step in the right direction. However, she said the agency’s continued reliance on open source image collection and coordination with other major databases still leaves significant concerns around accuracy, privacy and civil liberties.

“I think what they’re doing [here] is good but we still have a host of other challenges to address and remedy for the full-scale deployment of facial recognition,” Lee said in a phone interview. “We still need a better accounting of the types of training data that is being used, we still need a conversation on the technical specifications and its ability to fairly identify — particularly — people of color that are not sufficiently found in certain facial recognition systems.”

Lee also said there remain concerns about biases embedded in facial recognition systems and “within the context of ICE, the likelihood of certain populations being more violently subjected to this over-profiling and overrepresentation in certain databases.”

https://fcw.com/articles/2020/05/27/ice-facial-recognition-privacy.aspx

“Tracing” Challenges Using Tech To Combat COVID-19

Image: FCW

FCW, by Steve Kelman

Contact tracing refers to gathering information about those with whom newly infected people have been in touch, in order to notify them that they might have been infected. The most interesting example of this is a recently developed Singapore app called TraceTogether.

It is impossible to mention systems such as these without some people raising concerns about privacy. These efforts are still in the earliest stages — but we should be tracking how combating the coronavirus has entered the digital age.

______________________________________________________________________________

“Recently there has been attention to the importance of what is called “contact tracing” for fighting the coronavirus.

This has come up in the discussions of “reopening the country” after recent lockdowns, with the argument that slowing disease spread depends heavily on being able to do this, though it did not appear in the president’s re-opening plan.

But contact tracing has historically been a resource-intensive and very imperfect process. Officials have had to go to newly infected people and interview them about whom they have been in contact with over the previous two weeks. Memories of course are often imperfect. People may not even know everyone with whom they interacted. And the interviewing itself takes significant time and manpower.

In just-published guidance on contact tracing, the Centers for Disease Control has stated that “contact tracing in the U.S. will require that states, tribes, localities and territories establish large cadres of contact tracers.” Reaching people to interview about contacts can be slow, and contacting those contacts delays things further. Meanwhile, there is a limited window between infection and illness in which to reach contacts, so speed is important.

However, since the Ebola outbreak in 2014, mobile telephone technology and especially smartphone penetration have dramatically improved. We are now seeing, mostly in Asia, the use of tech to provide quicker, more accurate, and more economical contact tracing in response to the coronavirus pandemic. I blogged a number of years ago on the theme of areas where Asia was overtaking the U.S. in tech apps, which I illustrated with the widespread use in China of mobile payment apps using smartphones and QR codes. We are now seeing that superiority with digital coronavirus apps as well.

This was the theme of a recent piece in the Daily Alert, a publication of the Harvard Business Review that publishes short management-related articles, called “How Digital Contact Tracing Slowed Covid-19 in East Asia,” by MIT Sloan School professor Yasheng Huang and grad students Meicen Sun and Yuze Sui.

I think the most interesting example of this is a recently developed Singapore app called TraceTogether. For those choosing to use the app, Bluetooth detects nearby smartphones that have also installed it. The app then records when a user is in close proximity with these other people, including timestamps. If an individual using the app tests positive for Covid-19, they can choose to allow the Singapore Ministry of Health to access the tracking data — which can then be used to identify and then contact any recent close contacts based on the proximity and duration of an encounter. This is tech-enabled quick and accurate contact tracing. Apple and Google recently announced that they are developing a similar Bluetooth-based app, but rolling it out is apparently still a few months away.
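
A minimal sketch of the encounter-logging idea behind an app like TraceTogether appears below. It is illustrative only; the real protocol uses rotating encrypted identifiers and other safeguards not shown here, and all names and thresholds are assumptions.

```python
# Toy model of Bluetooth encounter logging for contact tracing.
# Not TraceTogether's actual implementation.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Encounter:
    peer_token: str      # anonymized token broadcast by the other phone
    seen_at: datetime    # when the Bluetooth contact was observed
    duration_min: float  # how long the phones stayed in range

class EncounterLog:
    RETENTION = timedelta(days=21)  # the article's 21-day retention period

    def __init__(self):
        self.encounters = []

    def record(self, enc: Encounter):
        self.encounters.append(enc)

    def prune(self, now: datetime):
        # Drop anything older than the retention window.
        self.encounters = [e for e in self.encounters
                           if now - e.seen_at <= self.RETENTION]

    def close_contacts(self, min_minutes: float = 15.0):
        # What a health ministry would review after a positive test:
        # only sustained contacts, judged by encounter duration.
        return [e for e in self.encounters if e.duration_min >= min_minutes]
```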

Other Asian countries have used tech in other ways to help fight the virus. Taiwan has created a “digital fence,” whereby anyone required to undergo home quarantine has their location monitored via cellular signals from their phone. Venturing too far from home triggers an alert system, and calls and messages are sent to ascertain the person’s whereabouts. South Korea has an app called Corona100, which alerts users to the presence of any diagnosed Covid-19 patient within a 100-meter radius, along with the patient’s diagnosis date, nationality, age, gender, and prior locations. (A map version of the app called Corona Map similarly plots locations of diagnosed patients to help those who want to avoid these areas.)
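
Corona100’s 100-meter alert reduces to a simple geospatial check. Below is a minimal sketch of such a check using the haversine great-circle distance; the function names, data shapes, and example coordinates are assumptions for illustration, not the app’s actual code.

```python
# Toy proximity alert: is a reported patient location within 100 m?
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def within_alert_radius(user, patient, radius_m=100.0):
    return haversine_m(user[0], user[1], patient[0], patient[1]) <= radius_m

# Two points in Seoul roughly 78 m apart trigger an alert.
print(within_alert_radius((37.5665, 126.9780), (37.5672, 126.9780)))  # True
```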

It is impossible to mention systems such as these without some people raising concerns about privacy. TraceTogether will save data for only 21 days, and the names of the ill and their contacts will not be shared with others. Wired ran an article on privacy risks of the Google/Apple system and concluded that the purported risks were quite small.

A bigger question is whether the government should be allowed under any circumstances to require people to sign onto a new contact-tracing app. Observers worry that without very widespread adoption, the benefits of such apps will dramatically decline. (Since both parties to an encounter must be running the app for it to be logged, 20% adoption means only about 4% of encounters, 0.2 × 0.2, would be captured.) One can make an argument, which underlies the general case for disease quarantines, that if people do not quarantine themselves and then become sick, the costs fall not just on themselves but on others they might infect. However, even Singapore, a country without the robust culture of privacy we have in the U.S., has not been willing to require people to install TraceTogether, and only about 20% have done so.

In other words, these efforts are still in the earliest stages — but we should be tracking how combating coronavirus has entered the digital age.”

Surveillance In A Pandemic: Preserving Civil Liberties

Image: POGO

THE PROJECT ON GOVERNMENT OVERSIGHT

Surveillance is unlikely to provide much value in the United States until testing dramatically improves. Cell phone tracking faces significant technical hurdles. Surveillance programs must have guardrails. There are lessons we can learn from other countries that are enacting a variety of surveillance measures. 

______________________________________________________________________________

“In this virtual briefing, we examine surveillance measures in response to the COVID-19 pandemic. The discussion looks at the obstacles to effective contact tracing systems, and what principles should guide the government if it does choose to enact public health surveillance measures as part of its pandemic response. 

Some key takeaways include:

  • Surveillance is unlikely to provide much value in the United States until testing dramatically improves: Without a quick and robust testing system it will be impossible to create an effective contact tracing system, even with intensive surveillance measures.
  • Cell phone tracking faces significant technical hurdles: Measures currently being considered, such as the Apple and Google Bluetooth project, have a limited ability to accurately identify the types of contacts that pose a high risk of infection, which could lead to an ineffective system that generates false alarms and loses the public’s trust (see the sketch after this list).
  • Surveillance programs must have guardrails: There are numerous limits that could be placed on any surveillance measures to protect civil liberties and prevent mission creep, such as prohibiting use other than for public health purposes and creating a timeline for deleting data.
  • There are lessons we can learn from other countries that are enacting a variety of surveillance measures. Some of those programs may be effective, but others appear more designed to facilitate draconian enforcement and support repressive regimes.”
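
On the second point, the core technical hurdle is that Bluetooth infers distance from received signal strength (RSSI), which is badly distorted by walls, bodies, and pockets. The toy simulation below shows how that noise turns safe-distance contacts into false alarms; every parameter in it is an illustrative assumption, not a measurement from the Apple/Google system.

```python
# Toy simulation: noisy RSSI readings misclassify a 4 m contact as "close".
import math
import random

TX_POWER = -59.0   # assumed RSSI at 1 m, in dBm
PATH_LOSS_N = 2.0  # assumed free-space path-loss exponent

def rssi_at(distance_m, noise_db):
    # Log-distance path-loss model plus Gaussian environmental noise.
    return TX_POWER - 10 * PATH_LOSS_N * math.log10(distance_m) + random.gauss(0, noise_db)

def estimated_distance(rssi):
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

random.seed(0)
true_distance = 4.0  # farther than a 2 m "high risk" threshold
false_alarms = sum(
    estimated_distance(rssi_at(true_distance, noise_db=6.0)) <= 2.0
    for _ in range(10_000)
)
print(f"{false_alarms / 100:.1f}% of readings flag a 4 m contact as within 2 m")
```

With 6 dB of environmental noise, roughly one reading in six wrongly flags the distant contact, exactly the kind of false-alarm rate that erodes public trust.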

Amazon’s “Ring” On The Congressional Privacy Hot Seat

FCW:

The House Oversight and Reform Subcommittee on Economic and Consumer Policy asked for a range of information, including copies of all agreements the company has reached with local governments going back to 2013, details on integration of any facial recognition tools and instances where law enforcement has requested video footage from Ring.

Letter: 2020-02-19.RK to Huseman-Amazon re Ring (PDF)

COMMITTEE ON OVERSIGHT AND REFORM


“The Subcommittee on Economic and Consumer Policy is writing to request documents and information about Ring’s partnerships with city governments and local police departments, along with the company’s policies governing the data it collects,” Krishnamoorthi wrote.  “The Subcommittee is examining traditional constitutional protections against surveilling Americans and the balancing of civil liberties and security interests.”

Ring reportedly works closely with local governments and police departments to promote its surveillance tools and has entered into agreements with cities to provide discounts on Ring products to their residents in exchange for city subsidies.  Reports also indicate that Ring has entered into agreements with police departments to provide free Ring products for giveaways to the public.

Ring reportedly exercises tight control over what cities and law enforcement agencies can say about the company, requiring any public statement to be approved in advance. In one instance, Ring is reported to have edited a police department’s press release to remove the word “surveillance.”

“The Subcommittee is seeking more information regarding why cities and law enforcement agencies enter into these agreements,” wrote Krishnamoorthi.  “The answer appears to be that Ring gives them access to a much wider system of surveillance than they could build themselves, and Ring allows law enforcement access to a network of surveillance cameras on private property without the expense to taxpayers of having to purchase, install, and monitor those cameras.”

The Subcommittee demands that Amazon provide information about these partnerships dating back to January 1, 2013.”

https://oversight.house.gov/news/press-releases/oversight-subcommittee-seeks-information-about-ring-s-agreements-with-police-and

HUD Inspector General Warns Over 1 Billion Records With Personally Identifiable Information (PII) Are At Risk


FCW

“The management alert bulletin, issued Jan. 13 by HUD’s Office of Inspector General, warns that “HUD is unable to identify, categorize, and adequately secure all of its electronic and paper records that contain personally identifiable information.”

____________________________________________________________________________

“The Department of Housing and Urban Development is failing to safeguard and manage more than 1 billion records containing personally identifiable information, according to a management alert from the agency’s internal watchdog.

An accompanying memorandum, circulated to HUD officials in December, points to several risk factors. HUD maintains legacy systems that lack basic electronic transaction processing capabilities, which in turn leads to a reliance on paper processing. A survey of HUD officials found that many in the agency are concerned about the volume of paper records held by the agency — including mortgage binders with personal and financial information.

The December memorandum indicated that a formal report was forthcoming but that in the course of the assessment, OIG personnel “encountered specific records management and privacy issues that pose a serious threat to sensitive information that we believed important to raise now rather than wait for the conclusion of our broader evaluation.”

The OIG probe also found that HUD lacks a complete records inventory and that only eight of 25 offices surveyed had an inventory of electronic records with personally identifiable information. What’s more, HUD systems don’t allow for any kind of enterprisewide search to locate sensitive information. The agency also is lagging behind in governmentwide efforts to convert from paper to electronic records and in implementing a data classification process to identify and tag controlled unclassified information.

“As a federal agency housing such an extensive amount of sensitive data, HUD must prioritize its capability to properly identify and protect this information,” the OIG alert states. “Failure to do so places both the agency and private citizens at risk.”

The alert comes as some in Congress are concerned that HUD is leveraging facial recognition software to provide security in facilities subsidized by the agency. A group of Democratic lawmakers from the House and Senate asked HUD Secretary Ben Carson in a Dec. 18 letter about the use of such technology in federally subsidized housing, including rules about biometric data collection and retention.”

https://fcw.com/articles/2020/01/13/hud-pii-risk-oig.aspx

“Streamlined Electronic Services for Constituents Act” Is Now Law

Image: Govloop.com

FEDSCOOP

The law modernizes the way members of Congress receive permission from constituents before contacting federal agencies on their behalf.

Critically, the bill will allow constituents to submit a new electronic version of the privacy paperwork that is a requirement of the Privacy Act of 1974.

______________________________________________________________________________

“At the moment, members of Congress must obtain this form through a paper submission, something that — the bill’s cosponsors point out — is pretty inconvenient.

“When the American taxpayers we represent need assistance with Social Security, Medicare, Veterans Affairs or any other federal agency, they should be able to get the help and information they need quickly and in a straightforward manner,” Sen. Tom Carper, D-Del., a cosponsor of the bill, said. “This bipartisan, bicameral bill helps to ensure that elected officials like myself can be even more effective advocates for our constituents by modernizing our constituent services process.”

The bill also requires that the White House Office of Management and Budget create a standardized release form to be used by all federal agencies, as well as a “system” for the electronic submission of that form. In June, the Congressional Budget Office estimated that building such a system would cost around $15 million.

The bipartisan bill was introduced in the House by Rep. Garret Graves, R-La., and in the Senate by Sens. Rob Portman, R-Ohio, and Carper. It passed the House in February and the Senate in July.”

It’s Time to End the National Security Agency’s Metadata Collection Program

Image: POGO

THE PROJECT ON GOVERNMENT OVERSIGHT (POGO)

“When the issues are taken together—severe costs to privacy, no evidence of security value, technical flaws—they indicate that we are better off without the NSA’s metadata collection program.”

______________________________________________________________________________

“This piece originally appeared on Wired.

“If it ain’t broke, don’t fix it,” the adage goes. But for the sunset of Patriot Act authorities later this year—including Section 215, a controversial provision that allows the National Security Agency to collect records, including those about Americans’ phone calls—the more applicable phrase may be “If it keeps breaking, throw it out.”

In 2015, Congress passed the USA Freedom Act to reform Section 215 and prohibit the nationwide bulk collection of communications metadata, like who we make calls to and receive them from, when, and the call duration. The provision was replaced with a significantly slimmed-down call detail record program, known as CDR. Rather than collecting information in bulk, CDR collects communications metadata of surveillance targets as well as those of individuals up to two degrees of separation (commonly called “two hops”) from the surveillance target. But this newer system appears to be no more effective than its predecessor and is highly damaging to constitutional rights. Given this combination, it’s time for Congress to pull the plug and end the authority for the CDR program.
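
As a concrete illustration of how quickly “two hops” fans out, here is a minimal sketch of the collection rule over a made-up call graph; the graph and names are invented for illustration and say nothing about how the NSA actually stores or queries records.

```python
# Toy breadth-first expansion of call records out to two hops from a target.
from collections import deque

call_graph = {
    "target": ["alice", "bob"],
    "alice":  ["carol", "dave"],
    "bob":    ["erin"],
    "carol":  ["frank"],  # three hops out: NOT collected
}

def within_hops(graph, start, max_hops=2):
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return seen

print(sorted(within_hops(call_graph, "target")))
# ['alice', 'bob', 'carol', 'dave', 'erin', 'target'] -- one target sweeps
# in five other people's records; real contact lists make this explode.
```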

It’s unsurprising that just last week a bipartisan group in Congress introduced a bill to do so. Last month, the New York Times reported that a highly placed congressional staffer had stated the CDR program has been out of operation for months, and several days later, NSA Director Paul Nakasone responded to questions about the Times story by saying the NSA was deliberating the future of the program. If accurate, this news is major but not shocking; this large-scale collection program has been fraught with problems. Last year, the NSA announced that technical problems had caused it to collect information it wasn’t legally authorized to collect, and that in response, the agency had voluntarily deleted all the call detail records it had previously acquired through the CDR program — without even waiting for a court order or trying to save some of the data — indicating that the system was unwieldy and the data being collected was not important to the agency.

Since its inception, we have not seen a single publicized instance of the program providing any unique security value — and in fact, the program has damaged privacy significantly. In its most recent transparency report, the NSA announced that it collected a staggering 534,396,285 call detail records during the 2017 calendar year; the NSA states the number includes duplicates, but we have no way of knowing if this is a frequent issue. Without knowing the scale of the duplication issue or the average number of CDRs per person, it’s difficult to say how many Americans this affects — the NSA claims it is unable to determine this, despite a statutory requirement to do so and publicly disclose it — but the number is certainly enormous. Our communications metadata can be highly sensitive and can reveal intimate details of our lives. Americans should not be subject to this type of surveillance absent suspicion, particularly if the program conducting it has not yielded any demonstrated value in preventing or investigating terrorism.

When the issues are taken together—severe costs to privacy, no evidence of security value, technical flaws, the NSA’s willingness to broadly discard data it has collected, and a recent media report that the program has been shut down—they indicate that we are better off without this program.

But it’s important that Congress does more than just end the CDR program. Many in the privacy and civil liberties community worry that if the Section 215 metadata collection authority is no longer in use, the CDR program could still be active but justified with a different legal provision, and out of the public’s view. The public can only have confidence that congressional reforms are effective and not a meaningless game of whack-a-mole if lawmakers and the Privacy and Civil Liberties Oversight Board conduct rigorous oversight to find out whether such a shift happened with the CDR program. And if Congress does end the program, it should build in legal restrictions to ensure that the program cannot be restarted under a different authority.

The problems with the CDR program seem to be a continuation of the government’s misplaced faith in the nationwide bulk collection program that the CDR program replaced. After the government’s vehement defense of the need for bulk collection, the President’s Review Group on Surveillance, the Privacy and Civil Liberties Oversight Board, and eventually even the intelligence community’s top-ranking official stated that it had not provided unique value and was not necessary to fulfill counterterrorism goals.

As the December sunset approaches for several PATRIOT Act authorities, including Section 215, it is clear that the failed experiment of large-scale metadata collection needs to end. Prohibiting nationwide bulk collection received strong bipartisan support in 2015 during the USA FREEDOM Act debate. In the House, 196 Republicans and 142 Democrats voted for the bill—and most of those who voted against it did so because they felt the bill’s reforms did not go far enough—while over two-thirds of Senators also supported the bill. Further limiting mass surveillance of communications metadata is likely to receive bipartisan support again, especially given the lack of evidence that it aids security.

Congress should go further than ending the CDR authority and take on additional critical reforms. In the wake of the Snowden disclosures, public faith in the intelligence community and the Foreign Intelligence Surveillance Court that rules on data-collection efforts under Section 215 has degraded. And more recent inaccurate and unsubstantiated criticisms of these entities have harmed trust further. The USA FREEDOM Act took important steps toward restoring that faith by requiring that significant FISA court opinions be declassified, and creating a special advocate to represent privacy concerns in the court’s proceedings. But these provisions should be strengthened. For years, The Constitution Project has advocated for creating a more robust special advocate; strengthening provisions for FISA court declassifications would be a critical change as well.

Congress should also consider a range of other reforms during this year’s PATRIOT Act debate, relating to minimizing data retention of non-targets, civil rights, and transparency. But the first problem to address, and the one with the clearest solution, is authority for the CDR. It’s long past time to pull the plug.”

https://www.pogo.org/analysis/2019/04/its-time-to-end-the-nsas-metadata-collection-program/

ACLU “Freedom Of Information Act” Filing Demands Records Of Government Facial Recognition Technology

Image: Techcrunch.com

FEDSCOOP

“The American Civil Liberties Union has filed a Freedom of Information Act (FOIA) request with the Department of Justice demanding any records on the agency’s use of facial recognition technology.

“The request seeks records from the DOJ as a whole, as well as component agencies the FBI and Drug Enforcement Administration.”

______________________________________________________________________________

“The request is extensive. The organization is asking for 20 distinct sets of records, ranging from any policy direction on the use of facial recognition to “records relating to inquiries to companies, solicitations from companies, or meetings with companies about the purchase, piloting, or testing of face recognition, gait recognition, or voice recognition technology” and beyond. The group additionally wants any records showing the accuracy rates of the systems employed and records showing what audit work has been done to determine or assess the accuracy rate.

This news is just the latest in the ACLU’s strong ongoing opposition to the use of facial recognition technology by government entities.

The ACLU filed a similar request with the Department of Homeland Security in October 2018. In contrast to this most recent FOIA, though, which seeks information regardless of the vendor of the technology, the DHS FOIA focused on learning more about the agency’s work with Amazon’s Rekognition software.

Just last week, the ACLU and a coalition of other civil rights groups wrote letters to the three big purveyors of facial recognition technology — Amazon, Microsoft and Google — demanding in no uncertain terms that they cease doing business with government customers.

And over the summer, the group put members of Congress in the facial recognition crosshairs to see how they’d react.

While this last action — a study in which 28 sitting members of Congress were falsely identified as individuals who have been arrested for a crime — prompted some action from lawmakers, federal agencies have remained pretty quiet.

The companies developing the technology have also reacted to public criticism differently. Google, notably, has backed down from some government work, citing its new AI principles, and Microsoft has argued for proactive regulation.

Amazon, meanwhile, a company that is often the target of the ACLU’s opposition, has maintained that facial recognition is a positive and powerful tool for government. “There have always been and will always be risks with new technology capabilities,” Dr. Matt Wood, head of AI at AWS, wrote in a blog post this summer. “But we believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future.”

Invasive Police Aerial Surveillance Is Widespread

Image: ACLU California

THE PROJECT ON GOVERNMENT OVERSIGHT (POGO)

“While the greatest risks posed by drones and aerial surveillance lie ahead as tech continues to advance and becomes more powerful, easier to automate, and cheaper, there are already significant threats.

Drones, which already possess so much surveillance power, are widespread and broadly in use by police departments throughout the country.”

______________________________________________________________________________


“In recent years, we’ve seen significant efforts to roll back the mass surveillance that technological advances have permitted on an unprecedented scale. In 2015, Congress passed the USA FREEDOM Act to ban bulk collection of sensitive information such as Americans’ communications metadata. And this year, the Supreme Court ruled that tracking an individual’s location from their cell phone requires a warrant, extending privacy protection even to activity in public. But amid these victories for privacy rights, another form of surveillance has been quite literally rising up all around us: aerial surveillance. And this snooping from the skies most often comes in the form of police departments across the country deploying powerful drones.

Aerial surveillance and the broad use of drones threaten to undermine the progress made in recent years to prevent unreasonable location tracking and government stockpiling of sensitive, personal information. With existing and emerging technologies, government may be able to use aerial surveillance to track our movements en masse and catalog participation in constitutionally protected activities such as protests, religious ceremonies, and political rallies.

Aerial Surveillance Can Be Incredibly Invasive

Existing technology that is affordable and in wide use allows law enforcement to spy on individuals over huge distances. The most prominent example is the DJI Zenmuse Z30 camera, which can be affixed to commonly used drone models such as the Inspire 2 and the Matrice. Chinese manufacturer DJI, the drone maker most favored by U.S. law enforcement, promotes the Zenmuse Z30 by describing it as “the most powerful integrated aerial zoom camera on the market with 30x optical and 6x digital zoom for a total magnification up to 180x.”

Demonstration of digital and optical zoom capacity, with footage claimed to be taken at a 3.7-mile distance. (Note: The video claims to use a Matrice 600 drone with a DJI Zenmuse Z30 drone camera) (Source: SkyLink Japan / YouTube)

The implications of this are profound, and frightening. With this technology, law enforcement can use small and inconspicuous drones to snoop on individuals from thousands of feet away, and even watch activities occurring several miles away with a good degree of precision. In an aerial space, these drones can easily move to adjust view and overcome obstacles that make this type of long distance surveillance impossible from ground level.

Invasive Aerial Surveillance Is Cheap

In addition to the surveillance powers modern drones possess in terms of long-distance monitoring, automated identification, and automated tracking, technological advances are making aerial surveillance an exponentially cheaper option, and thus something that can be done more broadly and on a larger scale. The Inspire 2 costs around $3,000, and equipping it with the powerful Z30 zoom camera costs an additional $3,000. In comparison, police helicopters cost roughly $500,000 to $3,000,000. The helicopter’s operating costs of $200 to $400 per hour and the maintenance costs increase the expense of this traditional aerial surveillance tool even more.

With this cost differential, a department could potentially purchase a fleet of 500 drones in lieu of a single police chopper—a swarm of devices that can watch individuals without notice from thousands of feet away, use software to identify people in an automated manner, and follow them without human piloting. As technology improves, the potential power of this type of fleet will only increase, creating the possibility of a massive surveillance umbrella permanently buzzing over America’s cities and towns.
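
The arithmetic behind that fleet comparison, using only the figures cited above, works out as follows; this is a back-of-the-envelope sketch, and real procurement would add batteries, training, software, and upkeep.

```python
# Cost comparison from the article's own numbers.
drone_cost = 3_000           # DJI Inspire 2
zoom_camera_cost = 3_000     # Zenmuse Z30
unit_cost = drone_cost + zoom_camera_cost   # $6,000 per equipped drone

helicopter_cost = 3_000_000  # top of the cited $500,000-$3,000,000 range

fleet_size = helicopter_cost // unit_cost
print(f"One high-end helicopter buys a fleet of {fleet_size} camera drones")
# -> 500, before counting the helicopter's $200-$400 hourly operating cost
```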

Invasive Aerial Surveillance Is Widespread

Map of Inspire 2 and Matrice drones (which can be equipped with the Z30) in use by police departments throughout the country. (Source: Google Maps screenshot created based on data from the Center for the Study of the Drone at Bard College)

According to research by the Center for the Study of the Drone at Bard College, as of May 2018, at least 910 state and local public safety agencies have purchased drones (based on Federal Aviation Administration and other records). Of those, 599 are law enforcement agencies. The survey identified the make and model of drones owned by 627 of the 910 agencies. Of the 627, 523 have drones made by DJI. Of those, over 200 agencies fly either the Inspire or Matrice models, which can be equipped with the Z30 zoom camera.

Invasive Aerial Surveillance Can Identify You

With its capacity for precise zooming from great distances, aerial surveillance can, in combination with other automated identification technologies, allow for effortless cataloging of individuals and their activities. There are two prominent automated identification technologies that could allow for easy identification from immense distances: automated license plate readers and facial recognition technology. These technologies are already in wide use by government agencies. U.S. Immigration and Customs Enforcement maintains a nationwide network of automated license plate readers to track individuals, and the FBI already maintains a facial recognition database covering fifty percent of American adults and permits law enforcement from dozens of states to use it.

Positive automated license plate reader identification from drone footage claimed to be taken at a 1,200-foot distance. (Note: The video claims to use a Matrice 100 drone with a DJI Zenmuse Z30 drone camera) (Source: Sharon Arenhaim / YouTube)

This means that the government could surreptitiously watch sensitive activities and catalog individuals. Everyone entering or exiting a political meeting, union meeting, or lawyer’s office could be identified and catalogued. Or a drone could zoom in on and scan all the cars parked outside a medical facility or church, and create a list of attendees in seconds with no human effort. These fears are not hypothetical. American Civil Liberties Union research efforts exposed the fact that the FBI was deploying aerial surveillance to record the activities of protesters in Baltimore. Vendors marketing drones to police departments highlight their ability to pick individuals out of a public gathering such as a political rally as a feature, not a cause of potential abuse.

FBI aerial surveillance of protests in Baltimore after the death of Freddie Gray in 2015. (Photo: Still from FBI footage / ACLU)

Amplifying these risks is a recent partnership between DJI and Axon, one of the leading producers of police body cameras. Axon also provides cloud computing services designed to allow law enforcement to sync data from a variety of sources, including cameras, and has spent years developing facial recognition technology for its products. With this partnership, which will allow DJI drone footage to sync with the Axon system, police drones with built-in facial recognition technology could soon become the norm.

Invasive Aerial Surveillance Can Track You

Identifying individuals from aerial surveillance footage appears to be on a path to automation, occurring on a mass scale with no need for human involvement. But is the impact of drones on privacy limited by requiring a person to remotely pilot them and actively work to follow the target being tracked? Unfortunately, the answer is no.

DJI has developed a feature for many of its drones—including models like the Inspire 2 that are commonly used by police—to allow drones to lock onto and automatically follow individuals. This technique, called “Active Track,” enables the drone to automatically follow moving items, including people, absent any human control of the drone. DJI drones in Active Track operate in a mode that allows the drone to travel at roughly 20 miles per hour, more than enough to keep pace with an individual on foot. Some drones are even programmed to automatically avoid obstacles while continuously tracking their locked-on target.
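
The lock-and-follow behavior reduces to a simple pursuit loop: each tick, steer toward the tracked subject, capped at a top speed. The toy 2-D simulation below is an assumption for illustration, not DJI’s Active Track code; it only shows why a roughly 20 mph cap easily keeps pace with a pedestrian.

```python
# Toy pursuit controller: speed-capped steering toward a moving target.
import math

TOP_SPEED = 9.0  # m/s, roughly the ~20 mph cited above

def follow_step(drone, target, dt=0.1):
    """Advance the drone one time step toward the target, speed-limited."""
    dx, dy = target[0] - drone[0], target[1] - drone[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return drone
    step = min(TOP_SPEED * dt, dist)  # never overshoot the target
    return (drone[0] + step * dx / dist, drone[1] + step * dy / dist)

drone, target = (0.0, 0.0), (50.0, 0.0)     # drone starts 50 m behind
for _ in range(100):                         # simulate 10 seconds
    target = (target[0] + 0.15, target[1])   # pedestrian at 1.5 m/s
    drone = follow_step(drone, target)
print(f"gap after 10 s: {math.hypot(target[0] - drone[0], target[1] - drone[1]):.1f} m")
# -> 0.0 m: the drone erases a 50 m head start well within the simulation.
```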

Active Track allows drones to tag and track individuals without human piloting. (Note: The video claims to use a DJI Spark drone with attached camera) (Source: DC Rainmaker / YouTube)

As with automated identification, Active Track technology decreases reliance on human labor in another aspect of aerial surveillance which has traditionally served as an impediment to mass monitoring of individuals. And this technology will only become more powerful over time.

Drones with “swarm capabilities,” which further enhance automated flight power by allowing a single pilot to control multiple drones, are already in development, such as the military’s Low-Cost Unmanned aerial vehicle Swarming Technology (“LOCUST”). In the future, a single officer might be able to command a large swarm of drones, inconspicuously identifying and following many individuals over a long period of time.

Invasive Aerial Surveillance Can Be Limited

With these serious and growing risks to personal privacy, it’s important that lawmakers begin to take the threats of aerial surveillance more seriously. Luckily, drones can be fairly easily regulated. Several states have placed limits on drone-based surveillance. For example, Florida, Maine, North Dakota, and Virginia have all enacted some form of a warrant requirement for police use of drones, and Rhode Island has proposed legislation prohibiting the use of facial recognition on any images captured by drones. To be fully effective, drone regulations should take into account and allow important public safety uses that don’t threaten privacy rights, like natural disaster response and search and rescue.

Unfortunately, as we’ve previously written, the increasing use of powerful manned aerial surveillance programs remains a serious issue that drone regulations will not solve. Reasonable limits on law enforcement drone use are an excellent way to begin setting reasonable limits on all forms of aerial surveillance, but they are also just the first step in addressing larger civil liberties issues looming above.”

https://www.pogo.org/analysis/2018/09/these-police-drones-are-watching-you/

Is LinkedIn Trying to Protect Your Data — Or Hoard It?

Image: LinkedIn data (David Paul Morris / Bloomberg)

WASHINGTON POST

“When you create a public profile on a social network such as LinkedIn, it isn’t just your friends and contacts who can see that data. For better or for worse, other companies can legally download that information and use it for themselves, too.

That’s according to a federal judge who ruled Monday against LinkedIn, the professional networking site, in a case that has big implications for corporate power and consumer privacy in the tech-driven economy.

LinkedIn had claimed that another company, hiQ Labs, was illegally downloading information about LinkedIn users to help drive its business. The issue was a concern for LinkedIn, which is owned by Microsoft, in part because many of today’s tech companies depend on customer data to compete and even outmaneuver their rivals. As a result, being able to control that information and determine who else can see it is of paramount importance to firms like these.

“Microsoft is further transforming LinkedIn into a data-driven marketing powerhouse that harvests all its data to drive ad revenues,” said Jeffrey Chester, executive director of the Center for Digital Democracy.

Where LinkedIn and hiQ clashed was over hiQ’s product, which almost exclusively depends on LinkedIn’s data, according to U.S. District Judge Edward Chen. HiQ essentially helps employers predict, using the data, which of their employees are likely to leave for other jobs. While this HR tool might sound relatively boring to you and me, it’s key to industries whose success depends on recruiting and retaining the best talent. A Gallup survey last year found that 93 percent of job-switchers left their old company for a new one; just 7 percent took a new job within the same organization.

HiQ has raised more than $12 million since its founding in 2012. LinkedIn itself is making moves to develop a similar capability, Chen said, meaning that LinkedIn’s attempt to block hiQ from accessing its data could be interpreted as a self-interested move to kneecap a competitor. If hiQ can’t get the professional data it needs to fuel its analytic engine, its business could “go under,” Chen said.

To allow hiQ access to LinkedIn’s data would be a gross violation of LinkedIn users’ privacy, LinkedIn argued. But Chen didn’t buy it, saying that LinkedIn already chooses to provide data to third parties of its own accord. What’s more, he added, people who make their profiles public on LinkedIn probably want their information seen by others, which undermines LinkedIn’s claim to be protecting user privacy.

Allowing LinkedIn to selectively block members of the public from accessing public profiles — under penalty of the country’s anti-hacking laws, no less — “could pose an ominous threat to public discourse and the free flow of information promised by the Internet,” wrote Chen in his ruling.

LinkedIn vowed to keep fighting in court.

“We’re disappointed in the court’s ruling,” it said in a statement. “This case is not over. We will continue to fight to protect our members’ ability to control the information they make available on LinkedIn.”

The case raises deep questions about who truly represents users’ interests. From one perspective, LinkedIn is duty-bound to protect its customers’ data and prevent it from falling into the wrong hands — perhaps all the more so if, as it appears with hiQ, the information could give employers more leverage over their workers.

But LinkedIn’s position requires that it have a tremendous say over how users’ own information can be used and distributed. Concentrating power in this way benefits not only LinkedIn, but also the owners of other platforms such as Facebook, Google and other sites that host user-supplied content.

“If LinkedIn’s view of the law is correct, nothing would prevent Facebook from barring hiQ in the same way LinkedIn has,” said Chen.

That’s why this case is so important: How it turns out could set a precedent for the entire Internet, and a global economy that depends on data.”

https://www.washingtonpost.com/news/the-switch/wp/2017/08/15/is-linkedin-trying-to-protect-your-data-or-hoard-it/