Secretive face-matching startup has customer list stolen

Clearview, a secretive facial-recognition startup that claims to scrape the Internet for images to use, has itself now had data unexpectedly scraped, in a manner of speaking. Someone apparently popped into the company’s system and stole its entire client list, which Clearview to date has refused to share.

Clearview notified its customers about the leak today, according to The Daily Beast, which obtained a copy of the notification. The memo says an intruder accessed the list of customers, as well as the number of user accounts those customers set up and the number of searches those accounts have conducted.

“Unfortunately, data breaches are part of life in the 21st century,” Tor Ekeland, an attorney for Clearview, told The Daily Beast. “Our servers were never accessed. We patched the flaw and continue to work to strengthen our security.”

Clearview vaulted from obscurity to the front page following a report by The New York Times in January. The paper described Clearview as a “groundbreaking” service that could completely erode any meaningful form of privacy.

The company at the time claimed to have in place 600 agreements with law enforcement agencies to use its software, which allegedly aggregated more than 3 billion facial images from other apps, platforms, and services. Those other platforms and their parent companies—including Twitter, Google (YouTube), Facebook (and Instagram), Microsoft (LinkedIn), and Venmo—all sent Clearview cease and desist letters, claiming its aggregation of images from their services violates their policies.

Clearview, which stresses its service is “available only to law enforcement agencies and select security professionals,” refused repeatedly to share client lists with reporters from several outlets. Reporters from The New York Times and BuzzFeed both dug into several of the company’s marketing claims and found significant exaggerations. Clearview boasts that its technology helped lead to the arrest of a would-be terrorist in New York City, for example, but the NYPD told BuzzFeed that Clearview had nothing to do with the case.

In the face of public criticism, the company made exactly two blog posts, each precisely two paragraphs long. The first, under the headline “Clearview is not a consumer application,” insists, “Clearview is NOT available to the public,” emphasis theirs. It adds, “While many people have advised us that a public version would be more profitable, we have rejected the idea.”

Four days later, the company added another post, stressing that its code of conduct “mandates that investigators use our technology in a safe and ethical manner.” While “powerful tools always have the potential to be abused,” the company wrote, its app “has built-in safeguards to ensure these trained professionals only use it for its intended purpose.”

Clearview did not at any point say what these safeguards might be, however, nor has it explained who qualifies as “select security professionals.”

Other companies that partner with law enforcement on surveillance technologies have not always succeeded in keeping their client lists quiet, either. Amazon, for example, attempted just that with its Ring line of products. After repeated media reports pried out the details, however, Ring finally published a list of 405 partner agencies last August and has kept it updated since; as of February 13, the list counted 967 such deals.

via Ars Technica https://arstechnica.com

February 26, 2020 at 12:01PM

ACLU sues feds to get information about facial-recognition programs

The use of facial recognition has spread from photo albums and social media to airports, doorbells, schools, and law enforcement. Now, the American Civil Liberties Union wants top US agencies to share records detailing what face data they’re collecting and what they’re doing with it.

The ACLU in January submitted Freedom of Information Act (FOIA) requests to the Department of Justice, the Drug Enforcement Administration, and the FBI seeking records relating to the agencies’ “use of face recognition programs and other biometric identification and tracking technology.” Almost 10 months later, the ACLU has received no response. And so the organization today filed suit against all three agencies, seeking the records.

The records are “important to assist the public in understanding the government’s use of highly invasive biometric identification and tracking technologies,” says the complaint, filed in federal court in Massachusetts. Through the records, the ACLU seeks to “understand and inform the public about, among other things, how face recognition and other biometric identification technologies are currently being used by the government and what, if any, safeguards are currently in place to prevent their abuse and protect core constitutional rights.”


via Policy – Ars Technica https://arstechnica.com

October 31, 2019 at 04:16PM

Facebook must face $35B facial-recognition lawsuit following court ruling

Facebook’s most recent attempt to extricate itself from a potentially landmark lawsuit has come to a dead end, as a federal court declined to hear another appeal to stop the $35 billion class action.

In San Francisco last week, the US Court of Appeals for the 9th Circuit denied Facebook’s petition for an en banc hearing in the case. Ordinarily, appeals are heard by a panel of three judges drawn from all the judges in a given circuit. An en banc hearing is a form of review in which a much larger group of the circuit’s judges hears the case together. In the 9th Circuit, 11 of the 29 judges sit on en banc cases.

Facebook had requested an en banc hearing to appeal the 9th Circuit’s August ruling, in which the court determined that the plaintiffs had standing to sue even though Facebook’s alleged actions did not cause them any quantifiable financial harm. The class-action suit can now move forward.


via Policy – Ars Technica https://arstechnica.com

October 22, 2019 at 04:15PM

California: No Face Recognition on Body-Worn Cameras

EFF has joined a coalition of civil rights and civil liberties organizations to support a California bill that would prohibit law enforcement from applying face recognition and other biometric surveillance technologies to footage collected by body-worn cameras.

About five years ago, body cameras began to flood into police and sheriff departments across the country. In California alone, the Bureau of Justice Assistance provided more than $7.4 million in grants for these cameras to 31 agencies. The technology was pitched to the public as a means to ensure police accountability and document police misconduct. However, if enough cops have cameras, a police force can become a roving surveillance network, and the thousands of hours of footage they log can be algorithmically analyzed, converted into metadata, and stored in searchable databases.
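To make that last point concrete, here is a minimal sketch, in Python, of how logged footage becomes searchable metadata. The face matcher is a stub with fabricated output, standing in for a real recognition model, and the schema and identifiers are hypothetical rather than any vendor’s actual design.

```python
# Hypothetical sketch: how body-cam footage becomes a searchable
# sightings database. The matcher below is a stub with fabricated
# output; a real system would run a face recognition model per frame.
import sqlite3

def match_faces(frame_id):
    """Stub face matcher returning (person_id, confidence) pairs."""
    fabricated = {
        "cam7_frame_001": [("person_123", 0.91)],
        "cam7_frame_002": [("person_123", 0.88), ("person_456", 0.74)],
    }
    return fabricated.get(frame_id, [])

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sightings "
           "(person_id TEXT, camera TEXT, frame TEXT, confidence REAL)")

# Index every detection: hours of video reduce to queryable metadata.
for frame in ("cam7_frame_001", "cam7_frame_002"):
    for person_id, confidence in match_faces(frame):
        db.execute("INSERT INTO sightings VALUES (?, ?, ?, ?)",
                   (person_id, "cam7", frame, confidence))

# The privacy problem in one query: everywhere this person was seen.
for row in db.execute("SELECT camera, frame, confidence FROM sightings "
                      "WHERE person_id = ?", ("person_123",)):
    print(row)
```

The danger is the final query: once detections are indexed this way, reconstructing a person’s movements across every camera-equipped officer becomes a one-line lookup.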

Today, we stand at a crossroads as face recognition technology can now be interfaced with body-worn cameras in real time. Recognizing the impending threat to our fundamental rights, California Assemblymember Phil Ting introduced A.B. 1215 to prohibit the use of face recognition, or other forms of biometric technology, such as gait recognition or tattoo recognition, on a camera worn or carried by a police officer.

“The use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights,” the lawmaker writes in the introduction to the bill. “This technology also allows people to be tracked without consent. It would also generate massive databases about law-abiding Californians, and may chill the exercise of free speech in public places.”

Ting’s bill has the wind in its sails. The Assembly passed the bill with a 45-17 vote on May 9, and only a few days later the San Francisco Board of Supervisors made history by banning government use of face recognition. Meanwhile, law enforcement face recognition has come under heavy criticism at the federal level by the House Oversight Committee and the Government Accountability Office.

The bill is now before the California Senate, where it will be heard by the Public Safety Committee on Tuesday, June 11.

EFF, along with a coalition of civil liberties organizations including the ACLU, Advancing Justice – Asian Law Caucus, CAIR California, Data for Black Lives, and a number of our Electronic Frontier Alliance allies, has joined forces in support of this critical legislation.

Face recognition technology has disproportionately high error rates for women and people of color. Making matters worse, law enforcement agencies conducting face surveillance often rely on images pulled from mugshot databases, which include a disproportionate number of people of color due to racial discrimination in our criminal justice system. So face surveillance will exacerbate historical biases born of, and contributing to, unfair policing practices in Black and Latinx neighborhoods.

Polling commissioned by the ACLU of Northern California in March of this year shows the people of California, across party lines, support these important limitations. The ACLU’s polling found that 62% of respondents agreed that body cameras should be used solely to record how police treat people, and as a tool for public oversight and accountability, rather than to give law enforcement a means to identify and track people. In the same poll, 82% of respondents said they disagree with the government being able to monitor and track a person using their biometric information.

Last month, Reuters reported that Microsoft rejected an unidentified California law enforcement agency’s request to apply face recognition to body cameras due to human rights concerns.

“Anytime they pulled anyone over, they wanted to run a face scan,” Microsoft President Brad Smith said. “We said this technology is not your answer.”

We agree that ubiquitous face surveillance is a mistake, but we shouldn’t have to rely on the ethical standards of tech giants to address this problem. Lawmakers in Sacramento must use this opportunity to prevent the threat of mass biometric surveillance from becoming the new normal. We urge the California Senate to pass A.B. 1215.

via EFF.org Updates http://bit.ly/US8QQS

June 10, 2019 at 07:17PM

Skip the Surveillance By Opting Out of Face Recognition At Airports

Government agencies and airlines have ignored years of warnings from privacy groups and Senators that using face recognition technology on travelers would massively violate their privacy. Now passengers are in revolt as well, and they’re demanding answers.

Last week, a lengthy exchange on Twitter between a traveler who was concerned about her privacy and a spokesperson for the airline JetBlue went viral, and many of the questions asked by the traveler and others were the same ones that we’ve posed to Customs and Border Protection (CBP) officials: Where did you get my data? How is it protected? Which airports will use this? Where in the airports will it be used? Most importantly, how do I opt-out?



How to Opt Out

These questions should be simple to answer, but we haven’t gotten simple answers. When we asked CBP for more information, they told us to “visit our website.” We did, and we still have many of the same questions. Representatives for airlines, which partner directly with the government agencies, also seem unable to answer the concerns, as the JetBlue spokesperson made evident. Both the agencies and the airlines seemed to expect no pushback from passengers when they implemented this boarding-pass-replacing panopticon. The convenience would win out, they seemed to assume, not expecting people to mind having their faces scanned “the same way you unlock your phone.” But now that “your face is your boarding pass” (as JetBlue awkwardly puts it), at least in some airports, the invasive nature of the system is much clearer, and travelers are understandably upset.

It might sound trite, but right now, the key to opting out of face recognition is to be vigilant. There’s no single box you can check, and importantly, it may not be possible for non-U.S. persons to opt out of face recognition entirely. For those who can opt out, you’ll need to spot the surveillance when it’s happening. To start, TSA PreCheck, Clear, and other ways of “skipping the line” often require biometric identification, and are often being used as test cases for these sorts of programs. Once you’re at the airport, be on the lookout for any time a TSA, CBP, or airline employee asks you to look into a device, or when there’s a kiosk or signage like those below. That means your biometric data is probably about to be scanned.

[Image: a sign in an airport explaining that face recognition is in use in the area, one example of the signage CBP provided in response to our letter.]

At the moment, face recognition is most likely to happen at specific airports, including Atlanta, Chicago, Seattle, San Francisco, Las Vegas, Los Angeles, Washington (Dulles and Reagan), Boston, Fort Lauderdale, Houston Hobby, Dallas/Fort Worth, JFK, Miami, San Jose, Orlando, and Detroit; while flying on Delta, JetBlue, Lufthansa, British Airways and American Airlines; and in particular, on international flights. But, that doesn’t mean that other airlines and airports won’t implement it sooner rather than later.

[Image: a traveler stands facing a small screen mounted on a pedestal about five feet off the ground, an example of a face recognition kiosk from TSA’s website.]

To skip the surveillance, CBP says you “should notify a CBP Officer or an airline or airport representative in order to seek an alternative means of verifying [your] identity and documents.” Do the same when you encounter this with an airline. While there should be signage near the face recognition area, it may not be clear. If you’re concerned about creating a slight delay for yourself or other passengers, take note: though CBP has claimed to have a 98% accuracy rating in their pilot programs, the Office of the Inspector General could not verify those numbers, and even a 2% error rate would cause thousands of people to be misidentified every day. Most face recognition technology has significantly lower accuracy ratings than that, so you might actually be speeding things up by skipping the surveillance.
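That arithmetic is worth spelling out. The sketch below assumes roughly 2 million travelers screened per day at US airports, a round number chosen for illustration rather than a CBP figure:

```python
# Back-of-envelope check: misidentifications implied by a given error
# rate. The daily traveler count is an assumed round number, not a
# CBP figure.
travelers_per_day = 2_000_000  # assumption: approx. US daily screenings

for error_rate in (0.02, 0.05, 0.10):
    misidentified = travelers_per_day * error_rate
    print(f"{error_rate:.0%} error rate -> "
          f"~{misidentified:,.0f} misidentified travelers per day")
# 2% of 2,000,000 is already 40,000 people per day.
```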

The Long And Winding Biometric Pathway

Part of the reason for the confusion about how to opt out is that there are actually (at least) three different face recognition checkpoints looming:

  • Airlines want to use your face as your boarding pass, saying “it’s about convenience.”
  • CBP, which is part of the Department of Homeland Security (DHS), wants to use your face to check against DHS and State Department databases when you’re entering or exiting the country.
  • The TSA wants to compare your face against your photo identification throughout the airport.

And if people are upset now, they will be furious to know this is just the beginning of the “biometric pathway” program: CBP and TSA want to use face recognition and other biometric data to track everyone from check-in, through security, into airport lounges, and onto flights (PDF). They’re moving fast, too, despite (or perhaps because of) the fact that there are no regulations on this sort of technology: DHS is hoping to use facial recognition on 97 percent of departing air passengers within the next four years and on 100 percent of all international passengers in the top 20 U.S. airports by 2021.


If the government agencies get their way, new biometric data could be collected from travelers, and used against them, wherever they are in the airport—and much of that collection will be implemented by private companies (even rental car companies are getting in on the action). CBP will store that facial recognition data for two weeks for U.S. citizens and lawful permanent residents, and for 75+ years for non-U.S. persons. In addition, the biometric data collected by at least some of these systems in the future—which can include your fingerprints, the image of your face, and the scan of your iris—will be stored in FBI and DHS databases and will be searched again and again for immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

Passengers Will Bear the Burden of Privacy Invasion, Not Airlines or Government Agencies

It’s easy for companies and agencies to tout the convenience of this sort of massive data collection and sharing scheme. But as we’ve seen in notable privacy fiascos over the last few years—from Facebook’s Cambridge Analytica scandal, to the breaches of the Office of Personnel Management and Equifax in the U.S., to the constant hacking of India’s national biometric database, Aadhaar—it’s the customers and passengers who will bear the burden when things go wrong, and they will go wrong. These vast biometric databases create huge security and privacy risks, and a company leaking your passwords or credit card numbers is nothing compared to one leaking your biometric data. While you can change a password, you can’t easily change your face.

Additionally, these systems are notoriously inaccurate and often rely on out-of-date information. Because immigrants and people of color are disproportionately represented in criminal and immigration databases, and because face recognition systems are less capable of identifying people of color, women, and young people, the weight of these inaccuracies will fall disproportionately on them. It will be the passengers who bear the burden when they are stuck watching flights they paid for take off without them because of an error in a database or an algorithm, or because the non-biometric options they preferred weren’t in place.

It’s time for the government agencies and the airlines to pause these programs until they can clearly and adequately provide:

  • Photographs of the signage in situ in the airports in question, as well as any additional information about the opt-out process.
  • An explanation of the locations where CBP will be providing meaningful and clear opt out notice to travelers (for example, at entry points, point-of-sale, ticket counters, security checkpoints, and boarding gates) as well as the specific language travelers can use to opt out of the biometric data collection program.
  • An up-to-date list of all the airports and airlines that currently participate in the biometric exit program.
  • Information about the algorithm CBP is using to compare photos (provided by NEC), as well as the accuracy information associated with that algorithm.
  • Technological specifications for transferring data from point of collection to DHS and with vendors and airlines.

Additional questions—like how data is safeguarded—are laid out in our letter to CBP.

Congress must also demand the answers to these questions. And lawmakers must require agencies and airlines to pause this program until they can not only ensure the biometric privacy of travelers is protected but more importantly justify this huge invasion of privacy. Just last month, three Senators released a joint statement calling on DHS to pause the program until there can be “a rulemaking to establish privacy and security rules of the road,” but so far, they’ve been ignored.

Trading privacy for convenience is a bad bargain, and it can feel like the deal isn’t always one we have a choice in. DHS has said that the only way we can ensure that our biometric data isn’t collected when we travel is to “refrain from traveling.” That’s ridiculous. The time to regulate and restrict the use of facial recognition technology is now, before it becomes embedded in our everyday lives. We must keep fighting to make sure that in the future, it gets easier, and not harder, to defend our privacy—biometric or otherwise.

via EFF.org Updates http://bit.ly/US8QQS

April 25, 2019 at 01:41PM

EPIC to Senate: Oversight Board Must Review Government Use of Facial Recognition, AI

In advance of a hearing about the Privacy and Civil Liberties Oversight Board, EPIC sent a statement to the Senate Judiciary Committee outlining priorities. EPIC said the Civil Liberties Board should (1) release the report on Executive Order 12333; (2) review the use of facial recognition technology and propose safeguards; (3) review the use of artificial intelligence and propose safeguards; and (4) monitor proposals for “smart” borders and assess privacy impacts on US residents. The independent agency reviews federal agency programs to ensure adequate safeguards for privacy and civil liberties. EPIC helped establish the PCLOB. In 2003 EPIC testified before the 9-11 Commission and urged the creation of an independent privacy agency to oversee the surveillance powers established after 9/11. EPIC also set out initial priorities for the PCLOB and spoke at the first meeting of the Oversight Board in 2013. In 2016, EPIC awarded former PCLOB Board Member Judge Patricia Wald with the EPIC Champion of Freedom Award.

via epic.org http://epic.org/

February 5, 2019 at 01:15PM

Congress: Amazon didn’t give “sufficient answers” about facial recognition

[Image: Jeff Bezos, founder and chief executive of Amazon.com, in May 2018. Credit: Alex Wong/Getty Images]

Seven members of the House of Representatives and one United States senator have now sent a second letter to Amazon’s CEO, asking for more clarification about the company’s use of facial-recognition technology. Although two House members in the group sent a similar letter to CEO Jeff Bezos back in July, the larger group now says that Amazon “failed to provide sufficient answers” about its commercial facial-recognition program, known as Rekognition.

Prior to the July letter, the American Civil Liberties Union used the service in a demonstration of its inadequacy—the service falsely matched 28 members of Congress with mugshots.

The new letter, issued on Thursday, was signed by Sen. Edward Markey (D-Mass.) and Rep. Jimmy Gomez (D-Calif.), among others. The document states that the legislators have “serious concerns that this type of product has significant accuracy issues, places disproportionate burdens on communities of color, and could stifle Americans’ willingness to exercise their First Amendment rights in public.”
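For context on why the threshold matters in this dispute: Rekognition’s face-comparison API returns only the matches whose similarity score clears a caller-chosen cutoff, so that one setting largely determines how many false matches appear. Here is a minimal, hypothetical sketch using boto3; the file names are placeholders, running it requires AWS credentials, and it is not the ACLU’s actual test harness.

```python
# Hypothetical sketch of a Rekognition face comparison at different
# similarity thresholds. File names are placeholders; running this
# requires AWS credentials. Not the ACLU's actual test harness.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("probe_photo.jpg", "rb") as f:      # placeholder image
    probe = f.read()
with open("mugshot_photo.jpg", "rb") as f:    # placeholder image
    candidate = f.read()

for threshold in (80.0, 95.0, 99.0):
    response = client.compare_faces(
        SourceImage={"Bytes": probe},
        TargetImage={"Bytes": candidate},
        SimilarityThreshold=threshold,  # matches below this are dropped
    )
    print(f"threshold {threshold:.0f}: "
          f"{len(response['FaceMatches'])} match(es) returned")
```

The ACLU ran its test at the service’s then-default threshold of 80 percent, while Amazon has said law enforcement should use 99 percent; much of the argument over the system’s accuracy comes down to that single tunable parameter.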


Amazon pitched facial recognition tech to ICE despite employee objections

Amazon would really like U.S. law enforcement to use its facial-recognition software, despite how its employees feel. According to internal documents obtained by the Project on Government Oversight, Amazon met with officials from U.S. Immigration and Customs Enforcement (ICE) over the summer to pitch its facial-recognition technology, known as Rekognition.

In June 2018, Amazon Web Services sales representatives met with ICE officials to discuss the government agency’s use of the face-scanning technology. In an email to ICE’s Homeland Security Investigations that followed, Amazon sent “action items,” which included “Rekognition Video tagging/analysis, scalability, custom object libraries.” The Amazon sales representative went on to thank the agency for its interest in using the company’s technology “to support ICE and the HSI mission.”


Amazon wrongly IDed 28 members of Congress as arrestees, and they’re not happy about it

In the wake of the American Civil Liberties Union’s test, in which 28 members of Congress were wrongly identified as arrestees via Amazon’s Rekognition facial recognition service, some of those lawmakers have raised serious concerns with the tech giant. So far, Amazon has answered neither their questions nor Ars’. “We request an immediate meeting with you to discuss how to address the defects of this technology in order to prevent inaccurate outcomes,” wrote Rep. Jimmy Gomez (D-California) and Rep. John Lewis (D-Georgia) in a letter to CEO Jeff Bezos.


HART: Homeland Security’s Massive New Database Will Include Face Recognition, DNA, and Peoples’ “Non-Obvious Relationships”

So why do we know so little about it?

The U.S. Department of Homeland Security (DHS) is quietly building what will likely become the largest database of biometric and biographic data on citizens and foreigners in the United States. The agency’s new Homeland Advanced Recognition Technology (HART) database will include multiple forms of biometrics—from face recognition to DNA—data from questionable sources, and highly personal data on innocent people. It will be shared with federal agencies outside of DHS, as well as with state and local law enforcement and foreign governments. And yet, we still know very little about it.

The records DHS plans to include in HART will chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate. Data like face recognition makes it possible to identify and track people in real time, including at lawful political protests and other gatherings. Other data DHS is planning to collect—including information about people’s “relationship patterns” and from officer “encounters” with the public—can be used to identify political affiliations, religious activities, and familial and friendly relationships. These data points are also frequently colored by conjecture and bias.
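What might mining for “non-obvious relationships” look like in practice? Here is a purely illustrative sketch, with fabricated data and the networkx graph library standing in for whatever analytics DHS actually runs:

```python
# Purely illustrative: "non-obvious relationship" mining as graph
# traversal. Data is fabricated; networkx stands in for whatever
# analytics DHS actually uses.
import networkx as nx

g = nx.Graph()
# Edges link people to attributes harvested from disparate records.
g.add_edge("Alice", "addr:12 Oak St")    # lease record
g.add_edge("Bob", "addr:12 Oak St")      # old utility bill
g.add_edge("Bob", "protest_2019_photo")  # face match in a crowd photo

# Alice never appears in any protest record, yet a two-hop path now
# associates her with one through a shared former address.
path = nx.shortest_path(g, "Alice", "protest_2019_photo")
print(" -> ".join(path))
# Alice -> addr:12 Oak St -> Bob -> protest_2019_photo
```

Neither record ties Alice to the protest directly; the association only emerges once disparate records are merged into a single graph, which is precisely what a consolidated database like HART enables.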
