FBI Violated Americans’ Privacy Rights in Data Search: Court

Some FBI electronic surveillance activities violated the constitutional privacy rights of Americans swept up in a controversial foreign intelligence program, a secretive surveillance court ruled. The ruling deals a rare rebuke to U.S. spying programs that have withstood legal challenge and review since they were dramatically expanded after the Sept. 11, 2001, attacks. The opinion resulted in the FBI agreeing to apply new procedures, including recording how the database is searched to detect future compliance issues, the Wall Street Journal reports. The intelligence community disclosed Tuesday that the Foreign Intelligence Surveillance Court last year found that the FBI’s efforts to search data about Americans ensnared in a warrantless internet-surveillance program intended to target foreign suspects violated both the law and the Constitution’s Fourth Amendment protection against unreasonable searches. The government made the issue public only after losing an appeal of the judgment before another secret court.

The court said that in at least a handful of cases, the FBI had been improperly searching a database of raw intelligence for information on Americans—raising concerns about oversight of the program, which operates in near total secrecy. The October 2018 court ruling identified improper searches of raw intelligence databases by the bureau in 2017 and 2018 that were deemed problematic in part because of their breadth. They involved queries related to thousands or tens of thousands of pieces of data, such as emails or telephone numbers. In one case, the FBI was using the intelligence information to vet its personnel and cooperating sources. Federal law requires that the database only be searched by the FBI as part of seeking evidence of a crime or for foreign-intelligence information. The opinion was written by U.S. District Judge James Boasberg, who serves on the FISA Court.

via The Crime Report https://ift.tt/2myW3Gx

October 9, 2019 at 08:22AM

Twitter “Unintentionally” Used Your Phone Number for Targeted Advertising

Stop us if you’ve heard this before: you give a tech company your personal information in order to use two-factor authentication, and later find out that they were using that security information for targeted advertising.

That’s exactly what Twitter fessed up to yesterday in an understated blog post: the company has been taking email addresses and phone numbers that users provided for “safety and security purposes” like two-factor authentication, and using them for its ad tracking systems, known as Tailored Audiences and Partner Audiences.

Twitter claims this was an “unintentional,” “inadvertent” mistake. But whether this was avarice or incompetence on Twitter’s part, the result confirms some users’ worst fears: that taking advantage of a bread-and-butter security measure could expose them to privacy violations. Twitter’s abuse of phone numbers for ad tracking threatens to undermine people’s trust in the critical protections that two-factor authentication offers.

How Did Your 2FA Phone Number End Up in Twitter’s Ad Tracking Systems?!

Here’s how it works. Two-factor authentication (2FA) lets you confirm, or “authenticate,” your identity with another piece of information, or “factor,” in addition to your password. It sometimes goes by different names on different platforms—Twitter calls it “login verification.”

There are many different types of 2FA. SMS-based 2FA involves receiving a text with a code that you enter along with your password when you log in. Since it relies on SMS text messages, this type of 2FA requires a phone number. Other types of 2FA—like authenticator apps and hardware tokens—do not require a phone number to work.
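Why don’t authenticator apps need a phone number? Because the second factor is computed locally from a shared secret and the clock, rather than delivered over the phone network. A minimal sketch of the standard TOTP algorithm (RFC 6238) that such apps implement:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is just the current 30-second interval count.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Both the app and the server hold the secret, so codes can be generated and verified without any message—or phone number—ever touching the carrier network.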

No matter what type of 2FA you choose, however, Twitter makes you hand over your phone number anyway. (Twitter now also requires a phone number for new accounts.) And that pushes users who need 2FA security the most into an unnecessary and painful choice between giving up an important security feature or surrendering part of their privacy.

In this case, security phone numbers and email addresses got swept up into two of Twitter’s ad systems: Tailored Audiences, a tool to let an advertiser target Twitter users based on their own marketing list, and Partner Audiences, which lets an advertiser target users based on other advertisers’ marketing lists. Twitter claims the “error” occurred in matching people on Twitter to these marketing lists based on phone numbers or emails they provided for “safety and security purposes.”
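Twitter hasn’t published the mechanics of the “error,” but list-matching tools across the ad industry generally work by comparing normalized, hashed contact identifiers rather than raw ones. A hypothetical sketch of that general pattern (the names here are illustrative, not Twitter’s actual systems):

```python
import hashlib

def normalize_and_hash(identifier):
    # Normalize (trim, lowercase) then hash, so raw contact info
    # is never compared or shared directly.
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

def match_audience(advertiser_list, user_records):
    """Return user IDs whose on-file contact info appears in an
    advertiser's uploaded marketing list.

    user_records: {user_id: contact identifier on file}
    """
    uploaded = {normalize_and_hash(i) for i in advertiser_list}
    return [uid for uid, contact in user_records.items()
            if normalize_and_hash(contact) in uploaded]
```

The failure mode Twitter describes amounts to feeding security-only contact fields into the `user_records` side of a match like this, alongside contact info users actually intended for advertising use.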

Twitter doesn’t say what they mean by “safety and security purposes,” but it is not necessarily limited to 2FA. In addition to 2FA information, it could potentially include the phone number you have to provide to unlock your account if Twitter has incorrectly marked it as a bot. Since Twitter forces many people into providing such a phone number to regain access to their account, it would be particularly pernicious if Twitter was using phone numbers gathered from that system for advertising.

What We Don’t Know

Twitter’s post downplays the problem, leaving out numbers about the scope of the harm, and details about who was affected and for how long. For instance, if Twitter locked you out of your account and required that you add a phone number to get back in, was your phone number misused for advertising? If Twitter required you to add a phone number when you signed up, for anti-spam purposes, was your phone number misused? When is an email address considered “fair game” for ad targeting and when is it not?

Twitter claims it “cannot say with certainty how many people were impacted by this.” That may be true if they are trying to parse finely who actually received an ad. But that’s an excessively narrow view of “impact.” Every user whose phone number was included in this inappropriate targeting should be considered impacted, and Twitter should disclose that number.

2FA is Not the Problem

Based on what we know, and what else we can reasonably guess about how Twitter users’ security information was misused for ad tracking, Twitter’s explanation stretches the meaning of “unintentionally.” After all, the targeted advertising business model embraced by Twitter (and by most other large social media companies) incentivizes ad technology teams to scoop up data from as many places as they can get away with—and sometimes they can get away with quite a lot.

The important conclusion for users is: this is not a reason to turn off or avoid 2FA. The problem here is not 2FA. Instead, the problem is how Twitter and other companies have misused users’ information with no regard for their reasonable security and privacy expectations.

What Next

Twitter needs to come clean about exactly what happened, when, and to how many people. It needs to explain what processes it is putting in place to ensure this doesn’t happen again. And it needs to implement 2FA methods that do not require giving Twitter your phone number.

via EFF.org Updates https://ift.tt/US8QQS

October 9, 2019 at 05:10PM

Vimeo collected detailed facial scans without consent, lawsuit alleges

Vimeo is collecting and storing thousands of people’s facial biometrics without their permission or knowledge, a recently filed lawsuit alleges.

The “highly detailed geometric” facial maps are being collected and stored in violation of the Illinois Biometric Information Privacy Act, or BIPA, according to a complaint filed last week in Illinois state court. The law bars companies from obtaining or possessing an individual’s biometric identifiers or information unless the company (1) informs the person in writing of its plans to do so, (2) states in writing the purpose and length of term for the collection and storage, (3) receives written permission from the user, and (4) publishes retention schedules and guidelines for destroying the biometric identifiers and information.

The complaint alleges Vimeo is violating the law by collecting, storing, and using the facial biometrics of thousands of unwitting individuals throughout the United States whose faces appear in photos or videos uploaded to the Magisto video-editor application. Vimeo acquired Magisto in April and claimed the editor had more than 100 million users.


via Policy – Ars Technica https://arstechnica.com

September 26, 2019 at 05:04PM

Google wins case as court rules “right to be forgotten” is EU-only

The Internet is forever, we tell social media users: be careful what you put online, because you can’t ever take it back off. And while that’s gospel for US users, there’s some nuance to that dictum across the Atlantic. In Europe, individuals have a right to be forgotten and can request that information about themselves be taken down—but only, a court has now ruled, within Europe.

The Court of Justice of the European Union, the EU’s highest court, issued a ruling today finding that there is no obligation under EU law for a search service to carry out a valid European de-listing request globally.

“EU law requires a search engine operator to carry out such a de-referencing on the versions of its search engine corresponding to all the Member States,” the Court wrote in a statement (PDF), “and to take sufficiently effective measures to ensure the effective protection of the data subject’s fundamental rights.”

“Since 2014, we’ve worked hard to implement the right to be forgotten in Europe, and to strike a sensible balance between people’s rights of access to information and privacy,” Google said in a statement. “It’s good to see that the court agreed with our arguments.”

The right to be forgotten

The right to be forgotten—to request that information about you be de-listed and made unsearchable—has been a point of contention between Google and the EU since the idea was first proposed back in 2010. In 2014, the Court of Justice ruled that search engines have an obligation to remove links that are old, out of date, irrelevant, not in the public interest, or that could harm individuals.

Google was inundated with requests to remove links after it made an online request form available to European consumers, and it began to comply. The company has continued to review all takedown requests, cumulatively honoring 45% of requests to delist results, or about 846,000 links, through September 7 of this year.

The company, however, uses geofencing practices to limit where in the world links are scrubbed from. Users in the EU cannot see links blocked by takedown requests under EU law, but users in other regions, including Asia and North and South America, can.

In 2015, French regulator CNIL (Commission Nationale de l’Informatique et des Libertés) ordered Google to apply takedown requests globally. The commission argued that users in the country could use VPNs or other similar tools as a workaround to access links they would otherwise be blocked by law from seeing. In 2016, the CNIL imposed a €100,000 fine on Google (about $113,000 at the time) for failing to comply.

Google appealed the CNIL ruling, and the Court of Justice in 2017 agreed to hear the case.

While the court today held that search engines such as Google are not required to honor de-listing requests worldwide, it did add a reminder that companies are expected to have measures in place “discouraging Internet users from gaining access, from one of the Member States, to the links in question which appear on versions of that search engine outside the EU.”

via Ars Technica https://arstechnica.com

September 24, 2019 at 03:37PM

License Plate Camera Footage Helps Fight Crime in Florida

(TNS) — Just before midnight over Labor Day weekend, two men in a Nissan 370z pulled onto Paddock Oaks Drive in Tampa, Fla., and stopped in front of the first house on the block.

In under 10 minutes, one man got out of the vehicle, removed the tailgate of a Ford pickup truck sitting in the driveway, and left. A neighbor’s home security system caught the moment.

Identifying the men might have taken significant legwork, but an automatic license plate reader positioned at the entrance of the community captured key evidence. Two weeks later, the Hillsborough County, Fla., Sheriff’s Office made two arrests.

Paddock Oaks is one of 14 Tampa Bay communities that have signed on with Atlanta-based Flock Safety to provide high-speed, high-definition cameras for surveillance. It’s a new twist on a technology that companies have historically sold to law enforcement, repossession companies or toll operators. And it may be coming to a neighborhood near you.

The license plate footage in Riverview was “very, very important for us to make the arrest because it helped us to be able to identify who the vehicles belonged to,” said Joseanett Diaz-Sanchez, spokeswoman for the Hillsborough County Sheriff’s Office.

Flock, one of the first companies to market the technology to neighborhoods, touts the incident as a success.

“We are a crime-solving company,” said Joshua Miller, Flock spokesman. “We want to help everybody of any kind deter would-be criminals.”

But privacy advocates say the cameras amount to a dragnet surveillance network put in place without public discussion, carrying a significant potential for abuse and monitoring of innocent people, especially by law enforcement.

License plate readers capture every plate that passes in front of them, using machine learning to turn a photo of a license plate into a machine-readable text string that is stored in a searchable database. Typically, they are used to enforce tolls and parking, but they are increasingly used by law enforcement to find vehicles of interest.
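As a rough sketch of the store-and-search half of that pipeline (the OCR step is a trained model and omitted here; every name below is hypothetical, not Flock’s actual schema):

```python
import sqlite3
import time

def normalize_plate(raw):
    # Strip separators and uppercase, so "abc-123" and "ABC 123"
    # index to the same key.
    return "".join(ch for ch in raw.upper() if ch.isalnum())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sightings (plate TEXT, camera_id TEXT, ts REAL)")

def record_sighting(plate_text, camera_id):
    # Called once per captured frame, after OCR has read the plate.
    conn.execute("INSERT INTO sightings VALUES (?, ?, ?)",
                 (normalize_plate(plate_text), camera_id, time.time()))

def lookup(plate_text):
    # Query every sighting of a plate across all cameras.
    cur = conn.execute("SELECT camera_id, ts FROM sightings WHERE plate = ?",
                       (normalize_plate(plate_text),))
    return cur.fetchall()

record_sighting("abc-123", "entrance-cam")
```

Normalizing at both insert and lookup time is what makes a rolling log of raw sightings instantly searchable by plate—and is also why a single query can reconstruct a vehicle’s movement history.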

Flock, which was founded in 2017, markets them to neighborhoods in 33 states as a way to tamp down on nonviolent crime.

Their cameras are placed at the entrances and exits of participating neighborhoods. Residents of the neighborhood can volunteer their license plate numbers so the cameras can distinguish between people who live there and those just visiting, and residents have the option of having their footage deleted immediately instead of having it uploaded to the company’s cloud storage. Flock charges around $2,000 per camera annually, and the neighborhood owns the footage — not Flock.

The company stores the footage for 30 days, a much shorter period than competitors, which often store the data indefinitely. This, spokesman Miller said, was a conscious choice to ensure the data is not misused.

“Privacy is as concerning to us as other people,” Miller said.

Flock declined to disclose which Tampa Bay neighborhoods it has cameras in or how many cameras are in use. It does, however, give police departments a list of available cameras.

Bill Staley, president of the Paddock Oaks Homeowners’ Association, became interested in the system when a neighbor who works in consumer electronics mentioned it to him. His neighborhood’s camera is placed at the sole entrance, attached to a pole at least 12 feet high.

“We were pretty much sold based on the fact that it’s able to pick up a license plate in the dark,” he said. “The technology is amazing.”

James Carey, the resident whose tailgate was stolen, used a combination of home surveillance footage from his neighbor’s house and two license plates captured by the Flock Safety camera in his neighborhood to report the crime to detectives, leading to two arrests. Carey initially thought the camera was intrusive.

Now, he said, “I think it’s a great idea. … I think 99 percent of your day you’re on camera and filmed anyway. The cameras are everywhere. It’s just another one.”

While customers own the data, Miller said the company has the technical ability to access footage if needed, and its privacy policy allows it to hand over information to law enforcement if it believes the footage is related to a crime. It can also retain the footage indefinitely for this purpose, though Miller said the company has not done so.

Privacy advocates consider the technology invasive. One of the primary concerns for Nathan Wessler, attorney with the American Civil Liberties Union, is the potential to surveil an innocent individual’s movements over a period of time.

“This technology allows the recording of people’s most private patterns of movement,” he said, “whether it’s to the doctor’s office or the lover’s house.”

Wessler works on the ACLU’s Speech, Privacy and Technology Project, which examines the use of and privacy issues with surveillance technology, such as license plate readers. He sees the greatest potential for abuse when law enforcement interacts with the footage.

For Flock, law enforcement can interact with the technology in two ways: request footage from neighborhoods that use Flock, or contract directly with Flock to install cameras. No Tampa Bay law enforcement agencies currently contract with Flock. Of the 100 law enforcement agencies Flock partners with around the country, just one—in Florida City—is in the state. Flock is, however, “looking to expand outward through Florida.”

Law enforcement agencies that have partnerships with Flock receive alerts within minutes if a recorded license plate shows up on the Federal Bureau of Investigation’s National Crime Information Center database, a shared listing of fugitives, missing people, stolen property, violent or wanted people, sex offenders and those on supervised release. Police that contract with the company can also create custom “hot lists” for cars they are looking for.

Several privacy issues crop up when a neighborhood is in control of the footage. For one, there’s the potential for discrimination by unduly focusing on someone because they don’t live in a given neighborhood.

“Every contact with law enforcement that is unnecessary carries risk,” Wessler said. “Those risks are particularly acute for people of color.”

Then there’s which neighborhoods have access to the cameras. The cost may be prohibitive for less affluent neighborhoods. Flock spokesman Miller said there is at least one instance where police have stepped in and contracted with Flock for a neighborhood. Even that may not be as benevolent as it might seem, Wessler said, as that means the residents are being watched and may not have had a say in getting the product.

Incorrectly identifying or flagging a license plate also is a possibility.

“There’s a real temptation by law enforcement and the public to see these computerized surveillance systems as being unfailing and inherently accurate,” Wessler said, “when in fact, they’re not.”

Because the residential use of license plate readers is relatively new, many scenarios haven’t been tested yet.

One of the largest concerns is how easily police are able to access footage. According to Flock, each camera has a limited number of people authorized to access it, and giving the information to police is voluntary. Before police are allowed to request footage (even if they directly contract with Flock), the agencies are required to suspect there is a possible crime, Flock said. But there isn’t necessarily a burden of proof beyond that, as Flock doesn’t require a significant explanation of the suspected crime and relies on customer complaints of any misuse of data.

“If police are using it with nefarious purposes, we will end that contract and take our equipment back,” Miller said. To date, the company has not found an instance of this.

Joe Giacalone, a professor at the City University of New York’s John Jay College of Criminal Justice, said that license plate readers can be a boon to crime stopping. But they should come with backstops, such as limited access, a way to monitor for misuse, and transparency about such policies when police are involved.

Wessler points out that the cameras do not stop, sleep or take breaks, and the volume of information they collect is unique.

“It’s not a great solution to say this neighborhood can consent to turning over data about tons of people who haven’t consented to being surveilled,” Wessler said.

Maj. Robert Ura, who works for Hillsborough County Sheriff’s Office, said he understands privacy concerns, but doesn’t see the cameras as any different than someone standing outside photographing license plates in public.

“I think in 2019 and beyond,” Ura said, “if you’re not aware that you’re under surveillance of one kind or another — Ring cameras, driveway cameras, traffic cameras — then you’re probably a little naive.”

©2019 the Tampa Bay Times (St. Petersburg, Fla.). Distributed by Tribune Content Agency, LLC.

via “Crime” – Google News https://ift.tt/2ObhGtj

September 20, 2019 at 06:04PM

Feds seek to seize all profits from Snowden’s book over NDA violation

The US Department of Justice may never be able to prosecute Edward Snowden for his procurement and distribution of highly classified information from the network of the National Security Agency. But DOJ lawyers have found a way to reach out and touch his income—and that of Macmillan Publishers—by filing a civil suit today against them for publication of his book, Permanent Record.

The lawsuit, filed in the US Court for the District of Eastern Virginia, does not seek to stop publication or distribution of Permanent Record. Instead, as a DOJ spokesperson said in a press release, “under well-established Supreme Court precedent [in the case] Snepp v. United States, the government seeks to recover all proceeds earned by Snowden because of his failure to submit his publication for pre-publication review in violation of his alleged contractual and fiduciary obligations.”

The suit—which also names Macmillan, its Henry Holt and Company imprint, and its parent company Holtzbrinck Publishers—claims Snowden violated both CIA and NSA secrecy agreements he signed as terms of his employment. Under the CIA secrecy agreements, “Snowden was required to submit his material for prepublication review ‘prior to discussing [the work] with or showing it to anyone who is not authorized to have access to’ classified information,” DOJ attorneys wrote in their filing. “Snowden was also required not to ‘take any steps towards public disclosure until [he] received written permission to do so from the Central Intelligence Agency.’”


via Policy – Ars Technica https://arstechnica.com

September 17, 2019 at 03:19PM

Google Seeks to Establish Facial Recognition in Homes

With opposition to facial recognition growing, Google has nonetheless built facial recognition into the Nest Hub Max, an “always on” device intended for use in the home. Google’s “face match” feature constantly scans for the facial images of each person in the household. Any interaction with the Google device is added to the secret user profile Google maintains for ad targeting. In 2014, EPIC filed a complaint with the FTC and said the “Commission clearly failed to address the significant privacy concerns presented in the Google acquisition of Nest,” a related device that enabled surveillance in the home. EPIC later asked the Federal Trade Commission to require Google to spin off Nest and to disgorge the data obtained from Nest users. A 2017 complaint to the Consumer Product Safety Commission from EPIC and consumer organizations pointed out that the “touchpad on the Google device is permanently set to ‘on’ so that it records all conversations without a consumer’s knowledge or consent.”

via epic.org http://epic.org/

September 16, 2019 at 02:39PM

Facebook Must Face Class Action Over Unwanted Text Messages, Panel Says

A federal appeals court has revived a proposed class action lawsuit against Facebook Inc. brought on behalf of non-Facebook users who claim they’ve gotten unsolicited texts from the company in violation of a federal robocalling statute.

The U.S. Court of Appeals for the Ninth Circuit reversed a lower court decision that had tossed a lawsuit brought by Noah Duguid, a non-Facebook user who claimed the company violated the Telephone Consumer Protection Act (TCPA) by mistakenly sending him security messages meant to alert users when their account had been accessed from an unrecognized device or browser. Duguid claimed that Facebook failed to respond to his multiple text and email requests to stop sending him the texts.

“The messages Duguid received were automated, unsolicited, and unwanted,” wrote Judge M. Margaret McKeown, adding that the messages fell outside an exemption to TCPA liability for emergency messages that has been outlined by the Federal Communications Commission. “Duguid did not have a Facebook account, so his account could not have faced a security issue, and Facebook’s messages fall outside even the broad construction the FCC has afforded the emergency exception,” McKeown wrote.

The court, however, joined with the Fourth Circuit in finding that an exemption for calls “made solely to collect a debt owed to or guaranteed by the United States” added to the TCPA by Congress in a 2015 amendment violated the First Amendment. But also like the Fourth Circuit, the court found that the federal debt collection exemption was severable from the TCPA, refusing a request from Facebook and its lawyers at Latham & Watkins to find the entire statute unconstitutional.

Facebook representatives didn’t immediately respond to a request for comment Thursday. The company and its lawyers have argued that the statute, which has statutory penalties of $500 per violation and was initially aimed at curbing unwanted calls from telemarketers, was never meant to put companies in Facebook’s position on the hook potentially for millions in liability.

Duguid’s lawyer, Sergei Lemberg of Wilton, Connecticut-based Lemberg Law, said that Facebook has indicated that there are a significant number of people who, like his client, received unwanted texts.

“What’s important is the message: Man versus machine. Man wins. Privacy matters,” Lemberg said. “I think Facebook for years and years was pretty cavalier, to say the least, about individuals’ privacy and this case is different from some of the stuff that’s out there publicly, but it’s cut from the same cloth.”

Lawyers from the U.S. Department of Justice intervened in the case to defend the constitutionality of the statute, but took no position on whether Facebook violated the TCPA. The Chamber of Commerce, represented by counsel from Jones Day, filed an amicus brief asking the Ninth Circuit to invalidate the restriction on using an automatic telephone dialing system to call cellphones.

via Law.com – Newswire https://www.law.com/

June 13, 2019 at 03:06PM

Alexa, Do You Record Kids’ Conversations? These Law Firms Think So.

A pair of class actions filed Tuesday allege that Amazon’s Alexa-enabled devices, like Echo and Echo Dot, illegally record the conversations of children.

“Alexa routinely records and voiceprints millions of children without their consent or the consent of their parents,” both complaints say.

Travis Lenkner, of Chicago’s Keller Lenkner, which filed the suits along with Quinn Emanuel Urquhart & Sullivan, said the cases are the first of their kind. The suits, filed in the U.S. District Court for the Western District of Washington and Los Angeles Superior Court, seek damages under the privacy laws of nine states, including California, Florida and Pennsylvania.

“What all nine have in common is they are what’s known as two-party consent states,” Lenkner said. “An audio recording of a conversation or of another person requires the consent of both sides to that interaction in these states and when such consent is not obtained these state laws contain penalties, including set amounts of statutory damages per violation.”

A spokeswoman for Amazon, based in Seattle, referred requests for comment to a blog post about Amazon FreeTime, a “dedicated service that helps parents manage the ways their kids interact with technology, including limiting screen time,” which was expanded to include Alexa last year. Amazon said its FreeTime and Echo Dot Kids applications require parental consent and, in some cases, don’t collect personal information. Parents also can delete their child’s profile or recordings, the blog says.

The suits come as a coalition of 19 consumer and public health groups petitioned the Federal Trade Commission last month to investigate Amazon’s Echo Dot Kids, which they claim violates the federal Children’s Online Privacy Protection Act, known as COPPA—an allegation that Amazon has denied.

An 8-year-old in California and a 10-year-old in Massachusetts, identified as R.A. and C.O., filed the class actions through their guardians. Both cases said the children used Alexa devices to play music, tell jokes or answer questions, but never consented to having their discussions recorded.

Their parents also had no idea the devices were saving permanent recordings of the conversations to Amazon’s servers and sending them to a subsidiary in Sunnyvale, California, also named in the complaints, called A2Z Development Center Inc., which does business as Amazon Lab126.

“Amazon has thus built a massive database of billions of voice recordings containing the private details of millions of Americans,” the complaints say.

The complaints note that other voice assistants, such as Apple’s Siri, record conversations only temporarily and later delete them.

The lawsuits seek a court order mandating that Amazon destroy the recorded conversations of children and pay statutory damages, which range from $100 to $5,000 per violation, depending on the state.

The other states in the class are Illinois, Michigan, Maryland, Massachusetts, New Hampshire and Washington.

via Law.com – Newswire https://www.law.com/

June 11, 2019 at 10:12PM

California: No Face Recognition on Body-Worn Cameras

EFF has joined a coalition of civil rights and civil liberties organizations to support a California bill that would prohibit law enforcement from applying face recognition and other biometric surveillance technologies to footage collected by body-worn cameras.

About five years ago, body cameras began to flood into police and sheriff departments across the country. In California alone, the Bureau of Justice Assistance provided more than $7.4 million in grants for these cameras to 31 agencies. The technology was pitched to the public as a means to ensure police accountability and document police misconduct. However, if enough cops have cameras, a police force can become a roving surveillance network, and the thousands of hours of footage they log can be algorithmically analyzed, converted into metadata, and stored in searchable databases.

Today, we stand at a crossroads as face recognition technology can now be interfaced with body-worn cameras in real time. Recognizing the impending threat to our fundamental rights, California Assemblymember Phil Ting introduced A.B. 1215 to prohibit the use of face recognition, or other forms of biometric technology, such as gait recognition or tattoo recognition, on a camera worn or carried by a police officer.

“The use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights,” the lawmaker writes in the introduction to the bill. “This technology also allows people to be tracked without consent. It would also generate massive databases about law-abiding Californians, and may chill the exercise of free speech in public places.”

Ting’s bill has the wind in its sails. The Assembly passed the bill with a 45-17 vote on May 9, and only a few days later the San Francisco Board of Supervisors made history by banning government use of face recognition. Meanwhile, law enforcement face recognition has come under heavy criticism at the federal level by the House Oversight Committee and the Government Accountability Office.

The bill is now before the California Senate, where it will be heard by the Public Safety Committee on Tuesday, June 11.

EFF, along with a coalition of civil liberties organizations including the ACLU, Advancing Justice – Asian Law Caucus, CAIR California, Data for Black Lives, and a number of our Electronic Frontier Alliance allies have joined forces in supporting this critical legislation.

Face recognition technology has disproportionately high error rates for women and people of color. Making matters worse, law enforcement agencies conducting face surveillance often rely on images pulled from mugshot databases, which include a disproportionate number of people of color due to racial discrimination in our criminal justice system. So face surveillance will exacerbate historical biases born of, and contributing to, unfair policing practices in Black and Latinx neighborhoods.

Polling commissioned by the ACLU of Northern California in March of this year shows the people of California, across party lines, support these important limitations. The ACLU’s polling found that 62% of respondents agreed that body cameras should be used solely to record how police treat people, and as a tool for public oversight and accountability, rather than to give law enforcement a means to identify and track people. In the same poll, 82% of respondents said they disagree with the government being able to monitor and track a person using their biometric information.

Last month, Reuters reported that Microsoft rejected an unidentified California law enforcement agency’s request to apply face recognition to body cameras due to human rights concerns.

“Anytime they pulled anyone over, they wanted to run a face scan,” Microsoft President Brad Smith said. “We said this technology is not your answer.”

We agree that ubiquitous face surveillance is a mistake, but we shouldn’t have to rely on the ethical standards of tech giants to address this problem. Lawmakers in Sacramento must use this opportunity to prevent the threat of mass biometric surveillance from becoming the new normal. We urge the California Senate to pass A.B. 1215.

via EFF.org Updates http://bit.ly/US8QQS

June 10, 2019 at 07:17PM