Facebook Must Face Class Action Over Unwanted Text Messages, Panel Says

A federal appeals court has revived a proposed class action lawsuit against Facebook Inc. brought on behalf of non-Facebook users who claim they’ve gotten unsolicited texts from the company in violation of a federal robocalling statute.

The U.S. Court of Appeals for the Ninth Circuit reversed a lower court decision that had tossed a lawsuit brought by Noah Duguid, a non-Facebook user who claimed the company violated the Telephone Consumer Protection Act (TCPA) by mistakenly sending him security messages meant to alert users when their account had been accessed from an unrecognized device or browser. Duguid claimed that Facebook failed to respond to his multiple text and email requests to stop sending him the texts.

“The messages Duguid received were automated, unsolicited, and unwanted,” wrote Judge M. Margaret McKeown, adding that the messages fell outside an exemption to TCPA liability for emergency messages that has been outlined by the Federal Communications Commission. “Duguid did not have a Facebook account, so his account could not have faced a security issue, and Facebook’s messages fall outside even the broad construction the FCC has afforded the emergency exception,” McKeown wrote.

The court, however, joined with the Fourth Circuit in finding that an exemption for calls “made solely to collect a debt owed to or guaranteed by the United States” added to the TCPA by Congress in a 2015 amendment violated the First Amendment. But also like the Fourth Circuit, the court found that the federal debt collection exemption was severable from the TCPA, refusing a request from Facebook and its lawyers at Latham & Watkins to find the entire statute unconstitutional.

Facebook representatives didn’t immediately respond to a request for comment Thursday. The company and its lawyers have argued that the statute, which carries statutory penalties of $500 per violation and was initially aimed at curbing unwanted calls from telemarketers, was never meant to put companies in Facebook’s position on the hook for potentially millions of dollars in liability.

Duguid’s lawyer, Sergei Lemberg of Wilton, Connecticut-based Lemberg Law, said that Facebook has indicated that there are a significant number of people who, like his client, received unwanted texts.

“What’s important is the message: Man versus machine. Man wins. Privacy matters,” Lemberg said. “I think Facebook for years and years was pretty cavalier, to say the least, about individuals’ privacy and this case is different from some of the stuff that’s out there publicly, but it’s cut from the same cloth.”

Lawyers from the U.S. Department of Justice intervened in the case to defend the constitutionality of the statute, but took no position on whether Facebook violated the TCPA. The Chamber of Commerce, represented by counsel from Jones Day, filed an amicus brief asking the Ninth Circuit to invalidate the restriction on using an automatic telephone dialing system to call cellphones.

via Law.com – Newswire https://www.law.com/

June 13, 2019 at 03:06PM

Alexa, Do You Record Kids’ Conversations? These Law Firms Think So.

A pair of class actions filed Tuesday allege that Amazon’s Alexa-enabled devices, like Echo and Echo Dot, illegally record the conversations of children.

“Alexa routinely records and voiceprints millions of children without their consent or the consent of their parents,” both complaints say.

Travis Lenkner, of Chicago’s Keller Lenkner, which filed the suits along with Quinn Emanuel Urquhart & Sullivan, said the cases are the first of their kind. The suits, filed in the U.S. District Court for the Western District of Washington and Los Angeles Superior Court, seek damages under the privacy laws of nine states, including California, Florida and Pennsylvania.

“What all nine have in common is they are what’s known as two-party consent states,” Lenkner said. “An audio recording of a conversation or of another person requires the consent of both sides to that interaction in these states and when such consent is not obtained these state laws contain penalties, including set amounts of statutory damages per violation.”

A spokeswoman for Amazon, based in Seattle, referred requests for comment to a blog post about Amazon FreeTime, a “dedicated service that helps parents manage the ways their kids interact with technology, including limiting screen time,” which was expanded to include Alexa last year. Amazon said its FreeTime and Echo Dot Kids applications require parental consent and, in some cases, don’t collect personal information. Parents also can delete their child’s profile or recordings, the blog says.

The suits come as a coalition of 19 consumer and public health groups petitioned the Federal Trade Commission last month to investigate Amazon’s Echo Dot Kids, which they claim violates the federal Children’s Online Privacy Protection Act, known as COPPA—an allegation that Amazon has denied.

An 8-year-old in California and a 10-year-old in Massachusetts, identified as R.A. and C.O., filed the class actions through their guardians. Both cases said the children used Alexa devices to play music, tell jokes or answer questions, but never consented to having their discussions recorded.

Their parents also had no idea the devices were saving permanent recordings of the conversations to Amazon’s servers and sending them to a Sunnyvale, California, subsidiary also named in the complaints: A2Z Development Center Inc., which does business as Amazon Lab126.

“Amazon has thus built a massive database of billions of voice recordings containing the private details of millions of Americans,” the complaints say.

The complaints note that other voice assistants, such as Apple’s Siri, record conversations temporarily and later delete them.

The lawsuits seek a court order mandating that Amazon destroy the recorded conversations of children and pay statutory damages, which range from $100 to $5,000 per violation, depending on the state.

The other states in the class are Illinois, Michigan, Maryland, Massachusetts, New Hampshire and Washington.

via Law.com – Newswire https://www.law.com/

June 11, 2019 at 10:12PM

California: No Face Recognition on Body-Worn Cameras

EFF has joined a coalition of civil rights and civil liberties organizations to support a California bill that would prohibit law enforcement from applying face recognition and other biometric surveillance technologies to footage collected by body-worn cameras.

About five years ago, body cameras began to flood into police and sheriff departments across the country. In California alone, the Bureau of Justice Assistance provided more than $7.4 million in grants for these cameras to 31 agencies. The technology was pitched to the public as a means to ensure police accountability and document police misconduct. However, if enough cops have cameras, a police force can become a roving surveillance network, and the thousands of hours of footage they log can be algorithmically analyzed, converted into metadata, and stored in searchable databases.

Today, we stand at a crossroads as face recognition technology can now be interfaced with body-worn cameras in real time. Recognizing the impending threat to our fundamental rights, California Assemblymember Phil Ting introduced A.B. 1215 to prohibit the use of face recognition, or other forms of biometric technology, such as gait recognition or tattoo recognition, on a camera worn or carried by a police officer.

“The use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights,” the lawmaker writes in the introduction to the bill. “This technology also allows people to be tracked without consent. It would also generate massive databases about law-abiding Californians, and may chill the exercise of free speech in public places.”

Ting’s bill has the wind in its sails. The Assembly passed the bill with a 45-17 vote on May 9, and only a few days later the San Francisco Board of Supervisors made history by banning government use of face recognition. Meanwhile, law enforcement face recognition has come under heavy criticism at the federal level by the House Oversight Committee and the Government Accountability Office.

The bill is now before the California Senate, where it will be heard by the Public Safety Committee on Tuesday, June 11.

EFF has joined forces with a coalition of civil liberties organizations, including the ACLU, Advancing Justice – Asian Law Caucus, CAIR California, Data for Black Lives, and a number of our Electronic Frontier Alliance allies, in supporting this critical legislation.

Face recognition technology has disproportionately high error rates for women and people of color. Making matters worse, law enforcement agencies conducting face surveillance often rely on images pulled from mugshot databases, which include a disproportionate number of people of color due to racial discrimination in our criminal justice system. So face surveillance will exacerbate historical biases born of, and contributing to, unfair policing practices in Black and Latinx neighborhoods.

Polling commissioned by the ACLU of Northern California in March of this year shows the people of California, across party lines, support these important limitations. The ACLU’s polling found that 62% of respondents agreed that body cameras should be used solely to record how police treat people, and as a tool for public oversight and accountability, rather than to give law enforcement a means to identify and track people. In the same poll, 82% of respondents said they disagree with the government being able to monitor and track a person using their biometric information.

Last month, Reuters reported that Microsoft rejected an unidentified California law enforcement agency’s request to apply face recognition to body cameras due to human rights concerns.

“Anytime they pulled anyone over, they wanted to run a face scan,” Microsoft President Brad Smith said. “We said this technology is not your answer.”

We agree that ubiquitous face surveillance is a mistake, but we shouldn’t have to rely on the ethical standards of tech giants to address this problem. Lawmakers in Sacramento must use this opportunity to prevent the threat of mass biometric surveillance from becoming the new normal. We urge the California Senate to pass A.B. 1215.

via EFF.org Updates http://bit.ly/US8QQS

June 10, 2019 at 07:17PM

Details of Justice Department Efforts To Break Encryption of Facebook Messenger Must Be Made Public, EFF Tells Court

San Francisco—The Electronic Frontier Foundation asked a federal appeals court today to make public a ruling that reportedly forbade the Justice Department from forcing Facebook to break the encryption of a communications service for users.

Media widely reported last fall that a federal court in Fresno, California denied the government’s effort to compromise the security and privacy promised to users of Facebook’s Messenger application. But the court’s order and details about the legal dispute have been kept secret, preventing people from learning about how DOJ sought to break encryption, and why a federal judge rejected those efforts.

EFF, the ACLU, and Stanford cybersecurity scholar Riana Pfefferkorn told the appeals court in a filing today that the public has First Amendment and common law rights to access judicial opinions and court records about the laws that govern us. Unsealing documents in the Facebook Messenger case is especially important because the public deserves to know when law enforcement tries to compel a company that hosts massive amounts of private communications to circumvent its own security features and hand over users’ private data, EFF said in a filing to the U.S. Court of Appeals for the Ninth Circuit. ACLU and Pfefferkorn, Associate Director of Surveillance and Cybersecurity at Stanford University’s Center for Internet and Society, joined EFF’s request to unseal. A federal judge in Fresno denied a motion to unseal the documents, leading to this appeal.

Media reports last year revealed DOJ’s attempt to get Facebook to turn over customer data and unencrypted Messenger voice calls based on a wiretap order in an investigation of suspected MS-13 gang activity. Facebook refused the government’s request, leading DOJ to try to hold the company in contempt. Because the judge’s ruling denying the government’s request is entirely under seal, the public has no way of knowing how the government tried to justify its request or why the judge turned it down—both of which could impact users’ ability to protect their communications from prying eyes.

“The ruling likely interprets the scope of the Wiretap Act, which impacts the privacy and security of Americans’ communications, and it involves an application used by hundreds of millions of people around the world,” said EFF Senior Staff Attorney Andrew Crocker. “Unsealing the court records could help us understand how this case fits into the government’s larger campaign to make sure it can access any encrypted communication.’’

In 2016 the FBI attempted to force Apple to disable security features of its mobile operating system to allow access to a locked iPhone belonging to one of the shooters alleged to have killed 14 people in San Bernardino, California. Apple fought the order, and EFF supported the company’s efforts. Eventually the FBI announced that it had received a third-party tip with a method to unlock the phone without Apple’s assistance. We believed that the FBI’s intention with the litigation was to obtain legal precedent that it could compel Apple to sabotage its own security mechanisms.

“The government should not be able to rely on a secret body of law for accessing encrypted communications and surveilling Americans,” said EFF Staff Attorney Aaron Mackey. “We are asking the court to rule that every American has a right to know about rules governing who can access their private conversations.”

via EFF.org Updates http://bit.ly/US8QQS

June 12, 2019 at 08:06PM

New Documents Reveal DHS Asserting Broad, Unconstitutional Authority to Search Travelers’ Phones and Laptops

EFF, ACLU Move for Summary Judgment to Block Warrantless Searches of Electronic Devices at Airports, U.S. Ports of Entry

BOSTON — The Electronic Frontier Foundation (EFF) and the ACLU today asked a federal court to rule without trial that the Department of Homeland Security violates the First and Fourth Amendments by searching travelers’ smartphones and laptops at airports and other U.S. ports of entry without a warrant.

The request for summary judgment comes after the groups obtained documents and deposition testimony revealing that U.S. Customs and Border Protection and U.S. Immigration and Customs Enforcement authorize border officials to search travelers’ phones and laptops for general law enforcement purposes, and consider requests from other government agencies when deciding whether to conduct such warrantless searches.

“The evidence we have presented the court shows that the scope of ICE and CBP border searches is unconstitutionally broad,” said EFF Senior Staff Attorney Adam Schwartz. “ICE and CBP policies and practices allow unfettered, warrantless searches of travelers’ digital devices, and empower officers to dodge the Fourth Amendment when rifling through highly personal information contained on laptops and phones.”

The previously undisclosed government information was obtained as part of a lawsuit, Alasaad v. McAleenan, that EFF, the ACLU, and the ACLU of Massachusetts filed in September 2017 on behalf of 11 travelers—10 U.S. citizens and one lawful permanent resident—whose smartphones and laptops were searched without warrants at U.S. ports of entry.

“This new evidence reveals that government agencies are using the pretext of the border to make an end run around the First and Fourth Amendments,” said Esha Bhandari, staff attorney with the ACLU’s Speech, Privacy, and Technology Project. “The border is not a lawless place, ICE and CBP are not exempt from the Constitution, and the information on our electronic devices is not devoid of Fourth Amendment protections. We’re asking the court to stop these unlawful searches and require the government to get a warrant.”

The government documents and testimony, portions of which were publicly filed in court today, reveal CBP and ICE are asserting broad and unconstitutional authority to search and seize travelers’ devices. The evidence includes ICE and CBP policies and practices that authorize border officers to conduct warrantless and suspicionless device searches for purposes beyond the enforcement of immigration and customs laws. Officials can search devices for general law enforcement purposes, such as enforcing bankruptcy, environmental, and consumer protection laws, and for intelligence gathering or to advance pre-existing investigations. Officers also consider requests from other government agencies to search devices. In addition, the agencies assert the authority to search electronic devices when the subject of interest is someone other than the traveler—such as when the traveler is a journalist or scholar with foreign sources who are of interest to the U.S. government, or even when the traveler is the business partner of someone under investigation. Both agencies further allow officers to retain information from travelers’ electronic devices and share it with other government entities, including state, local, and foreign law enforcement agencies.

The plaintiffs are asking the court to rule that the government must have a warrant based on probable cause before conducting searches of electronic devices, which contain highly detailed personal information about people’s lives. The plaintiffs, who include a limousine driver, a military veteran, journalists, students, an artist, a NASA engineer, and a business owner, are also asking the court to hold that the government must have probable cause to confiscate a traveler’s device.

The district court previously rejected the government’s motion to dismiss the lawsuit.

The number of electronic device searches at the border has increased dramatically in the last few years. Last year, CBP conducted more than 33,000 border device searches, almost four times the number from just three years prior. CBP and ICE policies allow border officers to manually search anyone’s smartphone with no suspicion at all, and to conduct a forensic search with reasonable suspicion of wrongdoing. CBP also allows suspicionless device searches for a “national security concern.”

Below is a full list of the plaintiffs. Their individual stories can be found here:

  • Ghassan and Nadia Alasaad are a married couple who live in Massachusetts, where he is a limousine driver and she is a nursing student.
  • Suhaib Allababidi, who lives in Texas, owns and operates a business that sells security technology, including to federal government clients.
  • Sidd Bikkannavar is an engineer for NASA’s Jet Propulsion Laboratory in California.
  • Jeremy Dupin is a journalist living in Massachusetts.
  • Aaron Gach is an artist living in California.
  • Isma’il Kushkush is a journalist living in Virginia.
  • Diane Maye is a college professor and former captain in the U.S. Air Force living in Florida.
  • Zainab Merchant is a writer and a graduate student at Harvard.
  • Akram Shibly is a filmmaker from New York.
  • Matthew Wright is a computer programmer in Colorado.

via EFF.org Updates http://bit.ly/US8QQS

April 30, 2019 at 01:32PM

Skip the Surveillance By Opting Out of Face Recognition At Airports

Government agencies and airlines have ignored years of warnings from privacy groups and Senators that using face recognition technology on travelers would massively violate their privacy. Now, the passengers are in revolt as well, and they’re demanding answers.

Last week, a lengthy exchange on Twitter between a traveler who was concerned about her privacy and a spokesperson for the airline JetBlue went viral, and many of the questions asked by the traveler and others were the same ones that we’ve posed to Customs and Border Protection (CBP) officials: Where did you get my data? How is it protected? Which airports will use this? Where in the airports will it be used? Most importantly, how do I opt-out?

How to Opt Out

These questions should be simple to answer, but we haven’t gotten simple answers. When we asked CBP for more information, they told us: “visit our website.” We did, and we still have many of the same questions. Representatives for airlines, which partner directly with the government agencies, also seem unable to answer the concerns, as the JetBlue spokesperson made evident. Both agencies and airlines seemed to expect no pushback from passengers when they implemented this boarding-pass-replacing-panopticon. The convenience would win out, they seemed to assume, not expecting people to mind having their face scanned “the same way you unlock your phone.” But now that “your face is your boarding pass” (as JetBlue awkwardly puts it), at least in some airports, the invasive nature of the system is much more clear, and travelers are understandably upset.

It might sound trite, but right now, the key to opting out of face recognition is to be vigilant. There’s no single box you can check, and importantly, it may not be possible for non-U.S. persons to opt out of face recognition entirely. For those who can opt out, you’ll need to spot the surveillance when it’s happening. To start, TSA PreCheck, Clear, and other ways of “skipping the line” often require biometric identification, and are often being used as test cases for these sorts of programs. Once you’re at the airport, be on the lookout for any time a TSA, CBP, or airline employee asks you to look into a device, or when there’s a kiosk or signage like those below. That means your biometric data is probably about to be scanned.

A sign in an airport explaining that face recognition is being used in the area. This is another example of signage CBP gave us in response to our letter.

At the moment, face recognition is most likely to happen at specific airports, including Atlanta, Chicago, Seattle, San Francisco, Las Vegas, Los Angeles, Washington (Dulles and Reagan), Boston, Fort Lauderdale, Houston Hobby, Dallas/Fort Worth, JFK, Miami, San Jose, Orlando, and Detroit; while flying on Delta, JetBlue, Lufthansa, British Airways and American Airlines; and in particular, on international flights. But, that doesn’t mean that other airlines and airports won’t implement it sooner rather than later.

A woman stands facing a small screen that's about five feet off the ground on a pedestal. This is an example of a face recognition kiosk from TSA's website.

To skip the surveillance, CBP says you “should notify a CBP Officer or an airline or airport representative in order to seek an alternative means of verifying [your] identity and documents.” Do the same when you encounter this with an airline. While there should be signage near the face recognition area, it may not be clear. If you’re concerned about creating a slight delay for yourself or other passengers, take note: though CBP has claimed to have a 98% accuracy rating in their pilot programs, the Office of the Inspector General could not verify those numbers, and even a 2% error rate would cause thousands of people to be misidentified every day. Most face recognition technology has significantly lower accuracy ratings than that, so you might actually be speeding things up by skipping the surveillance.
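The scale of that 2% error rate is easy to check with back-of-the-envelope arithmetic. A minimal sketch, assuming (this figure is not from the article) roughly 2 million travelers pass through U.S. airport screening on a typical day:

```python
# Back-of-the-envelope estimate of daily misidentifications at a given
# face recognition error rate. The daily traveler count is an assumption
# for illustration, not a figure from the article.

DAILY_TRAVELERS = 2_000_000  # hypothetical daily screening volume

def misidentified_per_day(error_rate: float, travelers: int = DAILY_TRAVELERS) -> int:
    """Expected number of travelers misidentified per day at the given error rate."""
    return round(travelers * error_rate)

# CBP's claimed 98% accuracy still implies a 2% error rate:
print(misidentified_per_day(0.02))  # 40000 travelers per day
```

Even under CBP's own unverified accuracy claim, tens of thousands of people per day could be misidentified; at the lower accuracy rates typical of deployed face recognition systems, the number would be higher still.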

The Long And Winding Biometric Pathway

Part of the reason for the confusion about how to opt out is that there are actually (at least) three different face recognition checkpoints looming: airlines want to use your face as your boarding pass, saying “it’s about convenience”; CBP, which is part of the Department of Homeland Security (DHS), wants to use your face to check against DHS and State Department databases when you’re entering or exiting the country; and the TSA wants to compare your face against your photo identification throughout the airport. And if people are upset now, they will be furious to know this is just the beginning of the “biometric pathway” program: CBP and TSA want to use face recognition and other biometric data to track everyone from check-in, through security, into airport lounges, and onto flights (PDF). They’re moving fast, too, despite (or perhaps because of) the fact that there are no regulations on this sort of technology: DHS is hoping to use facial recognition on 97 percent of departing air passengers within the next four years and on 100 percent of all international passengers in the top 20 U.S. airports by 2021.

If the government agencies get their way, new biometric data could be taken from/used against travelers wherever they are in the airport—and much of that collection will be implemented by private companies (even rental car companies are getting in on the action). CBP will store that facial recognition data for two weeks for U.S. citizens and lawful permanent residents, and for 75+ years for non-U.S. persons. In addition, the biometric data collected by at least some of these systems in the future—which can include your fingerprints, the image of your face, and the scan of your iris—will be stored in FBI and DHS databases and will be searched again and again for immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

Passengers Will Bear the Burden of Privacy Invasion, Not Airlines or Government Agencies

It’s easy for companies and agencies to tout the convenience of this sort of massive data collection and sharing scheme. But as we’ve seen in notable privacy fiascos over the last few years—from Facebook’s Cambridge Analytica scandal, to the breaches of the Office of Personnel Management and Equifax in the U.S., to the constant hacking of India’s national biometric database, Aadhaar—it’s the customers and passengers who will bear the burden when things go wrong, and they will go wrong. These vast biometric databases will create huge security and privacy risks, with the added concern that a company leaking your passwords or credit card numbers is nothing compared to one leaking your biometric data. While you can change a password, you can’t easily change your face.

Additionally, these systems are notoriously inaccurate and contain out-of-date information. Because immigrants and people of color are disproportionately represented in criminal and immigration databases, and because face recognition systems are less capable of identifying people of color, women, and young people, the weight of these inaccuracies will fall disproportionately on them. It will be the passengers who bear the burden when they are stuck watching the flights they paid for take off without them because there was an error with a database or an algorithm, or because they preferred non-biometric options that weren’t in place.

It’s time for the government agencies and the airlines to pause these programs until they can clearly and adequately give:

  • Photographs of the signage in-situ in the airports in question, as well as any additional information about the opt-out process.
  • An explanation of the locations where CBP will be providing meaningful and clear opt out notice to travelers (for example, at entry points, point-of-sale, ticket counters, security checkpoints, and boarding gates) as well as the specific language travelers can use to opt out of the biometric data collection program.
  • An up-to-date list of all the airports and airlines that currently participate in the biometric exit program.
  • Information about the algorithm CBP is using to compare photos (provided by NEC), as well as the accuracy information associated with that algorithm.
  • Technological specifications for transferring data from point of collection to DHS and with vendors and airlines.

Additional questions—like how data is safeguarded—are laid out in our letter to CBP.

Congress must also demand the answers to these questions. And lawmakers must require agencies and airlines to pause this program until they can not only ensure that travelers’ biometric privacy is protected but, more importantly, justify this huge invasion of privacy. Just last month, three Senators released a joint statement calling on DHS to pause the program until there can be “a rulemaking to establish privacy and security rules of the road,” but so far, they’ve been ignored.

Trading privacy for convenience is a bad bargain, and it can feel like the deal isn’t always one we have a choice in. DHS has said that the only way we can ensure that our biometric data isn’t collected when we travel is to “refrain from traveling.” That’s ridiculous. The time to regulate and restrict the use of facial recognition technology is now, before it becomes embedded in our everyday lives. We must keep fighting to make sure that in the future, it gets easier, and not harder, to defend our privacy—biometric or otherwise.

via EFF.org Updates http://bit.ly/US8QQS

April 25, 2019 at 01:41PM

EPIC to Senate: Oversight Board Must Review Government Use of Facial Recognition, AI

In advance of a hearing about the Privacy and Civil Liberties Oversight Board, EPIC sent a statement to the Senate Judiciary Committee outlining priorities. EPIC said the Civil Liberties Board should (1) release the report on Executive Order 12333; (2) review the use of facial recognition technology and propose safeguards; (3) review the use of artificial intelligence and propose safeguards; and (4) monitor proposals for “smart” borders and assess privacy impacts on US residents. The independent agency reviews federal agency programs to ensure adequate safeguards for privacy and civil liberties. EPIC helped establish the PCLOB. In 2003 EPIC testified before the 9/11 Commission and urged the creation of an independent privacy agency to oversee the surveillance powers established after 9/11. EPIC also set out initial priorities for the PCLOB and spoke at the first meeting of the Oversight Board in 2013. In 2016, EPIC awarded former PCLOB Board Member Judge Patricia Wald with the EPIC Champion of Freedom Award.

via epic.org http://epic.org/

February 5, 2019 at 01:15PM

Feds forcing mass fingerprint unlocks is an “abuse of power,” judge rules

An employee demonstrates fingerprint security software on a smartphone at the MasterCard Inc. stand at the Mobile World Congress in this arranged photograph in Barcelona, Spain, on Wednesday, February 24, 2016.

According to a new ruling issued last week by a federal magistrate in Oakland, California, the government can’t get a warrant granting permission to turn up at a local house allegedly connected to a criminal suspect, seize all digital devices, and force anyone found at the house to use biometrics to try to unlock those devices.

The nine-page order, which was issued on January 10 and first reported by Forbes on Monday, involves a criminal case that is otherwise sealed. There is a lot that remains unknown about the particulars, including the names of the suspects, why federal authorities believe that the two suspects committed extortion via Facebook Messenger, and what Oakland house is involved.

US Magistrate Judge Kandis Westmore found that the government request here “runs afoul of the Fourth and Fifth Amendments,” which protect against unreasonable searches and self-incrimination, respectively.

She continued, noting that the government request was “overbroad.”

“The Government cannot be permitted to search and seize a mobile phone or other device that is on a non-suspect’s person simply because they are present during an otherwise lawful search,” the judge wrote.

Blake Reid, a law professor at the University of Colorado, told Ars that it was a positive step that another judge was understanding the possible ramifications of allowing the government to rifle through someone’s phone.

“Accessing people’s phones is, in my opinion, much more like accessing the contents of their brains than it is the contents of their file cabinets,” he emailed.

Multiple times, Judge Westmore cited a 2018 Supreme Court decision known as Carpenter, which held that law enforcement generally needs a warrant to obtain historical cell-site location information.

“Citizens do not contemplate waiving their civil rights when using new technology, and the Supreme Court has concluded that, to find otherwise, would leave individuals ‘at the mercy of advancing technology,'” she wrote, citing the Carpenter opinion.

Judge Westmore’s order is reminiscent of a 2017 order in a seemingly similar federal case in Illinois: there, a federal magistrate also denied government efforts to conduct a nearly identical biometric dragnet.

In the earlier case, US Magistrate Judge M. David Weisman quoted from the government’s own warrant application, which specifically said that such biometric search language was now “standard.”

Judge Westmore cited Judge Weisman’s opinion as she reached her own conclusion.

“While the Court sympathizes with the Government’s interest in accessing the contents of any electronic devices it might lawfully seize, there are other ways that the Government might access the content that do not trample on the Fifth Amendment,” she wrote.

“In the instant matter, the Government may obtain any Facebook Messenger communications from Facebook under the Stored Communications Act or warrant based on probable cause. While it may be more expedient to circumvent Facebook and attempt to gain access by infringing on the Fifth Amendment’s privilege against self-incrimination, it is an abuse of power and is unconstitutional.”

via Policy – Ars Technica https://arstechnica.com

January 14, 2019 at 06:38PM

T-Mobile, Sprint, and AT&T still selling your location data, report says

In June 2018, all four major US wireless carriers pledged to stop selling their mobile customers’ location information to third-party data brokers. The carriers were pressured into making the change after a security problem leaked the real-time location of US cell phone users.

But an investigation by Motherboard found that “T-Mobile, Sprint, and AT&T are [still] selling access to their customers’ location data and that data is ending up in the hands of bounty hunters and others not authorized to possess it, letting them track most phones in the country.”

The Motherboard report, published today, is extensive and worth reading in full. Motherboard reporter Joseph Cox gave a real T-Mobile phone number to a “bounty hunter,” who was able to locate the phone to within a few hundred meters.

This was accomplished with a “tracking tool [that] relies on real-time location data sold to bounty hunters that ultimately originated from the telcos themselves, including T-Mobile, AT&T, and Sprint,” Motherboard wrote.

A credit-reporting company called MicroBilt “is selling phone geolocation services with little oversight to a spread of different private industries, ranging from car salesmen and property managers to bail bondsmen and bounty hunters,” the article continued. “Compounding that already highly questionable business practice, this spying capability is also being resold to others on the black market who are not licensed by the company to use it, including me, seemingly without MicroBilt’s knowledge.”

Motherboard described how the data is passed along a chain of private companies. “In the case of the phone we tracked, six different entities had potential access to the phone’s data,” the report said. “T-Mobile shares location data with an aggregator called Zumigo, which shares information with MicroBilt. MicroBilt shared that data with a customer using its mobile phone tracking product. The bounty hunter then shared this information with a bail industry source, who shared it with Motherboard.”

The middleman charged $300 to find the phone—”a sizeable markup on the usual MicroBilt price,” Motherboard wrote.

It’s not clear whether Verizon location data can also be purchased in this way. “MicroBilt’s product documentation suggests the phone-location service works on all mobile networks, however the middleman was unable or unwilling to conduct a search for a Verizon device,” Motherboard also wrote.

MicroBilt told Motherboard that customers using its service for fraud prevention must obtain consent from phone users, the news site wrote.

But when Motherboard arranged for a phone to be located, “the target phone received no warning it was being tracked,” the news site wrote. (The phone’s owner had given consent to Motherboard for the experiment.)

MicroBilt investigated the case and found that a private bail-bond company made the request for the phone’s location, according to Motherboard.

“MicroBilt was unaware that its terms of use were being violated by the rogue individual that submitted the request under false pretenses, does not approve of such use cases, and has a clear policy that such violations will result in loss of access to all MicroBilt services and termination of the requesting party’s end-user agreement,” MicroBilt told Motherboard. “Upon investigating the alleged abuse and learning of the violation of our contract, we terminated the customer’s access to our products, and they will not be eligible for reinstatement based on this violation.”

Carriers made “empty promises to consumers”

Of course, mobile carriers themselves could prevent such privacy problems by not selling their customers’ location data in the first place.

Carriers were pressured into changing their policies last year after it was revealed that prison phone company Securus offers a service enabling law enforcement officers to locate most American cell phones within seconds. Securus’ service relies on data from LocationSmart. It was also reported that a LocationSmart bug could have allowed anyone to surreptitiously track the real-time whereabouts of cell phone users.

At the time, US Sen. Ron Wyden (D-Ore.) urged all four major carriers to stop selling their customers’ location data. They all said that they would, with limited exceptions: for example, AT&T said it would “be ending our work with aggregators” but continue to allow “important, potential lifesaving services like emergency roadside assistance.”

Today, Wyden said he’s disappointed that carriers are apparently still selling location data to data brokers.

“Major carriers pledged to end these practices, but it appears to have been more empty promises to consumers,” Wyden wrote on Twitter. “It’s time for Congress to take action by passing my bill to safeguard consumer data and hold companies accountable.” Wyden’s proposed privacy law would impose steep fines on companies and could send their top executives to prison for up to 20 years if they violate Americans’ privacy.

AT&T told Ars that it has “shut down access for MicroBilt as we investigate these allegations.”

“We only permit sharing of location when a customer gives permission for cases like fraud prevention or emergency roadside assistance or when required by law,” AT&T also said. “Over the past few months, as we committed to do, we have been shutting down everything else.”

We also contacted T-Mobile and Sprint about the Motherboard article today and will update this story with any responses we get.

Sprint told Motherboard that it “does not have a direct relationship with MicroBilt” and “will take appropriate action” if it determines that any customers violated contractual requirements.

T-Mobile told Motherboard that it “will not tolerate any misuse of our customers’ data. While T-Mobile does not have a direct relationship with MicroBilt, our vendor Zumigo was working with them and has confirmed with us that they have already shut down all transmission of T-Mobile data. T-Mobile has also blocked access to device location data for any request submitted by Zumigo on behalf of MicroBilt as an additional precaution.”

via Policy – Ars Technica https://arstechnica.com

January 8, 2019 at 05:17PM

Border Agency Finalizes Social Media Collection Rule

Despite comments from EPIC and others, Customs and Border Protection will collect social media information from Americans and place that data outside legal protections provided by the Privacy Act. EPIC opposed the collection of personal data and said that CBP should narrow the Privacy Act exemptions. The agency responded only briefly to public comments, failing to defend its decision. In a related FOIA lawsuit against DHS, EPIC obtained documents which revealed that federal agencies gather social media comments to identify individuals critical of the government.

via epic.org http://epic.org/

January 3, 2019 at 04:26PM