‘Cheap Clicks’: How the Media ‘Stigmatizes’ Suspects

Recent decisions by news outlets to shield the names of those arrested for misdemeanors raise provocative questions about the media’s longtime reliance on crime news, much of it sensational, writes Jack Shafer, senior media writer for Politico.

Critics both inside and outside the news industry would like to stanch the usual flow of crime reporting with some new practices, Shafer wrote in a recent column.

“Their argument boils down to this: Too much crime reporting unnecessarily stigmatizes both suspects and the convicted.”

The Associated Press recently announced that “AP will no longer name suspects in minor crime stories, which we sometimes cover and pick up from member news organizations as one-off briefs because they are ‘odd’ and of interest to our customers.”

“Usually, we don’t follow up with coverage about the outcome of the cases. We may not know if the charges were later dropped or reduced, as they often are, or if the suspect was later acquitted,” said the Associated Press.

What makes people’s lives especially difficult in this regard is Google.

AP wrote: “These minor stories, which only cover an arrest, have long lives on the internet. AP’s broad distribution network can make it difficult for the suspects named in such items to later gain employment or just move on in their lives.”

As Shafer put it, Google search has “frozen into electronic amber every reported misdemeanor or felony, condemning the suspects to an eternity of shame.”

Under the “right to be forgotten” rubric, Cleveland.com (and sister institution, the Cleveland Plain Dealer), The Boston Globe, the Bangor Daily News and other newspapers have altered the way crime news is preserved and retrieved.

Cleveland.com announced a Right to be Forgotten policy, under which it removes the names of people from dated, embarrassing stories about them.

“In the days before the Internet, newspapers carried stories about minor crimes or stupid antics, and the stories almost immediately faded from memory,” Cleveland.com wrote.

“People paid the court-required penalties for their crimes and moved on, and you could find evidence of the mistakes only by searching court dockets or combing through newspaper microfilm. Few people did that.”

However, Shafer also raises the question of protection of the public.

He writes, “Can we justify rewriting newspaper archives because old but accurate stories embarrass people?”

“Exercising a right to be forgotten obviously helps those who wish to shield their past from scrutiny. But it inflicts potential harms on job recruiters, loan officers, prospective business partners, dates and others who want an accurate gauge of someone’s long-term reputation.”

The movement to revise old stories wouldn’t be happening if newspapers hadn’t made their archives available to the Google search engine, which made mass searches possible, writes Shafer.

“Had newspapers withheld their archives from the Google crawler, people would have to search one newspaper at a time for names.”

Shafer underscores a point mentioned by the Associated Press: the news media’s failure to follow up on arrests. Was the person exonerated? Too often the question is shrugged off, leaving the arrest on Google for eternity.

“The right-to-be-forgotten proponents have done us a great service by underscoring how good journalists are at publishing news about arrests but how wretched they are at recording innocence, exoneration or dropped charges,” Shafer writes.

But he suggests that the solution to this journalistic shortcoming isn’t to blank the names of suspects from old stories or to hide them from Google.

“If the arrest of a suspect is newsworthy enough to be reported in a newspaper, the exoneration or the dropping of charges should be equally newsworthy.”

via The Crime Report https://ift.tt/2myW3Gx

June 28, 2021 at 10:46AM

“Cops Say Encryption Hinders Investigations. These Documents Say Otherwise.”

Despite much whining on the part of law enforcement about the alleged perils to public order posed by encryption, it’s no secret that cops can often bypass measures intended to protect privacy. Now, documents obtained by Vice‘s Motherboard describe just how police agencies use one tool to extract data from Apple devices. It’s more evidence that officials aren’t stymied by encryption half as often as they claim, but just want to paw through our information without effort or expense.

“‘How to unlock and EXTRACT DATA from Apple Mobile Devices with GrayKey,’ the instructions, seemingly written by the San Diego Police Department, read,” Vice‘s Joseph Cox reveals of the documentation obtained with a public records request. “The instructions describe the various conditions it claims allow a GrayKey connection: the device being turned off (known as Before First Unlock, or BFU); the phone is turned on (After First Unlock, or AFU); the device having a damaged display, and when the phone has low battery,” he adds.



via CrimProf Blog https://ift.tt/31qUjQa

June 28, 2021 at 08:36PM

TikTok Quietly Updated Its Privacy Policy to Collect Users’ Biometric Data

Popular short-form video-sharing service TikTok quietly revised its privacy policy in the U.S., allowing it to automatically collect biometric information such as faceprints and voiceprints from the content its users post on the platform.

The policy change, first spotted by TechCrunch, went into effect on June 2. TikTok users who reside in the European Economic Area (EEA), the U.K., Switzerland, and other geographies (excluding India) where the service operates are exempted from the changes.

“We may collect biometric identifiers and biometric information as defined under U.S. laws, such as faceprints and voiceprints, from your User Content. Where required by law, we will seek any required permissions from you prior to any such collection,” the ByteDance-owned company said in a newly introduced section called “Image and Audio Information.”

On top of this, the company’s privacy policy also notes that it may collect information about “the nature of the audio, and the text of the words spoken in your User Content” so as to “enable special video effects, for content moderation, for demographic classification, for content and ad recommendations, and for other non-personally-identifying operations.”

Besides not clearly defining the exact nature of biometrics being collected or offering a convincing reason as to why this data gathering is necessary in the first place, the vaguely worded language could allow TikTok to amass such sensitive data without users’ explicit consent.

Given that only a handful of states in the U.S. — California, Illinois, New York, Texas, and Washington — have laws restricting companies from collecting such data, the move could mean that TikTok doesn’t have to ask permission from its users in other states, as noted by TechCrunch. In other words, users are agreeing to have their biometric data collected simply by agreeing to its terms of service.

The revisions to its privacy policy come months after TikTok agreed to pay $92 million to settle a class-action lawsuit that alleged the app violated Illinois’ Biometric Information Privacy Act (BIPA) by clandestinely capturing biometric and personal data from users in the U.S. to target ads without meeting the informed-consent requirements of the state law.

As part of the settlement, TikTok agreed not to collect or store biometric information, biometric identifiers, geolocation, or GPS data unless expressly disclosed in its privacy policy. Viewed in this light, it’s possible that the changes are a result of this agreement.


via The Hacker News https://ift.tt/1jm7smN

June 5, 2021 at 07:33AM

The FBI Should Stop Attacking Encryption and Tell Congress About All the Encrypted Phones It’s Already Hacking Into

Federal law enforcement has been asking for a backdoor to read Americans’ encrypted communications for years now. FBI Director Christopher Wray did it again last week in testimony to the Senate Judiciary Committee. As usual, the FBI’s complaints involved end-to-end encryption employed by popular messaging platforms, as well as the at-rest encryption of digital devices, which Wray described as offering “user-only access.”

The FBI wants these terms to sound scary, but they actually describe security best practices. End-to-end encryption is what allows users to exchange messages without having them intercepted and read by repressive governments, corporations, and other bad actors. And “user-only access” is actually a perfect encapsulation of how device encryption should work; otherwise, anyone who got their hands on your phone or laptop—a thief, an abusive partner, or an employer—could access its most sensitive data. When you intentionally weaken these systems, it hurts our security and privacy, because there’s no magical kind of access that only works for the good guys. If Wray gets his special pass to listen in on our conversations and access our devices, corporations, criminals, and authoritarians will be able to get the same access.

It’s remarkable that Wray keeps getting invited to Congress to sing the same song. Notably, Wray was invited there to talk, in part, about the January 6th insurrection, a serious domestic attack in which the attackers—far from being concerned about secrecy—proudly broadcast many of their crimes, resulting in hundreds of arrests.

It’s also remarkable what Wray, once more, chose to leave out of this narrative. While Wray continues to express frustration about what his agents can’t get access to, he fails to brief Senators about the shocking frequency with which his agency already accesses Americans’ smartphones. Nevertheless, the scope of police snooping on Americans’ mobile phones is becoming clear, and it’s not just the FBI who is doing it. Instead of inviting Wray up to Capitol Hill to ask for special ways to invade our privacy and security, Senators should be asking Wray about the private data his agents are already trawling through.

Police Have An Incredible Number of Ways to Break Into Encrypted Phones

In all 50 states, police are breaking into phones on a vast scale. An October report from the non-profit Upturn, “Mass Extraction,” has revealed details of how invasive and widespread police hacking of our phones has become. Police can easily purchase forensic tools that extract data from nearly every popular phone. In March 2016, Cellebrite, a popular forensic tool company, supported “logical extractions” for 8,393 different devices, and “physical extractions,” which involve copying all the data on a phone bit-by-bit, for 4,254 devices. Cellebrite can bypass lock screens on about 1,500 different devices.

How do they bypass encryption? Often, they just guess the password. In 2018, Prof. Matthew Green estimated it would take no more than 22 hours for forensic tools to break into some older iPhones with a 6-digit passcode simply by continuously guessing passwords (i.e. “brute-force” entry). A 4-digit passcode would fail in about 13 minutes.
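The arithmetic behind those figures is simple exhaustion of the passcode space: keyspace size divided by guess rate. A minimal sketch (the ~12.8 guesses-per-second rate is back-solved from the estimates above, not a documented specification of any forensic tool):

```python
def worst_case_hours(digits: int, guesses_per_second: float) -> float:
    """Worst-case time to exhaust an all-numeric passcode space."""
    keyspace = 10 ** digits  # e.g. 1,000,000 possible 6-digit codes
    return keyspace / guesses_per_second / 3600

rate = 12.8  # guesses/sec, inferred from the ~22-hour figure above
print(f"6-digit: {worst_case_hours(6, rate):.1f} hours")        # ~21.7 hours
print(f"4-digit: {worst_case_hours(4, rate) * 60:.0f} minutes")  # ~13 minutes
```

The same arithmetic shows why alphanumeric passphrases resist this attack: each added character multiplies the keyspace, and with it the worst-case time.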

That brute force guessing was enabled by a hardware flaw that has been fixed since 2018, and the rate of password guessing is much more limited now. But even as smartphone companies like Apple improve their security, device hacking remains very much a cat-and-mouse game. As recently as September 2020, Cellebrite marketing materials boasted its tools can break into iPhone devices up to “the latest iPhone 11/ 11 Pro / Max running the latest iOS versions up to the latest 13.4.1”.

Even when passwords can’t be broken, vendors like Cellebrite offer “advanced services” that can unlock even the newest iOS and Samsung devices. Upturn research suggests the base price on such services is $1,950, but it can be cheaper in bulk.

Buying electronic break-in technology on a wholesale basis represents the best deal for police departments around the U.S., and they avail themselves of these bargains regularly. In 2018, the Seattle Police Department purchased 20 such “actions” from Cellebrite for $33,000, allowing them to extract phone data within weeks or even days. Law enforcement agencies that want to unlock phones en masse can bring Cellebrite’s “advanced unlocking” in-house, for prices that range from $75,000 to $150,000.

That means for most police departments, breaking into phones isn’t just convenient, it’s relatively inexpensive. Even a mid-sized police department like Virginia Beach, VA has a police budget of more than $100 million; New York City’s police budget is over $5 billion. The FBI’s 2020 budget request is about $9 billion.

When the FBI says it’s “going dark” because it can’t beat encryption, what it’s really asking for is a method of breaking in that’s cheaper, easier, and more reliable than the methods they already have. The only way to fully meet the FBI’s demands would be to require a backdoor in all platforms, applications, and devices. Especially at a time when police abuses nationwide have come into new focus, this type of complaint should be a non-starter with elected officials. Instead, they should be questioning how and why police are already dodging encryption. These techniques aren’t just being used against criminals.

Phone Searches By Police Are Widespread and Commonplace

Upturn has documented more than 2,000 agencies across the U.S. that have purchased products or services from mobile device forensic tool vendors, including every one of the 50 largest police departments, and at least 25 of the 50 largest sheriffs’ offices.

Law enforcement officials like Wray want to convince us that encryption needs to be bypassed or broken for threats like terrorism or crimes against children, but in fact, Upturn’s public records requests show that police use forensic tools to search phones for everyday low-level crimes. Even when police don’t need to bypass encryption—such as when they convince someone to “consent” to the search of a phone and unlock it—these invasive police phone searches are used “as an all-purpose investigative tool, for an astonishingly broad array of offenses, often without a warrant,” as Upturn put it.

The 44 law enforcement agencies that provided records to Upturn revealed at least 50,000 extractions of cell phones between 2015 and 2019. And there’s no question that this number is a “severe undercount”: it covers only 44 agencies, when at least 2,000 agencies have the tools. Many of the largest police departments, including New York, Chicago, Washington D.C., Baltimore, and Boston, either denied Upturn’s record requests or did not respond.

“Law enforcement… use these tools to investigate cases involving graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses,” Upturn reports. In Suffolk County, NY, 20 percent of the phones searched by police were for narcotics cases. Authorities in Santa Clara County, CA, San Bernardino County, CA, and Fort Worth, TX all reported that drug crimes were among the most common reasons for cell phone data extractions. Here are just a few examples of the everyday offenses in which Upturn found police searched phones:

  • In one case, police officers sought to search two phones for evidence of drug sales after a $220 undercover marijuana bust.
  • Police stopped a vehicle for a “left lane violation,” then “due to nervousness and inconsistent stories, a free air sniff was conducted by a … K9 with positive alert to narcotics.” The officers found bags of marijuana in the car, then seized eight phones from the car’s occupants, and sought to extract data from them for “evidence of drug transactions.”
  • Officers looking for a juvenile who allegedly violated terms of his electronic monitoring found him after a “short foot pursuit” in which the youngster threw his phone to the ground. Officers sought to search the phone for evidence of “escape in the second degree.”

And these searches often take place without judicial warrants, despite the U.S. Supreme Court’s clear ruling in Riley v. California that a warrant is required to search a cell phone. That’s because police frequently abuse rules around so-called consent searches. These types of searches are widespread, but they’re hardly consensual. In January, we wrote about how these so-called “consent searches” are extraordinary violations of our privacy.

Forensic searches of cell phones are increasingly common. The Las Vegas police, for instance, examined 260% more cell phones in 2018-2019 compared with 2015-2016.

The searches are often overbroad, as well. It’s not uncommon for data unrelated to the initial suspicions to be copied, kept, and used for other purposes later. For instance, police can deem unrelated data “gang related” and keep it in a “gang database,” and such databases often have vague standards for inclusion. Being placed in such a database can easily affect people’s future employment options. Many police departments don’t have any policies in place about when forensic phone-searching tools can be used.

It’s Time for Oversight On Police Phone Searches

Rather than listening to a litany of requests for special access to personal data from federal agencies like the FBI, Congress should assert oversight over the inappropriate types of access that are already taking place.

The first step is to start keeping track of what’s happening. Congress should require that federal law enforcement agencies create detailed audit logs and screen recordings of digital searches. And we agree with Upturn that agencies nationwide should collect and publish aggregated information about how many phones were searched, and whether those searches involved warrants (with published warrant numbers), or so-called consent searches. Agencies should also disclose what tools were used for data extraction and analysis.

Congress should also consider placing sharp limits on when consent searches can take place at all. In our January blog post, we suggest that such searches be banned entirely in high-coercion settings like traffic stops, and suggest some specific limits that should be set in less-coercive settings.

via EFF.org Updates https://ift.tt/US8QQS

March 8, 2021 at 12:56PM

Scholars Under Surveillance: How Campus Police Use High Tech to Spy on Students

It may be many months before college campuses across the U.S. fully reopen, but when they do, many students will be returning to a learning environment that is under near constant scrutiny by law enforcement.

Fear of school shootings and other campus crimes has led administrators and campus police to install sophisticated surveillance systems that go far beyond run-of-the-mill security camera networks to include drones, gunshot detection sensors, and much more. Campuses have also adopted automated license plate readers, ostensibly to enforce parking rules, but often that data feeds into the criminal justice system. Some campuses use advanced biometric software to verify whether students are eligible to eat in the cafeteria. Police have even adopted new technologies to investigate activism on campus. Often, there is little or no justification for why a school needs such technology, other than novelty or asserted convenience.

In July 2020, the Electronic Frontier Foundation and the Reynolds School of Journalism at University of Nevada, Reno launched the Atlas of Surveillance, a database of now more than 7,000 surveillance technologies deployed by law enforcement agencies across the United States. In the process of compiling this data we noticed a peculiar trend: college campuses are acquiring a surprising number of surveillance technologies more common to metropolitan areas that experience high levels of violent crime.

So, we began collecting data from universities and community colleges using a variety of methods, including running specific search terms across .edu domains and assigning small research tasks to a large number of students using EFF’s Report Back tool. We documented more than 250 technology purchases, ranging from body-worn cameras to face recognition, adopted by more than 200 universities in 37 states. As big as these numbers are, they are only a sliver of what is happening on college campuses around the world.

A map of the United States with icons marking various surveillance technologies. An interactive version of the map accompanies the original post, and the underlying U.S. Campus Police Surveillance dataset is available for download as a CSV.


Body-worn cameras


Maybe your school has a film department, but the most prolific cinematographers on your college campus are probably the police.

Since the early 2010s, body-worn cameras (BWCs) have become more and more common in the United States. This holds true for law enforcement agencies on university and college campuses. These cameras are attached to officers’ uniforms (often the chest or shoulder, but sometimes head-mounted) and capture interactions between police and members of the public. While BWC programs are often pitched as an accountability measure to reduce police brutality, in practice these cameras are more often used to capture evidence later used in prosecutions.

Policies on these cameras vary from campus to campus—such as whether a camera should be always recording, or only during certain circumstances. But students and faculty should be aware that any interaction, or even near-interaction, with a police officer could be on camera. That footage could be used in a criminal case, but in many states, journalists and members of the public are also able to obtain BWC footage through an open records request.

Aside from your run-of-the-mill, closed-circuit surveillance camera networks, BWCs were the most prevalent technology we identified in use by campus police departments. This isn’t surprising, since researchers have observed similar trends in municipal law enforcement. We documented 152 campus police departments using BWCs, but as noted, this is only a fraction of what is being used throughout the country. One of the largest rollouts began last summer when Pennsylvania State University announced that police on all 22 campuses would start wearing the devices.

One of the main ways that universities have purchased BWCs is through funding from the U.S. Department of Justice’s Bureau of Justice Assistance. Since 2015, more than 20 universities and community colleges have received funds through the bureau’s Body-Worn Camera Grant Program established during the Obama administration. In Oregon, these funds helped the Portland State University Police Department adopt the technology well ahead of their municipal counterparts. PSU police received $20,000 in 2015 for BWCs, while the Portland Police Department does not use BWCs at all (Portland PD’s latest attempt to acquire them in 2021 was scuttled due to budget concerns).


Drones

Drones, also known as unmanned aerial vehicles (UAVs), are remote-controlled flying devices that can be used to surveil crowds from above or locations that would otherwise be difficult or dangerous to observe by a human on the ground. On many campuses, drones are purchased for research purposes, and it’s not unusual to see a quadrotor (a drone with four propellers) buzzing around the quad. However, campus police have also purchased drones for surveillance and criminal investigations.

Our data, which was based on a study conducted by the Center for the Study of the Drone at Bard College, identified 10 campus police departments that have drones:

  • California State University, Monterey Bay Police Department
  • Colorado State University Police Department
  • Cuyahoga Community College Police Department
  • Lehigh University Police Department
  • New Mexico State University Police Department
  • Northwest Florida State College Campus Police Department
  • Pennsylvania State University Police Department
  • University of Alabama, Huntsville Police Department
  • University of Arkansas, Fort Smith Police Department
  • University of North Dakota Police Department

One of the earliest campus drone programs originated at the University of North Dakota, where the campus police began deploying a drone in 2012 as part of a regional UAV unit that also included members of local police and sheriffs’ offices. According to UnmannedAerial.com, the unit moved from a “reactive” to a “proactive” approach in 2018, allowing officers to carry drones with them on patrol, rather than retrieving them in response to specific incidents.

The Northwest Florida State College Campus Police Department was notable in acquiring the most drones. While most campuses had one, NWFSC police began using four drones in 2019, primarily to aid in searching for missing people, assessing traffic accidents, photographing crime scenes, and mapping evacuation routes.

The New Mexico State University Police Department launched its drone program in 2017 and, with the help of a local Eagle Scout in Las Cruces, built a drone training facility for local law enforcement in the region. In response to a local resident who questioned on Facebook whether the program was unnerving, a NMSU spokesperson wrote in 2019:

[The program] thus far has been used to investigate serious traffic crashes (you can really see the skid marks from above), search for people in remote areas, and monitor traffic conditions at large events. They aren’t very useful for monitoring campus residents (even if we wanted to, which we don’t), since so many stay inside.

Not all agencies have taken such a limited approach. The Lehigh University Police Department acquired a drone in 2015, and equipped it with a thermal imaging camera. Police Chief Edward Shupp told a student journalist at The Brown and Right that the only limits on the drone are Federal Aviation Administration regulations, that there are no privacy regulations for officers to follow, and that the department can use the drones “for any purpose” on and off campus.

Even when a university police department does not have its own drones, it may seek help from other local law enforcement agencies. Such was the case in 2017, when the University of California Berkeley Police Department requested drone assistance from the Alameda County Sheriff’s Office to surveil protests on campus.

Automated License Plate Readers

Students and faculty may complain about the price tag of parking passes, but there is also an unseen cost of driving on campus: privacy.

Automated license plate readers (ALPRs) are cameras attached to fixed locations or to security or parking patrol cars that capture every license plate that passes. The data is then uploaded to searchable databases with the time, date, and GPS coordinates. Through our research, we identified ALPRs at 49 universities and colleges throughout the country.
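In database terms, each ALPR “read” is a small record that becomes revealing in aggregate. A minimal sketch of how such a searchable store works, using a hypothetical schema and made-up reads (real vendor systems use their own formats):

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record layout; actual ALPR vendor schemas differ.
@dataclass
class PlateRead:
    plate: str
    timestamp: datetime
    lat: float
    lon: float
    camera_id: str

# Illustrative reads from a single campus parking system
reads = [
    PlateRead("ABC1234", datetime(2021, 3, 8, 9, 15), 41.39, -72.95, "lot-A-entry"),
    PlateRead("XYZ9876", datetime(2021, 3, 8, 9, 17), 41.39, -72.95, "lot-A-entry"),
    PlateRead("ABC1234", datetime(2021, 3, 8, 17, 2), 41.39, -72.95, "lot-A-exit"),
]

# The privacy concern: a trivial query over time reconstructs one
# vehicle's comings and goings.
def history(db, plate):
    return sorted((r for r in db if r.plate == plate), key=lambda r: r.timestamp)

for r in history(reads, "ABC1234"):
    print(r.timestamp, r.camera_id)
```

Nothing in the record is sensitive on its own; the risk comes from retention and cross-agency sharing, which is why fusion-center data flows like those described below matter.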

ALPRs are used in two main capacities on college campuses. First, transportation and parking divisions have begun using ALPRs for parking enforcement, either attaching the cameras to parking enforcement vehicles or installing cameras at the entrances and exits to parking lots and garages. For example, the University of Connecticut Parking Services uses NuPark, a system that uses ALPRs to manage virtual permits and citations.

Second, campus police are using ALPRs for public safety purposes. The Towson University Police Department in Maryland, for example, scanned over 3 million license plates in 2018 and sent that data to the Maryland Coordination and Analysis Center, a fusion center operated by the Maryland State Police. The university has a total of six fixed ALPR sites, with 10 cameras and one mobile unit.

These two uses are not always separate: in some cases, parking officials share data with their police counterparts. At Florida Atlantic University, ALPRs are used for parking enforcement, but the police department also has access to this technology through their Communications Center, which monitors all emergency calls to the department, as well as fire alarms, intrusion alarms, and panic alarm systems. In California, the San Jose/Evergreen Community College District Police Department shares ALPR data with its regional fusion center, the Northern California Regional Intelligence Center.

Social Media Monitoring

Colleges and universities are also watching their students on social media, and it is not just to retweet or like a cute Instagram post about your summer internship. Campus public safety divisions employ social media software, such as Social Sentinel, to look for possible threats to the university, such as posts where students indicate suicidal ideation or threats of gun violence. We identified 21 colleges that use social media monitoring to watch their students and surrounding community for threats. This does not include higher education programs to monitor social media for marketing purposes.

This technology is used for public safety by both private and public universities. The Massachusetts Institute of Technology has used Social Sentinel since 2015, while the Des Moines Area Community College Campus Security spent $15,000 on Social Sentinel software in 2020.

Social media monitoring technology may also be used to monitor students’ political activities. Social Sentinel software was used to watch activists on the University of North Carolina campus who were protesting a Confederate memorial on campus, Silent Sam. As NBC reported, UNC Police and the North Carolina State Bureau of Investigation used a technique called “geofencing” to monitor the social media of people in the vicinity of the protests.

“This information was monitored in an attempt to prevent any potential acts of violence (such as those that have occurred at other public protests around the country, including Charlottesville) and to ensure the safety of all participants,” a law enforcement spokesperson told NBC, adding that investigators only looked at public-facing posts and no records of the posts were kept after the event. However, the spokesperson declined to elaborate on how the technology may have been used at other public events.
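At its core, the “geofencing” technique described above reduces to a point-in-radius test against each post’s location tag. A minimal sketch, with illustrative coordinates and radius rather than the actual parameters used in the UNC case:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    R = 6_371_000  # mean Earth radius
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical geofence around a protest site (coordinates are illustrative)
FENCE_LAT, FENCE_LON, RADIUS_M = 35.9132, -79.0558, 200

def inside_fence(post_lat, post_lon):
    return haversine_m(FENCE_LAT, FENCE_LON, post_lat, post_lon) <= RADIUS_M

print(inside_fence(35.9135, -79.0560))  # a post tagged ~40 m away: True
print(inside_fence(35.9200, -79.0400))  # a post ~1.5 km away: False
```

Any geotagged post inside the circle during the monitoring window gets flagged, regardless of whether the poster had anything to do with the event, which is what makes the technique a dragnet.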

Biometric Identification

When we say that a student body is under surveillance, we also mean that literally. The term “biometrics” refers to physical and behavioral characteristics (your body and what you do with it) that can be used to identify you. Fingerprints are among the types of biometrics most familiar to people, but police agencies around the country are adopting computer systems capable of identifying people using face recognition and other sophisticated biometrics.

At least four police departments at universities in Florida–University of South Florida, University of North Florida, University of Central Florida, and Florida Atlantic University–have access to a statewide face recognition network called Face Analysis Comparison and Examination System (FACES), which is operated by the Pinellas County Sheriff’s Office. Through FACES, investigators can upload an image and search a database of Florida driver’s license photos and mugshots.

The University of Southern California in Los Angeles confirmed to The Fix that its public safety department uses face recognition; however, the practice was more prevalent in the San Diego, California area up until recently.

In San Diego, at least five universities and college campuses participated in a face recognition program involving mobile devices. San Diego State University stood out for having conducted more than 180 face recognition searches in 2018. However, in 2019, this practice was suspended in California under a three-year statewide moratorium.

Faces aren’t the only biometric being scanned. In 2017, the University of Georgia introduced iris scanning stations in dining halls, encouraging students to check in with their eyes to use their meal plans. This replaced an earlier program requiring hand scans, another form of biometric identification.

Gunshot Detection


Gunshot detection is a technology that involves installing acoustic sensors (essentially microphones) around a neighborhood or building. When a loud noise goes off, such as a gunshot or a firework, the sensors attempt to determine the location and then police receive an alert.
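The location step typically relies on time difference of arrival: the same bang reaches each sensor at a slightly different moment, and those offsets constrain where the sound came from. The sketch below illustrates the idea with hypothetical sensor positions and idealized, noise-free timestamps, using a brute-force grid search rather than whatever proprietary solver a vendor like ShotSpotter actually ships.

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second, at roughly 20 °C

# Hypothetical sensor positions (meters) and a simulated shot location.
sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_source = (40.0, 60.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Each sensor "hears" the shot after sound travels to it.
arrivals = [dist(true_source, s) / SPEED_OF_SOUND for s in sensors]

def locate(sensors, arrivals, step=1.0, size=100.0):
    """Grid-search the point whose predicted arrival-time differences
    (relative to sensor 0) best match the observed ones."""
    best, best_err = None, float("inf")
    n = int(size / step) + 1
    for i in range(n):
        for j in range(n):
            p = (i * step, j * step)
            err = 0.0
            for s, t in zip(sensors[1:], arrivals[1:]):
                pred = (dist(p, s) - dist(p, sensors[0])) / SPEED_OF_SOUND
                err += (pred - (t - arrivals[0])) ** 2
            if err < best_err:
                best, best_err = p, err
    return best

estimate = locate(sensors, arrivals)
```

In practice, echoes, fireworks, backfiring cars, and timing noise all degrade this estimate, which is where the accuracy disputes discussed below come from.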

Universities and colleges have begun using this technology in part as a response to fears of campus shootings. However, these technologies often are not as accurate as their sellers claim and could result in dangerous confrontations based on errors. Also, these devices can capture human voices engaged in private conversations, and prosecutors have attempted to use such recordings in court.


Our dataset has identified eight universities and colleges that have purchased gunshot-detection technology:

  • East Carolina University Police Department
  • Hampton University Police Department
  • Truett McConnell University Campus Safety Department
  • University of California San Diego Police Department
  • University of Connecticut Police Department
  • University of Maryland Police Department
  • University of West Georgia Police Department
  • Georgia Tech Police Department

Some universities and colleges purchase their own gunshot detection technology, while others have access to the software through partnerships with other law enforcement agencies. For example, the Georgia Tech Police Department has access to gunshot detection through the Fūsus Real-Time Crime Center. The University of California San Diego Police Department, on the other hand, installed its own ShotSpotter gunshot detection technology on campus in 2017.

When a university funds surveillance technology, it can impact the communities nearby. For example, University of Nevada, Reno journalism student Henry Stone obtained documents through Nevada’s public records law showing that UNR Cooperative Extension spent $500,000 in 2017 to install and operate ShotSpotter sensors in a 3-mile impoverished neighborhood of Las Vegas. The system is controlled by the Las Vegas Metropolitan Police Department.

Video Analytics


While most college campuses employ some sort of camera network, we identified two universities that are applying for extra credit in surveilling students: the University of Miami Police Department in Florida and the Grand Valley State University Department of Public Safety in Michigan. These universities apply advanced software, sometimes called video analytics or computer vision, to their camera footage, using algorithms to achieve round-the-clock monitoring that no team of officers watching screens could match. Often employing artificial intelligence, video analytics systems can track objects and people from camera to camera, identify patterns and anomalies, and potentially conduct face recognition.
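At their simplest, such systems build on frame differencing: compare each frame to the scene before it and flag an event when enough pixels change. The sketch below is a toy version of that first stage, with simulated grayscale frames; commercial products like Avigilon's layer object classification and tracking on top of it, none of which is shown here.

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Flag pixels whose brightness changed by more than `threshold`."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def alert(prev_frame, frame, min_fraction=0.01):
    """Raise an alert when enough of the image changed between frames."""
    return motion_mask(prev_frame, frame).mean() > min_fraction

# Simulated 8-bit grayscale frames: a static background, then an
# "object" (bright rectangle) appearing in the second frame.
background = np.full((120, 160), 80, dtype=np.uint8)
scene = background.copy()
scene[40:80, 60:100] = 200  # hypothetical person entering the view
```

An unchanged frame produces no alert, while the frame with the rectangle does; lighting shifts and camera noise are classic sources of false alerts in this scheme.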

Grand Valley State University began using Avigilon video analytics technology in 2018. The University of Miami Police Department uses video analytics software combined with more than 1,300 cameras.

Three university police departments in Maryland also maintain lists of cameras owned by local residents and businesses. With these camera registries, private parties are asked to voluntarily provide information about the location of their security cameras, so that police can access or request footage during investigations. The University of Maryland, Baltimore Police Department, the University of Maryland, College Park Police Department and the Johns Hopkins University Campus Police are all listed on Motorola Solutions’ CityProtect site as maintaining such camera registries.

Two San Francisco schools—UC Hastings School of Law and UC San Francisco—explored leasing Knightscope surveillance robots in 2019 and 2020 to patrol their campuses, though the plans seem to have been scuttled by COVID-19. The robots are equipped with cameras, artificial intelligence, and, depending on the model, the ability to capture license plate data, conduct facial recognition, or recognize nearby phones.


Universities in the United States pride themselves on the free exchange of ideas and the ability for students to explore different concepts and social movements over the course of their academic careers. Unfortunately, for decades upon decades, police and intelligence agencies have also spied on students and professors engaged in social movements. High-tech surveillance only exacerbates the threat to academic freedom.

Around the country, cities are pushing back against surveillance by passing local ordinances requiring a public process and governing body approval before a police agency can acquire a new surveillance technology. Many community colleges do have elected bodies, and we urge these policymakers to enact similar policies to ensure adequate oversight of police surveillance.

However, these kinds of policy-making opportunities often aren’t available to students (or faculty) at state and private universities, whose leadership is appointed, not elected. We urge student and faculty associations to press their police departments to limit the types of data collected on students and to ensure a rigorous oversight process that allows students, faculty, and other staff to weigh in before decisions are made to adopt technologies that can harm their rights.

Hailey Rodis, a student at the University of Nevada, Reno Reynolds School of Journalism, was the primary researcher on this report. We extend our gratitude to the dozens of other UNR students and volunteers who contributed data on campus police to the Atlas of Surveillance project. 

via EFF.org Updates https://ift.tt/US8QQS

March 9, 2021 at 12:04PM

Privacy Experts: Sensor Devices Threaten ‘New Age of Excessive Police Surveillance’

The increased use of Ring doorbell cameras and similar devices in criminal investigations threatens to “usher in a new age of excessive police surveillance,” privacy advocates say.

And they could also be dangerous for law enforcement.

Last month a suspect fired on FBI agents who were approaching his door to serve a search warrant, reports The Washington Post.

The use of a camera, though not specified as a Ring camera, likely gave the suspect warning that the officers were arriving.

Ring devices, along with those of many competitors, are cheap, motion-activated cameras that alert residents of a home when someone is on or near their doorstep.

Although created to give residents a sense of security, the cameras have become a staple of criminal investigations: Ring cameras alone drew more than 20,000 police requests for footage last year.

According to the Washington Post, over 2,000 police departments have partnerships with Ring, allowing officers to request footage from any residence within an area of up to half a square mile around a crime scene.
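That half-square-mile request zone is easy to picture as a circle around the crime scene. The sketch below illustrates the geometry with hypothetical coordinates; it is not Ring's actual implementation, just a circle of the stated area (about a 0.4-mile radius) tested with the standard haversine distance formula.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def in_request_zone(camera, crime_scene, area_sq_miles=0.5):
    """True if the camera lies inside a circle of the given area
    centered on the crime scene (0.5 sq mi is roughly a 0.4 mi radius)."""
    radius = math.sqrt(area_sq_miles / math.pi)
    return haversine_miles(*camera, *crime_scene) <= radius

crime = (38.8977, -77.0365)   # hypothetical crime-scene coordinates
near = (38.9000, -77.0365)    # roughly 0.16 miles north: inside
far = (38.9200, -77.0365)     # roughly 1.5 miles north: outside
```

The point of the geometry is how many doorsteps a single request can sweep in: in a dense neighborhood, half a square mile covers hundreds of homes.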

Similar competing brands of doorbell surveillance cameras often do not have the same agreements with police that Ring has.

While officers are free to request that footage, residents are not required to provide the video footage or even respond to an officer’s request.

In fact, officers in the article compared requesting Ring video footage to door-knocking through a neighborhood where a crime was committed: if a person doesn’t want to comment, they simply don’t answer the door.

This, however, doesn’t change the fact that Ring cameras can capture everything, whether it’s a violent crime happening right outside your door or your neighbor’s 10-year-old son riding by on his bike.

The cameras “highlight the growing way public authorities are capitalizing on privately run camera networks and databases,” said Matthew Guariglia, an analyst for the Electronic Frontier Foundation.

“Homeowners probably would object if police officers installed a high-resolution camera aimed at their front door. But they may not object when their neighbors do the installing, even though the end result is the same; in fact, many people pay to do it themselves.”

In addition to capturing day-to-day life in a neighborhood, constant surveillance also means that people can be caught on camera while peacefully protesting without their consent.

While Ring doesn’t allow police requests for footage of peaceful or lawful protests, police departments can still find ways around the restriction. As the Washington Post piece notes, the LAPD requested footage of a peaceful protest to identify perpetrators of “physical injuries and property crimes.”

The LAPD requested Ring footage during the Black Lives Matter protests of May and June last year, according to an EFF report. This raises the question of whether the presence of Ring cameras chills the First Amendment right to peaceful protest.

“People are less likely to exercise their right to political speech, protest, and assembly if they know that police can get video of these actions with just an email to people with Ring cameras,” said Guariglia.

While the cameras can supply officers with footage of crimes and help identify perpetrators, they don’t always work in the police’s favor.

Cases such as the suspect who fired on officers he saw approaching his door, along with calls involving possible domestic abuse, mental health crises, and other sensitive situations, can pose a threat to the officers responding.

“Just the act of knocking on someone’s door can be dangerous. They’re trained how to do that. But this introduces another cautionary note for how that technology can have unintended consequences,” Chuck Wexler, executive director of the Police Executive Research Forum, told the Washington Post.

Ring also offers an app called “Neighbors,” which allows Ring users in a neighborhood to share footage they deem suspicious.

While this can be helpful for identifying lost pets or finding footage of a person stealing packages off your doorstep, it also runs the risk of racial profiling.

NextDoor, a social media platform that lets residents of a neighborhood report suspicious activity, has faced claims of racial profiling from users who report “sketchy” or “suspicious” behavior.

The Neighbors feature of Ring has the same potential as NextDoor, but with the addition of actual footage of a person, taken without their consent and uploaded to a social platform.

Like other surveillance devices, Ring cameras also put their owners at risk of hacking, and the company has faced claims of sharing data with third parties, according to an article in the Guardian.

See also: Police and Fire Departments in 48 states Involved in Amazon’s Ring Program.

Emily Riley is a TCR justice reporting intern.

via The Crime Report https://ift.tt/2myW3Gx

March 2, 2021 at 11:04AM

Secretive face-matching startup has customer list stolen

Clearview, a secretive facial-recognition startup that claims to scrape the Internet for images to use, has itself now had data unexpectedly scraped, in a manner of speaking. Someone apparently popped into the company’s system and stole its entire client list, which Clearview to date has refused to share.

Clearview notified its customers about the leak today, according to The Daily Beast, which obtained a copy of the notification. The memo says an intruder accessed the list of customers, as well as the number of user accounts those customers set up and the number of searches those accounts have conducted.

“Unfortunately, data breaches are part of life in the 21st century,” Tor Ekeland, an attorney for Clearview, told The Daily Beast. “Our servers were never accessed. We patched the flaw and continue to work to strengthen our security.”

Clearview vaulted from obscurity to the front page following a report by The New York Times in January. The paper described Clearview as a “groundbreaking” service that could erode any meaningful sense of privacy.

The company at the time claimed to have in place 600 agreements with law enforcement agencies to use its software, which allegedly aggregated more than 3 billion facial images from other apps, platforms, and services. Those other platforms and their parent companies—including Twitter, Google (YouTube), Facebook (and Instagram), Microsoft (LinkedIn), and Venmo—all sent Clearview cease and desist letters, claiming its aggregation of images from their services violates their policies.

Clearview, which stresses its service is “available only to law enforcement agencies and select security professionals,” repeatedly refused to share client lists with reporters from several outlets. Reporters from The New York Times and BuzzFeed both dove into several of the company’s marketing claims and found significant exaggerations. Clearview boasts that its technology helped lead to the arrest of a would-be terrorist in New York City, for example, but the NYPD told BuzzFeed Clearview had nothing to do with the case.

In the face of public criticism, the company made exactly two blog posts, each precisely two paragraphs long. The first, under the subject line “Clearview is not a consumer application,” insists, “Clearview is NOT available to the public,” emphasis theirs. It adds, “While many people have advised us that a public version would be more profitable, we have rejected the idea.”

Four days later, the company added another post, stressing that its code of conduct “mandates that investigators use our technology in a safe and ethical manner.” While “powerful tools always have the potential to be abused,” the company wrote, its app “has built-in safeguards to ensure these trained professionals only use it for its intended purpose.”

Clearview did not at any point say what these safeguards might be, however, nor has it explained who qualifies as “select security professionals.”

Other companies that partner with law enforcement for surveillance technologies have also not always been successful in attempts to keep their client lists on the down-low. Amazon, for example, attempted just that with its Ring line of products. After repeated media reports tried to draw out the details, however, Ring finally went public with a list of 405 agencies last August and has kept updating it since, listing 967 such deals as of at least February 13.

via Ars Technica https://arstechnica.com

February 26, 2020 at 12:01PM

Removing a GPS tracking device from your car isn’t theft, court rules

An Indiana man may beat a drug prosecution after the state’s highest court threw out a search warrant against him late last week. The search warrant was based on the idea that the man had “stolen” a GPS tracking device belonging to the government. But Indiana’s Supreme Court concluded that he’d done no such thing—and the cops should have known it.

Last November, we wrote about the case of Derek Heuring, an Indiana man the Warrick County Sheriff’s Office suspected of selling meth. Authorities got a warrant to put a GPS tracker on Heuring’s car, getting a stream of data on his location for six days. But then the data stopped.

Officers suspected Heuring had discovered and removed the tracking device. After waiting for a few more days, they got a warrant to search his home and a barn belonging to his father. They argued the disappearance of the tracking device was evidence that Heuring had stolen it.

During their search, police found the tracking device and some methamphetamine. They charged Heuring with drug-related crimes as well as theft of the GPS device.

But at trial, Heuring’s lawyers argued that the warrant to search the home and barn had been illegal. An application for a search warrant must provide probable cause to believe a crime was committed. But removing a small, unmarked object from your personal vehicle is no crime at all, Heuring’s lawyers argued. Heuring had no way of knowing what the device was or who it belonged to—and certainly no obligation to leave the device on his vehicle.

An Indiana appeals court ruled against Heuring last year. But Indiana’s Supreme Court seemed more sympathetic to Heuring’s case during oral arguments last November.

“I’m really struggling with how is that theft,” said Justice Steven David during November’s oral arguments.

“We find it reckless”

Last Thursday, Indiana’s highest court made it official, ruling that the search warrant that allowed police to recover Heuring’s meth was illegal. The police had no more than a hunch that Heuring had removed the device, the court said, and that wasn’t enough to get a search warrant.

Even if the police could have proved that Heuring had removed the device, that wouldn’t prove he stole it, the high court said. It’s hard to “steal” something if you have no idea to whom it belongs. Classifying his action as theft would lead to absurd results, the court noted.

“To find a fair probability of unauthorized control here, we would need to conclude that Hoosiers don’t have the authority to remove unknown, unmarked objects from their personal vehicles,” Chief Justice Loretta Rush wrote for a unanimous court.

The high court’s ruling has big implications for Heuring’s case. Under a principle known as the exclusionary rule, evidence uncovered using an invalid search warrant is excluded from trial. Without the meth recovered in this search, prosecutors might not have enough evidence to mount a case against him.

The law allows a good-faith exception to the exclusionary rule in some cases where police rely on a warrant that later proves defective. But Justice Rush concluded that exception doesn’t apply here.

“We find it reckless for an officer-affiant to search a suspect’s home and his father’s barn based on nothing more than a hunch that a crime has been committed,” the court wrote. “We are confident that applying the exclusionary rule here will deter similar reckless conduct in the future.”

via Policy – Ars Technica https://arstechnica.com

February 24, 2020 at 08:05PM

New bill would create Digital Privacy Agency to enforce privacy rights

Congress is taking yet another stab at addressing the near-complete lack of federal laws covering the absolutely massive trove of data that companies now collect on every one of us, which forms the backbone of basically the entire big tech era.

Representatives Anna Eshoo and Zoe Lofgren, both Democrats from California, introduced the Online Privacy Act today. The act would create a new federal agency, the Digital Privacy Agency, to enforce privacy rights. The act would also authorize the agency to hire up to 1,600 employees.

“Every American is vulnerable to privacy violations with few tools to defend themselves. Too often, our private information online is stolen, abused, used for profit, or grossly mishandled,” Eshoo said in a statement. “Our legislation ensures that every American has control over their own data, companies are held accountable, and the government provides tough but fair oversight.”


via Policy – Ars Technica https://arstechnica.com

November 5, 2019 at 05:17PM