Safari

When It Comes to Privacy, Safari Is Only the Fourth-Best Browser (yahoo.com) 36

Apple's elaborate new ad campaign promises that Safari is "a browser that protects your privacy." And the Washington Post says Apple "deserves credit for making many privacy protections automatic with Safari..."

"But Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, said Safari is no better than the fourth-best web browser for your privacy." "If browser privacy were a sport at the Olympics, Apple isn't getting on the medal stand," Cahn said. (Apple did not comment about this.)

Safari stops third-party cookies anywhere you go on the web. So do Mozilla's Firefox and the Brave browser... Chrome allows third-party cookies in most cases unless you turn them off... Even without cookies, a website can pull information like the resolution of your computer screen, the fonts you have installed, add-on software you use and other technical details that in aggregate can help identify your device and what you're doing on it. These measures, typically called "fingerprinting," are privacy-eroding tracking by another name. Nick Doty with the Center for Democracy & Technology said there's generally not much you can do about fingerprinting; usually you don't even know you're being tracked that way. Apple says it defends against common fingerprinting techniques, but Cahn said Firefox, Brave and the Tor Browser are all better at protecting you from digital surveillance. That's why he said Safari is no better than the fourth-best browser for privacy.
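
To illustrate the idea, here's a minimal sketch in Python. The attribute values are hypothetical, and in practice a page collects them in the browser via JavaScript; the point is that a handful of individually mundane attributes combine into a stable identifier, no cookie required:

```python
import hashlib

# Hypothetical attribute values of the kind a page can read via JavaScript;
# each is mundane on its own, but the combination is near-unique.
attributes = {
    "screen_resolution": "2560x1440",
    "timezone": "America/Los_Angeles",
    "installed_fonts": "Arial,Helvetica,SF Pro,Comic Sans MS",
    "gpu_renderer": "Apple M2",
    "language": "en-US",
}

# The same combination reappears on every visit, so hashing it yields
# a stable tracking identifier for the device.
canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()
print(fingerprint[:16])  # a stable ID for this device
```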

Safari does offer extra privacy protections in its "private" mode, the article points out. "When you use this option, Apple says it does more to block use of 'advanced' fingerprinting techniques. It also steps up defenses against tracking that adds bits of identifying information to the web links you click."

The article concludes that Safari users can "feel reasonably good about the privacy (and security) protections, but you can probably do better — either by tweaking your Apple settings or using a web browser that's even more private than Safari."
The Courts

US Sues TikTok Over 'Massive-Scale' Privacy Violations of Kids Under 13 (reuters.com) 10

An anonymous reader quotes a report from Reuters: The U.S. Justice Department filed a lawsuit Friday against TikTok and parent company ByteDance for failing to protect children's privacy on the app, as the Biden administration continues its crackdown on the platform. The government said TikTok violated the Children's Online Privacy Protection Act, which requires services aimed at children to obtain parental consent to collect personal information from users under age 13. The suit (PDF), which was joined by the Federal Trade Commission, said it was aimed at putting an end "to TikTok's unlawful massive-scale invasions of children's privacy." Representative Frank Pallone, the top Democrat on the Energy and Commerce Committee, said the suit "underscores the importance of divesting TikTok from Chinese Communist Party control. We simply cannot continue to allow our adversaries to harvest vast troves of Americans' sensitive data."

The DOJ said TikTok knowingly permitted children to create regular TikTok accounts, and then create and share short-form videos and messages with adults and others on the regular TikTok platform. TikTok collected personal information from these children without obtaining consent from their parents. The U.S. alleges that for years millions of American children under 13 have been using TikTok and the site "has been collecting and retaining children's personal information." The FTC is seeking penalties of up to $51,744 per violation per day from TikTok for improperly collecting data, which could theoretically total billions of dollars if TikTok were found liable.
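
For a sense of scale, here's a back-of-the-envelope calculation. The penalty figure is the FTC's; the violation count and duration are assumed for illustration, since the complaint doesn't specify them:

```python
# Illustrative only: penalty figure from the FTC, counts assumed.
penalty_per_violation_per_day = 51_744
assumed_violations = 1_000   # hypothetical number of affected accounts
days = 365                   # hypothetical one-year violation window
total = penalty_per_violation_per_day * assumed_violations * days
print(f"${total:,}")  # $18,886,560,000 -- billions even at a modest count
```
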
TikTok said Friday it disagrees "with these allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed. We are proud of our efforts to protect children, and we will continue to update and improve the platform."
Government

Secret Service's Tech Issues Helped Shooter Go Undetected At Trump Rally (theguardian.com) 155

An anonymous reader quotes a report from The Guardian: The technology flaws of the U.S. Secret Service helped the gunman who attempted to assassinate Donald Trump during a rally in Butler, Pennsylvania, last month evade detection. An officer broadcast "long gun!" over the local law enforcement radio system, according to congressional testimony from the Secret Service this week, the New York Times reported. The radio message should have travelled to a command center shared between local police and the Secret Service, but the message was never received by the Secret Service. About 30 seconds later, the shooter, Thomas Crooks, fired his first shots.

It was one of several technology failures affecting the Secret Service on 13 July, whether through malfunction, improper deployment, or the agency opting not to use the tools at all. The Secret Service had also previously rejected requests from the Trump campaign for more resources over the past two years. The agency turned down the use of a surveillance drone at the rally site, and it did not bring in a system to boost the signals of agents' devices even though the area had poor cell service. A system to detect drone use in the area by others also did not work, according to the report in the New York Times, because the communications network was overwhelmed by the number of people gathered at the rally. Nor did the agency use technology it had to bolster its communications system. The shooter flew his own drone over the site for 11 minutes without being detected, about two hours before Trump appeared at the rally.
Ronald Rowe Jr, the acting Secret Service director, said the agency never utilized the technological tools that could have spotted the shooter beforehand.

A former Secret Service officer also told the New York Times he "resigned in 2017 over frustration with the agency's delays in evaluating new technology and getting clearance and funding to obtain it and then train officers on it," notes The Guardian. Furthermore, the Secret Service failed to record communications between federal and local law enforcement at the rally.
Government

US Progressives Push For Nvidia Antitrust Investigation (reuters.com) 42

Progressive groups and Senator Elizabeth Warren are urging the Department of Justice to investigate Nvidia for potential antitrust violations due to its dominant position in the AI chip market. The groups criticize Nvidia's bundling of software and hardware, claiming it stifles innovation and locks in customers. Reuters reports: Demand Progress and nine other groups wrote a letter (PDF) this week urging Department of Justice antitrust chief Jonathan Kanter to probe business practices at Nvidia, whose market value hit $3 trillion this summer on demand for chips able to run the complex models behind generative AI. The groups, which oppose monopolies and promote government oversight of tech companies, among other issues, took aim at Nvidia's bundling of software and hardware, a practice that French antitrust enforcers have flagged as they prepare to bring charges.

"This aggressively proprietary approach, which is strongly contrary to industry norms about collaboration and interoperability, acts to lock in customers and stifles innovation," the groups wrote. Nvidia has roughly 80% of the AI chip market, including the custom AI processors made by cloud computing companies like Google, Microsoft and Amazon.com. The chips made by the cloud giants are not available for sale themselves but typically rented through each platform.
A spokesperson for Nvidia said: "Regulators need not be concerned, as we scrupulously adhere to all laws and ensure that NVIDIA is openly available in every cloud and on-prem for every enterprise. We'll continue to support aspiring innovators in every industry and market and are happy to provide any information regulators need."
Government

Senators Propose 'Digital Replication Right' For Likeness, Extending 70 Years After Death (arstechnica.com) 46

An anonymous reader quotes a report from Ars Technica: On Wednesday, US Sens. Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.), and Thom Tillis (R-N.C.) introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2024. The bipartisan legislation, up for consideration in the US Senate, aims to protect individuals from unauthorized AI-generated replicas of their voice or likeness. The NO FAKES Act would create legal recourse for people whose digital representations are created without consent. It would hold both individuals and companies liable for producing, hosting, or sharing these unauthorized digital replicas, including those created by generative AI. Generative AI technology, which has become mainstream in the past two years, has made creating audio or image fakes of people fairly trivial, with easy photorealistic video replicas likely next to arrive. [...]

To protect a person's digital likeness, the NO FAKES Act introduces a "digital replication right" that gives individuals exclusive control over the use of their voice or visual likeness in digital replicas. This right extends 10 years after death, with possible five-year extensions if actively used. It can be licensed during life and inherited after death, lasting up to 70 years after an individual's death. Along the way, the bill defines what it considers to be a "digital replica":

"DIGITAL REPLICA. -- The term 'digital replica' means a newly created, computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that --
(A) is embodied in a sound recording, image, audiovisual work, including an audiovisual work that does not have any accompanying sounds, or transmission --
(i) in which the actual individual did not actually perform or appear; or
(ii) that is a version of a sound recording, image, or audiovisual work in which the actual individual did perform or appear, in which the fundamental character of the performance or appearance has been materially altered; and
(B) does not include the electronic reproduction, use of a sample of one sound recording or audiovisual work into another, remixing, mastering, or digital remastering of a sound recording or audiovisual work authorized by the copyright holder."
The NO FAKES Act "includes provisions that aim to balance IP protection with free speech," notes Ars. "It provides exclusions for recognized First Amendment protections, such as documentaries, biographical works, and content created for purposes of comment, criticism, or parody."
Google

Google Defeats RNC Lawsuit Claiming Email Spam Filters Harmed Republican Fundraising 84

A U.S. judge has thrown out a Republican National Committee lawsuit accusing Alphabet's Google of intentionally misdirecting the political party's email messages to users' spam folders. From a report: U.S. District Judge Daniel Calabretta in Sacramento, California, on Wednesday dismissed the RNC's lawsuit for a second time, and said the organization would not be allowed to refile it. While expressing some sympathy for the RNC's allegations, he said it had not made an adequate case that Google violated California's unfair competition law.

The lawsuit alleged Google had intentionally or negligently sent RNC fundraising emails to Gmail users' spam folders and cost the group hundreds of thousands of dollars in potential donations. Google denied any wrongdoing.
The Courts

CrowdStrike Is Sued By Shareholders Over Huge Software Outage (reuters.com) 134

Shareholders sued CrowdStrike on Tuesday, claiming the cybersecurity company defrauded them by concealing how its inadequate software testing could cause the global software outage earlier this month that crashed millions of computers. Reuters reports: In a proposed class action filed on Tuesday night in the Austin, Texas federal court, shareholders said they learned that CrowdStrike's assurances about its technology were materially false and misleading when a flawed software update disrupted airlines, banks, hospitals and emergency lines around the world. They said CrowdStrike's share price fell 32% over the next 12 days, wiping out $25 billion of market value, as the outage's effects became known, Chief Executive George Kurtz was called to testify to the U.S. Congress, and Delta Air Lines reportedly hired prominent lawyer David Boies to seek damages.

The complaint cites statements including from a March 5 conference call where Kurtz characterized CrowdStrike's software as "validated, tested and certified." The lawsuit led by the Plymouth County Retirement Association of Plymouth, Massachusetts, seeks unspecified damages for holders of CrowdStrike Class A shares between Nov. 29, 2023 and July 29, 2024.
Further reading: Delta CEO Says CrowdStrike-Microsoft Outage Cost the Airline $500 Million
Privacy

Bumble and Hinge Allowed Stalkers To Pinpoint Users' Locations Down To 2 Meters, Researchers Say (techcrunch.com) 23

An anonymous reader quotes a report from TechCrunch: A group of researchers said they found that vulnerabilities in the design of some dating apps, including the popular Bumble and Hinge, allowed malicious users or stalkers to pinpoint the location of their victims down to two meters. In a new academic paper, researchers from the Belgian university KU Leuven detailed their findings (PDF) when they analyzed 15 popular dating apps. Of those, Badoo, Bumble, Grindr, happn, Hinge and Hily all had the same vulnerability that could have helped a malicious user to identify the near-exact location of another user, according to the researchers. While none of those apps shares exact locations when displaying the distance between users on their profiles, they did use exact locations for the "filters" feature of the apps. Generally speaking, filters let users tailor their search for a partner based on criteria like age, height, what type of relationship they are looking for and, crucially, distance.

To pinpoint the exact location of a target user, the researchers used a novel technique they call "oracle trilateration." In general, trilateration (used in GPS, for example) works by measuring the target's distance from three known points; this defines three circles, which intersect at the point where the target is located. Oracle trilateration works slightly differently. The researchers wrote in their paper that the first step for the person who wants to identify their target's location "roughly estimates the victim's location," for example, based on the location displayed in the target's profile. Then, the attacker moves in increments "until the oracle indicates that the victim is no longer within proximity, and this for three different directions. The attacker now has three positions with a known exact distance, i.e., the preselected proximity distance, and can trilaterate the victim," the researchers wrote.
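
The final step is ordinary trilateration, and it reduces to a small linear system. Here is a minimal sketch in Python (positions and filter radius hypothetical): because all three probe points sit at the same known distance from the victim, the shared radius cancels when the circle equations are subtracted pairwise:

```python
import numpy as np

def trilaterate(p1, p2, p3):
    """Locate a target given three probe positions known to be at the
    same (preselected) distance from it. Subtracting the circle
    equations pairwise cancels the shared radius and leaves a 2x2
    linear system in the target's coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([
        [2 * (x2 - x1), 2 * (y2 - y1)],
        [2 * (x3 - x1), 2 * (y3 - y1)],
    ])
    b = np.array([
        x2**2 + y2**2 - x1**2 - y1**2,
        x3**2 + y3**2 - x1**2 - y1**2,
    ])
    return np.linalg.solve(A, b)

# Hypothetical example: the attacker probes from three directions until
# the proximity oracle flips, so each probe point is exactly 1,000 m
# (the chosen filter radius) from the victim.
victim = np.array([500.0, 300.0])
angles = [0.2, 2.1, 4.0]  # three non-collinear probe directions (radians)
probes = [victim + 1000 * np.array([np.cos(a), np.sin(a)]) for a in angles]
print(trilaterate(*probes))  # ~[500. 300.]
```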

"It was somewhat surprising that known issues were still present in these popular apps," Karel Dhondt, one of the researchers, told TechCrunch. While this technique doesn't reveal the exact GPS coordinates of the victim, "I'd say 2 meters is close enough to pinpoint the user," Dhondt said. The good news is that all the apps that had these issues, and that the researchers reached out to, have now changed how distance filters work and are not vulnerable to the oracle trilateration technique. The fix, according to the researchers, was to round up the exact coordinates by three decimals, making them less precise and accurate.

Open Source

A New White House Report Embraces Open-Source AI (zdnet.com) 15

An anonymous reader quotes a report from ZDNet: According to a new statement, the White House realizes open source is key to artificial intelligence (AI) development -- much like many businesses using the technology. On Tuesday, the National Telecommunications and Information Administration (NTIA) issued a report supporting open-source and open models to promote innovation in AI while emphasizing the need for vigilant risk monitoring. The report recommends that the US continue to support AI openness while working on new capabilities to monitor potential AI risks but refrain from restricting the availability of open model weights.
Privacy

Meta To Pay Record $1.4 Billion To Settle Texas Facial Recognition Suit (texastribune.org) 43

Meta will pay Texas $1.4 billion to settle a lawsuit alleging the company used personal biometric data without user consent, marking the largest privacy-related settlement ever obtained by a state. The Texas Tribune reports: The 2022 lawsuit, filed by Texas Attorney General Ken Paxton in state court, alleged that Meta had been using facial recognition software on photos uploaded to Facebook without Texans' consent. The settlement will be paid over five years. The attorney general's office did not say whether the money from the settlement would go into the state's general fund or if it would be distributed in some other way. The settlement, announced Tuesday, is not an admission of guilt, and Meta maintains it did nothing wrong. This was the first lawsuit Paxton's office argued under a 2009 state law that protects Texans' biometric data, like fingerprints and facial scans. The law requires businesses to inform and get consent from individuals before collecting such data. It also limits sharing this data, except in certain cases like helping law enforcement or completing financial transactions. Businesses must protect this data and destroy it within a year after it's no longer needed.

In 2011, Meta introduced a feature known as Tag Suggestions to make it easier for users to tag people in their photos. According to Paxton's office, the feature was turned on by default and ran facial recognition on users' photos, automatically capturing data protected by the 2009 law. That system was discontinued in 2021, with Meta saying it deleted over 1 billion people's individual facial recognition data. As part of the settlement, Meta must notify the attorney general's office of anticipated or ongoing activities that may fall under the state's biometric data laws. If Texas objects, the parties have 60 days to attempt to resolve the issue. Meta officials said the settlement will make it easier for the company to discuss the implications and requirements of the state's biometric data laws with the attorney general's office, adding that data protection and privacy are core priorities for the firm.

Government

Senate Passes the Kids Online Safety Act (theverge.com) 84

An anonymous reader quotes a report from The Verge: The Senate passed the Kids Online Safety Act (KOSA) and the Children and Teens' Online Privacy Protection Act (also known as COPPA 2.0), the first major internet bills meant to protect children to reach that milestone in two decades. A legislative vehicle that included both KOSA and COPPA 2.0 passed 91-3. Senate Majority Leader Chuck Schumer (D-NY) called it "a momentous day" in a speech ahead of the vote, saying that "the Senate keeps its promise to every parent who's lost a child because of the risks of social media." He called for the House to pass the bills "as soon as they can."

KOSA is a landmark piece of legislation that a persistent group of parent advocates played a key role in pushing forward -- meeting with lawmakers, showing up at hearings with tech CEOs, and bringing along photos of their children, who, in many cases, died by suicide after experiencing cyberbullying or other harms from social media. These parents say that a bill like KOSA could have saved their own children from suffering and hope it will do the same for other children. The bill works by creating a duty of care for online platforms that are used by minors, requiring they take "reasonable" measures in how they design their products to mitigate a list of harms, including online bullying, sexual exploitation, drug promotion, and eating disorders. It specifies that the bill doesn't prevent platforms from letting minors search for any specific content or providing resources to mitigate any of the listed harms, "including evidence-informed information and clinical resources."
The legislation faces significant opposition from digital rights, free speech, and LGBTQ+ advocates who fear it could lead to censorship and privacy issues. Critics argue that the duty of care may result in aggressive content filtering and mandatory age verification, potentially blocking important educational and lifesaving content.

The bill may also face legal challenges from tech platforms citing First Amendment violations.
Privacy

HealthEquity Data Breach Affects 4.3 Million People (techcrunch.com) 16

HealthEquity is notifying 4.3 million people following a March data breach that exposed their personal and protected health information. From a report: In its data breach notice, filed with Maine's attorney general, the Utah-based healthcare benefits administrator said that although the compromised data varies by person, it largely consists of sign-up information for accounts and information about benefits that the company administers.

HealthEquity said the data may include customer names, addresses, phone numbers, Social Security numbers, information about each person's employer and dependents (if any), and some payment card information. HealthEquity provides employees at companies across the United States access to workplace benefits, like health savings accounts and commuter options for public transit and parking. In its February earnings report, HealthEquity said it had more than 15 million total customer accounts.

The Internet

Low-Income Homes Drop Internet Service After Congress Kills Discount Program (arstechnica.com) 240

An anonymous reader quotes a report from Ars Technica: The death of the US government's Affordable Connectivity Program (ACP) is starting to result in disconnection of Internet service for Americans with low incomes. On Friday, Charter Communications reported a net loss of 154,000 Internet subscribers that it said was mostly driven by customers canceling after losing the federal discount. About 100,000 of those subscribers were reportedly getting the discount, which in some cases made Internet service free to the consumer. The $30 monthly broadband discounts provided by the ACP ended in May after Congress failed to allocate more funding. The Biden administration requested (PDF) $6 billion to fund the ACP through December 2024, but Republicans called the program "wasteful."

Republican lawmakers' main complaint was that most of the ACP money went to households that already had broadband before the subsidy was created. FCC Chairwoman Jessica Rosenworcel warned that killing the discounts would reduce Internet access, saying (PDF) an FCC survey found that 77 percent of participating households would change their plan or drop Internet service entirely once the discounts expired. Charter's Q2 2024 earnings report provides some of the first evidence of users dropping Internet service after losing the discount. "Second quarter residential Internet customers decreased by 154,000, largely driven by the end of the FCC's Affordable Connectivity Program subsidies in the second quarter, compared to an increase of 70,000 during the second quarter of 2023," Charter said.

Across all ISPs, there were 23 million US households enrolled in the ACP. Research released in January 2024 found that Charter was serving over 4 million ACP recipients and that up to 300,000 of those Charter customers would be "at risk" of dropping Internet service if the discounts expired. Given that ACP recipients must meet low-income eligibility requirements, losing the discounts could put a strain on their overall finances even if they choose to keep paying for Internet service. [...] Light Reading reported that Charter attributed about 100,000 of the 154,000 customer losses to the ACP shutdown. Charter said it retained most of its ACP subscribers so far, but that low-income households might not be able to continue paying for Internet service without a new subsidy for much longer.

AI

From Sci-Fi To State Law: California's Plan To Prevent AI Catastrophe (arstechnica.com) 39

An anonymous reader quotes a report from Ars Technica: California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall "safety" of large artificial intelligence models. But critics are concerned that the bill's overblown focus on existential threats by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today. SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to "safety incidents."

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of "critical harms" that an AI system might enable. That includes harms leading to "mass casualties or at least $500 million of damage," such as "the creation or use of a chemical, biological, radiological, or nuclear weapon" (hello, Skynet?) or "precise instructions for conducting a cyberattack... on critical infrastructure." The bill also alludes to "other grave harms to public safety and security that are of comparable severity" to those laid out explicitly. An AI model's creator can't be held liable for harm caused through the sharing of "publicly accessible" information from outside the model -- simply asking an LLM to summarize The Anarchist Cookbook probably wouldn't put it in violation of the law, for instance. Instead, the bill seems most concerned with future AIs that could come up with "novel threats to public safety and security." More than a human using an AI to brainstorm harmful ideas, SB-1047 focuses on the idea of an AI "autonomously engaging in behavior other than at the request of a user" while acting "with limited human oversight, intervention, or supervision."

To prevent this straight-out-of-science-fiction eventuality, anyone training a sufficiently large model must "implement the capability to promptly enact a full shutdown" and have policies in place for when such a shutdown would be enacted, among other precautions and tests. The bill also focuses at points on AI actions that would require "intent, recklessness, or gross negligence" if performed by a human, suggesting a degree of agency that does not exist in today's large language models.
The bill's supporters include AI experts Geoffrey Hinton and Yoshua Bengio, who believe the bill is a necessary precaution against potential catastrophic AI risks.

Bill critics include tech policy expert Nirit Weiss-Blatt and AI community voice Daniel Jeffries. They argue that the bill is based on science fiction fears and could harm technological advancement. Ars Technica contributor Timothy Lee and Meta's Yann LeCun say that the bill's regulations could hinder "open weight" AI models and innovation in AI research.

Instead, some experts suggest a better approach would be to focus on regulating harmful AI applications rather than the technology itself -- for example, outlawing nonconsensual deepfake pornography and improving AI safety research.
Crime

Burglars are Jamming Wi-Fi Security Cameras (pcworld.com) 92

An anonymous reader shared this report from PC World: According to a tweet sent out by the Los Angeles Police Department's Wilshire division (spotted by Tom's Hardware), a small band of burglars is using Wi-Fi jamming devices to nullify wireless security cameras before breaking and entering.

The thieves seem to be well above the level of your typical smash-and-grab job. They have lookout teams, they enter through the second story, and they go for small, high-value items like jewelry and designer purses. Wireless signal jammers are illegal in the United States. Wireless bands are tightly regulated and the FCC doesn't allow any consumer device to intentionally disrupt radio waves from other devices. Similar laws are in place in most other countries. But signal jammers are electronically simple and relatively easy to build or buy from less-than-scrupulous sources.

The police division went on to recommend tagging valuable items like a vehicle or purse with Apple AirTags — and "talk to your Wi-Fi provider about hard-wiring your burglar alarm system."

And among their other suggestions: Don't post on social media that you're going on vacation...
Crime

29 Felony Charges Filed Over 'Swat' Calls Made By an 11-Year-Old (cnn.com) 121

Law enforcement officials have identified the culprit behind "more than 20 bomb or shooting threats to schools and other places," reports CNN.

It was an 11-year-old boy: Investigators tracked the calls to a home in Henrico County, Virginia, just outside Richmond. Local deputies searched the home this month, and the 11-year-old boy who lived there admitted to placing the Florida swatting calls, as well as a threat made to the Maryland State House, authorities said. Investigators later determined that the boy also made swatting calls in Nebraska, Kansas, Alabama, Tennessee and Alaska. The boy faces 29 felony counts and 14 misdemeanors, officials said. He's being held in a Virginia juvenile detention facility while Florida officials arrange for his extradition...

A 13-year-old boy was arrested in Florida in May, several days after the initial call, for making a copycat threat to Buddy Taylor Middle School, officials said.

The Courts

Courts Close the Loophole Letting the Feds Search Your Phone At the Border (reason.com) 46

On Wednesday, Judge Nina Morrison ruled that cellphone searches at the border are "nonroutine" and require probable cause and a warrant, likening them to more invasive searches due to their heavy privacy impact. As reported by Reason, this decision closes the loophole in the Fourth Amendment's protection against unreasonable searches and seizures, which Customs and Border Protection (CBP) agents have exploited. Courts have previously ruled that the government has the right to conduct routine warrantless searches for contraband at the border. From the report: Although the interests of stopping contraband are "undoubtedly served when the government searches the luggage or pockets of a person crossing the border carrying objects that can only be introduced to this country by being physically moved across its borders, the extent to which those interests are served when the government searches data stored on a person's cell phone is far less clear," the judge declared. Morrison noted that "reviewing the information in a person's cell phone is the best approximation government officials have for mindreading," so searching through cellphone data has an even heavier privacy impact than rummaging through physical possessions. Therefore, the court ruled, a cellphone search at the border requires both probable cause and a warrant. Morrison did not distinguish between scanning a phone's contents with special software and manually flipping through it.

And in a victory for journalists, the judge specifically acknowledged the First Amendment implications of cellphone searches too. She cited reporting by The Intercept and VICE about CBP searching journalists' cellphones "based on these journalists' ongoing coverage of politically sensitive issues" and warned that those phone searches could put confidential sources at risk. Wednesday's ruling adds to a stream of cases restricting the feds' ability to search travelers' electronics. The 4th and 9th Circuits, which cover the mid-Atlantic and Western states, have ruled that border police need at least "reasonable suspicion" of a crime to search cellphones. Last year, a judge in the Southern District of New York also ruled (PDF) that the government "may not copy and search an American citizen's cell phone at the border without a warrant absent exigent circumstances."

Privacy

Data From Deleted GitHub Repos May Not Actually Be Deleted, Researchers Claim (theregister.com) 23

Thomas Claburn reports via The Register: Researchers at Truffle Security have found, or arguably rediscovered, that data from deleted GitHub repositories (public or private) and from deleted copies (forks) of repositories isn't necessarily deleted. Joe Leon, a security researcher with the outfit, said in an advisory on Wednesday that being able to access deleted repo data -- such as API keys -- represents a security risk. And he proposed a new term to describe the alleged vulnerability: Cross Fork Object Reference (CFOR). "A CFOR vulnerability occurs when one repository fork can access sensitive data from another fork (including data from private and deleted forks)," Leon explained.

For example, the firm showed how one can fork a repository, commit data to it, delete the fork, and then access the supposedly deleted commit data via the original repository. The researchers also created a repo, forked it, and showed how data not synced with the fork continues to be accessible through the fork after the original repo is deleted. You can watch that particular demo [here].
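
To make the scenario concrete, here's a minimal sketch against GitHub's public REST API (the owner, repository name, and commit SHA below are hypothetical). A 200 response would mean a commit that existed only in a now-deleted fork is still being served through the surviving upstream repository:

```python
import requests

# Hypothetical names: the surviving upstream repository and the SHA of a
# commit that was pushed only to a fork that has since been deleted.
UPSTREAM = "example-org/example-repo"
FORK_ONLY_SHA = "0123456789abcdef0123456789abcdef01234567"

resp = requests.get(
    f"https://api.github.com/repos/{UPSTREAM}/commits/{FORK_ONLY_SHA}",
    headers={"Accept": "application/vnd.github+json"},
)
if resp.status_code == 200:
    # The commit is still reachable through the upstream repo even though
    # the fork it came from is gone -- the behavior Truffle Security calls
    # a Cross Fork Object Reference (CFOR).
    print(resp.json()["commit"]["message"])
else:
    print("Commit not reachable:", resp.status_code)
```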

According to Leon, this scenario came up last week with the submission of a critical vulnerability report to a major technology company involving a private key for an employee GitHub account that had broad access across the organization. The key had been publicly committed to a GitHub repository. Upon learning of the blunder, the tech biz nuked the repo thinking that would take care of the leak. "They immediately deleted the repository, but since it had been forked, I could still access the commit containing the sensitive data via a fork, despite the fork never syncing with the original 'upstream' repository," Leon explained. Leon added that after reviewing three widely forked public repos from large AI companies, Truffle Security researchers found 40 valid API keys from deleted forks.
GitHub said it considers this situation a feature, not a bug: "GitHub is committed to investigating reported security issues. We are aware of this report and have validated that this is expected and documented behavior inherent to how fork networks work. You can read more about how deleting or changing visibility affects repository forks in our [documentation]."

Truffle Security argues that GitHub should reconsider its position "because the average user expects there to be a distinction between public and private repos in terms of data security, which isn't always true," reports The Register. "And there's also the expectation that the act of deletion should remove commit data, which again has been shown to not always be the case."
Transportation

Automakers Sold Driver Data For Pennies, Senators Say (jalopnik.com) 58

An anonymous reader quotes a report from the New York Times: If you drive a car made by General Motors and it has an internet connection, your car's movements and exact location are being collected and shared anonymously with a data broker. This practice, disclosed in a letter (PDF) sent by Senators Ron Wyden of Oregon and Edward J. Markey of Massachusetts to the Federal Trade Commission on Friday, is yet another way in which automakers are tracking drivers (source may be paywalled; alternative source), often without their knowledge. Previous reporting in The New York Times, which the letter cited, revealed how automakers including G.M., Honda and Hyundai collected information about drivers' behavior, such as how often they slammed on the brakes, accelerated rapidly and exceeded the speed limit. It was then sold to the insurance industry, which used it to help gauge individual drivers' riskiness.

The two Democratic senators, both known for privacy advocacy, zeroed in on G.M., Honda and Hyundai because all three had made deals, The Times reported, with Verisk, an analytics company that sold the data to insurers. In the letter, the senators urged the F.T.C.'s chairwoman, Lina Khan, to investigate how the auto industry collects and shares customers' data. One of the surprising findings of an investigation by Mr. Wyden's office was just how little the automakers made from selling driving data. According to the letter, Verisk paid Honda $25,920 over four years for information about 97,000 cars, or 26 cents per car. Hyundai was paid just over $1 million, or 61 cents per car, over six years. G.M. would not reveal how much it had been paid, Mr. Wyden's office said. People familiar with G.M.'s program previously told The Times that driving behavior data had been shared from more than eight million cars, with the company making an amount in the low millions of dollars from the sale. G.M. also previously shared data with LexisNexis Risk Solutions.
"Companies should not be selling Americans' data without their consent, period," the letter from Senators Wyden and Markey stated. "But it is particularly insulting for automakers that are selling cars for tens of thousands of dollars to then squeeze out a few additional pennies of profit with consumers' private data."
The Courts

California Supreme Court Upholds Gig Worker Law In a Win For Ride-Hail Companies (politico.com) 73

In a major victory for ride-hail companies, the California Supreme Court upheld a law classifying gig workers as independent contractors, maintaining their ineligibility for benefits such as sick leave and workers' compensation. This decision concludes a prolonged legal battle and supports the 2020 ballot measure Proposition 22, despite opposition from labor groups who argued it was unconstitutional. Politico reports: Thursday's ruling capped a yearslong battle between labor and the companies over the status of workers who are dispatched by apps to deliver food, buy groceries and transport customers. A 2018 Supreme Court ruling and a follow-up bill would have compelled the gig companies to treat those workers as employees. A collection of five firms then spent more than $200 million to escape that mandate by passing the 2020 ballot measure Proposition 22 in one of the most expensive political campaigns in American history. The unanimous ruling on Thursday now upholds the status quo of the gig economy in California.

As independent contractors, gig workers are not entitled to benefits like sick leave, overtime and workers' compensation. The SEIU union and four gig workers ultimately challenged Prop 22 based specifically on its conflict with the Legislature's power to administer workers' compensation. The law, which passed with 58 percent of the vote in 2020, makes gig workers ineligible for workers' comp, which opponents of Prop 22 argued rendered the entire law unconstitutional. [...] Beyond the implications for gig workers, the heavily funded Prop 22 ballot campaign pushed the limits of what could be spent on an initiative, ultimately becoming the most expensive measure in California history. Uber and Lyft have both threatened to leave any state that requires their drivers to be classified as employees. The decision Thursday closes the door to that possibility for California.
