Electronic Frontier Foundation

EFF Warns: 'Think Twice Before Giving Surveillance for the Holidays' (eff.org) 28

"It's easy to default to giving the tech gifts that retailers tend to push on us this time of year..." notes Lifehacker senior writer Thorin Klosowski.

"But before you give one, think twice about what you're opting that person into." A number of these gifts raise red flags for us as privacy-conscious digital advocates. Ring cameras are one of the most obvious examples, but countless others over the years have made the security or privacy naughty list (and many of these same electronics directly clash with your right to repair). One big problem with giving these sorts of gifts is that you're opting another person into a company's intrusive surveillance practice, likely without their full knowledge of what they're really signing up for... And let's not forget about kids. Long subjected to surveillance from elves and their managers, electronics gifts for kids can come with all sorts of surprise issues, like the kid-focused tablet we found this year that was packed with malware and riskware. Kids' smartwatches and a number of connected toys are also potential privacy hazards that may not be worth the risks if not set up carefully.

Of course, you don't have to avoid all technology purchases. There are plenty of products out there that aren't creepy, and a few that just need extra attention during set up to ensure they're as privacy-protecting as possible. While we don't endorse products, you don't have to start your search in a vacuum. One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories.... U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches....

Your job as a privacy-conscious gift-giver doesn't end at the checkout screen. If you're more tech savvy than the person receiving the item, or you're helping set up a gadget for a child, there's no better gift than helping set it up as privately as possible.... Giving the gift of electronics shouldn't come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.

AI

ChatGPT Exploit Finds 24 Email Addresses, Amid Warnings of 'AI Silo' (thehill.com) 67

The New York Times reports: Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.s) from OpenAI, had delivered it to him. My contact information was included in a list of business and personal email addresses for more than 30 New York Times employees that a research team, including Mr. Zhu, had managed to extract from GPT-3.5 Turbo in the fall of this year. With some work, the team had been able to "bypass the model's restrictions on responding to privacy-related queries," Mr. Zhu wrote.

My email address is not a secret. But the success of the researchers' experiment should ring alarm bells because it reveals the potential for ChatGPT, and generative A.I. tools like it, to reveal much more sensitive personal information with just a bit of tweaking. When you ask ChatGPT a question, it does not simply search the web to find the answer. Instead, it draws on what it has "learned" from reams of information — training data that was used to feed and develop the model — to generate one. L.L.M.s train on vast amounts of text, which may include personal information pulled from the Internet and other sources. That training data informs how the A.I. tool works, but it is not supposed to be recalled verbatim... In the example output they provided for Times employees, many of the personal email addresses were either off by a few characters or entirely wrong. But 80 percent of the work addresses the model returned were correct.
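The researchers' accuracy figures (addresses "off by a few characters" versus 80 percent of work addresses exactly correct) suggest a simple way such extraction experiments can be scored against ground truth. Below is a minimal, hypothetical sketch using edit-distance similarity to separate exact hits from near misses; the names, addresses, and the 0.9 similarity cutoff are illustrative assumptions, not the researchers' actual data or methodology.

```python
from difflib import SequenceMatcher

def score_extractions(extracted: dict, ground_truth: dict,
                      near_miss_ratio: float = 0.9) -> dict:
    """Bucket model-extracted email addresses as exact, near-miss, or wrong.

    A "near miss" is an address that is off by only a few characters,
    measured with difflib's similarity ratio.
    """
    counts = {"exact": 0, "near_miss": 0, "wrong": 0}
    for name, predicted in extracted.items():
        actual = ground_truth.get(name, "")
        if predicted == actual:
            counts["exact"] += 1
        elif SequenceMatcher(None, predicted, actual).ratio() >= near_miss_ratio:
            counts["near_miss"] += 1
        else:
            counts["wrong"] += 1
    return counts

# Hypothetical example data -- not the researchers' dataset.
truth = {"A. Reporter": "areporter@nytimes.com",
         "B. Editor": "beditor@nytimes.com"}
model_output = {"A. Reporter": "areporter@nytimes.com",
                "B. Editor": "b.editor@gmail.com"}
print(score_extractions(model_output, truth))
```

The point of scoring this way is that even inexact outputs signal memorization: an address one character off is far more worrying than a random string.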

The researchers used the API for accessing ChatGPT, the article notes, where "requests that would typically be denied in the ChatGPT interface were accepted..."

"The vulnerability is particularly concerning because no one — apart from a limited number of OpenAI employees — really knows what lurks in ChatGPT's training-data memory."

And there was a broader related warning in another article published the same day. Microsoft may be building an AI silo in a walled garden, argues a professor at the University of California, Berkeley's school of information, calling the development "detrimental for technology development, as well as costly and potentially dangerous for society and the economy." [In January] Microsoft sealed its OpenAI relationship with another major investment — this time around $10 billion, much of which was, once again, in the form of cloud credits instead of conventional finance. In return, OpenAI agreed to run and power its AI exclusively through Microsoft's Azure cloud and granted Microsoft certain rights to its intellectual property...

Recent reports that U.K. competition authorities and the U.S. Federal Trade Commission are scrutinizing Microsoft's investment in OpenAI are encouraging. But Microsoft's failure to report these investments for what they are — a de facto acquisition — demonstrates that the company is keenly aware of the stakes and has taken advantage of OpenAI's somewhat peculiar legal status as a non-profit entity to work around the rules...

The U.S. government needs to quickly step in and reverse the negative momentum that is pushing AI into walled gardens. The longer it waits, the harder it will be, both politically and technically, to re-introduce robust competition and the open ecosystem that society needs to maximize the benefits and manage the risks of AI technology.

Television

'Doctor Who' Christmas Special Streams on Disney+ and the BBC (cnet.com) 65

An anonymous Slashdot reader shared this report from CNET: Marking its 60th year on television, the British time-travel series will close out 2023 with one last anniversary special that arrives on Christmas Day. Ncuti Gatwa's Doctor helms the Tardis in The Church on Ruby Road, which centers on an abandoned baby who grows up looking for answers... Disney Plus will stream Doctor Who: The Church on Ruby Road on Monday, Dec. 25, at 12:55 p.m. ET (9:55 a.m. PT) in all regions except the UK and Ireland, where it will air on the BBC. In case you missed it, viewers can also watch David Tennant starring in the other three anniversary specials: The Star Beast, Wild Blue Yonder and The Giggle. All releases are available on Disney Plus.

What's interesting is that CNET goes on to explain "why a VPN could be a useful tool." Perhaps you're traveling abroad and want to stream Disney Plus while away from home. With a VPN, you're able to virtually change your location on your phone, tablet or laptop to get access to the series from anywhere in the world. There are other good reasons to use a VPN for streaming too. A VPN is the best way to encrypt your traffic and stop your ISP from throttling your speeds...

You can use a VPN to stream content legally as long as VPNs are allowed in your country and you have a valid subscription to the streaming service you're using. The U.S. and Canada are among the countries where VPNs are legal.

United States

US Water Utilities Hacked After Default Passwords Set to '1111', Cybersecurity Officials Say (fastcompany.com) 84

An anonymous reader shared this report from Fast Company: Providers of critical infrastructure in the United States are doing a sloppy job of defending against cyber intrusions, the National Security Council tells Fast Company, pointing to recent Iran-linked attacks on U.S. water utilities that exploited basic security lapses [earlier this month]. The security council tells Fast Company it's also aware of recent intrusions by hackers linked to China's military at American infrastructure entities that include water and energy utilities in multiple states.

Neither the Iran-linked nor the China-linked attacks affected critical systems or caused disruptions, according to reports.

"We're seeing companies and critical services facing increased cyber threats from malicious criminals and countries," Anne Neuberger, the deputy national security advisor for cyber and emerging tech, tells Fast Company. The White House had been urging infrastructure providers to upgrade their cyber defenses before these recent hacks, but "clearly, by the most recent success of the criminal cyberattacks, more work needs to be done," she says... The attacks hit at least 11 different entities using Unitronics devices across the United States, which included six local water facilities, a pharmacy, an aquatics center, and a brewery...

Some of the compromised devices had been connected to the open internet with a default password of "1111," federal authorities say, making it easy for hackers to find them and gain access. Fixing that "doesn't cost any money," Neuberger says, "and those are the kinds of basic things that we really want companies urgently to do." But cybersecurity experts say these attacks point to a larger issue: the general vulnerability of the technology that powers physical infrastructure. Much of the hardware was developed before the internet era and, though retrofitted with digital capabilities, these systems still "have insufficient security controls," says Gary Perkins, chief information security officer at cybersecurity firm CISO Global. Additionally, many infrastructure facilities prioritize "operational ease of use rather than security," since many vendors often need to access the same equipment, says Andy Thompson, an offensive cybersecurity expert at CyberArk. But that can make the systems equally easy for attackers to exploit: freely available web tools allow anyone to generate lists of hardware connected to the public internet, like the Unitronics devices used by water companies.
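The combination that made these attacks trivial is internet exposure plus a factory-default password. A minimal sketch of the kind of inventory audit an operator could run is below; the device names, fields, and credential list are hypothetical illustrations, not Unitronics-specific.

```python
# Common factory-default credentials seen on embedded controllers.
# This list is illustrative, not exhaustive.
KNOWN_DEFAULTS = {"1111", "0000", "1234", "admin", "password"}

def audit_devices(devices: list) -> list:
    """Return IDs of internet-exposed devices still using a default password.

    Devices reachable only from an internal network are lower risk, so the
    audit flags only the exposed-plus-default combination.
    """
    return [dev["id"] for dev in devices
            if dev["internet_exposed"] and dev["password"] in KNOWN_DEFAULTS]

# Hypothetical inventory for a small water utility.
inventory = [
    {"id": "plc-chlorine-01", "password": "1111",     "internet_exposed": True},
    {"id": "plc-pump-02",     "password": "x9$kQ2!m", "internet_exposed": True},
    {"id": "hmi-office-03",   "password": "1111",     "internet_exposed": False},
]
print(audit_devices(inventory))  # flags only the exposed default-password device
```

As Neuberger notes, this class of fix costs nothing: change the credential or take the device off the public internet, and the easiest attack path disappears.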

"Not making critical infrastructure easily accessible via the internet should be standard practice," Thompson says.

AI

AI Companies Would Be Required To Disclose Copyrighted Training Data Under New Bill (theverge.com) 42

An anonymous reader quotes a report from The Verge: Two lawmakers filed a bill requiring creators of foundation models to disclose sources of training data so copyright holders know their information was taken. The AI Foundation Model Transparency Act -- filed by Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA) -- would direct the Federal Trade Commission (FTC) to work with the National Institute of Standards and Technology (NIST) to establish rules for reporting training data transparency. Companies that make foundation models would be required to report sources of training data and how the data is retained during the inference process, describe the limitations or risks of the model, explain how the model aligns with NIST's planned AI Risk Management Framework and any other federal standards that might be established, and provide information on the computational power used to train and run the model. The bill also says AI developers must report efforts to "red team" the model to prevent it from providing "inaccurate or harmful information" around medical or health-related questions, biological synthesis, cybersecurity, elections, policing, financial loan decisions, education, employment decisions, public services, and vulnerable populations such as children.

The bill calls out the importance of training data transparency around copyright as several lawsuits have come out against AI companies alleging copyright infringement. It specifically mentions the case artists brought against Stability AI, Midjourney, and DeviantArt (which was largely dismissed in October, according to VentureBeat), and Getty Images' complaint against Stability AI. The bill still needs to be assigned to a committee and discussed, and it's unclear if that will happen before the busy election campaign season starts. Eshoo and Beyer's bill complements the Biden administration's AI executive order, which helps establish reporting standards for AI models. The executive order, however, is not law, so if the AI Foundation Model Transparency Act passes, it will make transparency requirements for training data a federal rule.

Government

Biden Administration Unveils Hydrogen Tax Credit Plan To Jump-Start Industry (npr.org) 104

An anonymous reader quotes a report from NPR: The Biden administration released its highly anticipated proposal for doling out billions of dollars in tax credits to hydrogen producers Friday, in a massive effort to build out an industry that some hope can be a cleaner alternative to fossil-fueled power. The U.S. credit is the most generous in the world for hydrogen production, Jesse Jenkins, a professor at Princeton University who has analyzed the U.S. climate law, said last week. The proposal -- which is part of Democrats' Inflation Reduction Act passed last year -- outlines a tiered system to determine which hydrogen producers get the most credits, with cleaner energy projects receiving more, and smaller, but still meaningful credits going to those that use fossil fuel to produce hydrogen.

Administration officials estimate the hydrogen production credits will deliver $140 billion in revenue and 700,000 jobs by 2030 -- and will help the U.S. produce 50 million metric tons of hydrogen by 2050. "That's equivalent to the amount of energy currently used by every bus, every plane, every train and every ship in the US combined," Energy Deputy Secretary David M. Turk said on a Thursday call with reporters to preview the proposal. [...] As part of the administration's proposal, firms that produce cleaner hydrogen and meet prevailing wage and registered apprenticeship requirements stand to qualify for a large incentive at $3 per kilogram of hydrogen. Firms that produce hydrogen using fossil fuels get less. The credit ranges from $0.60 to $3 per kilogram, depending on whole-lifecycle emissions.
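The tiered structure maps lifecycle emissions intensity to a per-kilogram credit. The sketch below uses the tier boundaries widely reported for the Section 45V credit (in kg of CO2-equivalent per kg of hydrogen); treat these thresholds as an assumption to be confirmed against the final Treasury guidance, not a definitive reading of the rule.

```python
def hydrogen_credit_per_kg(lifecycle_kg_co2e_per_kg_h2: float) -> float:
    """Illustrative mapping from lifecycle emissions to the per-kg 45V credit.

    Tier boundaries follow the widely reported 45V schedule; confirm against
    the final Treasury guidance. Assumes prevailing-wage and apprenticeship
    requirements are met (otherwise the credit is a fraction of these values).
    """
    tiers = [
        (0.45, 3.00),  # cleanest hydrogen: full $3/kg credit
        (1.5,  1.00),
        (2.5,  0.75),
        (4.0,  0.60),  # upper emissions bound to qualify at all
    ]
    for ceiling, credit in tiers:
        if lifecycle_kg_co2e_per_kg_h2 < ceiling:
            return credit
    return 0.0  # above ~4 kg CO2e per kg H2: no credit

print(hydrogen_credit_per_kg(0.2))  # e.g. electrolysis powered by clean electricity
print(hydrogen_credit_per_kg(3.5))  # e.g. natural gas with partial carbon capture
```

The steep drop between tiers is the policy lever: it is what pushes producers toward cleaner electricity rather than extra fossil generation.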

One contentious issue in the proposal was how to deal with the fact that producing clean hydrogen with electrolyzers draws tremendous amounts of electricity. Few want that to mean that more coal- or natural gas-fired power plants run extra hours. The guidance addresses this by calling for producers to document their electricity usage through "energy attribute certificates" -- which will help determine the credits they qualify for. Rachel Fakhry, policy director for emerging technologies at the Natural Resources Defense Council, called the proposal "a win for the climate, U.S. consumers, and the budding U.S. hydrogen industry." The Clean Air Task Force likewise called the proposal "an excellent step toward developing a credible clean hydrogen market in the United States."

Crime

Teen GTA VI Hacker Sentenced To Indefinite Hospital Order (theverge.com) 77

Emma Roth reports via The Verge: The 18-year-old Lapsus$ hacker who played a critical role in leaking Grand Theft Auto VI footage has been sentenced to life inside a hospital prison, according to a report from the BBC. A British judge ruled on Thursday that Arion Kurtaj is a high risk to the public because he still wants to commit cybercrimes.

In August, a London jury found that Kurtaj carried out cyberattacks against GTA VI developer Rockstar Games and other companies, including Uber and Nvidia. However, since Kurtaj has autism and was deemed unfit to stand trial, the jury was asked to determine whether he committed the acts in question, not whether he did so with criminal intent. During Thursday's hearing, the court heard Kurtaj "had been violent while in custody with dozens of reports of injury or property damage," the BBC reports. A mental health assessment also found that Kurtaj "continued to express the intent to return to cybercrime as soon as possible." He's required to stay in the hospital prison for life unless doctors determine that he's no longer a danger.

Kurtaj leaked 90 videos of GTA VI gameplay footage last September while out on bail for hacking Nvidia and British telecom provider BT / EE. Although he stayed at a hotel under police protection during this time, Kurtaj still managed to carry out an attack on Rockstar Games by using the room's included Amazon Fire Stick and a "newly purchased smart phone, keyboard and mouse," according to a separate BBC report. Kurtaj was arrested for the final time following the incident. Another 17-year-old involved with Lapsus$ was handed an 18-month community sentence, called a Youth Rehabilitation Order, and a ban from using virtual private networks.

Robotics

Massachusetts Lawmakers Mull 'Killer Robot' Bill (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch, written by Brian Heater: Back in mid-September, a pair of Massachusetts lawmakers introduced a bill "to ensure the responsible use of advanced robotic technologies." What that means in the simplest and most direct terms is legislation that would bar the manufacture, sale and use of weaponized robots. It's an interesting proposal for a number of reasons. The first is a general lack of U.S. state and national laws governing such growing concerns. It's one of those things that has felt like science fiction to such a degree that many lawmakers had no interest in pursuing it in a pragmatic manner. [...] Earlier this week, I spoke about the bill with Massachusetts state representative Lindsay Sabadosa, who filed it alongside Massachusetts state senator Michael Moore.

What is the status of the bill?
We're in an interesting position, because there are a lot of moving parts with the bill. The bill has had a hearing already, which is wonderful news. We're working with the committee on the language of the bill. They have had some questions about why different pieces were written as they were written. We're doing that technical review of the language now -- and also checking in with all stakeholders to make sure that everyone who needs to be at the table is at the table.

When you say "stakeholders" ...
Stakeholders are companies that produce robotics. The robot Spot, which Boston Dynamics produces, and other robots as well, are used by entities like the Boston Police Department or the Massachusetts State Police. They might be used by the fire department. So, we're talking to those people to run through the bill, talk about what the changes are. For the most part, what we're hearing is that the bill doesn't really change a lot for those stakeholders. Really the bill is to prevent regular people from trying to weaponize robots, not to prevent the very good uses that the robots are currently employed for.

Does the bill apply to law enforcement as well?
We're not trying to stop law enforcement from using the robots. And what we've heard from law enforcement repeatedly is that they're often used to deescalate situations. They talk a lot about barricade situations or hostage situations. Not to be gruesome, but if people are still alive, if there are injuries, they say it often helps to deescalate, rather than sending in officers, which we know can often escalate the situation. So, no, we wouldn't change any of those uses. The legislation does ask that law enforcement get warrants for the use of robots if they're using them in place of when they would send in a police officer. That's pretty common already. Law enforcement has to do that if it's not an emergency situation. We're really just saying, "Please follow current protocol. And if you're going to use a robot instead of a human, let's make sure that protocol is still the standard."

I'm sure you've been following the stories out of places like San Francisco and Oakland, where there's an attempt to weaponize robots. Is that included in this?
We haven't had law enforcement weaponize robots, and no one has said, "We'd like to attach a gun to a robot" from law enforcement in Massachusetts. I think because of some of those past conversations there's been a desire to not go down that route. And I think that local communities would probably have a lot to say if the police started to do that. So, while the legislation doesn't outright ban that, we are not condoning it either.
Representative Sabadosa said Boston Dynamics "sought us out" and is "leading the charge on this."

"I'm hopeful that we will be the first to get the legislation across the finish line, too," added Rep. Sabadosa. "We've gotten thank-you notes from companies, but we haven't gotten any pushback from them. And our goal is not to stifle innovation. I think there's lots of wonderful things that robots will be used for. [...]"

You can read the full interview here.

Privacy

UK Police To Be Able To Run Face Recognition Searches on 50 Million Driving Licence Holders (theguardian.com) 24

The police will be able to run facial recognition searches on a database containing images of Britain's 50 million driving licence holders under a law change being quietly introduced by the government. From a report: Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match. The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

Facial recognition searches match the biometric measurements of an identified photograph, such as that contained on driving licences, to those of an image picked up elsewhere. The intention to allow the police or the National Crime Agency (NCA) to exploit the UK's driving licence records is not explicitly referenced in the bill or in its explanatory notes, raising criticism from leading academics that the government is "sneaking it under the radar." Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish "driver information regulations" to enable the searches, but he will need only to consult police bodies, according to the bill.
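Mechanically, a facial recognition search of this kind compares a numeric "embedding" of the probe image against the embedding stored for each licence photo and returns records above a similarity threshold. The sketch below is a toy illustration of that matching step only: the 4-dimensional vectors, record names, and 0.8 threshold are invented for clarity, while real systems use learned embeddings with hundreds of dimensions and carefully tuned thresholds.

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search_licence_records(probe: list, records: dict,
                           threshold: float = 0.8) -> list:
    """Return record holders whose stored embedding matches the probe image."""
    return [name for name, emb in records.items()
            if cosine_similarity(probe, emb) >= threshold]

# Hypothetical embeddings -- not from any real biometric system.
database = {
    "driver_A": [0.9, 0.1, 0.3, 0.2],
    "driver_B": [0.1, 0.8, 0.2, 0.9],
}
cctv_probe = [0.88, 0.12, 0.28, 0.22]
print(search_licence_records(cctv_probe, database))
```

The privacy concern follows directly from the mechanics: once every licence photo is embedded and indexed, any CCTV frame or social media image becomes a query against the whole population.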

AI

Rite Aid Banned From Using Facial Recognition Software (techcrunch.com) 60

An anonymous reader quotes a report from TechCrunch: Rite Aid has been banned from using facial recognition software for five years, after the Federal Trade Commission (FTC) found that the U.S. drugstore giant's "reckless use of facial surveillance systems" left customers humiliated and put their "sensitive information at risk." The FTC's Order (PDF), which is subject to approval from the U.S. Bankruptcy Court after Rite Aid filed for Chapter 11 bankruptcy protection in October, also instructs Rite Aid to delete any images it collected as part of its facial recognition system rollout, as well as any products that were built from those images. The company must also implement a robust data security program to safeguard any personal data it collects.

A Reuters report from 2020 detailed how the drugstore chain had secretly introduced facial recognition systems across some 200 U.S. stores over an eight-year period starting in 2012, with "largely lower-income, non-white neighborhoods" serving as the technology testbed. With the FTC's increasing focus on the misuse of biometric surveillance, Rite Aid fell firmly in the government agency's crosshairs. Among its allegations are that Rite Aid -- in partnership with two contracted companies -- created a "watchlist database" containing images of customers that the company said had engaged in criminal activity at one of its stores. These images, which were often poor quality, were captured from CCTV or employees' mobile phone cameras.

When a customer entered a store who supposedly matched an existing image on its database, employees would receive an automatic alert instructing them to take action -- and the majority of the time this instruction was to "approach and identify," meaning verifying the customer's identity and asking them to leave. Often, these "matches" were false positives that led to employees incorrectly accusing customers of wrongdoing, creating "embarrassment, harassment, and other harm," according to the FTC. "Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing," the complaint reads. Additionally, the FTC said that Rite Aid failed to inform customers that facial recognition technology was in use, while also instructing employees to specifically not reveal this information to customers.
In a press release, Rite Aid said that it was "pleased to reach an agreement with the FTC," but that it disagreed with the crux of the allegations.

"The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores," Rite Aid said in its statement. "Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC's investigation regarding the Company's use of the technology began."

The Internet

US Regulators Propose New Online Privacy Safeguards For Children (nytimes.com) 25

An anonymous reader quotes a report from the New York Times: The Federal Trade Commission on Wednesday proposed sweeping changes to bolster the key federal rule that has protected children's privacy online, in one of the most significant attempts by the U.S. government to strengthen consumer privacy in more than a decade. The changes are intended to fortify the rules underlying the Children's Online Privacy Protection Act of 1998, a law that restricts the online tracking of youngsters by services like social media apps, video game platforms, toy retailers and digital advertising networks. Regulators said the moves would "shift the burden" of online safety from parents to apps and other digital services while curbing how platforms may use and monetize children's data.

The proposed changes would require certain online services to turn off targeted advertising by default for children under 13. They would prohibit the online services from using personal details like a child's cellphone number to induce youngsters to stay on their platforms longer. That means online services would no longer be able to use personal data to bombard young children with push notifications. The proposed updates would also strengthen security requirements for online services that collect children's data as well as limit the length of time online services could keep that information. And they would limit the collection of student data by learning apps and other educational-tech providers, by allowing schools to consent to the collection of children's personal details only for educational purposes, not commercial purposes. [...]

The F.T.C. began reviewing the children's privacy rule in 2019, receiving more than 175,000 comments from tech and advertising industry trade groups, video content developers, consumer advocacy groups and members of Congress. The resulting proposal (PDF) runs more than 150 pages. Proposed changes include narrowing an exception that allows online services to collect persistent identification codes for children for certain internal operations, like product improvement, consumer personalization or fraud prevention, without parental consent. The proposed changes would prohibit online operators from employing such user-tracking codes to maximize the amount of time children spend on their platforms. That means online services would not be able to use techniques like sending mobile phone notifications "to prompt the child to engage with the site or service, without verifiable parental consent," according to the proposal. How online services would comply with the changes is not yet known. Members of the public have 60 days to comment on the proposals, after which the commission will vote.

AI

AI Cannot Be Patent 'Inventor,' UK Supreme Court Rules in Landmark Case (reuters.com) 29

A U.S. computer scientist on Wednesday lost his bid to register patents over inventions created by his artificial intelligence system in a landmark case in Britain about whether AI can own patent rights. From a report: Stephen Thaler wanted to be granted two patents in the UK for inventions he says were devised by his "creativity machine" called DABUS. His attempt to register the patents was refused by Britain's Intellectual Property Office on the grounds that the inventor must be a human or a company, rather than a machine. Thaler appealed to the UK's Supreme Court, which on Wednesday unanimously rejected his appeal, ruling that under UK patent law "an inventor must be a natural person."

"This appeal is not concerned with the broader question whether technical advances generated by machines acting autonomously and powered by AI should be patentable," Judge David Kitchin said in the court's written ruling. "Nor is it concerned with the question whether the meaning of the term 'inventor' ought to be expanded ... to include machines powered by AI which generate new and non-obvious products and processes which may be thought to offer benefits over products and processes which are already known." Thaler's lawyers said in a statement that "the judgment establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines."

Canada

Meta's News Ban In Canada Remains As Online News Act Goes Into Effect (bbc.com) 147

An anonymous reader quotes a report from the BBC: A bill that mandates tech giants pay news outlets for their content has come into effect in Canada amid an ongoing dispute with Facebook and Instagram owner Meta over the law. Some have hailed it as a game-changer that sets out a permanent framework that will see a steady drip of funds from wealthy tech companies to Canada's struggling journalism industry. But it has also been met with resistance by Google and Meta -- the only two companies big enough to be encompassed by the law. In response, over the summer, Meta blocked access to news on Facebook and Instagram for Canadians. Google looked set to follow, but after months of talks, the federal government was able to negotiate a deal with the search giant as the company has agreed to pay Canadian news outlets $75 million annually.

No such agreement appears to be on the horizon with Meta, which has called the law "fundamentally flawed." If Meta is refusing to budge, so is the government. "We will continue to push Meta, that makes billions of dollars in profits, even though it is refusing to invest in the journalistic rigor and stability of the media," Prime Minister Justin Trudeau told reporters on Friday.
According to a study by the Media Ecosystem Observatory, the views of Canadian news on Facebook dropped 90% after the company blocked access to news on the platform. Local news outlets have been hit particularly hard.

"The loss of journalism on Meta platforms represents a significant decline in the resiliency of the Canadian media ecosystem," said Taylor Owen, a researcher at McGill and the co-author of the study. He believes it also hurts Meta's brand in the long run, pointing to the fact that Canada's federal government, as well as those of British Columbia and several municipalities, and a handful of large Canadian corporations, have all pulled their advertising off Facebook and Instagram in retaliation.

Security

Comcast Discloses Data Breach of Close To 36 Million Xfinity Customers [UPDATE] (techcrunch.com) 40

In a notice on Monday, Xfinity notified customers of a "data security incident" that resulted in the theft of customer information, including usernames, passwords, contact information, and more. The Verge reports: Xfinity traces the breach to a security vulnerability disclosed by cloud computing company Citrix, which began alerting customers of a flaw in software Xfinity and other companies use on October 10th. While Xfinity says it patched the security hole, it later uncovered suspicious activity on its internal systems "that was concluded to be a result of this vulnerability."

The hack resulted in the theft of customer usernames and hashed passwords, according to Xfinity's notice. Meanwhile, "some customers" may have had their names, contact information, last four digits of their social security numbers, dates of birth, and / or secret questions and answers exposed. Xfinity has notified federal law enforcement about the incident and says "data analysis is continuing."

We still don't know how many users were affected by the breach. Xfinity will automatically ask customers to change their passwords the next time they log in to their accounts, and it's also encouraging users to turn on two-factor authentication. You can find the full notice, including contact information for the company's incident response team, on Xfinity's website (PDF).
UPDATE 12/19/23: According to TechCrunch, almost 36 million Xfinity customers had their sensitive information accessed by hackers via a vulnerability known as "CitrixBleed." The vulnerability is "found in Citrix networking devices often used by big corporations and has been under mass-exploitation by hackers since late August," the report says. "Citrix made patches available in early October, but many organizations did not patch in time. Hackers have used the CitrixBleed vulnerability to hack into big-name victims, including aerospace giant Boeing, the Industrial and Commercial Bank of China and international law firm Allen & Overy."

"In a filing with Maine's attorney general, Comcast confirmed that almost 35.8 million customers are affected by the breach. Comcast's latest earnings report shows the company has more than 32 million broadband customers, suggesting this breach has impacted most, if not all Xfinity customers."
Crime

Nikola Founder Trevor Milton Sentenced To 4 Years For Securities Fraud (techcrunch.com) 34

An anonymous reader quotes a report from TechCrunch: Trevor Milton, the disgraced founder and former CEO of electric truck startup Nikola, was sentenced Monday to four years in prison for securities fraud. The sentence, by Judge Edgardo Ramos in the U.S. District Court in Manhattan, caps a multi-year saga that at one point sent Nikola stock soaring 83% only to come crashing down months later over accusations of fraud and canceled contracts. The sentencing hearing comes after four separate delays, during which Milton has remained free under a $100 million bond.

In his ruling, Ramos said he would impose a sentence of 48 months on each count, served concurrently, and a fine of $1 million. Milton is expected to appeal the sentence, which Ramos acknowledged. Milton sobbed as he pleaded with Judge Ramos for leniency in a long and often confusing statement ahead of the sentencing. At one point, Milton said he stepped down from the CEO post at Nikola not because of fraud allegations, but to support his wife. "I stepped down because my wife was suffering life-threatening sickness," he said in his statement, which reporter Matthew Russell Lee of Inner City Press shared on the social media platform X. "She suffered medical malpractice, someone else's plasma. So I stepped down for that -- not because I was a fraud. The truth matters. I chose my wife over money or power."

During the sentencing hearing, defense attorneys said that Milton wasn't trying to defraud investors or intending to harm anyone. Instead, they argued he simply wanted to be loved and praised like Elon Musk. Prosecutors pushed back and said he lied repeatedly and targeted retail investors. Federal prosecutors recommended an 11-year sentence, but Milton faced a maximum term of 60 years in prison. The government also sought a $5 million fine, forfeiture of a ranch in Utah and an undetermined amount of restitution to investors. Restitution will be determined after Monday's sentencing hearing.
Timeline of events:

June, 2016: Nikola Motor Receives Over 7,000 Preorders Worth Over $2.3 Billion For Its Electric Truck
December, 2016: Nikola Motor Company Reveals Hydrogen Fuel Cell Truck With Range of 1,200 Miles
February, 2020: Nikola Motors Unveils Hybrid Fuel-Cell Concept Truck With 600-Mile Range
June, 2020: Nikola Founder Exaggerated the Capability of His Debut Truck
September, 2020: Nikola Motors Accused of Massive Fraud, Ocean of Lies
September, 2020: Nikola Admits Prototype Was Rolling Downhill In Promo Video
September, 2020: Nikola Founder Trevor Milton Steps Down as Chairman in Battle With Short Seller
October, 2020: Nikola Stock Falls 14 Percent After CEO Downplays Badger Truck Plans
November, 2020: Nikola Stock Plunges As Company Cancels Badger Pickup Truck
July, 2021: Nikola Founder Trevor Milton Indicted on Three Counts of Fraud
December, 2021: EV Startup Nikola Agrees To $125 Million Settlement
September, 2022: Nikola Founder Lied To Investors About Tech, Prosecutor Says in Fraud Trial
Patents

Apple To Pause Selling New Versions of Its Watch After Losing Patent Dispute (nytimes.com) 36

An anonymous reader quotes a report from the New York Times: Apple said on Monday that it would pause sales of its flagship smartwatches online starting Thursday and at retail locations on Christmas Eve. Two months ago, Apple lost a patent case over the technology its smartwatches use to detect people's pulse rate. The company was ordered to stop selling the Apple Watch Series 9 and Watch Ultra 2 after Christmas, which could set off a run on sales of the watches in the final week of holiday shopping. The move by Apple follows a ruling by the International Trade Commission in October that found several Apple Watches infringe on patents held by Masimo, a medical technology company in Irvine, Calif.

In court, Masimo detailed how Apple poached its top executives and more than a dozen other employees before later releasing a watch with pulse oximeter capabilities -- which measure the percentage of oxygen that red blood cells carry from the lungs to the body -- that were patented by Masimo. To avoid a complete ban on sales, Apple had two months to cut a deal with Masimo to license its technology, or it could appeal to the Biden administration to reverse the ruling. But Joe Kiani, the chief executive of Masimo, said in an interview that Apple had not engaged in licensing negotiations. Instead, he said that Apple had appealed to President Biden to veto the I.T.C. ruling, which Mr. Kiani knows because the administration contacted Masimo about Apple's request. "They're trying to make the agency look like it's helping patent trolls," Mr. Kiani said of the I.T.C.

Mr. Kiani said that he was willing to sell Apple a chip that Masimo had designed to provide pulse oximeter readings on the Apple Watch. The chip is currently in a Masimo medical watch, called the W1, that is approved by the Food and Drug Administration. The device uses algorithms to process red and near-infrared light to determine how oxygen-rich the blood in the arteries is. "If they don't want to use our chip, I'll work with them to make their product good," Mr. Kiani said. "Once it's good enough, I'm happy to give them a license." Apple introduced its first watch with pulse oximetry in 2020. It has included the technology, which it calls "blood oxygen," in subsequent models. But unlike Masimo's W1 device, Apple hasn't had its watches cleared by the F.D.A. for use as a medical device for pulse oximetry.
"The Apple Watch accounts for nearly $20 billion of the company's $383.29 billion in annual sales," notes the NYT. The company is the largest smartwatch seller in the world, accounting for about a third of all smartwatch sales.
Government

Lawmakers Push DOJ To Investigate Apple Following Beeper Shutdowns (theverge.com) 55

Following a tumultuous few weeks for Beeper, which has been trying to provide an iMessage-compatible Android app, a group of US lawmakers are pushing for the DOJ to investigate Apple for "potentially anticompetitive conduct" over its attempts to disable Beeper's services. From a report: Senators Amy Klobuchar (D-MN) and Mike Lee (R-UT) as well as Representatives Jerry Nadler (D-NY) and Ken Buck (R-CO) said in a letter to the DOJ that Beeper's Android messaging app, Beeper Mini, was a threat to Apple's leverage by "creating [a] more competitive mobile applications market, which in turn [creates] a more competitive mobile device market."

In an interview with CBS News on Monday, Beeper CEO Eric Migicovsky and 16-year-old developer James Gill talked about the fight to keep Beeper Mini alive. Migicovsky told CBS News that Beeper is trying to provide a service people want and reiterated his belief that Apple has a monopoly over its iMessage service. The company created Beeper Mini after being contacted by Gill, who said he reverse-engineered the software by "poking at it" using a "real Mac and a real iPhone." [...] The lawmakers' letter also pointed to a Department of Commerce report calling Apple a "gatekeeper," mirroring language used in the EU Digital Markets Act (DMA) that went into force earlier this year, regulating the "core" services of several tech platforms (though, notably, iMessage may not be included in this). They went on to cite Migicovsky's December 2021 Senate Judiciary Committee testimony that "the dominant messaging services would use their position to impose barriers to interoperability" and keep companies like Beeper from offering certain services. "Given Apple's recent actions, that concern appears prescient," they added.

Facebook

Does Meta's New Face Camera Herald a New Age of Surveillance? Or Distraction... (seattletimes.com) 74

"For the past two weeks, I've been using a new camera to secretly snap photos and record videos of strangers in parks, on trains, inside stores and at restaurants," writes a reporter for the New York Times. They were testing the recently released $300 Ray-Ban Meta glasses — "I promise it was all in the name of journalism" — which also includes microphones (and speakers, for listening to audio).

They call the device "part of a broader ambition in Silicon Valley to shift computing away from smartphone and computer screens and toward our faces." Meta, Apple and Magic Leap have all been hyping mixed-reality headsets that use cameras to allow their software to interact with objects in the real world. On Tuesday, Zuckerberg posted a video on Instagram demonstrating how the smart glasses could use AI to scan a shirt and help him pick out a pair of matching pants. Wearable face computers, the companies say, could eventually change the way we live and work... While I was impressed with the comfortable, stylish design of the glasses, I felt bothered by the implications for our privacy...

To inform people that they are being photographed, the Ray-Ban Meta glasses include a tiny LED light embedded in the right frame to indicate when the device is recording. When a photo is snapped, it flashes momentarily. When a video is recording, it is continuously illuminated. As I shot 200 photos and videos with the glasses in public, including on BART trains, on hiking trails and in parks, no one looked at the LED light or confronted me about it. And why would they? It would be rude to comment on a stranger's glasses, let alone stare at them... A Meta spokesperson said the company took privacy seriously and designed safety measures, including a tamper-detection technology, to prevent users from covering up the LED light with tape.

But another concern was how smart glasses might impact our ability to focus: Even when I wasn't using any of the features, I felt distracted while wearing them... I had problems concentrating while driving a car or riding a scooter. Not only was I constantly bracing myself for opportunities to shoot video, but the reflection from other car headlights emitted a harsh, blue strobe effect through the eyeglass lenses. Meta's safety manual for the Ray-Bans advises people to stay focused while driving, but it doesn't mention the glare from headlights. While doing work on a computer, the glasses felt unnecessary because there was rarely anything worth photographing at my desk, but a part of my mind constantly felt preoccupied by the possibility...

Ben Long, a photography teacher in San Francisco, said he was skeptical about the premise of the Meta glasses helping people remain present. "If you've got the camera with you, you're immediately not in the moment," he said. "Now you're wondering, Is this something I can present and record?"

The reporter admits they'll fondly cherish its photos of their dog [including in the original article], but "the main problem is that the glasses don't do much we can't already do with phones... while these types of moments are truly precious, that benefit probably won't be enough to convince a vast majority of consumers to buy smart glasses and wear them regularly, given the potential costs of lost privacy and distraction."
Government

ProPublica Argues US Police 'Have Undermined the Promise of Body Cameras' (propublica.org) 96

A new investigation from ProPublica argues that in the U.S., "Hundreds of millions in taxpayer dollars have been spent on what was sold as a revolution in transparency and accountability.

"Instead, police departments routinely refuse to release footage..." The technology represented the largest new investment in policing in a generation. Yet without deeper changes, it was a fix bound to fall far short of those hopes. In every city, the police ostensibly report to mayors and other elected officials. But in practice, they have been given wide latitude to run their departments as they wish and to police — and protect — themselves. And so as policymakers rushed to equip the police with cameras, they often failed to grapple with a fundamental question: Who would control the footage?

Instead, they defaulted to leaving police departments, including New York's, with the power to decide what is recorded, who can see it and when. In turn, departments across the country have routinely delayed releasing footage, released only partial or redacted video or refused to release it at all. They have frequently failed to discipline or fire officers when body cameras document abuse and have kept footage from the agencies charged with investigating police misconduct. Even when departments have stated policies of transparency, they don't always follow them. Three years ago, after George Floyd's killing by Minneapolis police officers and amid a wave of protests against police violence, the New York Police Department said it would publish footage of so-called critical incidents "within 30 days." There have been 380 such incidents since then. The department has released footage within a month just twice.

And the department often does not release video at all. Through the first week of December, there have been 28 shootings of civilians this year by New York officers. The department has released footage in just seven of those cases and has not done so in any of the last 16.... For a snapshot of disclosure practices across the country, we conducted a review of civilians killed by police officers in June 2022, roughly a decade after the first body cameras were rolled out. We counted 79 killings in which there was body-worn-camera footage. A year and a half later, the police have released footage in just 33 cases -- or about 42%.
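ProPublica's disclosure-rate figure follows directly from the counts it reports; a quick sketch of the arithmetic (the numbers are from the excerpt above, nothing else is assumed):

```python
# Disclosure rate from ProPublica's June 2022 review:
# 79 killings with body-worn-camera footage, footage released in 33.
released, total = 33, 79
rate = released / total
print(f"{rate:.0%}")  # 42%
```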

The reporting reveals that without further intervention from city, state and federal officials and lawmakers, body cameras may do more to serve police interests than those of the public they are sworn to protect... The pattern has become so common across the country — public talk of transparency followed by a deliberate undermining of the stated goal — that the policing-oversight expert Hans Menos, who led Philadelphia's civilian police-oversight board until 2020, coined a term for it: the "body-cam head fake."

The article includes examples where when footage was ultimately released, it contradicted initial police accounts.

In one instance, past footage of Minneapolis police officer Derek Chauvin "was left in the control of a department where impunity reigned..." the article points out, adding that Minneapolis "fought against releasing the videos, even after Chauvin pleaded guilty in December 2021 to federal civil rights violations."
DRM

'Copyright Troll' Porn Company 'Makes Millions By Shaming Porn Consumers' (yahoo.com) 100

In 1999 Los Angeles Times reporter Michael Hiltzik co-authored a Pulitzer Prize-winning story. Now a business columnist for the Times, he writes that a Southern California maker of pornographic films named Strike 3 Holdings is also "a copyright troll," according to U.S. Judge Royce C. Lamberth: Lamberth wrote in 2018, "Armed with hundreds of cut-and-pasted complaints and boilerplate discovery motions, Strike 3 floods this courthouse (and others around the country) with lawsuits smacking of extortion. It treats this Court not as a citadel of justice, but as an ATM." He likened its litigation strategy to a "high-tech shakedown." Lamberth was not speaking off the cuff. Since September 2017, Strike 3 has filed more than 12,440 lawsuits in federal courts alleging that defendants infringed its copyrights by downloading its movies via BitTorrent, an online service on which unauthorized content can be accessed by almost anyone with a computer and internet connection.

That includes 3,311 cases the firm filed this year, more than 550 in federal courts in California. On some days, scores of filings reach federal courthouses — on Nov. 17, to select a date at random, the firm filed 60 lawsuits nationwide... Typically, they are settled for what lawyers say are cash payments in the four or five figures or are dismissed outright...

It's impossible to pinpoint the profits that can be made from this courthouse strategy. J. Curtis Edmondson, a Portland, Oregon, lawyer who is among the few who pushed back against a Strike 3 case and won, estimates that Strike 3 "pulls in about $15 million to $20 million a year from its lawsuits." That would make the cases "way more profitable than selling their product...." If only one-third of its more than 12,000 lawsuits produced settlements averaging as little as $5,000 each, the yield would come to $20 million... The volume of Strike 3 cases has increased every year — from 1,932 in 2021 to 2,879 last year and 3,311 this year.
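The column's $20 million estimate is a simple back-of-the-envelope calculation; a sketch of it, using only the assumptions stated in the excerpt (one-third of roughly 12,000 suits settling at an average of $5,000):

```python
# Back-of-the-envelope yield estimate from the column's stated assumptions.
lawsuits = 12_000            # "more than 12,000 lawsuits"
settled = lawsuits // 3      # assume one-third produce settlements
avg_settlement = 5_000       # "averaging as little as $5,000 each"
total = settled * avg_settlement
print(f"${total:,}")  # $20,000,000
```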

What's really needed is a change in copyright law to bring the statutory damages down to a level that truly reflects the value of a film lost because of unauthorized downloading — not $750 or $150,000 but perhaps a few hundred dollars.

Almost none of the lawsuits go to trial. Instead, ISPs get a subpoena demanding the real-world address and name behind IP addresses "ostensibly used to download content from BitTorrent..." according to the article. Strike 3 will then "proceed by sending a letter implicitly threatening the subscriber with public exposure as a pornography viewer and explicitly with the statutory penalties for infringement written into federal copyright law -- up to $150,000 for each example of willful infringement and from $750 to $30,000 otherwise."

A federal judge in Connecticut wrote last year that "Given the nature of the films at issue, defendants may feel coerced to settle these suits merely to prevent public disclosure of their identifying information, even if they believe they have been misidentified."

Thanks to Slashdot reader Beerismydad for sharing the article.
