Privacy

Dashlane Publishes Its Source Code To GitHub In Transparency Push (techcrunch.com) 8

Password management company Dashlane has made its mobile app code available on GitHub for public perusal, a first step, it says, in a broader push to make its platform more transparent. TechCrunch reports: The Dashlane Android app code is available now alongside the iOS incarnation, though the release also appears to include the codebase for its Apple Watch and Mac apps, which Dashlane hasn't specifically announced. The company said that it eventually plans to make the code for its web extension available on GitHub too. Initially, Dashlane said that it was planning to make its codebase "fully open source," but in response to a handful of questions posed by TechCrunch, it appears that won't in fact be the case.

At first, the code will be open for auditing purposes only, but in the future it may start accepting contributions too -- however, there is no suggestion that it will go all-in and allow the public to fork or otherwise re-use the code in their own applications. Dashlane has released the code under a Creative Commons Attribution-NonCommercial 4.0 license, which technically means that users are allowed to copy, share and build upon the codebase so long as it's for non-commercial purposes. However, the company said that it has stripped out some key elements from its release, effectively hamstringing what third-party developers are able to do with the code. [...]

"The main benefit of making this code public is that anyone can audit the code and understand how we build the Dashlane mobile application," the company wrote. "Customers and the curious can also explore the algorithms and logic behind password management software in general. In addition, business customers, or those who may be interested, can better meet compliance requirements by being able to review our code." On top of that, the company says that a benefit of releasing its code is to perhaps draw-in technical talent, who can inspect the code prior to an interview and perhaps share some ideas on how things could be improved. Moreover, so-called "white-hat hackers" will now be better equipped to earn bug bounties. "Transparency and trust are part of our company values, and we strive to reflect those values in everything we do," Dashlane continued. "We hope that being transparent about our code base will increase the trust customers have in our product."

AI

Judge Uses ChatGPT To Make Court Decision (vice.com) 59

An anonymous reader quotes a report from Motherboard: A judge in Colombia used ChatGPT to make a court ruling, in what is apparently the first time a legal decision has been made with the help of an AI text generator -- or at least, the first time we know about it. Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document (PDF) dated January 30, 2023.

"The arguments for this decision will be determined in line with the use of artificial intelligence (AI)," Garcia wrote in the decision, which was translated from Spanish. "Accordingly, we entered parts of the legal questions posed in these proceedings." "The purpose of including these AI-produced texts is in no way to replace the judge's decision," he added. "What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI."

The case involved a dispute with a health insurance company over whether an autistic child should receive coverage for medical treatment. According to the court document, the legal questions entered into the AI tool included "Is an autistic minor exonerated from paying fees for their therapies?" and "Has the jurisprudence of the constitutional court made favorable decisions in similar cases?" Garcia included the chatbot's full responses in the decision, apparently marking the first time a judge has admitted to doing so. The judge also included his own insights into applicable legal precedents, and said the AI was used to "extend the arguments of the adopted decision." After detailing the exchanges with the AI, the judge adopted its responses, together with his own legal arguments, as grounds for his decision.

AI

Replika, a 'Virtual Friendship' AI Chatbot, Hit With Data Ban in Italy Over Child Safety (techcrunch.com) 5

An anonymous reader shares a report: San Francisco-based AI chatbot maker Replika -- which operates a freemium 'virtual friendship' service based on customizable digital avatars whose "personalized" responses are powered by artificial intelligence (and designed, per its pitch, to make human users feel better) -- has been ordered by Italy's privacy watchdog to stop processing local users' data. The Garante said it's concerned Replika's chatbot technology poses risks to minors -- and also that the company lacks a proper legal basis for processing children's data under the EU's data protection rules.

Additionally, the regulator is worried about the risk the AI chatbots could pose to emotionally vulnerable people. It's also accusing Luka, the developer behind the Replika app, of failing to fulfil regional legal requirements to clearly convey how it's using people's data. The order to stop processing Italians' data is effective immediately. In a press release announcing its intervention, the watchdog said: "The AI-powered chatbot, which generates a 'virtual friend' using text and video interfaces, will not be able to process [the] personal data of Italian users for the time being. A provisional limitation on data processing was imposed by the Italian Garante on the U.S.-based company that has developed and operates the app; the limitation will take effect immediately."

Crime

Former Ubiquiti Employee Pleads Guilty To Attempted Extortion Scheme (theverge.com) 15

A former employee of network technology provider Ubiquiti pleaded guilty to multiple felony charges after posing as an anonymous hacker in an attempt to extort almost $2 million worth of cryptocurrency while employed at the company. From a report: Nickolas Sharp, 37, worked as a senior developer for Ubiquiti between 2018 and 2021 and took advantage of his authorized access to Ubiquiti's network to steal gigabytes worth of files from the company during an orchestrated security breach in December 2020.

Prosecutors said that Sharp used the Surfshark VPN service to hide his home IP address and intentionally damaged Ubiquiti's computer systems during the attack in an attempt to conceal his unauthorized activity. Sharp later posed as an anonymous hacker who claimed to be behind the incident while working on an internal team that was investigating the security breach. While concealing his identity, Sharp attempted to extort Ubiquiti, sending a ransom note to the company demanding 50 Bitcoin (worth around $1.9 million at that time) in exchange for returning the stolen data and disclosing the security vulnerabilities used to acquire it. When Ubiquiti refused the ransom demands, Sharp leaked some of the stolen data to the public.
The FBI searched Sharp's home around March 24th, 2021, after investigators discovered that a temporary internet outage had exposed his home IP address during the security breach.

Further reading:
Ubiquiti Files Case Against Security Blogger Krebs Over 'False Accusations';
Former Ubiquiti Dev Charged For Trying To Extort His Employer.

Encryption

Kremlin's Tracking of Russian Dissidents Through Telegram Suggests App's Encryption Has Been Compromised (wired.com) 56

Russian antiwar activists placed their faith in Telegram, a supposedly secure messaging app. How does Putin's regime seem to know their every move? From a report: Matsapulina's case [anecdote in the story] is hardly an isolated one, though it is especially unsettling. Over the past year, numerous dissidents across Russia have found their Telegram accounts seemingly monitored or compromised. Hundreds have had their Telegram activity wielded against them in criminal cases. Perhaps most disturbingly, some activists have found their "secret chats" -- Telegram's purportedly ironclad, end-to-end encrypted feature -- behaving strangely, in ways that suggest an unwelcome third party might be eavesdropping.

These cases have set off a swirl of conspiracy theories, paranoia, and speculation among dissidents, whose trust in Telegram has plummeted. In many cases, it's impossible to tell what's really happening to people's accounts -- whether spyware or Kremlin informants have been used to break in, through no particular fault of the company; whether Telegram really is cooperating with Moscow; or whether the platform is simply so insecure that it merely looks as if it is.

Facebook

Documents Show Meta Paid For Data Scraping Despite Years of Denouncing It (engadget.com) 11

An anonymous reader quotes a report from Engadget: Meta has routinely fought data scrapers, but it also participated in that practice itself -- if not necessarily for the same reasons. Bloomberg has obtained legal documents from a Meta lawsuit against a former contractor, Bright Data, indicating that the Facebook owner paid its partner to scrape other websites. Meta spokesperson Andy Stone confirmed the relationship in a discussion with Bloomberg, but said his company used Bright Data to build brand profiles, spot "harmful" sites and catch phishing campaigns, not to target competitors.

Stone added that data scraping could serve "legitimate integrity and commercial purposes" so long as it was done legally and honored sites' terms of service. Meta terminated its arrangement with Bright Data after the contractor allegedly violated company terms when gathering and selling data from Facebook and Instagram. Neither Bright Data nor Meta is saying which sites they scraped. Bright Data is countersuing Meta in a bid to keep scraping Facebook and Instagram, arguing that it only collects publicly available information and respects both European Union and US regulations.
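Stone's caveat about scraping done "legally" and honoring "sites' terms of service" has a common technical counterpart: consulting a site's robots.txt before fetching. The sketch below is a hypothetical illustration, not the tooling of Meta or Bright Data -- and robots.txt is a crawling convention, not a terms-of-service check; the URL and user-agent are made up.

```python
# A minimal robots.txt check before scraping a page -- a hypothetical
# illustration, not the tooling of any company named above.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def may_fetch(page_url: str, user_agent: str = "ExampleBot") -> bool:
    """Return True if the site's robots.txt permits fetching page_url."""
    parts = urlparse(page_url)
    robots = RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()  # fetch and parse the robots.txt file
    return robots.can_fetch(user_agent, page_url)

if __name__ == "__main__":
    print(may_fetch("https://example.com/some/page"))
```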

Security

Anker Finally Comes Clean About Its Eufy Security Cameras (theverge.com) 30

An anonymous reader quotes a report from The Verge: First, Anker told us it was impossible. Then, it covered its tracks. It repeatedly deflected while utterly ignoring our emails. So shortly before Christmas, we gave the company an ultimatum: if Anker wouldn't answer why its supposedly always-encrypted Eufy cameras were producing unencrypted streams -- among other questions -- we would publish a story about the company's lack of answers. It worked.

In a series of emails to The Verge, Anker has finally admitted its Eufy security cameras are not natively end-to-end encrypted -- they can and did produce unencrypted video streams for Eufy's web portal, like the ones we accessed from across the United States using an ordinary media player. But Anker says that's now largely fixed. Every video stream request originating from Eufy's web portal will now be end-to-end encrypted -- like they are with Eufy's app -- and the company says it's updating every single Eufy camera to use WebRTC, which is encrypted by default. Reading between the lines, though, it seems that these cameras could still produce unencrypted footage upon request.
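The Verge's test -- opening a stream "using an ordinary media player" -- is itself a rough encryption check: if a bare URL plays with no credentials or key exchange, the stream isn't end-to-end encrypted. A hypothetical sketch of that kind of probe follows; the URL is made up, and ffprobe ships with FFmpeg.

```python
# Rough probe of whether a stream URL is readable by an ordinary media
# tool with no credentials or key exchange -- the kind of check The Verge
# describes. The URL below is hypothetical.
import subprocess

def stream_is_readable(url: str) -> bool:
    """True if ffprobe can parse the stream without any credentials."""
    try:
        result = subprocess.run(
            ["ffprobe", "-v", "error", "-show_streams", url],
            capture_output=True,
            timeout=30,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

if __name__ == "__main__":
    print(stream_is_readable("https://portal.example/streams/cam123.m3u8"))
```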

That's not all Anker is disclosing today. The company has apologized for the lack of communication and promised to do better, confirming it's bringing in outside security and penetration testing companies to audit Eufy's practices, is in talks with a "leading and well-known security expert" to produce an independent report, is promising to create an official bug bounty program, and will launch a microsite in February to explain how its security works in more detail. Those independent audits and reports may be critical for Eufy to regain trust because of how the company has handled the findings of security researchers and journalists. It's a little hard to take the company at its word! But we also think Anker Eufy customers, security researchers and journalists deserve to read and weigh those words, particularly after so little initial communication from the company. That's why we're publishing Anker's full responses [here].
As highlighted by Ars Technica, some of the notable statements include:
- Its web portal now prohibits users from entering "debug mode."
- Video stream content is encrypted and inaccessible outside the portal.
- While "only 0.1 percent" of current daily users access the portal, it "had some issues," which have been resolved.
- Eufy is pushing WebRTC to all of its security devices as the end-to-end encrypted stream protocol.
- Facial recognition images were uploaded to the cloud to aid in replacing/resetting/adding doorbells with existing image sets, but this practice has been discontinued. No recognition data was included with images sent to the cloud.
- Outside of the "recent issue with the web portal," all other video uses end-to-end encryption.
- A "leading and well-known security expert" will produce a report about Eufy's systems.
- "Several new security consulting, certification, and penetration testing" firms will be brought in for risk assessment.
- A "Eufy Security bounty program" will be established.
- The company promises to "provide more timely updates in our community (and to the media!)."

Privacy

GoodRx Leaked User Health Data To Facebook and Google, FTC Says (nytimes.com) 31

An anonymous reader quotes a report from The New York Times: Millions of Americans have used GoodRx, a drug discount app, to search for lower prices on prescriptions like antidepressants, H.I.V. medications and treatments for sexually transmitted diseases at their local drugstores. But U.S. regulators say the app's coupons and convenience came at a high cost for users: wrongful disclosure of their intimate health information. On Wednesday, the Federal Trade Commission accused the app's developer, GoodRx Holdings, of sharing sensitive personal data on millions of users' prescription medications and illnesses with companies like Facebook and Google without authorization. [...]

From 2017 to 2020, GoodRx uploaded the contact information of users who had bought certain medications, like birth control or erectile dysfunction pills, to Facebook so that the drug discount app could identify its users' social media profiles, the F.T.C. said in a legal complaint. GoodRx then used the personal information to target users with ads for medications on Facebook and Instagram, the complaint said, "all of which was visible to Facebook." GoodRx also targeted users who had looked up information on sexually transmitted diseases on HeyDoctor, the company's telemedicine service, with ads for HeyDoctor's S.T.D. testing services, the complaint said. Those data disclosures, regulators said, flouted public promises the company had made to "never provide advertisers any information that reveals a personal health condition."

The company's information-sharing practices, the agency said, violated a federal rule requiring health apps and fitness trackers that collect personal health details to notify consumers of data breaches. While GoodRx agreed to settle the case, it said it disagreed with the agency's allegations and admitted no wrongdoing. The F.T.C.'s case against GoodRx could upend widespread user-profiling and ad-targeting practices in the multibillion-dollar digital health industry, and it puts companies on notice that regulators intend to curb the nearly unfettered trade in consumers' health details. [...] If a judge approves the proposed federal settlement order, GoodRx will be permanently barred from sharing users' health information for advertising purposes. To settle the case, the company also agreed to pay a $1.5 million civil penalty for violating the health breach notification rule.

Privacy

Stable Diffusion 'Memorizes' Some Images, Sparking Privacy Concerns (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica: On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion AI image synthesis models like Stable Diffusion. It challenges views that image synthesis models do not memorize their training data and that training data might remain private if not disclosed. Recently, AI image synthesis models have been the subject of intense ethical debate and even legal action. Proponents and opponents of generative AI tools regularly argue over the privacy and copyright implications of these new technologies. Adding fuel to either side of the argument could dramatically affect potential legal regulation of the technology, and as a result, this latest paper, authored by Nicholas Carlini et al., has perked up ears in AI circles.

However, Carlini's results are not as clear-cut as they may first appear. Discovering instances of memorization in Stable Diffusion required 175 million image generations for testing and preexisting knowledge of trained images. Researchers only extracted 94 direct matches and 109 perceptual near-matches out of 350,000 high-probability-of-memorization images they tested (a set of known duplicates in the 160 million-image dataset used to train Stable Diffusion), resulting in a roughly 0.03 percent memorization rate in this particular scenario. Also, the researchers note that the "memorization" they've discovered is approximate since the AI model cannot produce identical byte-for-byte copies of the training images. By definition, Stable Diffusion cannot memorize large amounts of data because the size of the 160 million-image training dataset is many orders of magnitude larger than the 2GB Stable Diffusion AI model. That means any memorization that exists in the model is small, rare, and very difficult to accidentally extract.
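Those figures are easy to sanity-check. A back-of-envelope pass over the numbers quoted above (plain arithmetic, not the paper's methodology):

```python
# Sanity-checking the quoted figures (numbers from the report above).
direct, near, tested = 94, 109, 350_000
print(f"direct matches: {direct / tested:.4%}")              # ~0.0269%, i.e. "roughly 0.03 percent"
print(f"with near-matches: {(direct + near) / tested:.4%}")  # ~0.058%

# The capacity argument: ~2 GB of weights spread over ~160 million
# training images leaves only about a dozen bytes per image if all
# of them were stored verbatim.
print(f"{2 * 1024**3 / 160_000_000:.1f} bytes per image")    # ~13.4
```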

Still, even when present in very small quantities, the paper appears to show that approximate memorization in latent diffusion models does exist, and that could have implications for data privacy and copyright. The results may one day affect potential image synthesis regulation if the AI models become considered "lossy databases" that can reproduce training data, as one AI pundit speculated. Although considering the 0.03 percent hit rate, they would have to be considered very, very lossy databases -- perhaps to a statistically insignificant degree. [...] Eric Wallace, one of the paper's authors, shared some personal thoughts on the research in a Twitter thread. As stated in the paper, he suggested that AI model-makers should de-duplicate their data to reduce memorization. He also noted that Stable Diffusion's model is small relative to its training set, so larger diffusion models are likely to memorize more. And he advised against applying today's diffusion models to privacy-sensitive domains like medical imagery.

Crime

'Pig-Butchering' Scam Apps Sneak Into Apple's App Store and Google Play (arstechnica.com) 44

In the past year, a new term has arisen to describe an online scam raking in millions, if not billions, of dollars per year. It's called "pig butchering," and now even Apple is getting fooled into participating. From a report: Researchers from security firm Sophos said on Wednesday that they uncovered two apps available in the App Store that were part of an elaborate network of tools used to dupe people into putting large sums of money into fake investment scams. At least one of those apps also made it into Google Play, but that market is notorious for the number of malicious apps that bypass Google vetting. Sophos said this was the first time it had seen such apps in the App Store and that a previous app identified in these types of scams was a legitimate one that was later exploited by bad actors.

Pig butchering relies on a rich combination of apps, websites, web hosts, and humans -- in some cases human trafficking victims -- to build trust with a mark over a period of weeks or months, often under the guise of a romantic interest, financial adviser, or successful investor. Eventually, the online discussion will turn to investments, usually involving cryptocurrency, that the scammer claims to have earned huge sums of money from. The scammer then invites the victim to participate. Once a mark deposits money, the scammers will initially allow them to make withdrawals. The scammers eventually lock the account and claim they need a deposit of as much as 20 percent of their balance to get it back. Even when the deposit is paid, the money isn't returned, and the scammers invent new reasons the victim should send more money. The pig-butchering term derives from a farmer fattening up a hog months before it's butchered.

Facebook

Hacker Finds Bug That Allowed Anyone To Bypass Facebook 2FA (techcrunch.com) 13

An anonymous reader quotes a report from TechCrunch: A bug in a new centralized system that Meta created for users to manage their logins for Facebook and Instagram could have allowed malicious hackers to switch off an account's two-factor protections just by knowing the account holder's phone number. Gtm Manoz, a security researcher from Nepal, realized that Meta did not set a limit on attempts when a user entered the two-factor code used to log into their accounts on the new Meta Accounts Center, which helps users link all their Meta accounts, such as Facebook and Instagram.

With a victim's phone number, an attacker would go to the centralized accounts center, enter the phone number of the victim, link that number to their own Facebook account, and then brute force the two-factor SMS code. This was the key step, because there was no upper limit on the number of attempts someone could make. Once the attacker got the code right, the victim's phone number became linked to the attacker's Facebook account. A successful attack would still result in Meta sending a message to the victim, saying their two-factor protection had been disabled because their phone number had been linked to someone else's account.
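The missing control here is ordinary attempt limiting: a six-digit SMS code has only a million possible values, so unlimited guessing guarantees a hit. Below is a minimal sketch of the kind of server-side counter whose absence enabled the attack -- hypothetical, not Meta's code.

```python
# Minimal server-side 2FA attempt limiting -- hypothetical, not Meta's
# code. A 6-digit code has 10**6 possibilities, so without a cap an
# attacker who can guess freely will eventually succeed.
import secrets

MAX_ATTEMPTS = 5
_codes: dict[str, str] = {}     # phone number -> issued code
_attempts: dict[str, int] = {}  # phone number -> failed guesses so far

def issue_code(phone: str) -> None:
    _codes[phone] = f"{secrets.randbelow(10**6):06d}"
    _attempts[phone] = 0

def verify_code(phone: str, guess: str) -> bool:
    if _attempts.get(phone, 0) >= MAX_ATTEMPTS:
        raise PermissionError("too many attempts; request a new code")
    if secrets.compare_digest(_codes.get(phone, ""), guess):
        return True
    _attempts[phone] = _attempts.get(phone, 0) + 1
    return False
```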

Manoz found the bug in the Meta Accounts Center last year, and reported it to the company in mid-September. Meta fixed the bug a few days later, and paid Manoz $27,200 for reporting it. Meta spokesperson Gabby Curtis told TechCrunch that at the time of the bug the login system was still at the stage of a small public test. Curtis also said that Meta's investigation after the bug was reported found no evidence of exploitation in the wild and no spike in usage of that particular feature, suggesting no one was abusing it.

The Internet

Massive Yandex Code Leak Reveals Russian Search Engine's Ranking Factors (arstechnica.com) 24

An anonymous reader quotes a report from Ars Technica: Nearly 45GB of source code files, allegedly stolen by a former employee, have revealed the underpinnings of Russian tech giant Yandex's many apps and services. It also revealed key ranking factors for Yandex's search engine, the kind almost never revealed in public. [...] While it's not clear whether there are security or structural implications of Yandex's source code revelation, the leak of 1,922 ranking factors in Yandex's search algorithm is certainly making waves. SEO consultant Martin MacDonald described the hack on Twitter as "probably the most interesting thing to have happened in SEO in years" (as noted by Search Engine Land). In a thread detailing some of the more notable factors, researcher Alex Buraks suggests that "there is a lot of useful information for Google SEO as well."

Yandex, the fourth-ranked search engine by volume, purportedly employs several ex-Google employees. Yandex tracks many of Google's ranking factors, identifiable in its code, and competes heavily with Google. Google's Russian division recently filed for bankruptcy after losing its bank accounts and payment services. Buraks notes that the first factor in Yandex's list of ranking factors is "PAGE_RANK," which is seemingly tied to the foundational algorithm created by Google's co-founders.
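For readers who haven't seen it, the algorithm behind a "PAGE_RANK" factor is compact enough to sketch. Below is textbook PageRank by power iteration -- an illustration of the published algorithm, not Yandex's (or Google's) production implementation.

```python
# Textbook PageRank via power iteration -- illustrative only, not the
# leaked Yandex code. Rank flows along links; the damping factor models
# a surfer who occasionally jumps to a random page.
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

# Tiny example: both "blog" and "about" link to "home", so it ranks highest.
print(pagerank({"home": ["blog"], "blog": ["home"], "about": ["home"]}))
```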

As detailed by Buraks (in two threads), Yandex's engine favors pages that (see the toy scoring sketch after this list):
- Aren't too old
- Have a lot of organic traffic (unique visitors) and less search-driven traffic
- Have fewer numbers and slashes in their URL
- Have optimized code and haven't been hit with "hard pessimization" (a "PR=0" penalty)
- Are hosted on reliable servers
- Happen to be Wikipedia pages or are linked from Wikipedia
- Are hosted or linked from higher-level pages on a domain
- Have keywords in their URL (up to three)
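The article doesn't reproduce the leak's actual weights, so the following is purely illustrative: a linear combination of feature values, just to show how binary and scaled factors like those above could roll up into a single score. Every factor name and weight here is hypothetical.

```python
# Purely illustrative factor scoring -- hypothetical names and weights,
# not values from the leaked Yandex code.
HYPOTHETICAL_WEIGHTS = {
    "page_rank": 3.0,
    "organic_traffic_share": 2.0,  # organic vs. search-driven visits
    "url_simplicity": 1.0,         # fewer numbers/slashes -> closer to 1
    "wikipedia_linked": 1.5,
    "url_keyword_count": 0.5,      # capped at three per the leaked factor
}

def score(features: dict[str, float]) -> float:
    return sum(weight * features.get(name, 0.0)
               for name, weight in HYPOTHETICAL_WEIGHTS.items())

print(score({"page_rank": 0.8, "organic_traffic_share": 0.6,
             "url_simplicity": 1.0, "wikipedia_linked": 1.0,
             "url_keyword_count": 2.0}))
```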

Security

JD Sports Admits Intruder Accessed 10 Million Customers' Data (theregister.com) 6

Sports fashion retailer JD Sports has confirmed miscreants broke into a system that contained data on a whopping 10 million customers, though no full payment card data was in the mix. The Register reports: In a post to investors this morning, the London Stock Exchange-listed business said the intrusion related to infrastructure that housed data for online orders from sub-brands including JD, Size?, Millets, Blacks, Scotts and MilletSport between November 2018 and October 2020. The data accessed consisted of customer name, billing address, delivery address, phone number, order details and the final four digits of payment cards "of approximately 10 million unique customers." The company does "not hold full payment card details" and said that it has "no reason to believe that account passwords were accessed."

As is customary in such incidents, JD Sports has contacted the relevant authorities, such as the Information Commissioner's Office, and says it has enlisted the help of "leading cyber security experts." The chain has stores across Europe, with some operating in the United States and Canada. It also operates other brands, including Go Outdoors and Shoe Palace.
"We want to apologize to those customers who may have been affected by this incident," said Neil Greenhalgh, chief financial officer at JD Sports. "We are advising them to be vigilant about potential scam emails, calls and texts and providing details on now to report these."

He added: "We are continuing with a full review of our cyber security in partnership with external specialists following this incident. Protecting the data of our customers is an absolute priority for JD Sports."

Biotech

A Drug Company Made $114 Billion Gaming America's Patent System (msn.com) 92

The New York Times looks at AbbVie's anti-inflammatory drug Humira and the company's "savvy but legal exploitation of the U.S. patent system." Though AbbVie's patent was supposed to expire in 2016, since then it's maintained a monopoly that generated $114 billion in revenue by using "a formidable wall of intellectual property protection and suing would-be competitors before settling with them to delay their product launches until this year." AbbVie did not invent these patent-prolonging strategies; companies like Bristol Myers Squibb and AstraZeneca have deployed similar tactics to maximize profits on drugs for the treatment of cancer, anxiety and heartburn. But AbbVie's success with Humira stands out even in an industry adept at manipulating the U.S. intellectual-property regime.... AbbVie and its affiliates have applied for 311 patents, of which 165 have been granted, related to Humira, according to the Initiative for Medicines, Access and Knowledge, which tracks drug patents. A vast majority were filed after Humira was on the market.

Some of Humira's patents covered innovations that benefited patients, like a formulation of the drug that reduced the pain from injections. But many of them simply elaborated on previous patents. For example, an early Humira patent, which expired in 2016, claimed that the drug could treat a condition known as ankylosing spondylitis, a type of arthritis that causes inflammation in the joints, among other diseases. In 2014, AbbVie applied for another patent for a method of treating ankylosing spondylitis with a specific dosing of 40 milligrams of Humira. The application was approved, adding 11 years of patent protection beyond 2016.

AbbVie has been aggressive about suing rivals that have tried to introduce biosimilar versions of Humira. In 2016, with Amgen's copycat product on the verge of winning regulatory approval, AbbVie sued Amgen, alleging that it was violating 10 of its patents. Amgen argued that most of AbbVie's patents were invalid, but the two sides reached a settlement in which Amgen agreed not to begin selling its drug until 2023.

Over the next five years, AbbVie reached similar settlements with nine other manufacturers seeking to launch their own versions of Humira. All of them agreed to delay their market entry until 2023.

A drug pricing expert at Washington University in St. Louis tells the New York Times that AbbVie and its strategy with Humira "showed other companies what it was possible to do."

But the article concludes that last year such tactics "became a rallying cry" for U.S. lawmakers "as they successfully pushed for Medicare to have greater control over the price of widely used drugs that, like Humira, have been on the market for many years but still lack competition."

Advertising

How to Handle Web Sites Asking for Your Email Address (seattletimes.com) 117

When you share your email, "you're sharing a lot more," warns the New York Times' lead consumer technology writer: [I]t can be linked to other data, including where you went to school, the make and model of the car you drive, and your ethnicity....

For many years, the digital ad industry has compiled a profile on you based on the sites you visit on the web.... An email could contain your first and last name, and assuming you've used it for some time, data brokers have already compiled a comprehensive profile on your interests based on your browsing activity. A website or an app can upload your email address into an ad broker's database to match your identity with a profile containing enough insights to serve you targeted ads.
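The matching described above typically works on a hash of the normalized address, which is what makes one long-lived email an effective cross-site join key -- and why the per-site aliases discussed below break the linkage. A hypothetical sketch:

```python
# Why one long-lived email address links profiles across sites: ad
# platforms commonly match on a hash of the normalized address. All
# addresses here are hypothetical.
import hashlib

def match_key(email: str) -> str:
    """Normalize, then hash -- a typical cross-site matching key."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Same address given to two sites -> identical keys -> profiles merge.
print(match_key("Jane.Doe@example.com") == match_key(" jane.doe@example.com"))  # True

# A unique alias per site -> different keys -> nothing to join on.
print(match_key("shop-x1y2@relay.example") == match_key("news-a3b4@relay.example"))  # False
```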

The article recommends creating several email addresses to "make it hard for ad tech companies to compile a profile based on your email handle... Apple and Mozilla offer tools that automatically create email aliases for logging in to an app or a site; emails sent to the aliases are forwarded to your real email address." Apple's Hide My Email tool, which is part of its iCloud+ subscription service that costs 99 cents a month, will create aliases, but using it will make it more difficult to log in to the accounts from a non-Apple device. Mozilla's Firefox Relay will generate five email aliases at no cost; beyond that, the program charges 99 cents a month for additional aliases.

For sites using the UID 2.0 framework for ad targeting, you can opt out by entering your email address [or phone number] at https://transparentadvertising.org.

Security

Security Researchers Breached Server of Russia's 'Black Basta' Ransomware Gang (quadrantsec.com) 9

Long-time Slashdot reader Beave writes: Security researchers and practitioners at Quadrant Information Security recently found themselves in a battle with the Russian ransomware gang known as "Black Basta"... Quadrant discovered the Russian gang attempting to exfiltrate data from a network. Once a victim's data is fully exfiltrated, the gang encrypts workstations and servers and demands ransom payments from the victim in order to decrypt their data and to prevent Black Basta from releasing the exfiltrated data to the public.

Fortunately, in this case, Black Basta didn't make it that far. Instead, the security researchers used the opportunity to better understand Black Basta's "backend servers", tools, and methods. Black Basta will sometimes use a victim's network to log into their own servers, which leads to interesting opportunities to observe the gang's operations...

The first write-up goes into technical details about the malware and tactics Black Basta used. The second write-up focuses on Black Basta's "backend" servers and how they manage them.

TLDR? You can also listen to two of the security researchers discuss their findings on the latest episode of the "Breaking Badness" podcast.

The articles go into great detail -- even asking whether deleting their own exfiltrated data from the gang's server "would technically constitute a federal offense per the Computer Fraud and Abuse Act of 1986."

AI

Lawsuit Accusing Copilot of Abusing Open-Source Code Challenged by GitHub, Microsoft, OpenAI (reuters.com) 60

GitHub, Microsoft, and OpenAI "told a San Francisco federal court that a proposed class-action lawsuit for improperly monetizing open-source code to train their AI systems cannot be sustained," reports Reuters: The companies said in Thursday court filings that the complaint, filed by a group of anonymous copyright owners, did not outline their allegations specifically enough and that GitHub's Copilot system, which suggests lines of code for programmers, made fair use of the source code. A spokesperson for GitHub, an online platform for housing code, said Friday that the company has "been committed to innovating responsibly with Copilot from the start" and that its motion is "a testament to our belief in the work we've done to achieve that...."

Microsoft and OpenAI said Thursday that the plaintiffs lacked standing to bring the case because they failed to argue they suffered specific injuries from the companies' actions. The companies also said the lawsuit did not identify particular copyrighted works they misused or contracts that they breached.

Microsoft also said in its filing that the copyright allegations would "run headlong into the doctrine of fair use," which allows the unlicensed use of copyrighted works in some situations. The companies both cited a 2021 U.S. Supreme Court decision that Google's use of Oracle source code to build its Android operating system was transformative fair use.

Slashdot reader guest reader shares this excerpt from the plaintiffs' complaint: GitHub and OpenAI have offered shifting accounts of the source and amount of the code or other data used to train and operate Copilot. They have also offered shifting justifications for why a commercial AI product like Copilot should be exempt from these license requirements, often citing "fair use."

It is not fair, permitted, or justified. On the contrary, Copilot's goal is to replace a huge swath of open source by taking it and keeping it inside a GitHub-controlled paywall. It violates the licenses that open-source programmers chose and monetizes their code despite GitHub's pledge never to do so.

The Almighty Buck

Wyoming Crypto Bank Denied for Federal Reserve System Membership (apnews.com) 23

The Associated Press reports that America's Federal Reserve Board "has denied a Wyoming cryptocurrency bank's application for Federal Reserve System membership, officials announced Friday, dealing a setback to the crypto industry's attempts to build acceptance in mainstream U.S. banking." Many in crypto have been looking to Cheyenne-based Custodia Bank's more than 2-year-old application as a bellwether for crypto banking. Approval would have meant access to Federal Reserve services including its electronic payments system.

The rejection adds to doubts about crypto banking's viability, particularly in Wyoming, a state that has sought to become a hub of crypto banking, exchanges and mining....

Custodia sued the Federal Reserve Board and Federal Reserve Bank of Kansas City in Wyoming federal court last year, accusing them of taking an unreasonably long time on its application. In a statement Friday, the company said it was "surprised and disappointed" by the rejection and pledged to continue to litigate the issue.

In a statement, America's Federal Reserve Board argued that Custodia's "novel business model" and focus on crypto-assets "presented significant safety and soundness risks." "The Board has previously made clear that such crypto activities are highly likely to be inconsistent with safe and sound banking practices.

"The Board also found that Custodia's risk management framework was insufficient to address concerns regarding the heightened risks associated with its proposed crypto activities, including its ability to mitigate money laundering and terrorism financing risks."

AI

US and EU To Launch First-Of-Its-Kind AI Agreement 13

The United States and European Union on Friday announced an agreement to speed up and enhance the use of artificial intelligence to improve agriculture, healthcare, emergency response, climate forecasting and the electric grid. Reuters reports: A senior U.S. administration official, discussing the initiative shortly before the official announcement, called it the first sweeping AI agreement between the United States and Europe. Previously, agreements on the issue had been limited to specific areas such as enhancing privacy, the official said. AI modeling, which refers to machine-learning algorithms that use data to make logical decisions, could be used to improve the speed and efficiency of government operations and services.

"The magic here is in building joint models (while) leaving data where it is," the senior administration official said. "The U.S. data stays in the U.S. and European data stays there, but we can build a model that talks to the European and the U.S. data because the more data and the more diverse data, the better the model." The initiative will give governments greater access to more detailed and data-rich AI models, leading to more efficient emergency responses and electric grid management, and other benefits, the administration official said. The partnership is currently between just the White House and the European Commission, the executive arm of the 27-member European Union. The senior administration official said other countries will be invited to join in the coming months.
Crime

Boeing Pleads Not Guilty To Fraud In Criminal Case Over Deadly 737 Max Crashes (npr.org) 42

An anonymous reader quotes a report from NPR: Aerospace giant Boeing entered a plea of not guilty to a criminal charge at an arraignment in federal court in Texas Thursday. The company is charged with felony fraud related to the crashes of two of its 737 Max airplanes that killed a total of 346 people. About a dozen relatives of some of those who were killed in the crashes gave emotional testimony during the three-hour arraignment hearing about how they've been affected by what they call "the deadliest corporate crime in U.S. history." They testified after Boeing's chief aerospace safety officer Mike Delaney entered a plea of not guilty on behalf of the airplane manufacturer to the charge of conspiracy to commit fraud. The company is accused of deceiving and misleading federal regulators about the safety of a critical automated flight control system that investigators found played a major role in causing the crashes in Indonesia in 2018 and in Ethiopia in 2019.

Boeing and the Justice Department had entered into a deferred prosecution agreement to settle the charge two years ago, but many of the families of the crash victims objected to the agreement, saying that they were not consulted about what they called a "secret, sweetheart deal." Under the terms of the agreement, Boeing admitted to defrauding the FAA by concealing safety problems with the 737 Max, but pinned much of the blame on two technical pilots who, the company says, misled regulators while working on the certification of the aircraft. Only one of those pilots was prosecuted, and a jury acquitted him at trial last year. Boeing also agreed to pay $2.5 billion, including $1.7 billion in compensation to airlines that had purchased 737 Max planes but could not use them while the plane was grounded for 20 months after the second crash. The company also agreed to pay $500 million in compensation to the families of those killed in the two Max crashes, and to pay a $243 million fine. The agreement also required Boeing to make significant changes to its safety policies and procedures, as well as to its corporate culture, which many insiders say shifted in recent years from a safety-first focus to one that critics say puts profits first.

After three years, if the aerospace giant and defense contractor lived up to the terms of the deferred prosecution agreement, the criminal charge against Boeing would be dismissed and the company would be immune from further prosecution. But last fall, U.S. District Court Judge Reed O'Connor agreed that under the Crime Victims' Rights Act, the relatives' rights had been violated and they should have been consulted before the DOJ and Boeing reached the agreement. Last week, he ordered Boeing to appear Thursday to be arraigned. On Thursday, the families asked Judge O'Connor to impose certain conditions of release on Boeing, including the appointment of an independent monitor to oversee Boeing's compliance with the terms of the previous deferred prosecution agreement, and a requirement that the company's compliance efforts "be made public to the fullest extent possible." O'Connor has not yet ruled on those conditions, as Boeing and the Justice Department opposed the request. But he did impose a standard condition that Boeing commit no new crimes.
