AI

Imran Khan Deploys AI Clone To Campaign From Behind Bars in Pakistan (theguardian.com) 7

AI allowed Pakistan's former prime minister Imran Khan to campaign from behind bars on Monday, with a voice clone of the opposition leader giving an impassioned speech on his behalf. From a report: Khan has been locked up since August and is being tried for leaking classified documents, allegations he says have been trumped up to stop him contesting general elections due in February. His Pakistan Tehreek-e-Insaf (PTI) party used artificial intelligence to generate a four-minute message from the 71-year-old, headlining a "virtual rally" hosted on social media overnight on Sunday into Monday despite internet disruptions that the internet monitoring group NetBlocks said were consistent with previous attempts to censor Khan.

PTI said Khan sent a shorthand script through lawyers that was fleshed out into his rhetorical style. The text was then dubbed into audio using a tool from the AI firm ElevenLabs, which boasts the ability to create a "voice clone" from existing speech samples. "My fellow Pakistanis, I would first like to praise the social media team for this historic attempt," the voice mimicking Khan said. "Maybe you all are wondering how I am doing in jail," the stilted voice adds. "Today, my determination for real freedom is very strong." The audio was broadcast at the end of a five-hour live-stream of speeches by PTI supporters on Facebook, X and YouTube, and was overlaid with historic footage of Khan and still images.

Earth

Can We Help Fight the Climate Crisis with Stand-Up Comedy? (cnn.com) 84

Bill McGuire is professor emeritus of climate hazards at University College London. He also writes on CNN that it's "essential" to laugh in the face of the climate crisis: If you don't laugh, you will cry, and that marks the beginning of a very slippery slope. As civilization faces a threat that dwarfs that of every war ever fought combined, and the outcome of the latest climate COP offers little hope, it's something we need — not only to remember — but to actively adopt as a weapon in our armoury to fight for a better future for our children and their children. They say that laughter is the best medicine, but weaponised comedy has the potential to do more than just make us feel good. Not only can it help inform and educate about global heating and the climate breakdown it is driving, but also to encourage and bolster action...

This is why ventures like "Climate Science Translated," which I took part in earlier this year, are so important. The British-based project — brainchild of ethical insurer Nick Oldridge and the climate communications outfit Utopia Bureau — teams climate scientists up with comedians, who 'translate' the science into bite-sized, funny and pretty irreverent chunks that can be understood, digested and appreciated by anyone.

You can see four of the videos on their web site. "Climate science is complicated," each video begins. "So we're translating it into human."

For example, last month Dr. Friederike Otto, senior lecturer on climate science at London's Imperial College, created a new video with comedian Nish Kumar:

Dr. Otto: Human-caused climate change is fundamentally changing the fabric of the weather as we know it. It's leading to events which we've simply never seen before.

Comedian Kumar: Translation: Weather used to be clouds. Now we've made it into a sort of Rottweiler on steroids that wants to chew everyone's head off.

Dr. Otto: The continuing increase in global average temperature is already causing higher probabilities of extreme rainfall and flash flooding, as well more intense storms, prolonged droughts, record-breaking heatwaves, and wildfires.

Kumar: Very soon climate scientists are just going to ditch their graphs and point out the window with an expression that says, "I fucking told you!"

Dr. Otto: This is not a problem just for our children and grandchildren. This is an immediate threat to all our lives.

Kumar: I don't know if you're familiar with the film The Terminator, but if someone came from the future to warn us of this threat, they'd have travelled from next Wednesday.

And three weeks ago a follow-up video came from earth systems science professor Mark Maslin of University College London, teaming up with comedian Jo Brand:

Professor Maslin: We are heading for unknown territory if we trigger tipping points — irreversible thresholds which shift our entire ecosystem into a different state.

Comedian Brand: If you liked climate crisis, you're going to love climate complete fucking collapse...

Professor Maslin: The irony is solar and wind power are now over 10 times cheaper than oil and gas. We can still prevent much of the damage, and end up in a better place for everyone.

Brand: With wind and sun power, we save money, and don't die. It's a pretty strong selling point.

Professor Maslin: Most people actually are in favor of urgent action. The reason governments are not transitioning fast enough is because the fossil fuel industry has a grip on many politicians. In fact, governments subsidize them with our taxpayer money — over $1 trillion a year, according to the IMF.

Brand: We are paying a bunch of rich dudes one trillion dollars a year to fuck up our future. I'd do it for that money. When can I start?

Each video ends with the words "All Hands On Deck Now", urging action by voting, contacting your representative, joining a local group, and protesting.

Climate hazard professor Bill McGuire writes on CNN that he hopes to see a growing movement: As Kiri Pritchard-McLean pointedly observes: "If comedians are helping scientists out, you know things aren't going well...." There is even a "Sustainable Stand-up" course that teaches comedy beginners how climate and social issues can be addressed in their shows; it has run in 11 countries.

Social Networks

Threads Plans to Interoperate With Other Platforms in the Fediverse (theverge.com) 30

An anonymous reader shared this report from the Verge: On Friday, two days after Threads finally started publicly testing ActivityPub integration, Instagram head Adam Mosseri shared a thread on Threads detailing the company's plans for its continued integration with the fediverse. Right now, it's possible to follow a few Threads accounts (including Mosseri's) from other platforms, but Meta has much bigger plans for Threads interoperability that Mosseri says will take "the better part of a year" to realize...

Mosseri says that the Threads team wants to make it so the option to follow a Threads account on other platforms is available to "all public accounts on Threads, not just a handful of testers." The Threads team wants to let replies from other platforms show up inside of Threads.

According to the article, Threads is also planning to support the ability to follow non-Threads fediverse accounts — and even taking that openness in the other direction.

"Eventually, it should also be possible to enable creators to leave Threads and take their followers with them to another app / server," Mosseri writes.

Flipboard and WordPress already support ActivityPub integration, according to NBC News, which estimates the fediverse now has 11 million users, "the vast majority of them on Mastodon."

Graphics

Vera Molnar, Pioneer of Computer Art, Dies At 99 (nytimes.com) 16

Alex Williams reports via The New York Times: Vera Molnar, a Hungarian-born artist who has been called the godmother of generative art for her pioneering digital work, which started with the hulking computers of the 1960s and evolved through the current age of NFTs, died on Dec. 7 in Paris. She was 99. Her death was announced on social media by the Pompidou Center in Paris, which is scheduled to present a major exhibition of her work in February. Ms. Molnar had lived in Paris since 1947. While her computer-aided paintings and drawings, which drew inspiration from geometric works by Piet Mondrian and Paul Klee, were eventually exhibited in major museums like the Museum of Modern Art in New York and the Los Angeles County Museum of Art, her work was not always embraced early in her career.

Ms. Molnar in fact began to employ the principles of computation in her work years before she gained access to an actual computer. In 1959, she began implementing a concept she called "Machine Imaginaire" -- imaginary machine. This analog approach involved using simple algorithms to guide the placement of lines and shapes for works that she produced by hand, on grid paper. She took her first step into the silicon age in 1968, when she got access to a computer at a university research laboratory in Paris. In the days when computers were generally reserved for scientific or military applications, it took a combination of gumption and '60s idealism for an artist to attempt to gain access to a machine that was "very complicated and expensive," she once said, adding, "They were selling calculation time in seconds." [...]

Making art on Apollo-era computers was anything but intuitive. Ms. Molnar had to learn early computer languages like Basic and Fortran and enter her data with punch cards, and she had to wait several days for the results, which were transferred to paper with a plotter printer. One early series, "Interruptions," involved a vast sea of tiny lines on a white background. As ARTNews noted in a recent obituary: "She would set up a series of straight lines, then rotate some, causing her rigorous set of marks to be thrown out of alignment. Then, to inject further chaos, she would randomly erase certain portions, resulting in blank areas amid a sea of lines." Another series, "(Des)Ordres" (1974), involved seemingly orderly patterns of concentric squares, which she tweaked to make them appear slightly disordered, as if they were vibrating.
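The procedure ARTNews describes for "Interruptions" — a grid of short strokes, each randomly tilted off the vertical, with random regions erased to inject chaos — is simple enough to sketch in a few lines. This is an illustrative reconstruction of the idea, not Molnar's original punch-card program; the parameter names and values are invented:

```python
import math
import random

def interruptions(rows=40, cols=40, seg=0.8, jitter=0.6, erase_prob=0.08, seed=7):
    """Generate (x0, y0, x1, y1) line segments in the spirit of Molnar's
    'Interruptions': a grid of near-vertical strokes, each randomly tilted,
    with some cells randomly left blank."""
    random.seed(seed)
    segments = []
    for r in range(rows):
        for c in range(cols):
            if random.random() < erase_prob:  # inject chaos: erase this cell
                continue
            # Tilt the stroke a random amount off the vertical.
            angle = math.pi / 2 + random.uniform(-jitter, jitter)
            dx = (seg / 2) * math.cos(angle)
            dy = (seg / 2) * math.sin(angle)
            segments.append((c - dx, r - dy, c + dx, r + dy))
    return segments

strokes = interruptions()
print(len(strokes))  # roughly rows * cols, minus the erased cells
```

In Molnar's day each segment would have been drawn to paper by a plotter over several days; today the same list of coordinates can be rendered instantly, which is exactly the shift to screens she later described as "like a conversation."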

Over the years, Ms. Molnar continued to explore the tensions between machine-like perfection and the chaos of life itself, as with her 1976 plotter drawing "1% of Disorder," another deconstructed pattern of concentric squares. "I love order, but I can't stand it," she told the curator Hans Ulrich Obrist. "I make mistakes, I stutter, I mix up my words." And so, she concluded, "chaos, perhaps, came from this." [...] Her career continued to expand in scope in the 1970s. She began using computers with screens, which allowed her to instantly assess the results of her codes and adjust accordingly. With screens, it was "like a conversation, like a real pictorial process," she said in a recent interview with the generative art creator and entrepreneur Erick Calderon. "You move the 'brush' and you see immediately if it suits you or not." [...] Earlier this year, she cemented her legacy in the world of blockchain with "Themes and Variations," a generative art series of more than 500 works using NFT technology that was created in collaboration with the artist and designer Martin Grasser and sold through Sotheby's. The series fetched $1.2 million in sales.

Social Networks

Twitch Rescinds Policy That Allowed 'Artistic Nudity' (engadget.com) 41

Malak Saleh reports via Engadget: Twitch has quickly taken back its policy update that permitted users to post sexual content as long as it was labeled. In another update, the company said it is not going to allow any depictions of real or fictional nudity on its streaming platform. After giving users the green light to post "artistic nudity," Twitch says some streamers created content that violated its policies.

The media streamed in response to the initial approval of sexually explicit content on Twitch was "met with community concern," according to the update. The company said, "We have decided that we went too far with this change." While a huge part of the initial decision was to allow for the "digital depiction" of artistic nudity, the company clarified that digital depictions of sexual content are a concern now that artificial intelligence can be used to create realistic images, and that it can be difficult to distinguish digitally produced imagery from real photography.

The Courts

TikTok Requires Users To 'Forever Waive' Rights To Sue Over Past Harms (arstechnica.com) 23

An anonymous reader quotes a report from Ars Technica: Some TikTok users may have skipped reviewing an update to TikTok's terms of service this summer that shakes up the process for filing a legal dispute against the app. According to The New York Times, changes that TikTok "quietly" made to its terms suggest that the popular app has spent the back half of 2023 preparing for a wave of legal battles. In July, TikTok overhauled its rules for dispute resolution, pivoting from requiring private arbitration to insisting that legal complaints be filed in either the US District Court for the Central District of California or the Superior Court of the State of California, County of Los Angeles. Legal experts told the Times this could be a way for TikTok to dodge arbitration claims filed en masse that can cost companies millions more in fees than they expected to pay through individual arbitration.

Perhaps most significantly, TikTok also added a section to its terms that mandates that all legal complaints be filed within one year of any alleged harm caused by using the app. The terms now say that TikTok users "forever waive" rights to pursue any older claims. And unlike a prior version of TikTok's terms of service archived in May 2023, users do not seem to have any options to opt out of waiving their rights. Lawyers told the Times that these changes could make it more challenging for TikTok users to pursue legal action at a time when federal agencies are heavily scrutinizing the app and complaints about certain TikTok features allegedly harming kids are mounting.

Security

Intelligence Researchers To Study Computer Code for Clues To Hackers' Identities (wsj.com) 4

Government researchers in the U.S. are studying methods to help identify hackers based on the code they use to carry out cyberattacks. From a report: The Intelligence Advanced Research Projects Activity, the lead federal research agency for the intelligence community, plans to develop technologies that could speed up investigations for identifying perpetrators of cyberattacks. "The number of attacks is increasing far more than the number of forensic experts that are available to go after these attacks," said Kristopher Reese, who is managing the research program at IARPA and holds a doctorate in computer science and engineering. The lack of forensic resources means hackers who target small organizations or companies that don't fall under critical infrastructure sectors often escape identification, he said.

Tools that are developed as part of the planned 30-month research project won't replace human analysts, who are crucial for identifying social and political dynamics that might explain why a particular hacking group targeted a victim, Reese said. But using artificial intelligence to analyze code used in cyberattacks will make investigations more efficient, he said. IARPA is accepting pitches from researchers until next month and plans to begin research next summer. [...] There hasn't been enough research into how analyzing code can reveal a hacker's identity, Reese said. Behavioral traits evident in code can reveal specific countries where hackers might be from or even the university where they were trained, he said. Some companies also have style guides outlining how employees should program, which could leave traces that indicate a person worked there, he said.
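The "behavioral traits evident in code" Reese describes — naming conventions, indentation habits, commenting style — are the raw material of code stylometry. A toy feature extractor gives the flavor of the approach; the features and the sample snippet here are illustrative only, far simpler than anything a 30-month research program would produce:

```python
import re

def style_features(source: str) -> dict:
    """Count a few stylistic habits that tend to differ between programmers."""
    lines = source.splitlines()
    code = [l for l in lines if l.strip()]  # non-blank lines only
    return {
        "snake_case_names": len(re.findall(r"\b[a-z]+(?:_[a-z0-9]+)+\b", source)),
        "camelCase_names": len(re.findall(r"\b[a-z]+(?:[A-Z][a-z0-9]+)+\b", source)),
        "tab_indented_lines": sum(1 for l in lines if l.startswith("\t")),
        "avg_line_length": round(sum(len(l) for l in code) / max(len(code), 1), 1),
        "comment_ratio": round(
            sum(1 for l in code if l.lstrip().startswith("#")) / max(len(code), 1), 2
        ),
    }

# A hypothetical snippet recovered from an attack tool:
snippet = "def load_data(path):\n\t# read the file\n\treturn open(path).read()\n"
print(style_features(snippet))
```

A real attribution system would feed hundreds of such features, plus deeper structural ones like abstract-syntax-tree shapes, into a classifier trained on code of known provenance; the corporate style guides Reese mentions matter precisely because they leave consistent fingerprints in features like these.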

Social Networks

Threads Launches In the European Union (macrumors.com) 27

Meta CEO Mark Zuckerberg announced that Threads is now available to users in the European Union. "Today we're opening Threads to more countries in Europe," wrote Zuckerberg in a post on the platform. "Welcome everyone." MacRumors reports: The move comes five months after the social media network launched in most markets around the world, but remained unavailable to EU-based users due to regulatory hurdles. [...] In addition to creating a Threads profile for posting, users in the EU can also simply browse Threads without having an Instagram account, an option likely introduced to comply with legislation surrounding online services.

The expansion into a market of 448 million people should see Threads' user numbers get a decent boost. Meta CEO Mark Zuckerberg said on a company earnings call in October that Threads now has "just under" 100 million monthly users. Since its launch earlier this year it has gained a web app, an ability to search for posts, and a post editing feature.

Businesses

FTC is Investigating Adobe Over Its Rules for Canceling Software Subscriptions (fortune.com) 18

Adobe said US regulators are probing the company's cancellation rules for software subscriptions, an issue that has long been a source of ire for customers. From a report: The company has been cooperating with the Federal Trade Commission on a civil investigation of the issue since June 2022, Adobe said Wednesday in a filing. A settlement could involve "significant monetary costs or penalties," the company said.

Users of Adobe programs including Photoshop and Premiere have long complained about the expense of canceling a subscription, which can cost more than $700 annually for individuals. Subscribers must cancel within two weeks of buying a subscription to receive a full refund; otherwise, they incur a prorated penalty. Some other digital services such as Spotify and Netflix don't charge a cancellation fee. Digital subscriptions have been a recent focus for the FTC. It proposed a rule in March that consumers must be able to cancel subscriptions as easily as they sign up for them.

"Too often, companies make it difficult to unsubscribe from a service, wasting Americans' time and money on things they may not want or need," President Joe Biden said in a social media post at the time. Adobe said the FTC alerted the company in November that commission staff say "they had the authority to enter into consent negotiations to determine if a settlement regarding their investigation of these issues could be reached. We believe our practices comply with the law and are currently engaging in discussion with FTC staff."

Youtube

More Than 15% of Teens Say They're On YouTube or TikTok 'Almost Constantly' (cnbc.com) 70

Nearly 1 in 5 teenagers in the U.S. say they use YouTube and TikTok "almost constantly," according to a Pew Research Center survey. CNBC reports: The survey showed that YouTube was the most "widely used platform" for U.S.-based teenagers, with 93% of survey respondents saying they regularly use Google's video-streaming service. Of that 93% figure, about 16% of the teenage respondents said they "almost constantly visit or use" YouTube, underscoring the video app's immense popularity with the youth market. TikTok was the second-most popular app, with 63% of teens saying they use the ByteDance-owned short-video service, followed by Snapchat and Meta's Instagram, which had 60% and 59%, respectively. About 17% of the 63% of respondents who said they use TikTok indicated they access the short-video service "almost constantly," the report noted.

Meanwhile, Facebook and Twitter, now known as X, are not as popular with U.S.-based teenagers as they were a decade ago, the Pew Research study detailed. Regarding Facebook in particular, the Pew Research authors wrote that the share of teens who use the Meta-owned social media app "has dropped from 71% in 2014-2015 to 33% today." During the same period, Meta-owned Instagram's usage has not made up the difference in share, increasing from 52% in 2014-15 to a peak of 62% last year, then dropping to 59% in 2023, according to the firm.

AI

MIT Group Releases White Papers On Governance of AI (mit.edu) 46

An anonymous reader quotes a report from MIT News: Providing a resource for U.S. policymakers, a committee of MIT leaders and scholars has released a set of policy briefs that outlines a framework for the governance of artificial intelligence. The approach includes extending current regulatory and liability approaches in pursuit of a practical way to oversee AI. The aim of the papers is to help enhance U.S. leadership in the area of artificial intelligence broadly, while limiting harm that could result from the new technologies and encouraging exploration of how AI deployment could be beneficial to society.

The main policy paper, "A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector," suggests AI tools can often be regulated by existing U.S. government entities that already oversee the relevant domains. The recommendations also underscore the importance of identifying the purpose of AI tools, which would enable regulations to fit those applications. "As a country we're already regulating a lot of relatively high-risk things and providing governance there," says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who helped steer the project, which stemmed from the work of an ad hoc MIT committee. "We're not saying that's sufficient, but let's start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach." [...]

"The framework we put together gives a concrete way of thinking about these things," says Asu Ozdaglar, the deputy dean of academics in the MIT Schwarzman College of Computing and head of MIT's Department of Electrical Engineering and Computer Science (EECS), who also helped oversee the effort. The project includes multiple additional policy papers and comes amid heightened interest in AI over last year as well as considerable new industry investment in the field. The European Union is currently trying to finalize AI regulations using its own approach, one that assigns broad levels of risk to certain types of applications. In that process, general-purpose AI technologies such as language models have become a new sticking point. Any governance effort faces the challenges of regulating both general and specific AI tools, as well as an array of potential problems including misinformation, deepfakes, surveillance, and more.
These are the key policies and approaches mentioned in the white papers:

Extension of Current Regulatory and Liability Approaches: The framework proposes extending current regulatory and liability approaches to cover AI. It suggests leveraging existing U.S. government entities that oversee relevant domains for regulating AI tools. This is seen as a practical approach, starting with areas where human activity is already being regulated and deemed high risk.

Identification of Purpose and Intent of AI Tools: The framework emphasizes the importance of AI providers defining the purpose and intent of AI applications in advance. This identification process would enable the application of relevant regulations based on the specific purpose of AI tools.

Responsibility and Accountability: The policy brief underscores the responsibility of AI providers to clearly define the purpose and intent of their tools. It also suggests establishing guardrails to prevent misuse and determining the extent of accountability for specific problems. The framework aims to identify situations where end users could reasonably be held responsible for the consequences of misusing AI tools.

Advances in Auditing of AI Tools: The policy brief calls for advances in auditing new AI tools, whether initiated by the government, user-driven, or arising from legal liability proceedings. Public standards for auditing are recommended, potentially established by a nonprofit entity or a federal entity similar to the National Institute of Standards and Technology (NIST).

Consideration of a Self-Regulatory Organization (SRO): The framework suggests considering the creation of a new, government-approved "self-regulatory organization" (SRO) agency for AI. This SRO, similar to FINRA for the financial industry, could accumulate domain-specific knowledge, ensuring responsiveness and flexibility in engaging with a rapidly changing AI industry.

Encouragement of Research for Societal Benefit: The policy papers highlight the importance of encouraging research on how to make AI beneficial to society. For instance, there is a focus on exploring the possibility of AI augmenting and aiding workers rather than replacing them, leading to long-term economic growth distributed throughout society.

Addressing Legal Issues Specific to AI: The framework acknowledges the need to address specific legal matters related to AI, including copyright and intellectual property issues. Special consideration is also mentioned for "human plus" legal issues, where AI capabilities go beyond human capacities, such as mass surveillance tools.

Broadening Perspectives in Policymaking: The ad hoc committee emphasizes the need for a broad range of disciplinary perspectives in policymaking, advocating for academic institutions to play a role in addressing the interplay between technology and society. The goal is to govern AI effectively by considering both technical and social systems.

Security

US Healthcare Giant Norton Says Hackers Stole Millions of Patients' Data During Ransomware Attack (techcrunch.com) 27

An anonymous reader quotes a report from TechCrunch: Kentucky-based nonprofit healthcare system Norton Healthcare has confirmed that hackers accessed the personal data of millions of patients and employees during an earlier ransomware attack. Norton operates more than 40 clinics and hospitals in and around Louisville, Kentucky, and is the city's third-largest private employer. The organization has more than 20,000 employees, and more than 3,000 total providers on its medical staff, according to its website. In a filing with Maine's attorney general on Friday, Norton said that the sensitive data of approximately 2.5 million patients, as well as employees and their dependents, was accessed during its May ransomware attack.

In a letter sent to those affected, the nonprofit said that hackers had access to "certain network storage devices between May 7 and May 9," but did not access Norton Healthcare's medical record system or Norton MyChart, its electronic medical record system. But Norton admitted that following a "time-consuming" internal investigation, which the organization completed in November, Norton found that hackers accessed a "wide range of sensitive information," including names, dates of birth, Social Security numbers, health and insurance information and medical identification numbers. Norton Healthcare says that, for some individuals, the exposed data may have also included financial account numbers, driver licenses or other government ID numbers, as well as digital signatures. It's not known if any of the accessed data was encrypted.

Norton says it notified law enforcement about the attack and confirmed it did not pay any ransom. The organization did not name the hackers responsible for the cyberattack, but the incident was claimed by the notorious ALPHV/BlackCat ransomware gang in May, according to data breach news site DataBreaches.net, which reported that the group claimed it exfiltrated almost five terabytes of data. TechCrunch could not confirm this, as the ALPHV website was inaccessible at the time of writing.

Privacy

Republican Presidential Candidates Debate Anonymity on Social Media (cnbc.com) 174

Four Republican candidates for U.S. president debated Wednesday — and moderator Megyn Kelly had a tough question for former South Carolina governor Nikki Haley. "Can you please speak to the requirement that you said that every anonymous internet user needs to out themselves?"

Nikki Haley: What I said was that social media companies need to show us their algorithms. I also said there are millions of bots on social media right now. They're foreign, they're Chinese, they're Iranian. I will always fight for freedom of speech for Americans; we do not need freedom of speech for Russians and Iranians and Hamas. We need social media companies to go and fight back on all of these bots that are happening. That's what I said.

As a mom, do I think social media would be more civil if we went and had people's names next to that? Yes, I do think that, because I think we've got too much cyberbullying, I think we've got child pornography and all of those things. But having said that, I never said government should go and require anyone's name.

DeSantis: That's false.

Haley: What I said —

DeSantis: You said I want your name. As president of the United States, her first day in office, she said one of the first things I'm going to do --

Haley: I said we were going to get the millions of bots.

DeSantis: "All social medias? I want your name." A government ID to dox every American. That's what she said. You can roll the tape. She said I want your name — and that was going to be one of the first things she did in office. And then she got real serious blowback — and understandably so, because it would be a massive expansion of government. We have anonymous speech. The Federalist Papers were written with anonymous writers — Jay, Madison, and Hamilton, they went under "Publius". It's something that's important — and especially given how conservatives have been attacked and they've lost jobs and they've been cancelled. You know the regime would use that to weaponize that against our own people. It was a bad idea, and she should own up to it.

Haley: This cracks me up, because Ron is so hypocritical, because he actually went and tried to push a law that would stop anonymous people from talking to the press, and went so far to say bloggers should have to register with the state --

DeSantis: That's not true.

Haley: — if they're going to write about elected officials. It was in the — check your newspaper. It was absolutely there.

DeSantis quickly attributed the introduction of that legislation to "some legislator".

The press had already extensively written about Haley's position on anonymity on social media. Three weeks ago Business Insider covered a Fox News interview, and quoted Nikki Haley as saying: "When I get into office, the first thing we have to do, social media companies, they have to show America their algorithms. Let us see why they're pushing what they're pushing. The second thing is every person on social media should be verified by their name." Haley said this was why her proposals would be necessary to counter the "national security threat" posed by anonymous social media accounts and social media bots. "When you do that, all of a sudden people have to stand by what they say, and it gets rid of the Russian bots, the Iranian bots, and the Chinese bots," Haley said. "And then you're gonna get some civility when people know their name is next to what they say, and they know their pastor and their family member's gonna see it. It's gonna help our kids and it's gonna help our country," she continued... A representative for the Haley campaign told Business Insider that Haley's proposals were "common sense."

"We all know that America's enemies use anonymous bots to spread anti-American lies and sow chaos and division within our borders. Nikki believes social media companies need to do a better job of verifying users so we can crack down on Chinese, Iranian, and Russian bots," the representative said.

The next day CNBC reported that Haley "appeared to add a caveat... suggesting Wednesday that Americans should still be allowed to post anonymously online." A spokesperson for Haley's campaign added, "Social media companies need to do a better job of verifying users as human in order to crack down on anonymous foreign bots. We can do this while protecting America's right to free speech and Americans who post anonymously."

Privacy issues had also come up just five minutes earlier in the debate. In March America's Treasury Secretary had recommended the country "advance policy and technical work on a potential central bank digital currency, or CBDC, so the U.S. is prepared if CBDC is determined to be in the national interest."

But Florida governor Ron DeSantis spoke out forcefully against the possibility. "They want to get rid of cash, crypto, they want to force you to do that. They'll take away your privacy. They will absolutely regulate your purchases. On Day One as president, we take the idea of Central Bank Digital Currency, and we throw it in the trash can. It'll be dead on arrival." [The audience applauded.]
Education

Harvard Accused of Bowing to Meta By Ousted Disinformation Scholar in Whistleblower Complaint (cjr.org) 148

The Washington Post reports: A prominent disinformation scholar has accused Harvard University of dismissing her to curry favor with Facebook and its current and former executives in violation of her right to free speech.

Joan Donovan claimed in a filing with the Education Department and the Massachusetts attorney general that her superiors soured on her as Harvard was getting a record $500 million pledge from Meta founder Mark Zuckerberg's charitable arm. As research director of Harvard Kennedy School projects delving into mis- and disinformation on social media platforms, Donovan had raised millions in grants, testified before Congress and been a frequent commentator on television, often faulting internet companies for profiting from the spread of divisive falsehoods. Last year, the school's dean told her that he was winding down her main project and that she should stop fundraising for it. This year, the school eliminated her position.

As one of the first researchers with access to "the Facebook papers" leaked by Frances Haugen, Donovan was asked to speak at a meeting of the Dean's Council, a group of the university's high-profile donors, remembers The Columbia Journalism Review: Elliot Schrage, then the vice president of communications and global policy for Meta, was also at the meeting. Donovan says that, after she brought up the Haugen leaks, Schrage became agitated and visibly angry, "rocking in his chair and waving his arms and trying to interrupt." During a Q&A session after her talk, Donovan says, Schrage reiterated a number of common Meta talking points, including the fact that disinformation is a fluid concept with no agreed-upon definition and that the company didn't want to be an "arbiter of truth."

According to Donovan, Nancy Gibbs, Donovan's faculty advisor, was supportive after the incident. She says that they discussed how Schrage would likely try to pressure Douglas Elmendorf, the dean of the Kennedy School of Government (where the Shorenstein Center hosting Donovan's project is based) about the idea of creating a public archive of the documents... After Elmendorf called her in for a status meeting, Donovan claims that he told her she was not to raise any more money for her project; that she was forbidden to spend the money that she had raised (a total of twelve million dollars, she says); and that she couldn't hire any new staff. According to Donovan, Elmendorf told her that he wasn't going to allow any expenditure that increased her public profile, and used a number of Meta talking points in his assessment of her work...

Donovan says she tried to move her work to the Berkman Klein Center at Harvard, but that the head of that center told her that they didn't have the "political capital" to bring on someone whom Elmendorf had "targeted"... Donovan told me that she believes the pressure to shut down her project is part of a broader pattern of influence in which Meta and other tech platforms have tried to make research into disinformation as difficult as possible... Donovan said she hopes that by blowing the whistle on Harvard, her case will be the "tip of the spear."

Another interesting detail from the article: [Donovan] alleges that Meta pressured Elmendorf to act, noting that he is friends with Sheryl Sandberg, the company's chief operating officer. (Elmendorf was Sandberg's advisor when she studied at Harvard in the early nineties; he attended Sandberg's wedding in 2022, four days before moving to shut down Donovan's project.)
Social Networks

Reactions Continue to Viral Video that Led to Calls for College Presidents to Resign 414

After billionaire Bill Ackman demanded three college presidents "resign in disgrace," that post on X — excerpting their testimony before a U.S. Congressional committee — has now been viewed more than 104 million times, provoking a variety of reactions.

Saturday afternoon, one of the three college presidents resigned — University of Pennsylvania president Liz Magill.

Politico reports that the Republican-led Committee now "will be investigating Harvard University, MIT and the University of Pennsylvania after their institutions' leaders failed to sufficiently condemn student protests calling for 'Jewish genocide.'" The BBC reports a wealthy UPenn donor reportedly withdrew a stock grant worth $100 million.

But after watching the entire Congressional hearing, New York Times opinion columnist Michelle Goldberg wrote that she'd seen a "more understandable" context: In the questioning before the now-infamous exchange, you can see the trap [Congresswoman Elise] Stefanik laid. "You understand that the use of the term 'intifada' in the context of the Israeli-Arab conflict is indeed a call for violent armed resistance against the state of Israel, including violence against civilians and the genocide of Jews. Are you aware of that?" she asked Claudine Gay of Harvard. Gay responded that such language was "abhorrent."

Stefanik then badgered her to admit that students chanting about intifada were calling for genocide, and asked angrily whether that was against Harvard's code of conduct. "Will admissions offers be rescinded or any disciplinary action be taken against students or applicants who say, 'From the river to the sea' or 'intifada,' advocating for the murder of Jews?" Gay repeated that such "hateful, reckless, offensive speech is personally abhorrent to me," but said action would be taken only "when speech crosses into conduct." So later in the hearing, when Stefanik again started questioning Gay, Kornbluth and Magill about whether it was permissible for students to call for the genocide of the Jews, she was referring, it seemed clear, to common pro-Palestinian rhetoric and trying to get the university presidents to commit to disciplining those who use it. Doing so would be an egregious violation of free speech. After all, even if you're disgusted by slogans like "From the river to the sea, Palestine will be free," their meaning is contested...

Liberal blogger Josh Marshall argues that "While groups like Hamas certainly use the word [intifada] with a strong eliminationist meaning it is simply not the case that the term consistently or usually or mostly refers to genocide. It's just not. Stefanik's basic equation was and is simply false and the university presidents were maladroit enough to fall into her trap."

The Wall Street Journal published an investigation the day after the hearing. A political science professor at the University of California, Berkeley hired a survey firm to poll 250 students across the U.S. from "a variety of backgrounds" — and the results were surprising: A Latino engineering student from a southern university reported "definitely" supporting "from the river to the sea" because "Palestinians and Israelis should live in two separate countries, side by side." Shown on a map of the region that a Palestinian state would stretch from the Jordan River to the Mediterranean Sea, leaving no room for Israel, he downgraded his enthusiasm for the mantra to "probably not." Of the 80 students who saw the map, 75% similarly changed their view... In all, after learning a handful of basic facts about the Middle East, 67.8% of students went from supporting "from the river to the sea" to rejecting the mantra. These students had never seen a map of the Mideast and knew little about the region's geography, history, or demography.
More about the phrase from the Associated Press: Many Palestinian activists say it's a call for peace and equality after 75 years of Israeli statehood and decades-long, open-ended Israeli military rule over millions of Palestinians. Jews hear a clear demand for Israel's destruction... By 2012, it was clear that Hamas had claimed the slogan in its drive to claim land spanning Israel, the Gaza Strip and the West Bank... The phrase also has roots in the Hamas charter... [Since 1997 the U.S. government has considered Hamas a terrorist organization.]

"A Palestine between the river to the sea leaves not a single inch for Israel," read an open letter signed by 30 Jewish news outlets around the world and released on Wednesday... Last month, Vienna police banned a pro-Palestinian demonstration, citing the fact that the phrase "from the river to the sea" was mentioned in invitations and characterizing it as a call to violence. And in Britain, the Labour party issued a temporary punishment to a member of Parliament, Andy McDonald, for using the phrase during a rally at which he called for a stop to bombardment.

As the controversy rages on, Ackman's X timeline now includes an official response reposted from a college that wasn't called to testify — Stanford University: In the context of the national discourse, Stanford unequivocally condemns calls for the genocide of Jews or any peoples. That statement would clearly violate Stanford's Fundamental Standard, the code of conduct for all students at the university.
Ackman also retweeted this response from OpenAI CEO Sam Altman: for a long time i said that antisemitism, particularly on the american left, was not as bad as people claimed. i'd like to just state that i was totally wrong. i still don't understand it, really. or know what to do about it. but it is so fucked.
Wednesday UPenn's president announced they'd immediately consider a new change in policy, in an X post viewed 38.7 million times: For decades under multiple Penn presidents and consistent with most universities, Penn's policies have been guided by the [U.S.] Constitution and the law. In today's world, where we are seeing signs of hate proliferating across our campus and our world in a way not seen in years, these policies need to be clarified and evaluated. Penn must initiate a serious and careful look at our policies, and provost Jackson and I will immediately convene a process to do so. As president, I'm committed to a safe, secure, and supportive environment so all members of our community can thrive. We can and we will get this right. Thank you.
The next day the university's business school called on Magill to resign. And Saturday afternoon, Magill resigned.
Businesses

Before Sam Altman's Ouster, OpenAI's Leaders Were Warned of Abusive Behavior (msn.com) 64

"This fall, a small number of senior leaders approached the board of OpenAI with concerns about chief executive Sam Altman," the Washington Post reported late Friday: Altman — a revered mentor, and avatar of the AI revolution — had been psychologically abusive, the employees alleged, creating pockets of chaos and delays at the artificial-intelligence start-up, according to two people familiar with the board's thinking who spoke on the condition of anonymity to discuss sensitive internal matters. The company leaders, a group that included key figures and people who manage large teams, mentioned Altman's allegedly pitting employees against each other in unhealthy ways, the people said.

Although the board members didn't use the language of abuse to describe Altman's behavior, these complaints echoed some of their interactions with Altman over the years, and they had already been debating the board's ability to hold the CEO accountable. Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.

The new complaints triggered a review of Altman's conduct during which the board weighed the devotion Altman had cultivated among factions of the company against the risk that OpenAI could lose key leaders who found interacting with him highly toxic. They also considered reports from several employees who said they feared retaliation from Altman: One told the board that Altman was hostile after the employee shared critical feedback with the CEO and that he undermined the employee on that person's team, the people said...

The complaints about Altman's alleged behavior, which have not previously been reported, were a major factor in the board's abrupt decision to fire Altman on Nov. 17, according to the people. Initially cast as a clash over the safe development of artificial intelligence, Altman's firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.

Bloomberg reported Friday: The board had heard from some senior executives at OpenAI who had issues with Altman, said one person familiar with directors' thinking. But employees approached board members warily because they were scared of potential repercussions of Altman finding out they had spoken out against him, the person said.
Two other interesting details from the Post's article:
  • While over 95% of the company's employees signed an open letter after Altman's firing demanding his return, "On social media, in news reports and on the anonymous app Blind, which requires members to sign up with a work email address to post, people identified as current OpenAI employees also described facing intense peer pressure to sign the mass-resignation letter."
  • The Post also spotted "a cryptic post" on X Wednesday from OpenAI co-founder and chief scientist Ilya Sutskever about lessons learned over the past month: "One such lesson is that the phrase 'the beatings will continue until morale improves' applies more often than it has any right to." (The Post adds that "The tweet was quickly deleted.")

The Post also reported in November that "Before OpenAI, Altman was asked to leave by his mentor at the prominent start-up incubator Y Combinator, part of a pattern of clashes that some attribute to his self-serving approach."


Social Networks

Threads Adds Hashtags Ahead of EU Launch (9to5google.com) 11

Ahead of its December 14th launch in the European Union, Meta's Twitter-like social media platform, Threads, is adding a simplified version of hashtags to help users find related posts. 9to5Google reports: Announced in a post on Threads today, Meta is adding "Tags" to the social platform as a way to categorize a post and have it show up alongside other posts on the same topic. Tags work similarly to hashtags in the sense that they group together content, but they also work differently. Unlike hashtags, you can only have one tag/topic on a post. So, where many platforms (including Instagram) suffer somewhat from posts being flooded with dozens of hashtags appended to the bottom, Threads seemingly avoids that entirely. Meta says that this "makes it easier for others who care about that topic to find and read your post."

The other big difference with tags is how they appear in posts. Tags can be added by typing the # symbol in line with the text, but they don't appear with the symbol in the published post. Instead, they appear in blue text in the post, much like a traditional hyperlink. You can also add a tag by tapping the "#" symbol on the new post UI.
As for the EU launch, Meta has opted to "sneakily update the Threads website with an untitled countdown timer (which won't be viewable in countries where Threads is already available) with just under six days remaining on the clock," reports The Verge. "European Instagram users can also search for the term 'ticket' within the app to discover a digital invitation to Threads, alongside a scannable QR code and a launch time -- which may vary depending on the country in which the user is based."

"The delay in Threads' rollout to the EU has been caused by what Meta spokesperson Christine Pai described as 'upcoming regulatory uncertainty,' likely in reference to strict rules under the bloc's Digital Markets Act (DMA)."
Businesses

Amazon Says Thieves Swiped Millions by Faking Product Refunds (bloomberg.com) 26

Amazon sued what it called an international ring of thieves who swiped millions of dollars in merchandise from the company through a series of refund scams that included buying products on Amazon and seeking refunds without returning the goods. From a report: An organization called REKK advertised its refund services on social media sites, including Reddit and Discord, and communicated with perpetrators on the messaging app Telegram, Amazon said in a lawsuit filed Thursday in US District Court in the state of Washington.

The lawsuit names REKK and nearly 30 people from the US, Canada, UK, Greece, Lithuania and the Netherlands as defendants in the scheme, which involved hacking into Amazon's internal systems and bribing Amazon employees to approve reimbursements. REKK charged customers, who wanted to get pricey items like MacBook Pro laptops and car tires without paying for them, a commission based on the value of the purchase. "The defendants' scheme tricks Amazon into processing refunds for products that are never returned; instead of returning the products as promised, defendants keep the product and the refund," Amazon said in its lawsuit.

Social Networks

Actors Recorded Videos for 'Vladimir.' It Turned Into Russian Propaganda. (wsj.com) 70

Internet propagandists aligned with Russia have duped at least seven Western celebrities, including Elijah Wood and Priscilla Presley, into recording short videos to support its online information war against Ukraine, according to new security research by Microsoft. From a report: The celebrities look like they were asked to offer words of encouragement -- apparently via the Cameo app -- to someone named "Vladimir" who appears to be struggling with substance abuse, Microsoft said. Instead, these messages were edited, sometimes dressed up with emojis, links and the logos of media outlets and then shared online by the Russia-aligned trolls, the company said.

The point was to give the appearance that the celebrities were confirming that Ukrainian President Volodymyr Zelensky was suffering from drug and alcohol problems, false claims that Russia has pushed in the past, according to Microsoft. Russia has denied engaging in disinformation campaigns. In one of the videos, a crudely edited message by Wood to someone named Vladimir references drugs and alcohol, saying: "I just want to make sure that you're getting help." Wood's video first surfaced in July, but since then Microsoft researchers have observed six other similar celebrity videos misused in the same way, including clips by "Breaking Bad" actor Dean Norris, John C. McGinley of "Scrubs," and Kate Flannery of "The Office," the company said.

Technology

How Tech Giants Use Money, Access To Steer Academic Research (washingtonpost.com) 19

Tech giants including Google and Facebook parent Meta have dramatically ramped up charitable giving to university campuses over the past several years -- giving them influence over academics studying such critical topics as artificial intelligence, social media and disinformation. From a report: Meta CEO Mark Zuckerberg alone has donated money to more than 100 university campuses, either through Meta or his personal philanthropy arm, according to new research by the Tech Transparency Project, a nonprofit watchdog group studying the technology industry. Other firms are helping fund academic centers, doling out grants to professors and sitting on advisory boards reserved for donors, researchers told The Post.

Silicon Valley's influence is most apparent among computer science professors at such top-tier schools as Berkeley, University of Toronto, Stanford and MIT. According to a 2021 paper by University of Toronto and Harvard researchers, most tenure-track professors in computer science at those schools whose funding sources could be determined had taken money from the technology industry, including nearly 6 of 10 scholars of AI. The proportion rose further in certain controversial subjects, the study found. Of 33 professors whose funding could be traced who wrote on AI ethics for the top journals Nature and Science, for example, all but one had taken grant money from the tech giants or had worked as their employees or contractors.
