Education

UCLA Will Transform Dead Westside Mall Into Major Science Innovation Center (latimes.com) 23

An anonymous reader quotes a report from the Los Angeles Times: The former Westside Pavilion, a long-shuttered indoor mall, will be transformed into a UCLA biomedical research center aimed at tackling such towering challenges as curing cancer and preventing global pandemics, officials announced Wednesday. The sprawling three-story structure will be known as the UCLA Research Park and will house two multidisciplinary centers focusing on immunology and immunotherapy as well as quantum science and engineering. Establishment of the public-private research center is a coup for Southern California that "will cement California's global, economic, scientific and technical dominance into the 22nd century and beyond," said Gov. Gavin Newsom.

The former owners of the mall, Hudson Pacific Properties Inc. and Macerich, said Wednesday that they sold the property to the Regents of the University of California for $700 million. By purchasing the former shopping center, UCLA saved several years of potential toil to build such a facility on campus. UCLA is the most-applied-to university in the nation, but its Westwood home is among the smallest of the nine UC undergraduate campuses, leaving it limited room for growth. The former mall sits on prime real estate in the heart of the Westside at Pico Boulevard and Overland Avenue, about two miles from the UCLA campus. The mall was owned by commercial developers who spent hundreds of millions of dollars to dramatically remake the old shopping center into an office complex intended to appeal to technology firms, which signed some of the biggest office leases in L.A.'s Silicon Beach before the pandemic.

Google agreed to become the sole tenant and began paying rent last year yet never moved in. The interior is mostly unfinished, but is ready for UCLA to build out to its specifications in a process Newsom said would take about 40 months. The UCLA Research Park "will serve as a state of the art hub of research and innovation that will bring together academics, corporate partners, government agencies and startups to explore new areas of inquiry and achieve breakthroughs that serve the common good," UCLA Chancellor Gene Block said. In addition to flexible work areas, the former mall's 12-screen multiplex movie theater may be converted into lecture halls or performance spaces offering programming across the arts, humanities, sciences and social sciences, the chancellor's office said. One tenant of the research park will be the new California Institute for Immunology and Immunotherapy.

Education

Nobel Prize Winner Cautions on Rush Into STEM (bloomberg.com) 113

A Nobel Prize-winning labor market economist has cautioned younger generations against piling into studying science, technology, engineering, and mathematics (STEM) subjects, saying "empathetic" and creative skills may thrive in a world dominated by artificial intelligence. From a report: Christopher Pissarides, professor of economics at the London School of Economics, said that workers in certain IT jobs risk sowing their "own seeds of self-destruction" by advancing AI that will eventually take the same jobs in the future. While Pissarides is an optimist on AI's overall impact on the jobs market, he raised concerns for those taking STEM subjects hoping to ride the coattails of the technological advances.

He said that despite the current rapid growth in demand for STEM skills, jobs requiring more traditional face-to-face skills, such as those in hospitality and healthcare, will still dominate the jobs market. "The skills that are needed now -- to collect the data, collate it, develop it, and use it to develop the next phase of AI or more to the point make AI more applicable for jobs -- will make the skills that are needed now obsolete because it will be doing the job," he said in an interview. "Despite the fact that you see growth, they're still not as numerous as might be required to have jobs for all those graduates coming out with STEM because that's what they want to do." He added, "This demand for these new IT skills, they contain their own seeds of self destruction."

Education

US Department of Education Spending $4 Million To Teach 3,450 Kids CS Using Minecraft 38

theodp writes: Among the 45 winners of this year's Education Innovation and Research (EIR) program competitions is Creative Coders: Middle School CS Pathways Through Game Design (PDF). The U.S. Dept. of Education is providing the national nonprofit Urban Arts with $3,999,988 to "use materials and learning from its School of Interactive Arts program to create an engaging, game-based, middle school CS course using [Microsoft] Minecraft tools" for 3,450 middle schoolers (6th-8th grades) in New York and California with the help of "our industry partner Microsoft with the utilization of Minecraft Education."

From Urban Arts' winning proposal: "Because a large majority of children play video games regularly, teaching CS through video game design exemplifies CRT [Culturally Responsive Teaching], which has been linked to 'academic achievement, improved attendance, [and] greater interest in school.' The video game Minecraft has over 173 million users worldwide and is extremely popular with students at the middle school level; the Minecraft Education workspace we utilize in the Creative Coders curriculum is a familiar platform to any player of the original game. By leveraging students' personal interests and their existing 'funds of knowledge', we believe Creative Coders is likely to increase student participation and engagement."

Speaking of UA's EIR grant partner Microsoft, Urban Arts' Board of Directors includes Josh Reynolds, the Director of Modern Workplace for Microsoft Education, whose Urban Arts bio notes he "has led some of the largest game-based learning activations worldwide with Minecraft." Urban Arts' Gaming Pathways Educational Advisory Board includes Reynolds and Microsoft Sr. Account Executive Amy Brandt. And in his 2019 book Tools and Weapons, Microsoft President Brad Smith cited $50 million K-12 CS pledges made to Ivanka Trump by Microsoft and other Tech Giants as the key to getting Donald Trump to sign a $1 billion, five-year presidential order (PDF) "to ensure that federal funding from the Department of Education helps advance [K-12] computer science," including via EIR program grants.

United Kingdom

UK Students Launch Barclays 'Career Boycott' Over Bank's Climate Policies (theguardian.com) 47

Hundreds of students from leading UK universities have launched a "career boycott" of Barclays over its climate policies, warning that the bank will miss out on top talent unless it stops financing fossil fuel companies. From a report: More than 220 students from Barclays' top recruitment universities, including Oxford, Cambridge, and University College London, have sent a letter to the high street lender, saying they will not work for Barclays and raising the alarm over its funding for oil and gas firms including Shell, TotalEnergies, Exxon and BP. "Your ambitious decarbonisation targets are discredited by your absence of action and the roster of fossil fuel companies on your books," the letter said. "You may say you're working with them to help them transition, but Shell, Total and BP have all rowed back."

Large oil firms have started to water down climate commitments, including BP, which originally pledged to lower emissions by 35% by 2030 but is now aiming for a 20% to 30% cut instead. Meanwhile, ExxonMobil quietly withdrew funding for plans to use algae to create low-carbon fuel, while Shell announced it would not increase its investments in renewable energy this year, despite earlier promises to slash its emissions. The letter calls on Barclays to end all financing and underwriting of oil and gas companies -- not only their projects -- and to boost funding of firms behind wind and solar energy significantly.

Programming

Code.org Sues WhiteHat Jr. For $3 Million 8

theodp writes: Back in May 2021, tech-backed nonprofit Code.org touted the signing of a licensing agreement with WhiteHat Jr., allowing the edtech company with a controversial past (Whitehat Jr. was bought for $300M in 2020 by Byju's, an edtech firm that received a $50M investment from Mark Zuckerberg's venture firm) to integrate Code.org's free-to-educators-and-organizations content and tools into their online tutoring service. Code.org did not reveal what it was charging Byju's to use its "free curriculum and open source technology" for commercial purposes, but Code.org's 2021 IRS 990 filing reported $1M in royalties from an unspecified source after earlier years reported $0. Coincidentally, Whitehat Jr. is represented by Aaron Kornblum, who once worked at Microsoft for now-President Brad Smith, who left Code.org's Board just before the lawsuit was filed.

Fast forward to 2023 and the bloom is off the rose, as Court records show that Code.org earlier this month sued Whitehat Education Technology, LLC (Exhibits A and B) in what is called "a civil action for breach of contract arising from Whitehat's failure to pay Code.org the agreed-upon charges for its use of Code.org's platform and licensed content and its ongoing, unauthorized use of that platform and content." According to the filing, "Whitehat agreed [in April 2022] to pay to Code.org licensing fees totaling $4,000,000 pursuant to a four-year schedule" and "made its first four scheduled payments, totaling $1,000,000," but "about a year after the Agreement was signed, Whitehat informed Code.org that it would be unable to make the remaining scheduled license payments." While the original agreement was amended to backload Whitehat's license fee payment obligations, "Whitehat has not paid anything at all beyond the $1,000,000 that it paid pursuant to the 2022 invoices before the Agreement was amended" and "has continued to access Code.org's platform and content."

That Byju's Whitehat Jr. stiffed Code.org is hardly shocking. In June 2023, Reuters reported that Byju's auditor Deloitte cut ties with the troubled Indian Edtech startup that was once an investor darling and valued at $22 billion, adding that a Byju's Board member representing the Chan-Zuckerberg Initiative had resigned with two other Board members. The BBC reported in July that Byju's was guilty of overexpanding during the pandemic (not unlike Zuck's Facebook). Ironically, the lawsuit Exhibits include screenshots showing Mark Zuckerberg teaching Code.org lessons. Zuckerberg and Facebook were once among the biggest backers of Code.org, although it's unclear whether that relationship soured after court documents were released that revealed Code.org's co-founders talking smack about Zuck and Facebook's business practices to lawyers for Six4Three, which was suing Facebook.

Code.org's curriculum is also used by the Amazon Future Engineer (AFE) initiative, but it is unclear what royalties -- if any -- Amazon pays to Code.org for the use of Code.org curriculum. While the AFE site boldly says, "we provide free computer science curriculum," the AFE fine print further explains that "our partners at Code.org and ProjectSTEM offer a wide array of introductory and advance curriculum options and teacher training." It's unclear what kind of organization Amazon's AFE ("Computer Science Learning Childhood to Career") exactly is -- an IRS Tax Exempt Organization Search failed to find any hits for "Amazon Future Engineer" -- making it hard to guess whether Code.org might consider AFE's use of Code.org software 'commercial use.' Would providing a California school district with free K-12 CS curriculum that Amazon boasts of cultivating into its "vocal champion" count as "commercial use"? How about providing free K-12 CS curriculum to children who live where Amazon is seeking incentives? Or if Amazon CEO Jeff Bezos testifies Amazon "funds computer science coursework" for schools as he attempts to counter a Congressional antitrust inquiry? These seem to be some of the kinds of distinctions Richard Stallman anticipated more than a decade ago as he argued against a restriction against commercial use of otherwise free software.

Earth

How a Surge in Organized Crime Threatens the Amazon (doi.org) 25

In Brazil's Amazon, armed men with a rogue police unit overseeing illegal mining operations intimidated journalists investigating regional violence and trafficking surges. Though Brazil's 2023 deforestation decreased, fires and attacks continued as governments deprioritized crime reduction. Illegal mining finances threats to the climate-critical rainforest, yet improving security was absent from the 2023 UN climate summit agenda. With complex criminal networks forging cross-border alliances and violence escalating, addressing this dilemma is pivotal to safeguarding the Amazon and its Indigenous peoples. Nature: Solutions to these multifaceted issues might not be simple, but practical steps exist. Nations must cooperate to guard against this violence. They must support local communities -- by increasing the state's presence in remote areas and promoting health care, education and sustainable economic development -- and help them to safeguard the rainforest. For example, Indigenous peoples in Peru and Brazil are using drones and GPS devices to monitor their land and detect threats from violent invaders.

Indigenous peoples are the Amazon's best forest guardians, but they need more legally demarcated lands and protective measures, such as funding for Indigenous guards and rapid response and emergency protocols. In 2022, Colombia and Brazil saw the most deaths of environmental and land defenders worldwide. Developing effective strategies to enhance cooperation between law enforcement and local populations must also be a priority. To prevent irreversible damage to the rainforest and the climate, security in the Amazon must be added to the global climate agenda.

AI

'What Kind of Bubble Is AI?' (locusmag.com) 100

"Of course AI is a bubble," argues tech activist/blogger/science fiction author Cory Doctorow.

The real question is: what happens when it bursts?

Doctorow examines history — the "irrational exuberance" of the dotcom bubble, 2008's financial derivatives, NFTs, and even cryptocurrency. ("A few programmers were trained in Rust... but otherwise, the residue from crypto is a lot of bad digital art and worse Austrian economics.") So would an AI bubble leave anything useful behind? The largest of these models are incredibly expensive. They're expensive to make, with billions spent acquiring training data, labelling it, and running it through massive computing arrays to turn it into models. Even more important, these models are expensive to run.... Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical.

AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review them... There just aren't that many customers for a product that makes their own high-stakes projects better, but more expensive. There are many low-stakes applications — say, selling kids access to a cheap subscription that generates pictures of their RPG characters in action — but they don't pay much. The universe of low-stakes, high-dollar applications for AI is so small that I can't think of anything that belongs in it.

There are some promising avenues, like "federated learning," that hypothetically combine a lot of commodity consumer hardware to replicate some of the features of those big, capital-intensive models from the bubble's beneficiaries. It may be that — as with the interregnum after the dotcom bust — AI practitioners will use their all-expenses-paid education in PyTorch and TensorFlow (AI's answer to Perl and Python) to push the limits on federated learning and small-scale AI models to new places, driven by playfulness, scientific curiosity, and a desire to solve real problems. There will also be a lot more people who understand statistical analysis at scale and how to wrangle large amounts of data. There will be a lot of people who know PyTorch and TensorFlow, too — both of these are "open source" projects, but are effectively controlled by Meta and Google, respectively. Perhaps they'll be wrestled away from their corporate owners, forked and made more broadly applicable, after those corporate behemoths move on from their money-losing Big AI bets.
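
To make the "federated learning" idea above concrete, here is a minimal, illustrative sketch of federated averaging in PyTorch: each participant trains a copy of a shared model on its own private data, and only the resulting weights are sent back and averaged. This is not Doctorow's code or any particular project's implementation; all names and hyperparameters are placeholders.

import copy
import torch
import torch.nn as nn

def local_update(global_model, data_loader, epochs=1, lr=0.01):
    # Train a private copy of the shared model on one participant's data.
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, labels in data_loader:
            optimizer.zero_grad()
            loss_fn(model(inputs), labels).backward()
            optimizer.step()
    return model.state_dict()  # only weights leave the device, not data

def federated_average(client_states):
    # Average the participants' weights to form the next shared model.
    averaged = copy.deepcopy(client_states[0])
    for key in averaged:
        for state in client_states[1:]:
            averaged[key] += state[key]
        averaged[key] = averaged[key] / len(client_states)
    return averaged

# One hypothetical communication round, where client_loaders would be a list
# of per-device DataLoaders over private data:
# global_model = nn.Linear(784, 10)
# states = [local_update(global_model, loader) for loader in client_loaders]
# global_model.load_state_dict(federated_average(states))

The post-bubble appeal is that the heavy lifting happens on commodity hardware at the edge; only weight updates need to be aggregated centrally.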

Our policymakers are putting a lot of energy into thinking about what they'll do if the AI bubble doesn't pop — wrangling about "AI ethics" and "AI safety." But — as with all the previous tech bubbles — very few people are talking about what we'll be able to salvage when the bubble is over.

Thanks to long-time Slashdot reader mspohr for sharing the article.

Education

Are Phones Making the World's Students Dumber? (msn.com) 123

Long-time Slashdot reader schwit1 shared this article from the Atlantic: For the past few years, parents, researchers, and the news media have paid closer attention to the relationship between teenagers' phone use and their mental health. Researchers such as Jonathan Haidt and Jean Twenge have shown that various measures of student well-being began a sharp decline around 2012 throughout the West, just as smartphones and social media emerged as the attentional centerpiece of teenage life. Some have even suggested that smartphone use is so corrosive, it's systematically reducing student achievement. I hadn't quite believed that last argument — until now.

The Program for International Student Assessment, conducted by the Organization for Economic Co-operation and Development in almost 80 countries every three years, tests 15-year-olds in math, reading, and science. Test scores have been falling for years — even before the pandemic. Across the OECD, science scores peaked in 2009, and reading scores peaked in 2012. Since then, developed countries as a whole have performed "increasingly poorly" on average. "No single country showed an increasingly positive trend in any subject," PISA reported, and "many countries showed increasingly poor performance in at least one subject." Even in famously high-performing countries, such as Finland, Sweden, and South Korea, PISA grades in one or several subjects have been declining for a while.

So what's driving down student scores around the world? The PISA report offers three reasons to suspect that phones are a major culprit. First, PISA finds that students who spend less than one hour of "leisure" time on digital devices a day at school scored about 50 points higher in math than students whose eyes are glued to their screens more than five hours a day. This gap held even after adjusting for socioeconomic factors... Second, screens seem to create a general distraction throughout school, even for students who aren't always looking at them.... Finally, nearly half of students across the OECD said that they felt "nervous" or "anxious" when they didn't have their digital devices near them. (On average, these students also said they were less satisfied with life.) This phone anxiety was negatively correlated with math scores.

In sum, students who spend more time staring at their phone do worse in school, distract other students around them, and feel worse about their life.

AI

AI Companies Would Be Required To Disclose Copyrighted Training Data Under New Bill (theverge.com) 42

An anonymous reader quotes a report from The Verge: Two lawmakers filed a bill requiring creators of foundation models to disclose sources of training data so copyright holders know their information was taken. The AI Foundation Model Transparency Act -- filed by Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA) -- would direct the Federal Trade Commission (FTC) to work with the National Institute of Standards and Technology (NIST) to establish rules for reporting training data transparency. Companies that make foundation models would be required to report sources of training data and how the data is retained during the inference process, describe the limitations or risks of the model and how it aligns with NIST's planned AI Risk Management Framework and any other federal standards that might be established, and provide information on the computational power used to train and run the model. The bill also says AI developers must report efforts to "red team" the model to prevent it from providing "inaccurate or harmful information" around medical or health-related questions, biological synthesis, cybersecurity, elections, policing, financial loan decisions, education, employment decisions, public services, and vulnerable populations such as children.

The bill calls out the importance of training data transparency around copyright as several lawsuits have come out against AI companies alleging copyright infringement. It specifically mentions the case of artists against Stability AI, Midjourney, and DeviantArt (which was largely dismissed in October, according to VentureBeat), and Getty Images' complaint against Stability AI. The bill still needs to be assigned to a committee and discussed, and it's unclear if that will happen before the busy election campaign season starts. Eshoo and Beyer's bill complements the Biden administration's AI executive order, which helps establish reporting standards for AI models. The executive order, however, is not law, so if the AI Foundation Model Transparency Act passes, it will make transparency requirements for training data a federal rule.

Education

Microsoft President Brad Smith Quietly Leaves Board of Nonprofit Code.org 4

Longtime Slashdot reader theodp writes: Way back in September 2012, Microsoft President Brad Smith discussed the idea of "producing a crisis" to advance Microsoft's "two-pronged" National Talent Strategy to increase K-12 CS education and the number of H-1B visas. Not long thereafter, the tech-backed nonprofit Code.org (which promotes and provides K-12 CS education and is led by Smith's next-door neighbor) and Mark Zuckerberg's FWD.us PAC (which lobbied for H-1B reform) were born, with Smith on the boards of both. Over the past 10+ years, Smith has played a key role in establishing Code.org's influence in the new K-12 CS education "grassroots" movement, including getting buy-in from three Presidential administrations -- Obama, Trump, and Biden -- as well as the U.S. Dept. of Education and the nation's Governors.

But after recent updates, Code.org's Leadership page now indicates that Smith has quietly left Code.org's Board of Directors and thanks him for his past help and advice. Since November (when archive.org indicates Smith's photo was yanked from Code.org's Leadership page), Smith has been in the news in conjunction with Microsoft's relationship with another Microsoft-bankrolled nonprofit, OpenAI, which has come under scrutiny by the Feds and in the UK. Smith, who noted he and Microsoft helped OpenAI and CEO Sam Altman craft messaging ahead of a White House meeting, announced in a Dec. 8th tweet that Microsoft will be getting a non-voting OpenAI Board seat in connection with Altman's return to power (who that non-voting Microsoft OpenAI board member will be has not been announced).

OpenAI, Microsoft, and Code.org teamed up in December to provide K-12 CS+AI tutorials for this December's AI-themed Hour of Code (the trio has also partnered with Amazon and Google on the Code.org-led TeachAI initiative). And while Smith has left Code.org's Board, Microsoft's influence there will live on as Microsoft CTO Kevin Scott -- credited for forging Microsoft's OpenAI partnership -- remains a Code.org Board member together with execs from other Code.org Platinum Supporters ($3+ million in past 2 years) Google and Amazon.

United Kingdom

Women In IT Are On a 283-Year March To Parity, BCS Warns (theregister.com) 197

An anonymous reader quotes a report from The Register: It will take 283 years for women's representation in the UK's IT workforce to match their share of the workforce as a whole, according to a report from the British Computer Society, the chartered institute for IT (BCS). BCS has calculated that, based on trends from 2005 to 2022, it would take nearly three centuries for the representation of women in the IT workforce -- currently 20 percent -- to reach the average representation of women across the whole UK workforce, currently at 48 percent. BCS's annual Diversity Report also found that progress towards gender parity in IT jobs was stalling. Between 2018 and 2021, the proportion of women tech workers rose from 16 percent to 20 percent. But there was no change in 2022, according to BCS analysis of data from the Office for National Statistics.
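
For readers who want to sanity-check the headline number: the BCS report's exact methodology isn't given in the article, but a simple linear extrapolation reproduces a figure in the same ballpark. The 2005 starting share below is an assumed, illustrative value chosen only to make the arithmetic concrete; it is not a figure from the article.

# Back-of-the-envelope check of the "283 years to parity" projection.
share_2005 = 18.3   # assumed for illustration only
share_2022 = 20.0   # women's share of the UK IT workforce (from the article)
target = 48.0       # women's average share of the whole UK workforce

rate_per_year = (share_2022 - share_2005) / (2022 - 2005)   # ~0.1 points/year
years_to_parity = (target - share_2022) / rate_per_year

print(round(years_to_parity))   # roughly 280 years at that pace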

Julia Adamson, BCS managing director for education and public benefit, said in a statement: "More women and girls need the opportunity to take up great careers in a tech industry that's shaping the world. A massive pool of talent and creativity is being overlooked when it could benefit employers and the economy. There has to be a radical rethink of how we get more women and girls into tech careers, and a more inclusive tech culture is ethically and morally the right thing to do. Having greater diversity means that what is produced is more relevant to, and representative of, society at large. This is crucial when it comes to, for instance, the use of AI in medicine or finance. The fact that 94 percent of girls and 79 percent of boys drop computing at age 14 is a huge alarm bell we must not ignore; the subject should have a broader digital curriculum that is relevant to all young people."

Education

Ask Slashdot: What Are Some Methods To Stop Digital Surveillance In Schools? 115

Longtime Slashdot reader Kreuzfeld writes: Help please: here in Lawrence, Kansas, the public school district has recently started using Gaggle (source may be paywalled; alternative source), a system for monitoring all digital documents and communications created by students on school-provided devices. Unsurprisingly, the system inundates employees with false 'alerts' but the district nonetheless hails this pervasive, dystopic surveillance system as a great success. What useful advice can readers here offer regarding successful methods to get public officials to backtrack from a policy so corrosive to liberty, trust, and digital freedoms?

Education

Elon Musk Is Funding a New School In Austin, Texas (cnn.com) 167

"Associates of Elon Musk are planning to launch a new primary and secondary school," reports CNN, "and ultimately a university, in Austin, Texas, with the help of a nearly $100 million donation from the billionaire, tax documents show..." Members of Musk's inner circle — including Jared Birchall, who runs Musk's family office — are named as leaders of The Foundation, a new school planning to teach "STEM subjects and other topics," in an application to the Internal Revenue Service asking for tax-exempt status last year... The IRS filing, dated October 2022, was obtained and posted publicly by Bloomberg, which first reported plans for the school on Wednesday... "The School is being designed to meet the educational needs of those with proven academic and scientific potential, who will thrive in a rigorous, project based curriculum," the filing posted by Bloomberg states.

The school plans to initially enroll about 50 students and grow over time, according to the filing. It expects to be funded through donations and tuition fees, although it notes that the school will offer scholarships to support students who couldn't otherwise afford to attend... "The School intends ultimately to expand its operations to create a university dedicated to education at the highest levels," according to the filing...

The Foundation said in its filing that it had raised around $100 million in contributions since mid-2022 for the new Austin school. The 2022 annual 990 tax filing for the Musk Foundation, also made public by Bloomberg, notes that the Musk charity donated $10 million in cash to the group that year, as well as nearly $90 million worth of Tesla stock.

Education

Amazon, Microsoft, and Google Help Teachers Incorporate AI Into CS Education 16

Long-time Slashdot reader theodp writes: Earlier this month, Amazon came under fire as the Los Angeles Times reported on a leaked confidential document that "reveals an extensive public relations strategy by Amazon to donate to community groups, school districts, institutions and charities" to advance the company's business objectives. "We will not fund organizations that have positioned themselves antagonistically toward our interests," explained Amazon officials of the decision to cut off donations to the Cheech Marin Center for Chicano Art and Culture after it ran an exhibit ("Burn Them All Down") that the artist called a commentary on how public officials were not listening to community concerns about the growing number of Amazon warehouses in Southern California's Inland Empire neighborhoods...

Interestingly, on the same day the Los Angeles Times was sounding the alarm on Amazon philanthropy, the White House and the National Science Foundation (NSF) hosted an event on K-12 AI education. There it was announced that the Amazon-backed nonprofit Computer Science Teachers Association (CSTA) will develop new K-12 computer science standards that incorporate AI into foundational computer science education with support from the NSF, Amazon, Google, and Microsoft. CSTA separately announced it had received a $1.5 million donation from Amazon to "support efforts to update the CSTA K-12 Computer Science Standards to reflect the rapid advancements in technologies like artificial intelligence (AI)," adding that the CSTA standards — which CSTA credited Microsoft Philanthropies for helping to advance — "serve as a model for CS teaching and learning across grades K-12" in 42 states.

The announcements, the White House noted, came during Computer Science Education Week, the signature event of which is Amazon, Google, and Microsoft-backed Code.org's Hour of Code (which was AI-themed this year), for which Amazon, Google, and Microsoft — not teachers — provided the event's signature tutorials used by the nation's K-12 students. Amazon, Google, and Microsoft are also advisors to Code.org's TeachAI initiative, which was launched in May "to provide thought leadership to guide governments and educational leaders in aligning education with the needs of an increasingly AI-driven world and connecting the discussion of teaching with AI to teaching about AI and computer science."

Television

Netflix's Big Data Dump Shows Just OK TV Is Here To Stay (wired.com) 50

After years of withholding viewership data, Netflix earlier this week released statistics showing its top viewed titles from January-June 2023. The winner with over 800 million hours watched was The Night Agent. Though the steamy, soapy Sex/Life scored over 120 million hours, the warm coming-of-age series Sex Education had under 30 million.

Netflix claimed "success comes in all shapes and sizes," but co-CEO Ted Sarandos admitted the data guides business decisions. So while Netflix says stats aren't everything, pouring resources into sure bets like The Night Agent seems likely as competition grows post-Hot Strike Summer. The show is what some call "just OK TV" -- not offensive, not groundbreaking, but reliably watched. Wired adds: This era of Just OK also comes as Netflix captures the King of Reality TV throne. Shows like Love Is Blind and Selling Sunset are becoming cultural juggernauts, and the streamer shows no sign of slowing down, especially now that the Squid Game spinoff, Squid Game: The Challenge, is getting major traction.

True, Netflix is still putting out artful content. A show like Wednesday, for example, had more than 507 million hours viewed and is also currently up for 12 Emmys. Netflix, on the whole, is nominated for a whopping 103 Emmys. That's impressive, but also, it's down from the 160 nods it got at its peak in 2020 and fewer than the 127 nabbed by (HBO) Max, which crushed thanks to shows like The White Lotus, The Last of Us, and Succession. You see where this is going. Netflix likes to tout its prestige shows, but also has to keep its paying customers, who left in droves in 2022 before partly coming back as Netflix cracked down on password sharing. To that end, it behooves Netflix to make more Ginny & Georgia, more Night Agent, more You. One analysis of the data found that the most-watched film, according to Netflix's data dump, was the Jennifer Lopez vehicle The Mother, which accumulated about 250 million hours watched in six months. Variety puts that level of engagement up there with Barbie and The Super Mario Bros. Movie. Not a bad showing.

AI

Cheating Fears Over Chatbots Were Overblown, New Research Suggests (nytimes.com) 55

Natasha Singer reports via The New York Times: According to new research from Stanford University, the popularization of A.I. chatbots has not boosted overall cheating rates in schools (Warning: source may be paywalled; alternative source). In surveys this year of more than 40 U.S. high schools, some 60 to 70 percent of students said they had recently engaged in cheating -- about the same percent as in previous years, Stanford education researchers said. "There was a panic that these A.I. models will allow a whole new way of doing something that could be construed as cheating," said Denise Pope, a senior lecturer at Stanford Graduate School of Education who has surveyed high school students for more than a decade through an education nonprofit she co-founded. But "we're just not seeing the change in the data."

ChatGPT, developed by OpenAI in San Francisco, began to capture the public imagination late last year with its ability to fabricate human-sounding essays and emails. Almost immediately, classroom technology boosters started promising that A.I. tools like ChatGPT would revolutionize education. And critics began warning that such tools -- which liberally make stuff up -- would enable widespread cheating, and amplify misinformation, in schools. Now the Stanford research, along with a recent report from the Pew Research Center, are challenging the notion that A.I. chatbots are upending public schools.

Microsoft

Microsoft and Labor Unions Form 'Historic' Alliance on AI (bloomberg.com) 38

Microsoft is teaming up with labor unions to create "an open dialogue" on how AI will impact workers. From a report: The software giant is forming an alliance with the American Federation of Labor and Congress of Industrial Organizations, which comprises 60 labor unions representing 12.5 million workers, according to a statement on Monday. Under the partnership, Redmond, Washington-based Microsoft will provide labor leaders and workers with formal training on how AI works. The education sessions will start in the winter of 2024. Microsoft will also begin gathering feedback from labor groups and will focus on unions and workers in "key selected sectors."

The initiative marks the first formal collaboration on AI between labor unions and the technology industry and coincides with growing concerns that artificial intelligence could displace workers. The agreement also includes a template for "neutrality" terms that would make it easier for unions to organize at Microsoft. The move expands an approach the company already agreed to for its video game workers and lays the groundwork for broader unionization at Microsoft. Neutrality agreements commit companies not to wage anti-union campaigns in response to workers organizing.

Education

Harvard Accused of Bowing to Meta By Ousted Disinformation Scholar in Whistleblower Complaint (cjr.org) 148

The Washington Post reports: A prominent disinformation scholar has accused Harvard University of dismissing her to curry favor with Facebook and its current and former executives in violation of her right to free speech.

Joan Donovan claimed in a filing with the Education Department and the Massachusetts attorney general that her superiors soured on her as Harvard was getting a record $500 million pledge from Meta founder Mark Zuckerberg's charitable arm. As research director of Harvard Kennedy School projects delving into mis- and disinformation on social media platforms, Donovan had raised millions in grants, testified before Congress and been a frequent commentator on television, often faulting internet companies for profiting from the spread of divisive falsehoods. Last year, the school's dean told her that he was winding down her main project and that she should stop fundraising for it. This year, the school eliminated her position.

As one of the first researchers with access to "the Facebook papers" leaked by Frances Haugen, Donovan was asked to speak at a meeting of the Dean's Council, a group of the university's high-profile donors, reports The Columbia Journalism Review: Elliot Schrage, then the vice president of communications and global policy for Meta, was also at the meeting. Donovan says that, after she brought up the Haugen leaks, Schrage became agitated and visibly angry, "rocking in his chair and waving his arms and trying to interrupt." During a Q&A session after her talk, Donovan says, Schrage reiterated a number of common Meta talking points, including the fact that disinformation is a fluid concept with no agreed-upon definition and that the company didn't want to be an "arbiter of truth."

According to Donovan, Nancy Gibbs, Donovan's faculty advisor, was supportive after the incident. She says that they discussed how Schrage would likely try to pressure Douglas Elmendorf, the dean of the Kennedy School of Government (where the Shorenstein Center hosting Donovan's project is based) about the idea of creating a public archive of the documents... After Elmendorf called her in for a status meeting, Donovan claims that he told her she was not to raise any more money for her project; that she was forbidden to spend the money that she had raised (a total of twelve million dollars, she says); and that she couldn't hire any new staff. According to Donovan, Elmendorf told her that he wasn't going to allow any expenditure that increased her public profile, and used a number of Meta talking points in his assessment of her work...

Donovan says she tried to move her work to the Berkman Klein Center at Harvard, but that the head of that center told her that they didn't have the "political capital" to bring on someone whom Elmendorf had "targeted"... Donovan told me that she believes the pressure to shut down her project is part of a broader pattern of influence in which Meta and other tech platforms have tried to make research into disinformation as difficult as possible... Donovan said she hopes that by blowing the whistle on Harvard, her case will be the "tip of the spear."

Another interesting detail from the article: [Donovan] alleges that Meta pressured Elmendorf to act, noting that he is friends with Sheryl Sandberg, the company's chief operating officer. (Elmendorf was Sandberg's advisor when she studied at Harvard in the early nineties; he attended Sandberg's wedding in 2022, four days before moving to shut down Donovan's project.)

Social Networks

Reactions Continue to Viral Video that Led to Calls for College Presidents to Resign 414

After billionaire Bill Ackman demanded three college presidents "resign in disgrace," that post on X — excerpting their testimony before a U.S. Congressional committee — has now been viewed more than 104 million times, provoking a variety of reactions.

Saturday afternoon, one of the three college presidents resigned — University of Pennsylvania president Liz Magill.

Politico reports that the Republican-led Committee now "will be investigating Harvard University, MIT and the University of Pennsylvania after their institutions' leaders failed to sufficiently condemn student protests calling for 'Jewish genocide.'" The BBC reports a wealthy UPenn donor reportedly withdrew a stock grant worth $100 million.

But after watching the entire Congressional hearing, New York Times opinion columnist Michelle Goldberg wrote that she'd seen a "more understandable" context: In the questioning before the now-infamous exchange, you can see the trap [Congresswoman Elise] Stefanik laid. "You understand that the use of the term 'intifada' in the context of the Israeli-Arab conflict is indeed a call for violent armed resistance against the state of Israel, including violence against civilians and the genocide of Jews. Are you aware of that?" she asked Claudine Gay of Harvard. Gay responded that such language was "abhorrent."

Stefanik then badgered her to admit that students chanting about intifada were calling for genocide, and asked angrily whether that was against Harvard's code of conduct. "Will admissions offers be rescinded or any disciplinary action be taken against students or applicants who say, 'From the river to the sea' or 'intifada,' advocating for the murder of Jews?" Gay repeated that such "hateful, reckless, offensive speech is personally abhorrent to me," but said action would be taken only "when speech crosses into conduct." So later in the hearing, when Stefanik again started questioning Gay, Kornbluth and Magill about whether it was permissible for students to call for the genocide of the Jews, she was referring, it seemed clear, to common pro-Palestinian rhetoric and trying to get the university presidents to commit to disciplining those who use it. Doing so would be an egregious violation of free speech. After all, even if you're disgusted by slogans like "From the river to the sea, Palestine will be free," their meaning is contested...

Liberal blogger Josh Marshall argues that "While groups like Hamas certainly use the word [intifada] with a strong eliminationist meaning it is simply not the case that the term consistently or usually or mostly refers to genocide. It's just not. Stefanik's basic equation was and is simply false and the university presidents were maladroit enough to fall into her trap."

The Wall Street Journal published an investigation the day after the hearing. A political science professor at the University of California, Berkeley hired a survey firm to poll 250 students across the U.S. from "a variety of backgrounds" — and the results were surprising: A Latino engineering student from a southern university reported "definitely" supporting "from the river to the sea" because "Palestinians and Israelis should live in two separate countries, side by side." Shown on a map of the region that a Palestinian state would stretch from the Jordan River to the Mediterranean Sea, leaving no room for Israel, he downgraded his enthusiasm for the mantra to "probably not." Of the 80 students who saw the map, 75% similarly changed their view... In all, after learning a handful of basic facts about the Middle East, 67.8% of students went from supporting "from the river to the sea" to rejecting the mantra. These students had never seen a map of the Mideast and knew little about the region's geography, history, or demography.
More about the phrase from the Associated Press: Many Palestinian activists say it's a call for peace and equality after 75 years of Israeli statehood and decades-long, open-ended Israeli military rule over millions of Palestinians. Jews hear a clear demand for Israel's destruction... By 2012, it was clear that Hamas had claimed the slogan in its drive to claim land spanning Israel, the Gaza Strip and the West Bank... The phrase also has roots in the Hamas charter... [Since 1997 the U.S. government has considered Hamas a terrorist organization.]

"A Palestine between the river to the sea leaves not a single inch for Israel," read an open letter signed by 30 Jewish news outlets around the world and released on Wednesday... Last month, Vienna police banned a pro-Palestinian demonstration, citing the fact that the phrase "from the river to the sea" was mentioned in invitations and characterizing it as a call to violence. And in Britain, the Labour party issued a temporary punishment to a member of Parliament, Andy McDonald, for using the phrase during a rally at which he called for a stop to bombardment.

As the controversy rages on, Ackman's X timeline now includes an official response reposted from a college that wasn't called to testify — Stanford University: In the context of the national discourse, Stanford unequivocally condemns calls for the genocide of Jews or any peoples. That statement would clearly violate Stanford's Fundamental Standard, the code of conduct for all students at the university.
Ackman also retweeted this response from OpenAI CEO Sam Altman: for a long time i said that antisemitism, particularly on the american left, was not as bad as people claimed. i'd like to just state that i was totally wrong. i still don't understand it, really. or know what to do about it. but it is so fucked.
Wednesday UPenn's president announced they'd immediately consider a new change in policy, in an X post viewed 38.7 million times: For decades under multiple Penn presidents and consistent with most universities, Penn's policies have been guided by the [U.S.] Constitution and the law. In today's world, where we are seeing signs of hate proliferating across our campus and our world in a way not seen in years, these policies need to be clarified and evaluated. Penn must initiate a serious and careful look at our policies, and provost Jackson and I will immediately convene a process to do so. As president, I'm committed to a safe, secure, and supportive environment so all members of our community can thrive. We can and we will get this right. Thank you.
The next day the university's business school called on Magill to resign. And Saturday afternoon, Magill resigned.

Chrome

Chromebooks Are Problematic For Profits and Planet, Says Lenovo Exec (theregister.com) 46

Laura Dobberstein reports via The Register: Lenovo won't stop making Chromebooks despite the machines scoring poorly when it comes to both sustainability and revenue, according to an exec speaking at Canalys APAC Forum in Bangkok on Wednesday. "I don't know who makes the profit," commented Che Min Tu, Lenovo senior vice president and group operations officer. "Everybody struggled to sell the Chromebook." Tu further remarked that the laptop is not great from an environmental standpoint either -- recycling its material won't be easy, or cheap. "But I think we'll continue to sell the Chromebook because there's a demand," explained Tu, who added that the major driver of that demand is coming from the education sector. [...]

While the number of Chromebooks being sold has dropped since the pandemic, the education market has kept it afloat. In the US, education accounted for 80 percent of Chromebook sales in Q2 this year. IDC estimated that Chromebook channel sales shrank 1.8 percent to 5.8 million units in Q2, as many customers had refreshed in the previous quarter to avoid a licensing increase in the second half of 2023.
