China

China Will Use AI To Disrupt Elections in the US, South Korea and India, Microsoft Warns (theguardian.com) 157

China will attempt to disrupt elections in the US, South Korea and India this year with artificial intelligence-generated content after making a dry run with the presidential poll in Taiwan, Microsoft has warned. From a report: The US tech firm said it expected Chinese state-backed cyber groups to target high-profile elections in 2024, with North Korea also involved, according to a report by the company's threat intelligence team published on Friday. "As populations in India, South Korea and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections," the report reads.

Microsoft said that "at a minimum" China will create and distribute through social media AI-generated content that "benefits their positions in these high-profile elections." The company added that the impact of AI-made content was minor but warned that could change. "While the impact of such content in swaying audiences remains low, China's increasing experimentation in augmenting memes, videos and audio will continue -- and may prove effective down the line," said Microsoft. Microsoft said in the report that China had already attempted an AI-generated disinformation campaign in the Taiwan presidential election in January. The company said this was the first time it had seen a state-backed entity using AI-made content in a bid to influence a foreign election.

UPDATE: Last fall, America's State Department "accused the Chinese government of spending billions of dollars annually on a global campaign of disinformation," reports the Wall Street Journal: In an interview, Tom Burt, Microsoft's head of customer security and trust, said China's disinformation operations have become much more active in the past six months, mirroring rising activity of cyberattacks linked to Beijing. "We're seeing them experiment," Burt said. "I'm worried about where it might go next."
The Internet

FCC To Vote To Restore Net Neutrality Rules (reuters.com) 60

An anonymous reader quotes a report from Reuters: The U.S. Federal Communications Commission will vote to reinstate landmark net neutrality rules and assume new regulatory oversight of broadband internet that was rescinded under former President Donald Trump, the agency's chair said. The FCC told advocates on Tuesday of the plan to vote on the final rule at its April 25 meeting. The commission voted 3-2 in October on the proposal to reinstate open internet rules adopted in 2015 and re-establish the commission's authority over broadband internet.

Net neutrality refers to the principle that internet service providers should enable access to all content and applications regardless of the source, and without favoring or blocking particular products or websites. FCC Chair Jessica Rosenworcel confirmed the planned commission vote in an interview with Reuters. "The pandemic made clear that broadband is an essential service, that every one of us -- no matter who we are or where we live -- needs it to have a fair shot at success in the digital age," she said. "An essential service requires oversight and in this case we are just putting back in place the rules that have already been court-approved that ensures that broadband access is fast, open and fair."

Government

Can Apps Turn Us Into Unpaid Lobbyists? (msn.com) 73

"Today's most effective corporate lobbying no longer involves wooing members of Congress..." writes the Wall Street Journal. Instead, the lobbying sector "now works in secret to influence lawmakers with the help of an unlikely ally: you." [Lobbyists] teamed up with PR gurus, social-media experts, political pollsters, data analysts and grassroots organizers to foment seemingly organic public outcries designed to pressure lawmakers and compel them to take actions that would benefit the lobbyists' corporate clients...

By the middle of 2011, an army of lobbyists working for the pillars of the corporate lobbying establishment — the major movie studios, the music industry, pharmaceutical manufacturers and the U.S. Chamber of Commerce — were executing a nearly $100 million campaign to win approval for the internet bill [the PROTECT IP Act, or "PIPA"]. They pressured scores of lawmakers to co-sponsor the legislation. At one point, 99 of the 100 members of the U.S. Senate appeared ready to support it — an astounding number, given that most bills have just a handful of co-sponsors before they are called up for a vote. When lobbyists for Google and its allies went to Capitol Hill, they made little headway. Against such well-financed and influential opponents, the futility of the traditional lobbying approach became clear. If tech companies were going to turn back the anti-piracy bills, they would need to find another way.

It was around this time that one of Google's Washington strategists suggested an alternative strategy. "Let's rally our users," Adam Kovacevich, then 34 and a senior member of Google's Washington office, told colleagues. Kovacevich turned Google's opposition to the anti-piracy legislation into a coast-to-coast political influence effort with all the bells and whistles of a presidential campaign. The goal: to whip up enough opposition to the legislation among ordinary Americans that Congress would be forced to abandon the effort... The campaign slogan they settled on — "Don't Kill the Internet" — exaggerated the likely impact of the bill, but it succeeded in stirring apprehension among web users.

The coup de grace came on Jan. 18, 2012, when Google and its allies pulled off the mother of all outside influence campaigns. When users logged on to the web that day, they discovered, to their great frustration, that many of the sites they'd come to rely on — Wikipedia, Reddit, Craigslist — were either blacked out or displayed text outlining the detrimental impacts of the proposed legislation. For its part, Google inserted a black censorship bar over its multicolored logo and posted a tool that enabled users to contact their elected representatives. "Tell Congress: Please don't censor the web!" a message on Google's home page read. With some 115,000 websites taking part, the protest achieved a staggering reach. Tens of millions of people visited Wikipedia's blacked-out website, 4.5 million users signed a Google petition opposing the legislation, and more than 2.4 million people took to Twitter to express their views on the bills. "We must stop [these bills] to keep the web open & free," the reality TV star Kim Kardashian wrote in a tweet to her 10 million followers...

Within two days, the legislation was dead...

Over the following decade, outside influence tactics would become the cornerstone of Washington's lobbying industry — and they remain so today.

"The 2012 effort is considered the most successful consumer mobilization in the history of internet policy," writes the Washington Post — agreeing that it's since spawned more app-based, crowdsourced lobbying campaigns. Sites like Airbnb "have also repeatedly asked their users to oppose city government restrictions on the apps." Uber, Lyft, DoorDash and other gig work companies also blitzed the apps' users with scenarios of higher prices or suspended service unless people voted for a 2020 California ballot measure on contract workers. Voters approved it.

The Wall Street Journal also details how lobbyists successfully killed higher taxes for tobacco products, the oil-and-gas industry, and even private-equity investors — and notes similar tactics were used against a bill targeting TikTok. "Some say the campaign backfired. Lawmakers complained that the effort showed how the Chinese government could co-opt internet users to do their bidding in the U.S., and the House of Representatives voted to ban the app if its owners did not agree to sell it.

"TikTok's lobbyists said they were pleased with the effort. They persuaded 65 members of the House to vote in favor of the company and are confident that the Senate will block the effort."

The Journal's article was adapted from an upcoming book titled "The Wolves of K Street: The Secret History of How Big Money Took Over Big Government." But the Washington Post argues the phenomenon raises two questions. "How much do you want technology companies to turn you into their lobbyists? And what's in it for you?"
AI

Hillary Clinton, Election Officials Warn AI Could Threaten Elections (wsj.com) 255

Hillary Clinton and U.S. election officials said they are concerned disinformation generated and spread by AI could threaten the 2024 presidential election [non-paywalled link]. WSJ: Clinton, a former secretary of state and 2016 presidential candidate, said she thinks foreign actors like Russian President Vladimir Putin could use AI to interfere in elections in the U.S. and elsewhere. Dozens of countries are running elections this year. "Anybody who's not worried is not paying attention," Clinton said Thursday at Columbia University, where election officials and tech executives discussed how AI could impact global elections.

She added: "It could only be a very small handful of people in St. Petersburg or Moldova or wherever they are right now who are lighting the fire, but because of the algorithms everyone gets burned." Clinton said Putin tried to undermine her before the 2016 election by spreading disinformation on Facebook, Twitter and Snapchat about "all these terrible things" she purportedly did. "I don't think any of us understood it," she said. "I did not understand it. I can tell you my campaign did not understand it. The so-called dark web was filled with these kinds of memes and stories and videos of all sorts portraying me in all kinds of less than flattering ways." Clinton added: "What they did to me was primitive and what we're talking about now is the leap in technology."

Social Networks

Users Shocked To Find Instagram Limits Political Content By Default (arstechnica.com) 58

Instagram has been limiting recommended political content by default without notifying users. Ars Technica reports: Instead, Instagram rolled out the change in February, announcing in a blog that the platform doesn't "want to proactively recommend political content from accounts you don't follow." That post confirmed that Meta "won't proactively recommend content about politics on recommendation surfaces across Instagram and Threads," so that those platforms can remain "a great experience for everyone." "This change does not impact posts from accounts people choose to follow; it impacts what the system recommends, and people can control if they want more," Meta's spokesperson Dani Lever told Ars. "We have been working for years to show people less political content based on what they told us they want, and what posts they told us are political."

To change the setting, users can navigate to Instagram's menu for "settings and activity" in their profiles, where they can update their "content preferences." On this menu, "political content" is the last item under a list of "suggested content" controls that allow users to set preferences for what content is recommended in their feeds. There are currently two options for controlling what political content users see. Choosing "don't limit" means "you might see more political or social topics in your suggested content," the app says. By default, all users are set to "limit," which means "you might see less political or social topics." "This affects suggestions in Explore, Reels, Feed, Recommendations, and Suggested Users," Instagram's settings menu explains. "It does not affect content from accounts you follow. This setting also applies to Threads."
"Did [y'all] know Instagram was actively limiting the reach of political content like this?!" an X user named Olayemi Olurin wrote in an X post. "I had no idea 'til I saw this comment and I checked my settings and sho nuff political content was limited."

"This is actually kinda wild that Instagram defaults everyone to this," another user wrote. "Obviously political content is toxic but during an election season it's a little weird to just hide it from everyone?"
Censorship

India Will Fact-Check Online Posts About Government Matters (techcrunch.com) 32

An anonymous reader quotes a report from TechCrunch: In India, a government-run agency will now monitor and undertake fact-checking for government-related matters on social media, even as tech giants expressed grave concerns about it last year. The Ministry of Electronics and IT on Wednesday wrote in a gazette notification that it is amending the IT Rules 2021 to cement into law the proposal to make the fact-checking unit of the Press Information Bureau the dedicated arbiter of truth for New Delhi matters. Tech companies as well as other firms that serve more than 5 million users in India will be required to "make reasonable efforts" to not display, store, transmit or otherwise share information that deceives or misleads users about matters pertaining to the government, the IT ministry said. India's move comes just weeks ahead of the general elections in the country. Relying on a government agency such as the Press Information Bureau as the sole source to fact-check government business without giving it a clear definition or providing clear checks and balances "may lead to misuse during implementation of the law, which will profoundly infringe on press freedom," Asia Internet Coalition, an industry group that represents Meta, Amazon, Google and Apple, cautioned last year.

Meanwhile, comedian Kunal Kamra, with support from the Editors Guild of India, cautioned that the move could create an environment that forces social media firms to welcome "a regime of self-interested censorship."
Medicine

5-Year Study Finds No Brain Abnormalities In 'Havana Syndrome' Patients (www.cbc.ca) 38

An anonymous reader quotes a report from CBC News: An array of advanced tests found no brain injuries or degeneration among U.S. diplomats and other government employees who suffer mysterious health problems once dubbed "Havana syndrome," researchers reported Monday. The National Institutes of Health's (NIH) nearly five-year study offers no explanation for symptoms including headaches, balance problems and difficulties with thinking and sleep that were first reported in Cuba in 2016 and later by hundreds of American personnel in multiple countries. But it did contradict some earlier findings that raised the spectre of brain injuries in people experiencing what the State Department now calls "anomalous health incidents."

"These individuals have real symptoms and are going through a very tough time," said Dr. Leighton Chan, NIH's chief of rehabilitation medicine, who helped lead the research. "They can be quite profound, disabling and difficult to treat." Yet sophisticated MRI scans detected no significant differences in brain volume, structure or white matter -- signs of injury or degeneration -- when Havana syndrome patients were compared to healthy government workers with similar jobs, including some in the same embassy. Nor were there significant differences in cognitive and other tests, according to findings published in the Journal of the American Medical Association.

China

CIA Used Chinese Social Media In Covert Influence Operation Against Xi Jinping's Government (reuters.com) 114

An anonymous reader quotes a report from Reuters: Two years into office, President Donald Trump authorized the Central Intelligence Agency to launch a clandestine campaign on Chinese social media aimed at turning public opinion in China against its government, according to former U.S. officials with direct knowledge of the highly classified operation. Three former officials told Reuters that the CIA created a small team of operatives who used bogus internet identities to spread negative narratives about Xi Jinping's government while leaking disparaging intelligence to overseas news outlets. The effort, which began in 2019, has not been previously reported.

The CIA team promoted allegations that members of the ruling Communist Party were hiding ill-gotten money overseas and slammed as corrupt and wasteful China's Belt and Road Initiative, which provides financing for infrastructure projects in the developing world, the sources told Reuters. Although the U.S. officials declined to provide specific details of these operations, they said the disparaging narratives were based in fact despite being secretly released by intelligence operatives under false cover. The efforts within China were intended to foment paranoia among top leaders there, forcing its government to expend resources chasing intrusions into Beijing's tightly controlled internet, two former officials said. "We wanted them chasing ghosts," one of these former officials said. [...]

The CIA operation came in response to years of aggressive covert efforts by China aimed at increasing its global influence, the sources said. During his presidency, Trump pushed a tougher response to China than had his predecessors. The CIA's campaign signaled a return to methods that marked Washington's struggle with the former Soviet Union. "The Cold War is back," said Tim Weiner, author of a book on the history of political warfare. Reuters was unable to determine the impact of the secret operations or whether the administration of President Joe Biden has maintained the CIA program.

Google

Google Restricts AI Chatbot Gemini From Answering Queries on Global Elections (reuters.com) 53

Google is restricting AI chatbot Gemini from answering questions about the global elections set to happen this year, the Alphabet-owned firm said on Tuesday, as it looks to avoid potential missteps in the deployment of the technology. From a report: The update comes at a time when advancements in generative AI, including image and video generation, have fanned concerns of misinformation and fake news among the public, prompting governments to regulate the technology.

When asked about elections such as the upcoming U.S. presidential match-up between Joe Biden and Donald Trump, Gemini responds with "I'm still learning how to answer this question. In the meantime, try Google Search". Google had announced restrictions within the U.S. in December, saying they would come into effect ahead of the election. "In preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we are restricting the types of election-related queries for which Gemini will return responses," a company spokesperson said on Tuesday.

Canada

Police Now Need Warrant For IP Addresses, Canada's Top Court Rules (www.cbc.ca) 36

The Supreme Court of Canada ruled today that police must now have a warrant or court order to obtain a person or organization's IP address. CBC News reports: The top court was asked to consider whether an IP address alone, without any of the personal information attached to it, was protected by an expectation of privacy under the Charter. In a five-four split decision, the court said a reasonable expectation of privacy is attached to the numbers making up a person's IP address, and just getting those numbers alone constitutes a search. Writing for the majority, Justice Andromache Karakatsanis wrote that an IP address is "the crucial link between an internet user and their online activity." "Thus, the subject matter of this search was the information these IP addresses could reveal about specific internet users including, ultimately, their identity." Writing for the four dissenting judges, Justice Suzanne Cote disagreed with that central point, saying there should be no expectation of privacy around an IP address alone. [...]

In the Supreme Court majority decision, Karakatsanis said that only considering the information associated with an IP address to be protected by the Charter and not the IP address itself "reflects piecemeal reasoning" that ignores the broad purpose of the Charter. The ruling said the privacy interests cannot be limited to what the IP address can reveal on its own "without consideration of what it can reveal in combination with other available information, particularly from third-party websites." It went on to say that because an IP address unlocks a user's identity, it comes with a reasonable expectation of privacy and is therefore protected by the Charter. "If [the Charter] is to meaningfully protect the online privacy of Canadians in today's overwhelmingly digital world, it must protect their IP addresses," the ruling said.

Justice Cote, writing on behalf of justices Richard Wagner, Malcolm Rowe and Michelle O'Bonsawin, acknowledged that IP addresses "are not sought for their own sake" but are "sought for the information they reveal." "However, the evidentiary record in this case establishes that an IP address, on its own, reveals only limited information," she wrote. Cote said the biographical personal information the law was designed to protect are not revealed through having access to an IP address. Police must use that IP address to access personal information that is held by an ISP or a website that tracks customers' IP addresses to determine their habits. "On its own, an IP address does not even reveal browsing habits," Cote wrote. "What it reveals is a user's ISP -- hardly a more private piece of information than electricity usage or heat emissions." Cote said placing a reasonable expectation of privacy on an IP address alone upsets the careful balance the Supreme Court has struck between Canadians' privacy interests and the needs of law enforcement. "It would be inconsistent with a functional approach to defining the subject matter of the search to effectively hold that any step taken in an investigation engages a reasonable expectation of privacy," the dissenting opinion said.

AI

Scientists Propose AI Apocalypse Kill Switches 104

A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions including OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year, US President Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn that executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
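The registry the researchers propose is essentially a custody ledger keyed by a unique per-chip identifier. As a rough illustration only — the class, method names, and chip identifiers below are invented for this sketch, not taken from the paper — it might look like:

```python
class ChipRegistry:
    """Toy global registry: one custody chain per unique chip identifier."""

    def __init__(self) -> None:
        self._custody: dict[str, list[str]] = {}

    def register(self, chip_id: str, first_owner: str) -> None:
        """Record a chip at manufacture time, starting its custody chain."""
        self._custody[chip_id] = [first_owner]

    def transfer(self, chip_id: str, new_owner: str) -> None:
        """Append a change of ownership; an unknown identifier is suspicious."""
        if chip_id not in self._custody:
            # A transfer of a never-registered chip is the kind of anomaly
            # the authors suggest could flag smuggled components.
            raise KeyError(f"chip {chip_id!r} was never registered")
        self._custody[chip_id].append(new_owner)

    def history(self, chip_id: str) -> list[str]:
        """Return the full chain of owners over the chip's lifecycle."""
        return list(self._custody[chip_id])
```

The useful property is simply that every legitimate chip has an unbroken chain from its country of origin onward, so a component surfacing with no registered history stands out.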

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital "certificate," and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication is that, if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
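The quoted licensing scheme boils down to two checks performed on the chip: is the certificate genuinely from the regulator, and is it still within its validity window? A minimal sketch of that logic, using an HMAC with a shared key as a stand-in for the asymmetric signature a real design would use, and with all names and values invented for illustration:

```python
import hashlib
import hmac

# Stand-in shared secret. In the scheme described, the chip would instead
# hold the regulator's public key and verify an asymmetric signature.
REGULATOR_KEY = b"regulator-signing-key"

def sign_license(chip_id: str, expires_at: float) -> bytes:
    """Regulator side: issue a signed 'certificate' binding a chip to an expiry time."""
    msg = f"{chip_id}|{expires_at}".encode()
    return hmac.new(REGULATOR_KEY, msg, hashlib.sha256).digest()

def chip_enabled(chip_id: str, expires_at: float, sig: bytes, now: float) -> bool:
    """On-chip co-processor side: reject forged or expired licenses."""
    expected = sign_license(chip_id, expires_at)
    if not hmac.compare_digest(expected, sig):
        return False  # illegitimate license: chip refuses to run
    return now < expires_at  # expired license: disable (or, in the paper, throttle)
```

Periodic renewal then just means the regulator re-issuing a certificate with a later `expires_at` before the current one lapses; a chip cut off from renewals stops working on its own, which is what makes the mechanism a remote kill switch.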

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe that this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea being that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
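Stripped of the nuclear analogy, the permissive-action-link idea reduces to a quorum check gated on a compute threshold: small runs proceed freely, while runs above the threshold need sign-off from several distinct authorized parties. A sketch, where the threshold value and the party names are purely hypothetical:

```python
# Hypothetical cutoff; the real threshold would be a policy decision.
COMPUTE_THRESHOLD_FLOPS = 1e26

def may_launch_training(
    total_flops: float,
    approvals: set[str],
    authorized_signers: set[str],
    quorum: int,
) -> bool:
    """Allow a training run below the threshold, or above it only with
    quorum sign-off from distinct authorized parties."""
    if total_flops < COMPUTE_THRESHOLD_FLOPS:
        return True  # below threshold: no multi-party authorization needed
    # Only approvals from the authorized set count toward the quorum,
    # so a single rogue actor (or forged approvals) cannot unlock the run.
    return len(approvals & authorized_signers) >= quorum
```

The intersection with `authorized_signers` is the "no one person can turn the key" property; set membership ensures duplicate approvals from the same party count only once.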
The Courts

New Bill Would Let Defendants Inspect Algorithms Used Against Them In Court (theverge.com) 47

Lauren Feiner reports via The Verge: Reps. Mark Takano (D-CA) and Dwight Evans (D-PA) reintroduced the Justice in Forensic Algorithms Act on Thursday, which would allow defendants to access the source code of software used to analyze evidence in their criminal proceedings. It would also require the National Institute of Standards and Technology (NIST) to create testing standards for forensic algorithms, which software used by federal enforcers would need to meet.

The bill would act as a check on unintended outcomes that could be created by using technology to help solve crimes. Academic research has highlighted the ways human bias can be built into software and how facial recognition systems often struggle to differentiate Black faces, in particular. The use of algorithms to make consequential decisions in many different sectors, including both crime-solving and health care, has raised alarms for consumers and advocates as a result of such research.

Takano acknowledged that gaining or hiring the deep expertise needed to analyze the source code might not be possible for every defendant. But requiring NIST to create standards for the tools could at least give them a starting point for understanding whether a program matches the basic standards. Takano introduced previous iterations of the bill in 2019 and 2021, but they were not taken up by a committee.

AI

Tech Companies Plan To Sign Accord To Combat AI-Generated Election Trickery (go.com) 82

At least six major tech companies, including Adobe, Google, Meta, Microsoft, OpenAI and TikTok, plan to sign an agreement this week that details how they'll attempt to stop the use of AI-generated election misinformation and deepfakes. ABC News reports: "In a critical year for global elections, technology companies are working on an accord to combat the deceptive use of AI targeted at voters," said a joint statement from several companies Tuesday. "Adobe, Google, Meta, Microsoft, OpenAI, TikTok and others are working jointly toward progress on this shared objective and we hope to finalize and present details on Friday at the Munich Security Conference."

The companies declined to share details of what's in the agreement. Many have already said they're putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they're seeing is real.

Social Networks

Instagram and Threads Will Stop Recommending Political Content (theverge.com) 19

In a blog post today, Meta announced that it'll stop showing political content across Instagram and Threads unless users explicitly choose to have it recommended to them. The Verge reports: Meta announced that it's expanding an existing Reels policy that limits political content from people you're not following (including posts about social issues) from appearing in recommended feeds to more broadly cover the company's Threads and Instagram platforms. "Our goal is to preserve the ability for people to choose to interact with political content, while respecting each person's appetite for it," said Instagram head Adam Mosseri, announcing on Threads that the changes will be applied over the next few weeks. Facebook is also expected to roll out these new controls at a later, undisclosed date.

Users who still want to have content "likely to mention governments, elections, or social topics that affect a group of people and/or society at large" recommended to them can choose to turn off this limitation within their account settings. The changes will apply to public accounts when enabled and only in places where content is being recommended, such as Explore, Reels, in-feed recommendations, and suggested users. The update won't change how users view content from accounts they choose to follow, so accounts that aren't eligible to be recommended can still post political content to their followers via their feed and Stories.

For creators, Meta says that "if your account is not eligible to be recommended, none of your content will be recommended regardless of whether or not all of your content goes against our recommendations guidelines." When these changes do go live, professional accounts on Instagram will be able to use the Account Status feature to check if posting political content is impacting their eligibility for recommendation. Professional accounts can also use Account Status to contest decisions that revoke this eligibility, alongside editing, removing, or pausing politically related posts until the account is eligible to be recommended again.

AI

Commerce Secretary 'Very Worried' About AI Being Used Nefariously in 2024 Election (go.com) 60

Commerce Secretary Gina Raimondo said she is "very worried" about AI being used nefariously in the 2024 election, she told reporters at a press conference in Washington, D.C. on Thursday. From a report: "AI can do amazing things and AI can disrupt our elections, here and around the world," she said. "We're already starting to see it." Raimondo was asked by ABC News about the robocall sent on the day of the New Hampshire primary purporting to be from President Biden and spreading misinformation about voting times.

She said the government is going to work "extensively" to start putting out AI framework that helps people -- including journalists -- be able to decipher what is real and what is fake. The Commerce Secretary added that AI companies want to do the right thing based on her conversations with them. "Am I worried? Yes," she said. "Do I think we have the tools to protect our election and our democracy? Yes. Do I feel based on my interactions with the private sector that they want to do the right thing? By and large, Yes. It's a big threat."

The Internet

Pakistan Cuts Off Phone and Internet Services On Election Day (techcrunch.com) 36

An anonymous reader quotes a report from TechCrunch: Pakistan has temporarily suspended mobile phone network and internet services across the country to combat any "possible threats," the country's interior ministry said, as the South Asian nation commences its national election. In a statement, the ministry said the move was prompted by recent incidents of terrorism in the country. The internet was accessible through wired broadband connections, local journalists posted on X earlier Thursday. But NetBlocks, an independent service that tracks outages, said later that Pakistan had started to block internet services as well. Polls have opened in the nation and will close at 5 p.m. The interior ministry didn't say when it will restore mobile service.

AI

OpenAI Suspends Developer Behind Dean Phillips Bot 36

theodp writes: OpenAI has banned the developer of a bot that mimicked Democratic White House hopeful Rep. Dean Phillips, the first known instance in which the maker of ChatGPT has restricted the use of AI in political campaigns. OpenAI suspended the account of the start-up Delphi, which had been contracted to build Dean.Bot, a bot that could talk to voters in real time via a website.

"Anyone who builds with our tools must follow our usage policies," a spokesperson for OpenAI said in a statement shared with Axios on Sunday. "We recently removed a developer account that was knowingly violating our API usage policies which disallow political campaigning, or impersonating an individual without consent." OpenAI apparently is not a fan of Richard Stallman's 'freedom 0' tenet, which argues software users should have the freedom to run programs as they wish, in order to do what they wish (Stallman is careful to note this freedom doesn't make one exempt from laws).

The suspension and subsequent bot removal occurred ahead of Tuesday's New Hampshire primary, where Phillips continues his long-shot presidential bid against President Biden.

Censorship

Removal of Netflix Film Shows Advancing Power of India's Hindu Right Wing (nytimes.com) 110

An anonymous reader quotes a report from the New York Times: The trailer for "Annapoorani: The Goddess of Food" promised a sunny if melodramatic story of uplift in a south Indian temple town. A priest's daughter enters a cooking tournament, but social obstacles complicate her inevitable rise to the top. Annapoorani's father, a Brahmin sitting at the top of Hindu society's caste ladder, doesn't want her to cook meat, a taboo in their lineage. There is even the hint of a Hindu-Muslim romantic subplot. On Thursday, two weeks after the movie premiered, Netflix abruptly pulled it from its platform. An activist, Ramesh Solanki, a self-described "very proud Hindu Indian nationalist," had filed a police complaint arguing that the film was "intentionally released to hurt Hindu sentiments." He said it mocked Hinduism by "depicting our gods consuming nonvegetarian food."

The production studio quickly responded with an abject letter to a right-wing group linked to the government of Prime Minister Narendra Modi, apologizing for having "hurt the religious sentiments of the Hindus and Brahmins community." The movie was soon removed from Netflix both in India and around the world, demonstrating the newfound power of Hindu nationalists to affect how Indian society is depicted on the screen. Nilesh Krishnaa, the movie's writer and director, tried to anticipate the possibility of offending some of his fellow Indians. Food, Brahminical customs and especially Hindu-Muslim relations are all part of a third rail that has grown more powerfully electrified during Mr. Modi's decade in power. But, Mr. Krishnaa told an Indian newspaper in November, "if there was something disturbing communal harmony in the film, the censor board would not have allowed it."

With "Annapoorani," Netflix appears to have in effect done the censoring itself even when the censor board did not. In other cases, Netflix now seems to be working with the board unofficially, though streaming services in India do not fall under the regulations that govern traditional Indian cinema. For years, Netflix ran unredacted versions of Indian films that had sensitive parts removed for their theatrical releases -- including political messages that contradicted the government's line. Since last year, though, the streaming versions of movies from India match the versions that were censored locally, no matter where in the world they are viewed. [...] Nikhil Pahwa, a co-founder of the Internet Freedom Foundation, thinks the streaming companies are ready to capitulate: "They're unlikely to push back against any kind of bullying or censorship, even though there is no law in India" to force them.

Republicans

FCC Plans Shutdown of Affordable Connectivity Program As GOP Withholds Funding (arstechnica.com) 134

An anonymous reader quotes a report from Ars Technica: The Federal Communications Commission is about to start winding down a program that gives $30 monthly broadband discounts to people with low incomes, and says it will have to complete the shutdown by May if Congress doesn't provide more funding. The 2-year-old Affordable Connectivity Program (ACP) was created by Congress, and Democrats have been pushing for more funding to keep it going. But Republican members of Congress blasted the ACP last month, accusing the FCC of being "wasteful."

In a letter, GOP lawmakers complained that most of the households receiving the subsidy already had broadband service before the program existed. They threatened to withhold funding and criticized what they called the "Biden administration's reckless spending spree." The letter was sent by the highest-ranking Republicans on committees with oversight responsibility over the ACP, namely Sen. John Thune (R-SD), Sen. Ted Cruz (R-Texas), Rep. Cathy McMorris Rodgers (R-Wash.), and Rep. Bob Latta (R-Ohio). With no resolution in sight, the FCC announced that it would have to start sending out notices about the program's expected demise. "With less than four months before the projected program end date and without any immediate additional funding, this week the Commission expects to begin taking steps to start winding down the program to give households, providers, and other stakeholders sufficient time to prepare," the FCC said in an announcement yesterday.

The Biden administration has requested $6 billion to fund the program through December 2024. As of now, the FCC said it "expects funding to last through April 2024, running out completely in May." FCC Chairwoman Jessica Rosenworcel has repeatedly asked Congress for more ACP funding, and sent a letter (PDF) to lawmakers yesterday in which she repeated her plea. The chairwoman's letter said that 23 million households are enrolled in the discount program. [...] Rosenworcel warned that the impending ACP shutoff "would undermine the historic $42.5 billion Broadband Equity, Access, and Deployment Program," a different program created by Congress to subsidize ISPs' expansion of broadband networks throughout the US. The discount and deployment programs complement each other because "the ACP supports a stable customer base to help incentivize deployment in rural areas," Rosenworcel wrote.

Government

Biden Administration To Unveil Contractor Rule Set To Upend Gig Economy (reuters.com) 213

An anonymous reader quotes a report from Reuters: The administration of U.S. President Joe Biden will release a final rule as soon as this week that will make it more difficult for companies to treat workers as independent contractors rather than employees, who typically cost a company more, an administration official said. The U.S. Department of Labor rule, which was first proposed in 2022 and is likely to face legal challenges, will require that workers be considered employees entitled to more benefits and legal protections than contractors when they are "economically dependent" on a company.

A range of industries will likely be affected by the rule, which will take effect later this year, but its potential impact on app-based services that rely heavily on contract workers has garnered the most attention. Shares of Uber, Lyft and DoorDash all tumbled at least 10% when the draft rule was proposed in October 2022. The rule is among regulations with the most far-reaching impacts issued by the Labor Department office that enforces U.S. wage laws, according to Marc Freedman, vice president at the U.S. Chamber of Commerce, the largest U.S. business lobby. But he said the draft version of the rule provides little guidance to companies on where to draw the line between employees and contractors. "Economic dependence is an elusive concept that in some cases may end up being defined by the eyes of the beholder," Freedman said.

The Labor Department in the proposed rule said it would consider factors such as a worker's "opportunity for profit or loss, investment, permanency, the degree of control by the employer over the worker, (and) whether the work is an integral part of the employer's business." The rule replaces a Trump administration regulation that said workers who own their own businesses or have the ability to work for competing companies, such as a driver who works for Uber and Lyft, can be treated as contractors. [...] The Biden administration has said the Trump-era rule violated U.S. wage laws and was out of step with decades of federal court decisions, and worker advocates have said a more strict standard was necessary to combat the rampant misclassification of workers in some industries.
