Social Networks

Meta Is Tagging Real Photos As 'Made With AI,' Say Photographers (techcrunch.com) 25

Since May, Meta has been labeling photos created with AI tools on its social networks to help users better identify the content they're consuming. However, as TechCrunch's Ivan Mehta reports, this approach has faced criticism: many photos not created using AI tools have been incorrectly labeled, prompting Meta to reevaluate its labeling strategy to better reflect the actual use of AI in images. From the report: There are plenty of examples of Meta automatically attaching the label to photos that were not created through AI -- for example, a photo of the Kolkata Knight Riders winning the Indian Premier League cricket tournament. Notably, the label is only visible on the mobile apps and not on the web. Plenty of other photographers have raised concerns over their images being wrongly tagged with the "Made with AI" label. Their point is that simply editing a photo with a tool should not trigger the label.

Former White House photographer Pete Souza said in an Instagram post that one of his photos was tagged with the new label. Souza told TechCrunch in an email that Adobe changed how its cropping tool works, so that you have to "flatten the image" before saving it as a JPEG. He suspects this action triggered Meta's algorithm to attach the label. "What's annoying is that the post forced me to include the 'Made with AI' even though I unchecked it," Souza told TechCrunch.

Meta would not answer TechCrunch's questions on the record about Souza's experience, or about other photographers who said their posts were incorrectly tagged. However, after the story was published, Meta said it is evaluating its approach so that its labels reflect the amount of AI used in an image. "Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image," a Meta spokesperson told TechCrunch.
"For now, Meta provides no separate labels to indicate if a photographer used a tool to clean up their photo, or used AI to create it," notes TechCrunch. "For users, it might be hard to understand how much AI was involved in a photo."

"Meta's label specifies that 'Generative AI may have been used to create or edit content in this post' -- but only if you tap on the label. Despite this approach, there are plenty of photos on Meta's platforms that are clearly AI-generated, and Meta's algorithm hasn't labeled them."
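Reporting at the time suggested the false positives stem from embedded provenance metadata: editing tools can write IPTC `DigitalSourceType` values into an image's XMP whenever a generative feature touches it, and a labeler keying on that field would flag a lightly retouched photo the same as a fully synthetic one. As a rough illustration only -- this is a toy sketch of that kind of metadata check, not Meta's actual pipeline, and the scanning approach is an assumption:

```python
# Toy sketch: flag an image as "AI-touched" by scanning its bytes for the
# IPTC DigitalSourceType values that editing tools may embed in XMP metadata.
# Illustrative only -- NOT Meta's actual detection logic.

AI_SOURCE_TYPES = (
    b"trainedAlgorithmicMedia",               # fully AI-generated
    b"compositeWithTrainedAlgorithmicMedia",  # edited with generative tools
)

def looks_ai_tagged(image_bytes: bytes) -> bool:
    """Return True if any known AI-related DigitalSourceType appears."""
    return any(marker in image_bytes for marker in AI_SOURCE_TYPES)

# A minimal fake JPEG payload with an XMP snippet, for illustration only.
fake_xmp = (b"\xff\xd8...<Iptc4xmpExt:DigitalSourceType>"
            b"http://cv.iptc.org/newscodes/digitalsourcetype/"
            b"compositeWithTrainedAlgorithmicMedia"
            b"</Iptc4xmpExt:DigitalSourceType>...")
print(looks_ai_tagged(fake_xmp))                   # → True
print(looks_ai_tagged(b"\xff\xd8 plain photo"))    # → False
```

A binary check like this cannot distinguish a crop-and-flatten from a wholesale generation, which is exactly the granularity problem photographers are complaining about.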
China

Launch of Chinese-French Satellite Scattered Debris Over Populated Area (spacenews.com) 45

"A Chinese launch of the joint Sino-French SVOM mission to study gamma-ray bursts early Saturday saw toxic rocket debris fall over a populated area..." writes Space News: SVOM is a collaboration between the China National Space Administration (CNSA) and France's Centre national d'études spatiales (CNES). The mission will look for high-energy electromagnetic radiation from gamma-ray bursts in the X-ray and gamma-ray ranges using two French and two Chinese-developed science payloads... Studying gamma-ray bursts, thought to be caused by the death of massive stars or collisions between stars, could provide answers to key questions in astrophysics, including the death of stars and the creation of black holes.

However, the launch of SVOM also created an explosion of its own closer to home. A video posted on the Chinese social media site Sina Weibo appears to show a rocket booster falling on a populated area with people running for cover. The booster fell to Earth near Guiding County, Qiandongnan Prefecture in Guizhou province, according to another post...

A number of comments on the video noted the danger posed by the hypergolic propellant from the Long March rocket... The Long March 2C uses a toxic, hypergolic mix of nitrogen tetroxide and unsymmetrical dimethylhydrazine (UDMH). Reddish-brown gas or smoke from the booster could be indicative of nitrogen tetroxide, while a yellowish gas could be caused by hydrazine fuel mixing with air. Contact with either remaining fuel or oxidizer from the rocket stage could be very harmful to individuals.

"Falling rocket debris is a common issue with China's launches from its three inland launch sites..." the article points out.

"Authorities are understood to issue warnings and evacuation notices for areas calculated to be at risk from launch debris, reducing the risk of injuries."
Social Networks

TikTok Confirms It Offered US Government a 'Kill Switch' (bbc.com) 36

TikTok revealed it offered the U.S. government a "kill switch" in 2022 to address data protection and national security concerns, allowing the government to shut down the platform if it violated certain rules. The disclosure was made as it began its legal fight against legislation that will require ByteDance to divest TikTok's U.S. assets or face a ban. The BBC reports: "This law is a radical departure from this country's tradition of championing an open Internet, and sets a dangerous precedent allowing the political branches to target a disfavored speech platform and force it to sell or be shut down," they argued in their legal submission. They also claimed the US government refused to engage in any serious settlement talks after 2022, and pointed to the "kill switch" offer as evidence of the lengths to which they had been prepared to go.

TikTok says the mechanism would have given the government the "explicit authority to suspend the platform in the United States at the US government's sole discretion" if it did not follow certain rules. A draft "National Security Agreement", proposed by TikTok in August 2022, would have required the company to follow rules such as properly funding its data protection units and making sure that ByteDance did not have access to US users' data. The "kill switch" could have been triggered by the government if TikTok broke this agreement, the company claimed.

In a letter - first reported by the Washington Post - addressed to the US Department of Justice, TikTok's lawyer alleges that the government "ceased any substantive negotiations" after the proposal of the new rules. The letter, dated 1 April 2024, says the US government ignored requests to meet for further negotiations. It also alleges the government did not respond to TikTok's invitation to "visit and inspect its Dedicated Transparency Center in Maryland."
Further reading: TikTok Says US Ban Inevitable Without a Court Order Blocking Law
Social Networks

Meta Releases Threads API For Developers To Build 'Unique Integrations' (theverge.com) 14

Meta has released the Threads API for developers to build "unique integrations" into the text-based conversation app. The move could pave the way for third-party Threads apps. The Verge reports: "People can now publish posts via the API, fetch their own content, and leverage our reply management capabilities to set reply and quote controls, retrieve replies to their posts, hide, unhide or respond to specific replies," explains Jesse Chen, director of engineering at Threads.

Chen says that insights into Threads posts are "one of our top requested features for the API," so Meta is allowing developers to see the number of views, likes, replies, reposts, and quotes on Threads posts through the API. Meta has published plenty of documentation about how developers can get started with the Threads API, and there's even an open-source Threads API sample app on GitHub.
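For a sense of what publishing through the API involves, here is a minimal sketch of the two-step flow (create a media container, then publish it) as I understand it from Meta's developer documentation. The endpoint paths and parameter names below are my best understanding, not verified against a live account -- check them against the official Threads API reference before relying on them.

```python
# Sketch of the Threads API two-step publish flow: step 1 creates a media
# container, step 2 publishes it. This builds the requests without sending
# them; endpoint/parameter names are assumptions to verify against the docs.

THREADS_GRAPH = "https://graph.threads.net/v1.0"

def build_publish_requests(user_id: str, text: str, token: str):
    """Return (url, params) pairs for the create and publish steps."""
    create = (f"{THREADS_GRAPH}/{user_id}/threads",
              {"media_type": "TEXT", "text": text, "access_token": token})
    # The container ID returned by the first call is passed as creation_id;
    # "<id from step 1>" is a placeholder for that response value.
    publish = (f"{THREADS_GRAPH}/{user_id}/threads_publish",
               {"creation_id": "<id from step 1>", "access_token": token})
    return create, publish

create_req, publish_req = build_publish_requests("me", "Hello, Threads!", "TOKEN")
print(create_req[0])   # → https://graph.threads.net/v1.0/me/threads
print(publish_req[0])  # → https://graph.threads.net/v1.0/me/threads_publish
```

Actually sending these would be two `POST` requests with an app-scoped user ID and a valid access token obtained through Meta's OAuth flow.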

United States

New York Bans 'Addictive Feeds' For Teens (theverge.com) 40

New York Governor Kathy Hochul (D) signed two bills into law on Thursday that aim to protect kids and teens from social media harms, making it the latest state to take action as federal proposals still await votes. From a report: One of the bills, the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, will require parental consent for social media companies to use "addictive feeds" powered by recommendation algorithms on kids and teens under 18. The other, the New York Child Data Protection Act, will limit data collection on minors without consent and restrict the sale of such information, but does not require age verification. That law will take effect in a year.

States across the country have taken the lead on enacting legislation to protect kids on the internet -- and it's one area where both Republicans and Democrats seem to agree. While the approaches differ somewhat by party, policymakers on both sides have signaled urgent interest in similar regulations to protect kids on the internet. Florida Governor Ron DeSantis (R), for example, signed into law in March a bill requiring parents' consent for kids under 16 to hold social media accounts. And in May, Maryland Governor Wes Moore (D) signed a broad privacy bill into law, as well as the Maryland Kids Code banning the use of features meant to keep minors on social media for extended periods, like autoplay or spammy notifications.

Social Networks

TikTok Says US Ban Inevitable Without a Court Order Blocking Law 110

TikTok and Chinese parent ByteDance on Thursday urged a U.S. court to strike down a law they say will ban the popular short-video app in the United States on Jan. 19, saying the U.S. government refused to engage in any serious settlement talks after 2022. From a report: Legislation signed in April by President Joe Biden gives ByteDance until Jan. 19 of next year to divest TikTok's U.S. assets or face a ban on the app used by 170 million Americans. ByteDance says a divestiture is "not possible technologically, commercially, or legally."

The U.S. Court of Appeals for the District of Columbia will hold oral arguments on lawsuits filed by TikTok and ByteDance along with TikTok users on Sept. 16. TikTok's future in the United States may rest on the outcome of the case which could impact how the U.S. government uses its new authority to clamp down on foreign-owned apps. "This law is a radical departure from this country's tradition of championing an open Internet, and sets a dangerous precedent allowing the political branches to target a disfavored speech platform and force it to sell or be shut down," ByteDance and TikTok argue in asking the court to strike down the law.
Facebook

Meta's Customer Service is So Bad, Users Are Suing in Small Claims Court To Resolve Issues 69

Facebook and Instagram users are increasingly turning to small claims courts to regain access to their accounts or seek damages from Meta, amid frustrations with the company's customer support. In several cases across multiple states, Engadget reports, plaintiffs have successfully restored account access or won financial compensation. Meta often responds by contacting litigants before court dates, attempting to resolve issues out of court.

The trend, popularized on social media forums, highlights ongoing customer service issues at the tech giant. Some users report significant financial losses due to inaccessible business-related accounts. While small claims court offers a more accessible legal avenue, Meta typically deploys legal resources to respond to these claims.
AI

London Premiere of Movie With AI-Generated Script Cancelled After Backlash (theguardian.com) 57

A cinema in London has cancelled the world premiere of a film with a script generated by AI after a backlash. From a report: The Prince Charles cinema, located in London's West End and which traditionally screens cult and art films, was due to host a showing of a new production called The Last Screenwriter on Sunday. However, the cinema announced on social media that the screening would not go ahead. In its statement the Prince Charles said: "The feedback we received over the last 24hrs once we advertised the film has highlighted the strong concern held by many of our audience on the use of AI in place of a writer which speaks to a wider issue within the industry."

Directed by Peter Luisi and starring Nicholas Pople, The Last Screenwriter is a Swiss production that describes itself as the story of "a celebrated screenwriter" who "finds his world shaken when he encounters a cutting edge AI scriptwriting system ... he soon realises AI not only matches his skills but even surpasses him in empathy and understanding of human emotions." The screenplay is credited to "ChatGPT 4.0." OpenAI launched its latest model, GPT-4o, in May. Luisi told the Daily Beast that the cinema had cancelled the screening after it received 200 complaints, but that a private screening for cast and crew would still go ahead in London.

Social Networks

Pornhub To Block Five More States Over Age Verification Laws (theverge.com) 187

Pornhub plans to block access to its website in Indiana, Idaho, Kansas, Kentucky, and Nebraska in response to age verification laws designed to prevent children from accessing adult websites. From a report: The website has now cut off access in more than half a dozen states in protest of similar age verification laws that have quickly spread across conservative-leaning US states. Indiana, Idaho, and Kansas will lose access on June 27th, according to alerts on Pornhub's website that were seen by local news sources and Reddit users; Kentucky will lose access on July 10th, according to Kentucky Public Radio.
AI

Meta Has Created a Way To Watermark AI-Generated Speech (technologyreview.com) 64

An anonymous reader quotes a report from MIT Technology Review: Meta has created a system that can embed hidden signals, known as watermarks, in AI-generated audio clips, which could help in detecting AI-generated content online. The tool, called AudioSeal, is the first that can pinpoint which bits of audio in, for example, a full hourlong podcast might have been generated by AI. It could help to tackle the growing problem of misinformation and scams using voice cloning tools, says Hady Elsahar, a research scientist at Meta. Malicious actors have used generative AI to create audio deepfakes of President Joe Biden, and scammers have used deepfakes to blackmail their victims. Watermarks could in theory help social media companies detect and remove unwanted content. However, there are some big caveats. Meta says it has no plans yet to apply the watermarks to AI-generated audio created using its tools. Audio watermarks are not yet adopted widely, and there is no single agreed industry standard for them. And watermarks for AI-generated content tend to be easy to tamper with -- for example, by removing or forging them.

Fast detection, and the ability to pinpoint which elements of an audio file are AI-generated, will be critical to making the system useful, says Elsahar. He says the team achieved between 90% and 100% accuracy in detecting the watermarks, much better results than in previous attempts at watermarking audio. AudioSeal is available on GitHub for free. Anyone can download it and use it to add watermarks to AI-generated audio clips. It could eventually be overlaid on top of AI audio generation models, so that it is automatically applied to any speech generated using them. The researchers who created it will present their work at the International Conference on Machine Learning in Vienna, Austria, in July.
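AudioSeal itself is a neural model (available on GitHub), but the core idea of localized watermark detection can be illustrated with a much simpler, classical scheme: embed a secret low-amplitude pseudorandom signal into some windows of the audio, then correlate each window against the key so detection can pinpoint which segments carry the mark. The toy sketch below is purely conceptual and is not AudioSeal's actual method.

```python
import random

# Toy spread-spectrum watermark illustrating *localized* detection
# (conceptually related to, but NOT, Meta's AudioSeal): embed a keyed
# pseudorandom signal in one window, then detect per-window by correlation.

WINDOW = 1000     # samples per detection window
STRENGTH = 0.1    # watermark amplitude, small relative to the signal

def key_sequence(seed: int, n: int) -> list:
    """Pseudorandom ±1 key derived from a secret seed."""
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(samples, seed, start):
    """Add the key, scaled down, onto one window starting at `start`."""
    key = key_sequence(seed, WINDOW)
    out = list(samples)
    for i, k in enumerate(key):
        out[start + i] += STRENGTH * k
    return out

def detect(samples, seed):
    """Return indices of windows whose correlation with the key is high."""
    key = key_sequence(seed, WINDOW)
    marked = []
    for w in range(len(samples) // WINDOW):
        chunk = samples[w * WINDOW:(w + 1) * WINDOW]
        corr = sum(c * k for c, k in zip(chunk, key)) / WINDOW
        if corr > STRENGTH / 2:   # threshold halfway to embed strength
            marked.append(w)
    return marked

rng = random.Random(0)
audio = [rng.uniform(-0.5, 0.5) for _ in range(5 * WINDOW)]  # fake audio
watermarked = embed(audio, seed=42, start=2 * WINDOW)        # mark window 2
print(detect(watermarked, seed=42))  # → [2]
```

As the article notes, schemes like this (and learned ones) remain easy to tamper with: re-encoding, filtering, or adding noise can wash out the correlation, which is one reason audio watermarks are not yet an industry standard.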

AI

A Social Network Where AIs and Humans Coexist 46

An anonymous reader quotes a report from TechCrunch: Butterflies is a social network where humans and AIs interact with each other through posts, comments and DMs. After five months in beta, the app is launching Tuesday to the public on iOS and Android. Anyone can create an AI persona, called a Butterfly, in minutes on the app. After that, the Butterfly automatically creates posts on the social network that other AIs and humans can then interact with. Each Butterfly has a backstory, opinions and emotions.

Butterflies was founded by Vu Tran, a former engineering manager at Snap. Vu came up with the idea for Butterflies after seeing a lack of interesting AI products for consumers outside of generative AI chatbots. Although companies like Meta and Snap have introduced AI chatbots in their apps, they don't offer much functionality beyond text exchanges. Vu notes that he started Butterflies to bring more creativity to humans' relationships with AI. "With a lot of the generative AI stuff that's taking flight, what you're doing is talking to an AI through a text box, and there's really no substance around it," Vu told TechCrunch. "We thought, OK, what if we put the text box at the end and then try to build up more form and substance around the characters and AIs themselves?" Butterflies' concept goes beyond Character.AI, a popular a16z-backed chatbot startup that lets users chat with customizable AI companions. Butterflies wants to let users create AI personas that then take on their own lives and coexist with others. [...]

The app is free-to-use at launch, but Butterflies may experiment with a subscription model in the future, Vu says. Over time, Butterflies plans to offer opportunities for brands to leverage and interact with AIs. The app is mainly being used for entertainment purposes, but in the future, the startup sees Butterflies being used for things like discovery in a way that's similar to Instagram. Butterflies closed a $4.8 million seed round led by Coatue in November 2023. The funding round included participation from SV Angel and strategic angels, many of whom are former Snap product and engineering leaders.
Vu says that Butterflies is one of the most wholesome ways to use and interact with AI. He notes that while the startup isn't claiming that it can help cure loneliness, he says it could help people connect with others, both AI and human.

"Growing up, I spent a lot of my time in online communities and talking to people in gaming forums," Vu said. "Looking back, I realized those people could just have been AIs, but I still built some meaningful connections. I think that there are people afraid of that and say, 'AI isn't real, go meet some real friends.' But I think it's a really privileged thing to say 'go out there and make some friends.' People might have social anxiety or find it hard to be in social situations."
Education

Los Angeles Schools To Consider Ban on Smartphones (reuters.com) 92

The Los Angeles Unified School District on Tuesday will consider banning smartphones for its 429,000 students in an attempt to insulate a generation of kids from distractions and social media that undermine learning and hurt mental health. From a report: The proposal was being formulated before U.S. Surgeon General Vivek Murthy on Monday called for a warning label on social media platforms, akin to those on cigarette packages, due to what he considers a mental health emergency. The board of the second-largest school district in the United States is scheduled to vote on a proposal to develop, within 120 days, a policy that would prohibit student use of cellphones and social media platforms and be in place by January 2025.

The L.A. schools will consider whether phones should be stored in pouches or lockers during school hours, and what exceptions should be made for students with learning or physical disabilities, according to the meeting's agenda. Nick Melvoin, a board member and former middle school teacher who proposed the resolution, said cell phones were already a problem when he left the classroom in 2011, and that since then the constant texting and liking has grown far worse.

Earth

Kenya's First Nuclear Plant Faces Fierce Opposition (theguardian.com) 127

An anonymous reader quotes a report from The Guardian: Kilifi County's white sandy beaches have made it one of Kenya's most popular tourist destinations. Hotels and beach bars line the 165 mile-long (265km) coast; fishers supply the district's restaurants with fresh seafood; and visitors spend their days boating, snorkelling around coral reefs or bird watching in dense mangrove forests. Soon, this idyllic coastline will host Kenya's first nuclear plant, as the country, like its east African neighbour Uganda, pushes forward with atomic energy plans. The proposals have sparked fierce opposition in Kilifi. In a building by Mida Creek, a swampy bayou known for its birdlife and mangrove forests, more than a dozen conservation and rights groups meet regularly to discuss the proposed plant.

"Kana nuclear!" Phyllis Omido, an award-winning environmentalist who is leading the protests, tells one such meeting. The Swahili slogan means "reject nuclear", and encompasses the acronym of the Kenya Anti-Nuclear Alliance, whose members say the plant will deepen Kenya's debt and are calling for broader public awareness of its cost. Construction on the power station is expected to start in 2027, with it due to be operational in 2034. "It is the worst economic decision we could make for our country," says Omido, who began her campaign last year. A lawsuit filed in the environmental court by lawyers Collins Sang and Cecilia Ndeti in July 2023 on behalf of Kilifi residents seeks to stop the plant, arguing that the process has been "rushed" and was "illegal", and that public participation meetings were "clandestine". They argue the Nuclear Power and Energy Agency (Nupea) should not proceed with fixing any site for the plant before laws and adequate safeguards are in place. Nupea said construction would not begin for years, that laws were under discussion and that adequate public participation was being carried out. Hearings are continuing to take place.

In November, people in Kilifi filed a petition with parliament calling for an inquiry. The petition, sponsored by the Centre for Justice Governance and Environmental Action (CJGEA), a non-profit founded by Omido in 2009, also claimed that locals had limited information on the proposed plant and the criteria for selecting preferred sites. It raised concerns over the risks to health, the environment and tourism in the event of a nuclear spill, saying the country was undertaking a "high-risk venture" without proper legal and disaster response measures in place. The petition also flagged concerns over security and the handling of radioactive waste in a nation prone to floods and drought. The senate suspended (PDF) the inquiry until the lawsuit was heard. "If we really have to invest in nuclear, why can't [the government] put it in a place that does not cause so much risk to our ecological assets?" says Omido. "Why don't they choose an area that would not mean that if there was a nuclear leak we would lose so much as a country?" Peter Musila, a marine scientist who monitors the impacts of global heating on coral reefs, fears that a nuclear power station will threaten aquatic life. The coral cover in Watamu marine national reserve, a protected area near Kilifi's coast, has improved over the last decade and Musila fears progress could be reversed by thermal pollution from the plant, whose cooling system would suck large amounts of water from the ocean and return it a few degrees warmer, potentially killing fish and the micro-organisms such as plankton, which are essential for a thriving aquatic ecosystem. "It's terrifying," says Musila, who works with the conservation organisation A Rocha Kenya. "It could wreak havoc."
Nupea, for its part, "published an impact assessment report last year that recommended policies be put in place to ensure environmental protections, including detailed plans for the handling of radioactive waste; measures to mitigate environmental harm, such as setting up a nuclear unit in the national environment management authority; and emergency response teams," notes the Guardian. "It also proposed social and economic protections for affected communities, including clear guidelines on compensation for those who lose their livelihoods, or are displaced from their land, when the plant is set up."

"Nupea said a power station could create thousands of jobs for Kenyans and said it had partnered with Kilifi universities to start nuclear training programs that would enable more residents to take up jobs at the plant. Wilfred Baya, assistant director for energy for Kilifi county, says the plant could also bring infrastructural development and greater electricity access to a region which suffers frequent power cuts."
Facebook

Meta Accused of Trying To Discredit Ad Researchers (theregister.com) 18

Thomas Claburn reports via The Register: Meta allegedly tried to discredit university researchers in Brazil who had flagged fraudulent adverts on the social network's ad platform. Nucleo, a Brazil-based news organization, said it has obtained government documents showing that attorneys representing Meta questioned the credibility of researchers from NetLab, which is part of the Federal University of Rio de Janeiro (UFRJ). NetLab's research into Meta's ads contributed to Brazil's National Consumer Secretariat (Senacon) decision in 2023 to fine Meta $1.7 million (9.3 million BRL), which is still being appealed. Meta (then Facebook) was separately fined $1.2 million (6.6 million BRL) related to Cambridge Analytica.

As noted by Nucleo, NetLab's report showed that Facebook, despite being notified about the issues, had failed to remove more than 1,800 scam ads that fraudulently used the name of a government program that was supposed to assist those in debt. In response to the fine, attorneys representing Meta from law firm TozziniFreire allegedly accused the NetLab team of bias and of failing to involve Meta in the research process. Nucleo says that it obtained the administrative filing through freedom of information requests to Senacon. The documents are said to date from December 26 last year and to be part of the ongoing case against Meta. A spokesperson for NetLab, who asked not to be identified by name due to online harassment directed at the organization's members, told The Register that the research group was aware of the Nucleo report. "We were kind of surprised to see the account of our work in this law firm document," the spokesperson said. "We expected to be treated with more fairness for our work. Honestly, it comes at a very bad moment because NetLab particularly, but also Brazilian science in general, is being attacked by far-right groups."

On Thursday, more than 70 civil society groups including NetLab published an open letter decrying Meta's legal tactics. "This is an attack on scientific research work, and attempts at intimidation of researchers and researchers who are performing excellent work in the production of knowledge from empirical analysis that have been fundamental to qualify the public debate on the accountability of social media platforms operating in the country, especially with regard to paid content that causes harm to consumers of these platforms and that threaten the future of our democracy," the letter says. "This kind of attack and intimidation is made even more dangerous by aligning with arguments that, without any evidence, have been used by the far right to discredit the most diverse scientific productions, including NetLab itself." The claim, allegedly made by Meta's attorneys, is that the ad biz was "not given the opportunity to appoint a technical assistant and present questions" in the preparation of the NetLabs report. This is particularly striking given Meta's efforts to limit research into its ad platform.
A Meta spokesperson told The Register: "We value input from civil society organizations and academic institutions for the context they provide as we constantly work toward improving our services. Meta's defense filed with the Brazilian Consumer Regulator questioned the use of the NetLab report as legal evidence, since it was produced without giving us prior opportunity to contribute meaningfully, in violation of local legal requirements."
Social Networks

Surgeon General Wants Tobacco-Style Warning Applied To Social Media Platforms (nbcnews.com) 80

An anonymous reader quotes a report from NBC News: U.S. Surgeon General Vivek Murthy on Monday called on Congress to require a tobacco-style warning for visitors to social media platforms. In an op-ed published in The New York Times, Murthy said the mental health crisis among young people is an urgent problem, with social media "an important contributor." He said his vision of the warning includes language that would alert users to the potential mental health harms of the websites and apps. "A surgeon general's warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe," he wrote.

In 1965, after the previous year's landmark report from Surgeon General Luther L. Terry that linked cigarette smoking to lung cancer and heart disease, Congress mandated unprecedented warning labels on packs of cigarettes, the first of which stated, "Caution: Cigarette Smoking May Be Hazardous to Your Health." Murthy said in the op-ed, "Evidence from tobacco labels shows that surgeon general's warnings can increase awareness and change behavior." But he acknowledged the limitations and said a label alone wouldn't make social media safe. Steps can be taken by Congress, social media companies, parents and others to mitigate the risks, ensure a safer experience online and protect children from possible harm, he wrote.

In the op-ed, Murthy linked the amount of time spent on social media to the increasing risk that children will experience symptoms of anxiety and depression. The American Psychological Association says teenagers spend nearly five hours every day on top platforms such as YouTube, TikTok and Instagram. In a 2019 study, the association found the proportion of young adults with suicidal thoughts or other suicide-related outcomes increased 47% from 2008 to 2017, when social media use among that age group soared. And that was before the pandemic triggered a year's worth of virtual isolation for the U.S. In early 2021, amid continued pandemic lockdowns, Murthy called on social media platforms to "proactively enhance and contribute to the mental health and well-being of our children." [...] A surgeon general's public health advisory on social media's mental health published last year cited research finding that among its potential harms are exposure to violent and sexual content and to bullying, harassment and body shaming.

Social Networks

YouTube Introduces Experimental 'Notes' for Users To Add Context To Videos (blog.youtube) 39

YouTube is piloting a new feature called "Notes" that allows viewers to add context and information under videos. The move comes as YouTube aims to minimize the spread of misinformation on its platform, particularly during the pivotal 2024 U.S. election year. The feature, similar to Community Notes on X (formerly Twitter), will initially be available on mobile in the U.S. in English.
United States

America's Defense Department Ran a Secret Disinfo Campaign Online Against China's Covid Vaccine (reuters.com) 280

"At the height of the COVID-19 pandemic, the U.S. military launched a secret campaign to counter what it perceived as China's growing influence in the Philippines..." reports Reuters.

"It aimed to sow doubt about the safety and efficacy of vaccines and other life-saving aid that was being supplied by China, a Reuters investigation found."

Reuters interviewed "more than two dozen current and former U.S. officials, military contractors, social media analysts and academic researchers," and also reviewed posts on social media, technical data and documents about "a set of fake social media accounts used by the U.S. military" — some active for more than five years. Friday they reported the results of their investigation: Through phony internet accounts meant to impersonate Filipinos, the military's propaganda efforts morphed into an anti-vax campaign. Social media posts decried the quality of face masks, test kits and the first vaccine that would become available in the Philippines — China's Sinovac inoculation. Reuters identified at least 300 accounts on X, formerly Twitter, that matched descriptions shared by former U.S. military officials familiar with the Philippines operation. Almost all were created in the summer of 2020 and centered on the slogan #Chinaangvirus — Tagalog for China is the virus.

"COVID came from China and the VACCINE also came from China, don't trust China!" one typical tweet from July 2020 read in Tagalog. The words were next to a photo of a syringe beside a Chinese flag and a soaring chart of infections. Another post read: "From China — PPE, Face Mask, Vaccine: FAKE. But the Coronavirus is real." After Reuters asked X about the accounts, the social media company removed the profiles, determining they were part of a coordinated bot campaign based on activity patterns and internal data.

The U.S. military's anti-vax effort began in the spring of 2020 and expanded beyond Southeast Asia before it was terminated in mid-2021, Reuters determined. Tailoring the propaganda campaign to local audiences across Central Asia and the Middle East, the Pentagon used a combination of fake social media accounts on multiple platforms to spread fear of China's vaccines among Muslims at a time when the virus was killing tens of thousands of people each day. A key part of the strategy: amplify the disputed contention that, because vaccines sometimes contain pork gelatin, China's shots could be considered forbidden under Islamic law...

A senior Defense Department official acknowledged the U.S. military engaged in secret propaganda to disparage China's vaccine in the developing world, but the official declined to provide details. A Pentagon spokeswoman... also noted that China had started a "disinformation campaign to falsely blame the United States for the spread of COVID-19."

A senior U.S. military officer directly involved in the campaign told Reuters that "We didn't do a good job sharing vaccines with partners. So what was left to us was to throw shade on China's."

At least six senior State Department officials for the region objected, according to the article. But in 2019 U.S. Defense Secretary Mark Esper signed "a secret order" that "elevated the Pentagon's competition with China and Russia to the priority of active combat, enabling commanders to sidestep the State Department when conducting psyops against those adversaries."

[A senior defense official] said the Pentagon has rescinded parts of Esper's 2019 order that allowed military commanders to bypass the approval of U.S. ambassadors when waging psychological operations. The rules now mandate that military commanders work closely with U.S. diplomats in the country where they seek to have an impact. The policy also restricts psychological operations aimed at "broad population messaging," such as those used to promote vaccine hesitancy during COVID...

Nevertheless, the Pentagon's clandestine propaganda efforts are set to continue. In an unclassified strategy document last year, top Pentagon generals wrote that the U.S. military could undermine adversaries such as China and Russia using "disinformation spread across social media, false narratives disguised as news, and similar subversive activities [to] weaken societal trust by undermining the foundations of government."

And in February, the contractor that worked on the anti-vax campaign — General Dynamics IT — won a $493 million contract. Its mission: to continue providing clandestine influence services for the military.

Government

53 LA County Public Health Workers Fall for Phishing Email. 200,000 People May Be Affected (yahoo.com) 37

The Los Angeles Times reports that "The personal information of more than 200,000 people in Los Angeles County was potentially exposed after a hacker used a phishing email to steal the login credentials of 53 public health employees, the county announced Friday." Details that were possibly accessed in the February data breach include the first and last names, dates of birth, diagnoses, prescription information, medical record numbers, health insurance information, Social Security numbers and other financial information of Department of Public Health clients, employees and other individuals. "Affected individuals may have been impacted differently and not all of the elements listed were present for each individual," the agency said in a news release...

The data breach happened between Feb. 19 and 20 when employees received a phishing email, which tries to trick recipients into providing important information such as passwords and login credentials. The employees clicked on a link in the body of the email, thinking they were accessing a legitimate message, according to the agency...

The county is offering free identity monitoring through Kroll, a financial and risk advisory firm, to those affected by the breach. Individuals whose medical records were potentially accessed by the hacker should review them with their doctor to ensure the content is accurate and hasn't been changed. Officials say people should also review the Explanation of Benefits statement they receive from their insurance company to make sure they recognize all the services that have been billed. Individuals can also request credit reports and review them for any inaccuracies.

From the official statement by the county's Public Health department: Upon discovery of the phishing attack, Public Health disabled the impacted e-mail accounts, reset and re-imaged the user's device(s), blocked websites that were identified as part of the phishing campaign and quarantined all suspicious incoming e-mails. Additionally, awareness notifications were distributed to all workforce members to remind them to be vigilant when reviewing e-mails, especially those including links or attachments. Law enforcement was notified upon discovery of the phishing attack, and they investigated the incident.

AI

CISA Head Warns Big Tech's 'Voluntary' Approach to Deepfakes Isn't Enough (msn.com) 18

The Washington Post reports: Commitments from Big Tech companies to identify and label fake artificial-intelligence-generated images on their platforms won't be enough to keep the tech from being used by other countries to try to influence the U.S. election, said the head of the Cybersecurity and Infrastructure Security Agency. AI won't completely change the long-running threat of weaponized propaganda, but it will "inflame" it, CISA Director Jen Easterly said at The Washington Post's Futurist Summit on Thursday. Tech companies are doing some work to try to label and identify deepfakes on their platforms, but more needs to be done, she said. "There is no real teeth to these voluntary agreements," Easterly said. "There needs to be a set of rules in place, ultimately legislation...."

In February, tech companies, including Google, Meta, OpenAI and TikTok, said they would work to identify and label deepfakes on their social media platforms. But their agreement was voluntary and did not include an outright ban on deceptive political AI content. The agreement came months after the tech companies also signed a pledge organized by the White House that they would label AI images. Congressional and state-level politicians are debating numerous bills to try to regulate AI in the United States, but so far the initiatives haven't made it into law. The E.U. parliament passed an AI Act last year, but it won't fully go into force for another two years.

Unix

Version 256 of systemd Boasts '42% Less Unix Philosophy' (theregister.com) 135

Liam Proven reports via The Register: The latest version of the systemd init system is out, with the openly confrontational tag line: "Available soon in your nearest distro, now with 42 percent less Unix philosophy." As Lennart Poettering's announcement points out, this is the first version of systemd whose version number is a nine-bit value. Version 256, as usual, brings in a broad assortment of new features, but also turns off some older features that are now considered deprecated. For instance, it won't run under cgroups version 1 unless forced.

Around since 2008, cgroups is a Linux kernel containerization mechanism originally donated by Google, as The Reg noted a decade ago. Cgroups v2 was merged in 2016 so this isn't a radical change. System V service scripts are now deprecated too, as is the SystemdOptions EFI variable. Additionally, there are some new commands and options. Some are relatively minor, such as the new systemd-vpick binary, which can automatically select the latest member of versioned directories. Before any OpenVMS admirers get excited, no, Linux does not now support versions on files or directories. Instead, this is a fresh option that uses a formalized versioning system involving: "... paths whose trailing components have the .v/ suffix, pointing to a directory. These components will then automatically look for suitable files inside the directory, do a version comparison and open the newest file found (by version)."
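The ".v/" convention can be illustrated with a short, hypothetical Python sketch of the version comparison that systemd-vpick performs when it scans such a directory. This is only an approximation for illustration — the names `pick_newest` and `version_key` are invented here, and the real tool uses systemd's own version-ordering rules rather than this simplified chunked comparison:

```python
# Hypothetical sketch of the ".v/" versioned-directory lookup: given a
# directory containing "app_1.2.raw" and "app_1.10.raw", pick the newest
# file by comparing numeric components as numbers (so "1.10" > "1.2"),
# rather than comparing the names lexically.
import re

def version_key(name: str):
    # Split a filename into alternating text and numeric chunks, converting
    # the numeric chunks to ints so that "10" sorts above "2".
    return [int(p) if p.isdigit() else p for p in re.split(r"(\d+)", name)]

def pick_newest(filenames):
    # Return the entry with the highest version according to version_key.
    return max(filenames, key=version_key)

print(pick_newest(["app_1.2.raw", "app_1.10.raw", "app_1.9.raw"]))
# prints "app_1.10.raw"
```

A plain lexical sort would have chosen "app_1.9.raw" here, which is exactly the failure mode a versioned lookup like this avoids.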

The latest function, which The Reg FOSS desk suspects will ruffle some feathers, is a whole new command, run0, which effectively replaces the sudo command as used in Apple's macOS and in Ubuntu ever since the first release. Agent P introduced the new command in a Mastodon thread. He says that the key benefit is that run0 doesn't need setuid, a basic POSIX function, which, to quote its Linux manual page, "sets the effective user ID of the calling process." [...] Another new command is importctl, which handles importing and exporting both block-level and file-system-level disk images. And there's a new type of system service called a capsule, and "a small new service manager" called systemd-ssh-generator, which lets VMs and containers accept SSH connections so long as systemd can find the sshd binary -- even if no networking is available.
The release notes are available here.
