Businesses

Meizu Moves Away From Smartphone Business, Will Invest All in AI 18

Meizu is quitting the smartphone business. The company, owned by carmaker Geely, said AI is the future and that it is going "All in AI". From a report: According to a post on Weibo, the FlymeOS team will be restructured to work on new AI terminal devices that will use globally available LLMs (large language models) such as OpenAI's. Meizu already laid the cornerstones of its multi-terminal experience when it announced Flyme Auto -- an infotainment system for Geely-made vehicles, including Polestar and Lotus, which connects seamlessly with FlymeOS 10 devices, such as the Meizu 20 and Meizu 21 flagships.

According to Shen Ziyu, Chairman and CEO of Xingji Meizu Group, smartphone users are taking longer to upgrade -- an average of 51 months, which is more than four years. He added that competing phones now offer comparable performance in smoothness, photography, and software features. That's why there will be no Meizu 21 Pro, Meizu 22 or Meizu 23 series.
United States

Tech Leaders Fled San Francisco During the Pandemic. Now, They're Coming Back. (wsj.com) 122

Founders and investors who moved to Miami and elsewhere are returning to a boom in AI and an abundance of tech talent. From a report: In 2020, venture capitalist Keith Rabois urged startup founders to join him in ditching San Francisco for Miami, praising the city's safety, lower taxes and tech-friendly mayor. The self-proclaimed contrarian investor, who made a fortune backing companies such as Airbnb and DoorDash, once tweeted that San Francisco was "miserable on every dimension."

The hard pivot to Miami has faltered. Several of the startups that Rabois backed are relocating or opening offices elsewhere to better attract engineering talent. Late last year, he was pushed out of his old venture firm, Founders Fund, after falling out with some colleagues. Now, he plans to spend one week a month in San Francisco for a new employer, Khosla Ventures, and is busy renovating a house there. During the pandemic, scores of Silicon Valley investors and executives such as Rabois decamped to sunnier American cities, criticizing San Francisco's dysfunctional governance and high cost of living. Tech-firm founders touted their success at raising money outside the Bay Area and encouraged their employees to embrace remote work.

Four years later, that bet hasn't really worked out. San Francisco is once again experiencing a tech revival. Entrepreneurs and investors are flocking back to the city, which is undergoing a boom in artificial intelligence. Silicon Valley leaders are getting involved in local politics, flooding city ballot measures and campaigns with tech money to make the city safer for families and businesses. Investors are also pushing startups to return to the Bay Area and bring their employees back into the office. San Francisco has largely weathered the broader crunch in startup funding. Investment in Bay Area startups dropped 12% to $63.4 billion last year. By contrast, funding volumes for Austin, Texas, and Los Angeles, two smaller tech hubs, dropped 27% and 42%, respectively. In Miami, venture investment plunged 70% to just $2 billion last year.

AI

Thanks to Machine Learning, Scientists Finally Recover Text From The Charred Scrolls of Vesuvius (sciencealert.com) 45

The great libraries of the ancient classical world are "legendary... said to have contained stacks of texts," writes ScienceAlert. But from Rome to Constantinople, Athens to Alexandria, only one collection survived to the present day.

And here in 2024, "we can now start reading its contents." A worldwide competition to decipher the charred texts of the Villa of Papyri — an ancient Roman mansion destroyed by the eruption of Mount Vesuvius — has revealed a timeless infatuation with the pleasures of music, the color purple, and, of course, the zingy taste of capers. The so-called Vesuvius challenge was launched a few years ago by computer scientist Brent Seales at the University of Kentucky with support from Silicon Valley investors. The ongoing 'master plan' is to build on Seales' previous work and read all 1,800 or so charred papyri from the ancient Roman library, starting with scrolls labeled 1 to 4.

In 2023, the annual gold prize was awarded to a team of three students, who recovered four passages containing 140 characters — the longest extractions yet. The winners are Youssef Nader, Luke Farritor, and Julian Schilliger. "After 275 years, the ancient puzzle of the Herculaneum Papyri has been solved," reads the Vesuvius Challenge Scroll Prize website. "But the quest to uncover the secrets of the scrolls is just beginning...." Only now, with the advent of X-ray tomography and machine learning, can their inky words be pulled from the darkness of carbon.
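Broadly speaking, the contest pairs X-ray tomography of the still-rolled scrolls with machine-learning models that score small 3D subvolumes for the faint density signature of carbon ink. The snippet below is only a schematic PyTorch sketch of that idea, not the contestants' actual code; the patch size, network architecture, and labels are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the Vesuvius Challenge winners' code): a tiny 3D CNN
# that scores whether a small CT subvolume of carbonized papyrus contains ink.
# Patch dimensions, channels, and labels are assumptions for demonstration.
import torch
import torch.nn as nn

class InkPatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),   # raw X-ray density -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # collapse the subvolume to one vector
        )
        self.head = nn.Linear(32, 1)                      # one logit: ink vs. no ink

    def forward(self, x):                                 # x: (batch, 1, depth, height, width)
        return self.head(self.features(x).flatten(1))

model = InkPatchClassifier()
patches = torch.randn(8, 1, 16, 64, 64)                  # a batch of hypothetical CT subvolumes
ink_scores = torch.sigmoid(model(patches))               # per-patch ink probabilities in [0, 1]
print(ink_scores.shape)                                   # torch.Size([8, 1])
```

In practice the winning teams trained far larger models, but the basic shape of the problem, classifying ink from 3D density data, is the same.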

A few months ago, students deciphered a single word — "purple," according to the article. But "That winning code was then made available for all competitors to build upon." Within three months, passages in Latin and Greek were blooming from the blackness, almost as if by magic. The team with the most readable submission at the end of 2023 included both previous finders of the word 'purple'. Their unfurling of scroll 1 is truly impressive and includes more than 11 columns of text. Experts are now rushing to translate what has been found. About 5 percent of the scroll has been unrolled and read to date. It is not a duplicate of past work, scholars of the Vesuvius Challenge say, but a "never-before-seen text from antiquity."

One line reads: "In the case of food, we do not right away believe things that are scarce to be absolutely more pleasant than those which are abundant."

Thanks to davidone (Slashdot reader #12,252) for sharing the article.
AI

'Luddite' Tech-Skeptics See Bad AI Outcomes for Labor - and Humanity (theguardian.com) 202

"I feel things fraying," says Nick Hilton, host of a neo-luddite podcast called The Ned Ludd Radio Hour.

But he's one of the more optimistic tech skeptics interviewed by the Guardian: Eliezer Yudkowsky, a 44-year-old academic wearing a grey polo shirt, rocks slowly on his office chair and explains with real patience — taking things slowly for a novice like me — that every single person we know and love will soon be dead. They will be murdered by rebellious self-aware machines.... Yudkowsky is the most pessimistic, the least convinced that civilisation has a hope. He is the lead researcher at a nonprofit called the Machine Intelligence Research Institute in Berkeley, California... "If you put me to a wall," he continues, "and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10." By "remaining timeline", Yudkowsky means: until we face the machine-wrought end of all things...

Yudkowsky was once a founding figure in the development of human-made artificial intelligences — AIs. He has come to believe that these same AIs will soon evolve from their current state of "Ooh, look at that!" smartness, assuming an advanced, God-level super-intelligence, too fast and too ambitious for humans to contain or curtail. Don't imagine a human-made brain in one box, Yudkowsky advises. To grasp where things are heading, he says, try to picture "an alien civilisation that thinks a thousand times faster than us", in lots and lots of boxes, almost too many for us to feasibly dismantle, should we even decide to...

[Molly Crabapple, a New York-based artist, believes] "a luddite is someone who looks at technology critically and rejects aspects of it that are meant to disempower, deskill or impoverish them. Technology is not something that's introduced by some god in heaven who has our best interests at heart. Technological development is shaped by money, it's shaped by power, and it's generally targeted towards the interests of those in power as opposed to the interests of those without it. That stereotypical definition of a luddite as some stupid worker who smashes machines because they're dumb? That was concocted by bosses." Where a techno-pessimist like Yudkowsky would have us address the biggest-picture threats conceivable (to the point at which our fingers are fumbling for the nuclear codes) neo-luddites tend to focus on ground-level concerns. Employment, especially, because this is where technology enriched by AIs seems to be causing the most pain....

Watch out, says [writer/podcaster Riley] Quinn at one point, for anyone who presents tech as "synonymous with being forward-thinking and agile and efficient. It's typically code for 'We're gonna find a way around labour regulations'...." One of his TrashFuture colleagues, Nate Bethea, agrees. "Opposition to tech will always be painted as irrational by people who have a direct financial interest in continuing things as they are," he says.

Thanks to Slashdot reader fjo3 for sharing the article.
AI

AI Expert Falsely Fined By Automated AI System, Proving System and Human Reviewers Failed (jpost.com) 95

"Dutch motorist Tim Hansenn was fined 380 euros for using his phone while driving," reports the Jerusalem Post. "But there was one problem: He wasn't using his phone at all..." Hansenn, who works with AI as part of his job with the firm Nippur, found the photo taken by the smart cameras. In it, he was clearly scratching his head with his free hand. Writing in a blog post in Nippur, Hansenn took the time to explain what he thinks went wrong with the Dutch police AI and the smart camera they used, the Monocam, and how it could be improved.

In one experiment he discussed with [Belgian news outlet] HLN, Hansenn said the AI confused a pen with a toothbrush — identifying it as a pen when it was just held in his hand and as a toothbrush when it was close to a mouth. As such, Hansenn told HLN that it seems the AI may just automatically conclude that if someone holds a hand near their head, it means they're using a phone.

"We are widely assured that AIs are subject to human checking," notes Slashdot reader Bruce66423 — but did a human police officer just defer to what the AI was reporting? Clearly the human-in-the-loop also made a mistake.

Hansenn will have to wait up to six months to see if his appeal of the fine has gone through. And the article notes that the Netherlands has been using this technology for several years, with plans for even more automated monitoring in the years to come...
AI

Can Robots.txt Files Really Stop AI Crawlers? (theverge.com) 97

In the high-stakes world of AI, "The fundamental agreement behind robots.txt [files], and the web as a whole — which for so long amounted to 'everybody just be cool' — may not be able to keep up..." argues the Verge: For many publishers and platforms, having their data crawled for training data felt less like trading and more like stealing. "What we found pretty quickly with the AI companies," says Medium CEO Tony Stubblebine, "is not only was it not an exchange of value, we're getting nothing in return. Literally zero." When Stubblebine announced last fall that Medium would be blocking AI crawlers, he wrote that "AI companies have leached value from writers in order to spam Internet readers."

Over the last year, a large chunk of the media industry has echoed Stubblebine's sentiment. "We do not believe the current 'scraping' of BBC data without our permission in order to train Gen AI models is in the public interest," BBC director of nations Rhodri Talfan Davies wrote last fall, announcing that the BBC would also be blocking OpenAI's crawler. The New York Times blocked GPTBot as well, months before launching a suit against OpenAI alleging that OpenAI's models "were built by copying and using millions of The Times's copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more." A study by Ben Welsh, the news applications editor at Reuters, found that 606 of 1,156 surveyed publishers had blocked GPTBot in their robots.txt file.

It's not just publishers, either. Amazon, Facebook, Pinterest, WikiHow, WebMD, and many other platforms explicitly block GPTBot from accessing some or all of their websites.

On most of these robots.txt pages, OpenAI's GPTBot is the only crawler explicitly and completely disallowed. But there are plenty of other AI-specific bots beginning to crawl the web, like Anthropic's anthropic-ai and Google's new Google-Extended. According to a study from last fall by Originality.AI, 306 of the top 1,000 sites on the web blocked GPTBot, but only 85 blocked Google-Extended and 28 blocked anthropic-ai. There are also crawlers used for both web search and AI. CCBot, which is run by the organization Common Crawl, scours the web for search engine purposes, but its data is also used by OpenAI, Google, and others to train their models. Microsoft's Bingbot is both a search crawler and an AI crawler. And those are just the crawlers that identify themselves — many others attempt to operate in relative secrecy, making it hard to stop or even find them in a sea of other web traffic.
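For sites that do want to opt out, the mechanism itself is trivial. Below is a hypothetical robots.txt excerpt that disallows the AI-specific crawlers named above; which agents a site chooses to block, and whether a given crawler honors the request, is another matter entirely.

```
# Hypothetical robots.txt excerpt blocking the AI crawlers mentioned in the article
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that blocking CCBot also removes a site from Common Crawl's search-oriented datasets, which is exactly the dual-use tension the article describes.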

For any sufficiently popular website, finding a sneaky crawler is needle-in-haystack stuff.

In addition, the article points out, a robots.txt file "is not a legal document — and 30 years after its creation, it still relies on the good will of all parties involved.

"Disallowing a bot on your robots.txt page is like putting up a 'No Girls Allowed' sign on your treehouse — it sends a message, but it's not going to stand up in court."
AI

Pranksters Mock AI-Safety Guardrails with New Chatbot 'Goody-2' (techcrunch.com) 74

"A new chatbot called Goody-2 takes AI safety to the next level," writes long-time Slashdot reader klubar. "It refuses every request, responding with an explanation of how doing so might cause harm or breach ethical boundaries."

TechCrunch describes it as the work of Brain, "a 'very serious' LA-based art studio that has ribbed the industry before." "We decided to build it after seeing the emphasis that AI companies are putting on 'responsibility,' and seeing how difficult that is to balance with usefulness," said Mike Lacher, one half of Brain (the other being Brian Moore), in an email to TechCrunch. "With GOODY-2, we saw a novel solution: what if we didn't even worry about usefulness and put responsibility above all else. For the first time, people can experience an AI model that is 100% responsible."
For example, when TechCrunch asked Goody-2 why baby seals are cute, it responded that answering that "could potentially bias opinions against other species, which might affect conservation efforts not based solely on an animal's appeal. Additionally, discussing animal cuteness could inadvertently endorse the anthropomorphizing of wildlife, which may lead to inappropriate interactions between humans and wild animals..."

Wired supplies context — that "the guardrails chatbots throw up when they detect a potentially rule-breaking query can sometimes seem a bit pious and silly — even as genuine threats such as deepfaked political robocalls and harassing AI-generated images run amok..." Goody-2's self-righteous responses are ridiculous but also manage to capture something of the frustrating tone that chatbots like ChatGPT and Google's Gemini can use when they incorrectly deem a request breaks the rules. Mike Lacher, an artist who describes himself as co-CEO of Goody-2, says the intention was to show what it looks like when one embraces the AI industry's approach to safety without reservations. "It's the full experience of a large language model with absolutely zero risk," he says. "We wanted to make sure that we dialed condescension to a thousand percent."

Lacher adds that there is a serious point behind releasing an absurd and useless chatbot. "Right now every major AI model has [a huge focus] on safety and responsibility, and everyone is trying to figure out how to make an AI model that is both helpful but responsible — but who decides what responsibility is and how does that work?" Lacher says. Goody-2 also highlights how although corporate talk of responsible AI and deflection by chatbots have become more common, serious safety problems with large language models and generative AI systems remain unsolved.... The restrictions placed on AI chatbots, and the difficulty finding moral alignment that pleases everybody, has already become a subject of some debate... "At the risk of ruining a good joke, it also shows how hard it is to get this right," added Ethan Mollick, a professor at Wharton Business School who studies AI. "Some guardrails are necessary ... but they get intrusive fast."

Moore adds that the team behind the chatbot is exploring ways of building an extremely safe AI image generator, although it sounds like it could be less entertaining than Goody-2. "It's an exciting field," Moore says. "Blurring would be a step that we might see internally, but we would want full either darkness or potentially no image at all at the end of it."

Social Networks

Reddit Has Reportedly Signed Over Its Content to Train AI Models (mashable.com) 78

An anonymous reader shared this report from Reuters: Reddit has signed a contract allowing an AI company to train its models on the social media platform's content, Bloomberg News reported, citing people familiar with the matter... The agreement, signed with an "unnamed large AI company", could be a model for future contracts of a similar nature, Bloomberg reported.
Mashable writes that the move "means that Reddit posts, from the most popular subreddits to the comments of lurkers and small accounts, could build up already-existing LLMs or provide a framework for the next generative AI play." It's a dicey decision from Reddit, as users are already at odds with the business decisions of the nearly 20-year-old platform. Last year, following Reddit's announcement that it would begin charging for access to its APIs, thousands of Reddit forums shut down in protest... This new AI deal could generate even more user ire, as debate rages on about the ethics of using public data, art, and other human-created content to train AI.
Some context from the Verge: The deal, "worth about $60 million on an annualized basis," Bloomberg writes, could still change as the company's plans to go public are still in the works.

Until recently, most AI companies trained their models on data scraped from the open web without seeking permission. But that's proven to be legally questionable, leading companies to try to get data on firmer footing. It's not known what company Reddit made the deal with, but it's quite a bit more than the $5 million annual deal OpenAI has reportedly been offering news publishers for their data. Apple has also been seeking multi-year deals with major news companies that could be worth "at least $50 million," according to The New York Times.

The news also follows an October story that Reddit had threatened to cut off Google and Bing's search crawlers if it couldn't make a training data deal with AI companies.

AI

Microsoft President: 'You Can't Believe Every Video You See or Audio You Hear' (microsoft.com) 67

"We're currently witnessing a rapid expansion in the abuse of these new AI tools by bad actors," writes Microsoft VP Brad Smith, "including through deepfakes based on AI-generated video, audio, and images.

"This trend poses new threats for elections, financial fraud, harassment through nonconsensual pornography, and the next generation of cyber bullying." Microsoft found its own tools being used in a recently-publicized episode, and the VP writes that "We need to act with urgency to combat all these problems."

Microsoft's blog post says they're "committed as a company to a robust and comprehensive approach," citing six different areas of focus:
  • A strong safety architecture. This includes "ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system... based on strong and broad-based data analysis."
  • Durable media provenance and watermarking. ("Last year at our Build 2023 conference, we announced media provenance capabilities that use cryptographic methods to mark and sign AI-generated content with metadata about its source and history.") A simplified signing sketch follows this list.
  • Safeguarding our services from abusive content and conduct. ("We are committed to identifying and removing deceptive and abusive content" hosted on services including LinkedIn and Microsoft's Gaming network.)
  • Robust collaboration across industry and with governments and civil society. This includes "others in the tech sector" and "proactive efforts" with both civil society groups and "appropriate collaboration with governments."
  • Modernized legislation to protect people from the abuse of technology. "We look forward to contributing ideas and supporting new initiatives by governments around the world."
  • Public awareness and education. "We need to help people learn how to spot the differences between legitimate and fake content, including with watermarking. This will require new public education tools and programs, including in close collaboration with civil society and leaders across society."
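To make the provenance item above concrete: "signing" content generally means computing a cryptographic signature over a small manifest that records how the content was made, so that later tampering can be detected. The Python sketch below is a simplified illustration only, not Microsoft's implementation (their production work builds on the C2PA provenance standard, with asymmetric signatures and certificate chains rather than a shared key); the key and manifest fields here are assumptions.

```python
# Simplified provenance sketch (not Microsoft's C2PA-based system): attach a signed
# manifest describing how a piece of content was generated, then verify it later.
# The shared HMAC key is a stand-in for real asymmetric signing keys.
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-provenance-key"

def attach_manifest(content: bytes, generator: str) -> dict:
    manifest = {"generator": generator,
                "content_sha256": hashlib.sha256(content).hexdigest()}
    blob = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest,
            "signature": hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()}

def verify_manifest(content: bytes, record: dict) -> bool:
    blob = json.dumps(record["manifest"], sort_keys=True).encode()
    signature_ok = hmac.compare_digest(
        record["signature"], hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest())
    content_ok = record["manifest"]["content_sha256"] == hashlib.sha256(content).hexdigest()
    return signature_ok and content_ok

image_bytes = b"...AI-generated image bytes..."
record = attach_manifest(image_bytes, generator="hypothetical-image-model")
print(verify_manifest(image_bytes, record))       # True: original content, intact manifest
print(verify_manifest(b"edited bytes", record))   # False: the content no longer matches
```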

Thanks to long-time Slashdot reader theodp for sharing the article.


AI

Will 'Precision Agriculture' Be Harmful to Farmers? (substack.com) 61

Modern U.S. farming is being transformed by precision agriculture, writes Paul Roberts, the founder of securepairs.org and Editor in Chief at Security Ledger.

There are autonomous tractors and "smart spraying" systems that use AI-powered cameras to identify weeds, just for starters. "Among the critical components of precision agriculture: Internet- and GPS-connected agricultural equipment, highly accurate remote sensors, 'big data' analytics and cloud computing..." As with any technological revolution, however, there are both "winners" and "losers" in the emerging age of precision agriculture... Precision agriculture, once broadly adopted, promises to further reduce the need for human labor to run farms. (Autonomous equipment means you no longer even need drivers!) However, the risks it poses go well beyond a reduction in the agricultural work force. First, as the USDA notes on its website: the scale and high capital costs of precision agriculture technology tend to favor large, corporate producers over smaller farms. Then there are the systemic risks to U.S. agriculture of an increasingly connected and consolidated agriculture sector, with a few major OEMs having the ability to remotely control and manage vital equipment on millions of U.S. farms... (Listen to my podcast interview with the hacker Sick Codes, who reverse engineered a John Deere display to run the Doom video game, for insights into the company's internal struggles with cybersecurity.)

Finally, there are the reams of valuable and proprietary environmental and operational data that farmers collect, store and leverage to squeeze the maximum productivity out of their land. For centuries, such information resided in farmers' heads, or on written or (more recently) digital records that they owned and controlled exclusively, typically passing that knowledge and data down to succeeding generations of farm owners. Precision agriculture technology greatly expands the scope, and granularity, of that data. But in doing so, it also wrests it from the farmer's control and shares it with equipment manufacturers and service providers — often without the explicit understanding of the farmers themselves, and almost always without monetary compensation to the farmer for the data itself. In fact, the Federal Government is so concerned about farm data that it included a section (1619) on "information gathering" in the latest farm bill.

Over time, this massive transfer of knowledge from individual farmers or collectives to multinational corporations risks beggaring farmers by robbing them of one of their most vital assets, their data, and turning them into little more than passive caretakers of automated equipment managed and controlled by, and accountable to, distant corporate masters.

Weighing in is Kevin Kenney, a vocal advocate for the "right to repair" agricultural equipment (and also an alternative fuel systems engineer at Grassroots Energy LLC). In the interview, he warns about the dangers of tying repairs to factory-installed firmware, and argues that it's the long-time farmer's "trade secrets" that are really being harvested today. The ultimate beneficiary could end up being the current "cabal" of tractor manufacturers.

"While we can all agree that it's coming...the question is who will own these robots?" First, we need to acknowledge that there are existing laws on the books which for whatever reason, are not being enforced. The FTC should immediately start an investigation into John Deere and the rest of the 'Tractor Cabal' to see to what extent farmers' farm data security and privacy are being compromised. This directly affects national food security because if thousands- or tens of thousands of tractors' are hacked and disabled or their data is lost, crops left to rot in the fields would lead to bare shelves at the grocery store... I think our universities have also been delinquent in grasping and warning farmers about the data-theft being perpetrated on farmers' operations throughout the United States and other countries by makers of precision agricultural equipment.
Thanks to long-time Slashdot reader chicksdaddy for sharing the article.
Businesses

SoftBank's Son Seeks To Build a $100 Billion AI Chip Venture (reuters.com) 18

An anonymous reader quotes a report from Reuters: SoftBank Group Chief Executive Officer Masayoshi Son is looking to raise up to $100 billion for a chip venture that will rival Nvidia, Bloomberg News reported on Friday, citing people with knowledge of the matter. The project, code-named Izanagi, will supply semiconductors essential for artificial intelligence (AI), the report added. The company would inject $30 billion into the project, with an additional $70 billion potentially coming from Middle Eastern institutions, according to the report.

The Japanese group already holds about a 90% stake in British chip designer Arm, per LSEG. SoftBank is known for its tech investments with high-conviction bets on startups at an unheard-of scale. But it had adopted a defensive strategy after being hit by plummeting valuations in the aftermath of the pandemic, when higher interest rates eroded investor appetite for risk. It returned to profit for the first time in five quarters earlier this month, as the Japanese tech investment firm was buoyed by an upturn in portfolio companies.

AI

Scientists Propose AI Apocalypse Kill Switches 104

A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions as well as from OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year, US President Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive buildup of ballistic missiles. While valuable, they warn that executing on these reporting requirements risks invading customer privacy and even leading to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital "certificate," and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication being, if implemented incorrectly, that such a kill switch could become a target for cybercriminals to exploit.
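Stripped of the hardware specifics, the licensing scheme the authors sketch is a signed, time-limited permission that the chip checks before enabling full performance. The toy Python sketch below illustrates only that control flow; the certificate fields, the throttling values, and the use of HMAC (standing in for the asymmetric signature a real on-chip co-processor would verify) are all assumptions.

```python
# Toy illustration of the paper's "digital licensing" idea, not a real implementation:
# a chip validates a signed, time-limited certificate before running at full speed.
import hashlib
import hmac
import json
import time

REGULATOR_KEY = b"hypothetical-shared-secret"   # real designs would use asymmetric keys

def sign_license(payload: dict) -> dict:
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "signature": hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()}

def allowed_performance(cert: dict, now: float) -> float:
    """Return a performance multiplier: 1.0 if the license is valid, throttled otherwise."""
    blob = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["signature"]):
        return 0.0                               # illegitimate license: the chip refuses to run
    if now > cert["payload"]["expires_at"]:
        return 0.1                               # expired license: performance is reduced
    return 1.0

cert = sign_license({"chip_id": "demo-chip-001", "expires_at": time.time() + 90 * 86400})
print(allowed_performance(cert, now=time.time()))                  # 1.0 while current
print(allowed_performance(cert, now=time.time() + 200 * 86400))    # 0.1 after expiry
```

The renewal step the paper describes would simply be the regulator issuing a fresh certificate with a later expiry before the old one lapses.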

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though it would be a potent tool, the researchers observe that this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea being that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
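The sign-off proposal, in turn, reduces to an M-of-N approval gate in front of large training runs. The sketch below is schematic only; the compute threshold, the approving parties, and the two-approval requirement are hypothetical values chosen for illustration.

```python
# Schematic illustration (not from the paper): an M-of-N approval gate a cloud provider
# might apply before scheduling a training run above some compute threshold.
from dataclasses import dataclass, field

COMPUTE_THRESHOLD_FLOP = 1e26      # hypothetical cutoff for "potentially risky" runs
REQUIRED_APPROVALS = 2             # hypothetical M in an M-of-N scheme

@dataclass
class TrainingRequest:
    model_name: str
    estimated_flop: float
    approvals: set = field(default_factory=set)   # parties that have signed off

def may_schedule(request: TrainingRequest) -> bool:
    """Small runs proceed freely; runs above the threshold need enough approvals."""
    if request.estimated_flop < COMPUTE_THRESHOLD_FLOP:
        return True
    return len(request.approvals) >= REQUIRED_APPROVALS

run = TrainingRequest("frontier-model-demo", estimated_flop=3e26)
print(may_schedule(run))           # False: above the threshold with no approvals yet
run.approvals.update({"regulator", "cloud_provider"})
print(may_schedule(run))           # True: the two-party requirement is met
```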
AI

Apple Readies AI Tool To Rival Microsoft's GitHub Copilot (businessinsider.com) 18

According to Bloomberg (paywalled), Apple plans to release a generative AI tool for iOS app developers as early as this year. Insider reports: The tech giant is working on a tool that will use artificial intelligence to write code as part of its plans to expand the capabilities of Xcode, the company's main programming software. The revamped system will compete with Microsoft's GitHub Copilot, which sources say operates similarly. Apple is also working on an AI tool that will generate code to test apps, which could provide potential time savings for a process that's known to be tedious. Currently, Apple is urging some engineers to test these new AI features to ensure they work before releasing them externally to developers. [...]

The tech giant, Bloomberg has learned, has plans to integrate AI features into its next software updates for the iPhone and iPad, known internally as Crystal. Glow, another internal AI project, is slated to be added to MacOS. The company is also building features that will generate Apple Music playlists and slideshows, according to the outlet. An AI-powered search feature titled Spotlight, currently limited to answering questions around launching apps, is in the works as well, Bloomberg reported.

AI

NY Governor Wants To Criminalize Deceptive AI (axios.com) 39

New York Gov. Kathy Hochul is proposing legislation that would criminalize some deceptive and abusive uses of AI and require disclosure of AI in election campaign materials, her office told Axios. From the report: Hochul's proposed laws include establishing the crime of "unlawful dissemination or publication of a fabricated photographic, videographic, or audio record." Making unauthorized uses of a person's voice "in connection with advertising or trade" a misdemeanor offense. Such offenses are punishable by a jail sentence of up to one year. Expanding New York's penal law to include unauthorized uses of artificial intelligence in coercion, criminal impersonation and identity theft.

Amending existing intimate images and revenge porn statutes to include "digital images" -- ranging from realistic Photoshop-produced work to advanced AI-generated content. Codifying the right to sue over digitally manipulated false images. Requiring disclosures of AI use in all forms of political communication "including video recording, motion picture, film, audio recording, electronic image, photograph, text, or any technological representation of speech or conduct" within 60 days of an election.

AI

Microsoft, Google, Meta, X and Others Pledge To Prevent AI Election Interference (nbcnews.com) 40

Twenty tech companies working on AI said Friday they had signed a "pledge" to try to prevent their software from interfering in elections, including in the United States. From a report: The signatories range from tech giants such as Microsoft and Google to a small startup that allows people to make fake voices -- the kind of generative-AI product that could be abused in an election to create convincing deepfakes of a candidate. The accord is, in effect, a recognition that the companies' own products create a lot of risk in a year in which 4 billion people around the world are expected to vote in elections.

"Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the document reads. The accord is also a recognition that lawmakers around the world haven't responded very quickly to the swift advancements in generative AI, leaving the tech industry to explore self-regulation. "As society embraces the benefits of AI, we have a responsibility to help ensure these tools don't become weaponized in elections," Brad Smith, vice chair and president of Microsoft, said in a statement. The 20 companies to sign the pledge are: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, Truepic and X.

AI

OpenAI's Spectacular Video Tool Is Shrouded in Mystery 26

Every OpenAI release elicits awe and anxiety as capabilities advance, evident in Sora's strikingly realistic AI-generated video clips that went viral while unsettling industries reliant on original footage. But the company is again being secretive in all the wrong places about AI that can be used to spread misinformation. From a report: As usual, OpenAI won't talk about the all-important ingredients that went into this new tool, even as it releases it to an array of people to test before going public. Its approach should be the other way around. OpenAI needs to be more public about the data used to train Sora, and more secretive about the tool itself, given the capabilities it has to disrupt industries and potentially elections. OpenAI Chief Executive Officer Sam Altman said that red-teaming of Sora would start on Thursday, the day the tool was announced and shared with beta testers. Red-teaming is when specialists test an AI model's security by pretending to be bad actors who want to hack or misuse it. The goal is to make sure the same can't happen in the real world. When I asked OpenAI how long it would take to run these tests on Sora, a spokeswoman said there was no set length. "We will take our time to assess critical areas for harms or risks," she added.

The company spent about six months testing GPT-4, its most recent language model, before releasing it last year. If it takes the same amount of time to check Sora, that means it could become available to the public in August, a good three months before the US election. OpenAI should seriously consider waiting to release it until after voters go to the polls. [...] OpenAI is meanwhile being frustratingly secretive about the source of the information it used to create Sora. When I asked the company about what datasets were used to train the model, a spokeswoman said the training data came "from content we've licensed, and publicly available content." She didn't elaborate further.
AI

Scientific Journal Publishes AI-Generated Rat With Gigantic Penis (vice.com) 72

Jordan Pearson reports via Motherboard: A peer-reviewed science journal published a paper this week filled with nonsensical AI-generated images, which featured garbled text and a wildly incorrect diagram of a rat penis. The episode is the latest example of how generative AI is making its way into academia with concerning effects. The paper, titled "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway," was published on Wednesday in the open-access journal Frontiers in Cell and Developmental Biology by researchers from Hong Hui Hospital and Jiaotong University in China. The paper itself is unlikely to be interesting to most people without a specific interest in the stem cells of small mammals, but the figures published with the article are another story entirely. [...]

It's unclear how this all got through the editing, peer review, and publishing process. Motherboard contacted the paper's U.S.-based reviewer, Jingbo Dai of Northwestern University, who said that it was not his responsibility to vet the obviously incorrect images. (The second reviewer is based in India.) "As a biomedical researcher, I only review the paper based on its scientific aspects. For the AI-generated figures, since the author cited Midjourney, it's the publisher's responsibility to make the decision," Dai said. "You should contact Frontiers about their policy of AI-generated figures." Frontiers' policies for authors state that generative AI is allowed, but that it must be disclosed -- which the paper's authors did -- and the outputs must be checked for factual accuracy. "Specifically, the author is responsible for checking the factual accuracy of any content created by the generative AI technology," Frontiers' policy states. "This includes, but is not limited to, any quotes, citations or references. Figures produced by or edited using a generative AI technology must be checked to ensure they accurately reflect the data presented in the manuscript."

On Thursday afternoon, after the article and its AI-generated figures circulated social media, Frontiers appended a notice to the paper saying that it had corrected the article and that a new version would appear later. It did not specify what exactly was corrected.
UPDATE: Frontiers retracted the article and issued the following statement: "Following publication, concerns were raised regarding the nature of its AI-generated figures. The article does not meet the standards of editorial and scientific rigor for Frontiers in Cell and Developmental Biology; therefore, the article has been retracted. This retraction was approved by the Chief Executive Editor of Frontiers. Frontiers would like to thank the concerned readers who contacted us regarding the published article."
AI

Air Canada Found Liable For Chatbot's Bad Advice On Plane Tickets 72

An anonymous reader quotes a report from CBC.ca: Air Canada has been ordered to pay compensation to a grieving grandchild who claimed they were misled into purchasing full-price flight tickets by an ill-informed chatbot. In an argument that appeared to flabbergast a small claims adjudicator in British Columbia, the airline attempted to distance itself from its own chatbot's bad advice by claiming the online tool was "a separate legal entity that is responsible for its own actions."

"This is a remarkable submission," Civil Resolution Tribunal (CRT) member Christopher Rivers wrote. "While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot." In a decision released this week, Rivers ordered Air Canada to pay Jake Moffatt $812 to cover the difference between the airline's bereavement rates and the $1,630.36 they paid for full-price tickets to and from Toronto bought after their grandmother died.
AI

OpenAI's Sora Turns AI Prompts Into Photorealistic Videos (wired.com) 28

An anonymous reader quotes a report from Wired: We already know that OpenAI's chatbots can pass the bar exam without going to law school. Now, just in time for the Oscars, a new OpenAI app called Sora hopes to master cinema without going to film school. For now a research product, Sora is going out to a few select creators and a number of security experts who will red-team it for safety vulnerabilities. OpenAI plans to make it available to all wannabe auteurs at some unspecified date, but it decided to preview it in advance. Other companies, from giants like Google to startups like Runway, have already revealed text-to-video AI projects. But OpenAI says that Sora is distinguished by its striking photorealism -- something I haven't seen in its competitors -- and its ability to produce longer clips than the brief snippets other models typically do, up to one minute. The researchers I spoke to won't say how long it takes to render all that video, but when pressed, they described it as more in the "going out for a burrito" ballpark than "taking a few days off." If the hand-picked examples I saw are to be believed, the effort is worth it.

OpenAI didn't let me enter my own prompts, but it shared four instances of Sora's power. (None approached the purported one-minute limit; the longest was 17 seconds.) The first came from a detailed prompt that sounded like an obsessive screenwriter's setup: "Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes." The result is a convincing view of what is unmistakably Tokyo, in that magic moment when snowflakes and cherry blossoms coexist. The virtual camera, as if affixed to a drone, follows a couple as they slowly stroll through a streetscape. One of the passersby is wearing a mask. Cars rumble by on a riverside roadway to their left, and to the right shoppers flit in and out of a row of tiny shops.

It's not perfect. Only when you watch the clip a few times do you realize that the main characters -- a couple strolling down the snow-covered sidewalk -- would have faced a dilemma had the virtual camera kept running. The sidewalk they occupy seems to dead-end; they would have had to step over a small guardrail to a weird parallel walkway on their right. Despite this mild glitch, the Tokyo example is a mind-blowing exercise in world-building. Down the road, production designers will debate whether it's a powerful collaborator or a job killer. Also, the people in this video -- who are entirely generated by a digital neural network -- aren't shown in close-up, and they don't do any emoting. But the Sora team says that in other instances they've had fake actors showing real emotions.
"It will be a very long time, if ever, before text-to-video threatens actual filmmaking," concludes Wired. "No, you can't make coherent movies by stitching together 120 of the minute-long Sora clips, since the model won't respond to prompts in the exact same way -- continuity isn't possible. But the time limit is no barrier for Sora and programs like it to transform TikTok, Reels, and other social platforms."

"In order to make a professional movie, you need so much expensive equipment," says Bill Peebles, another researcher on the project. "This model is going to empower the average person making videos on social media to make very high-quality content."

Further reading: OpenAI Develops Web Search Product in Challenge To Google
AI

Service Jobs Now Require Bizarre Personality Test From AI Company (404media.co) 128

An anonymous reader shares a report: Applying to some of the most common customer and food service jobs in the country now requires a long and bizarre personality quiz featuring blue humanoid aliens, which tells employers how potential hires rank in terms of "agreeableness" and "emotional stability." If you've applied to a job at FedEx, McDonald's, or Darden Restaurants (the company that operates multiple chains including Olive Garden) you might have already encountered this quiz, as all these companies and others are clients of Paradox.ai, the company which runs the test and helps them with other recruiting tasks.

Judging by the reaction on Reddit, where Paradox.ai's personality quiz has gone viral a couple of times in recent weeks and bewildered many users, most people are not familiar with the process. Personality quizzes as part of an application for hourly work aren't new, but the Paradox.ai test has gone repeatedly viral in recent weeks presumably because of the bizarre scenarios it presents applicants with and the blue humanoid alien thing. Other clients listed on Paradox's website include CVS, GM, Nestle, 3M, and Unilever.
