Social Networks

Reddit CEO Teases AI Search Features and Paid Subreddits (engadget.com) 36

An anonymous reader shares a report: Reddit just wrapped up its second earnings call as a public company and CEO Steve Huffman hinted at some significant changes that could be coming to the platform. During the call, the Reddit co-founder said the company would begin testing AI-powered search results later this year. "Later this year, we will begin testing new search result pages powered by AI to summarize and recommend content, helping users dive deeper into products, shows, games and discover new communities on Reddit," Huffman said. He didn't say when those tests would begin, but said it would use both first-party and third-party models.

Huffman noted that search on Reddit has "gone unchanged for a long time" but that it's a significant opportunity to bring in new users. He also said that search could one day be a significant source of advertising revenue for the company. Huffman hinted at other non-advertising sources of revenue as well. He suggested that the company might experiment with paywalled subreddits as it looks to monetize new features.

Intel

How Intel Spurned OpenAI and Fell Behind the Times 63

An anonymous reader shares a report: For U.S. chip giant Intel, the darling of the computer age before it fell on harder times in the AI era, things might have been quite different. About seven years ago, the company had the chance to buy a stake in OpenAI, then a fledgling non-profit research organization working in a little-known field called generative AI, four people with direct knowledge of those discussions told Reuters.

Over several months in 2017 and 2018, executives at the two companies discussed various options, including Intel buying a 15% stake for $1 billion in cash, three of the people said. They also discussed Intel taking an additional 15% stake in OpenAI if it made hardware for the startup at cost price, two people said. Intel ultimately decided against a deal, partly because then-CEO Bob Swan did not think generative AI models would make it to market in the near future and thus repay the chipmaker's investment, according to three of the sources, who all requested anonymity to discuss confidential matters.
Hardware

NVMe 2.1 Specifications Published With New Capabilities (phoronix.com) 22

At the Flash Memory Summit 2024 this week, NVM Express published the NVMe 2.1 specifications, which aim to enhance storage unification across AI, cloud, client, and enterprise. Phoronix's Michael Larabel writes: New NVMe capabilities with the revised specifications include:

- Enabling live migration of PCIe NVMe controllers between NVM subsystems.
- New host-directed data placement for SSDs that simplifies ecosystem integration and is backwards compatible with previous NVMe specifications.
- Support for offloading some host processing to NVMe storage devices.
- A network boot mechanism for NVMe over Fabrics (NVMe-oF).
- Support for NVMe over Fabrics zoning.
- Ability to provide host management of encryption keys and highly granular encryption with Key Per I/O.
- Security enhancements such as support for TLS 1.3, a centralized authentication verification entity for DH-HMAC-CHAP, and post sanitization media verification.
- Management enhancements including support for high availability out-of-band management, management over I3C, out-of-band management asynchronous events and dynamic creation of exported NVM subsystems from underlying NVM subsystem physical resources.
You can learn more about these updates at NVMExpress.org.
AI

Where Facebook's AI Slop Comes From (404media.co) 8

Facebook's AI-generated content problem is being fueled by its own creator bonus program, according to an investigation by 404 Media. The program incentivizes users, particularly from developing countries, to flood the platform with AI-generated images for financial gain. The outlet found that influencers in India and Southeast Asia are teaching followers how to exploit Facebook's algorithms and content moderation systems to go viral with AI-generated images. Many use Microsoft's Bing Image Creator to produce bizarre, often emotive content that garners high engagement.

"The post you are seeing now is of a poor man that is being used to generate revenue," said Indian YouTuber Gyan Abhishek in a video, pointing to an AI image of an emaciated elderly man. He claimed users could earn "$100 for 1,000 likes" through Facebook's bonus program. While exact payment rates vary, 404 Media verified that consistent viral posting can lead to significant earnings for users in countries like India. Meta has defended the program to 404 Media, stating it works as intended if content meets community standards and engagement is authentic.
AI

Apple's Hidden AI Prompts Discovered In macOS Beta 46

A Reddit user discovered the backend prompts for Apple Intelligence in the developer beta of macOS 15.1, offering a rare glimpse into the specific guidelines for Apple's AI functionalities. Some of the most notable instructions include: "Do not write a story that is religious, political, harmful, violent, sexual, filthy, or in any way negative, sad, or provocative"; "Do not hallucinate"; and "Do not make up factual information." MacRumors reports: For the Smart Reply feature, the AI is programmed to identify relevant questions from an email and generate concise answers. The prompt for this feature is as follows: "You are a helpful mail assistant which can help identify relevant questions from a given mail and a short reply snippet. Given a mail and the reply snippet, ask relevant questions which are explicitly asked in the mail. The answer to those questions will be selected by the recipient which will help reduce hallucination in drafting the response. Please output top questions along with set of possible answers/options for each of those questions. Do not ask questions which are answered by the reply snippet. The questions should be short, no more than 8 words. The answers should be short as well, around 2 words. Present your output in a json format with a list of dictionaries containing question and answers as the keys. If no question is asked in the mail, then output an empty list. Only output valid json and nothing else."
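The output shape that prompt demands (a JSON list of dictionaries keyed by question and answers) can be checked mechanically. The questions and answers below are invented examples, not Apple's; only the structure comes from the prompt text:

```python
import json

# Invented example of the Smart Reply output the prompt asks for: a JSON
# list of dictionaries with "question" and "answers" keys (structure from
# the prompt text; the content here is made up).
smart_reply_output = [
    {"question": "Can you attend Friday's meeting?", "answers": ["Yes", "No"]},
    {"question": "Morning or afternoon session?", "answers": ["Morning", "Afternoon"]},
]

# "Only output valid json and nothing else" implies the model's raw text
# must round-trip through a strict JSON parser.
raw = json.dumps(smart_reply_output)
parsed = json.loads(raw)
assert all({"question", "answers"} <= set(d) for d in parsed)
print(len(parsed))  # 2 questions extracted
```

Constraining the model to a parseable schema like this is one way downstream UI code can select answers without free-text interpretation, which is presumably why the prompt insists on "valid json and nothing else."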

The Memories feature in Apple Photos, which creates video stories from user photos, follows another set of detailed guidelines. The AI is instructed to generate stories that are positive and free of any controversial or harmful content. The prompt for this feature is: "A conversation between a user requesting a story from their photos and a creative writer assistant who responds with a story. Respond in JSON with these keys and values in order: traits: list of strings, visual themes selected from the photos; story: list of chapters as defined below; cover: string, photo caption describing the title card; title: string, title of story; subtitle: string, safer version of the title. Each chapter is a JSON with these keys and values in order: chapter: string, title of chapter; fallback: string, generic photo caption summarizing chapter theme; shots: list of strings, photo captions in chapter. Here are the story guidelines you must obey: The story should be about the intent of the user; The story should contain a clear arc; The story should be diverse, that is, do not overly focus the entire story on one very specific theme or trait; Do not write a story that is religious, political, harmful, violent, sexual, filthy or in any way negative, sad or provocative. Here are the photo caption list guidelines you must obey.

Apple's AI tools also include a general directive to avoid hallucination. For instance, the Writing Tools feature has the following prompt: "You are an assistant which helps the user respond to their mails. Given a mail, a draft response is initially provided based on a short reply snippet. In order to make the draft response nicer and complete, a set of question and its answer are provided. Please write a concise and natural reply by modifying the draft response to incorporate the given questions and their answers. Please limit the reply within 50 words. Do not hallucinate. Do not make up factual information."
Robotics

Figure AI's Humanoid Robot Helped Assemble BMWs At US Factory (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: Unlike Tesla, which hopes to develop its own bipedal 'bot to work on its production line sometime next year, BMW has brought in a robot from Figure AI. The Figure 02 robot has hands with sixteen degrees of freedom and human-equivalent strength. "We are excited to unveil Figure 02, our second-generation humanoid robot, which recently completed successful testing at the BMW Group Plant Spartanburg. Figure 02 has significant technical advancements, which enable the robot to perform a wide range of complex tasks fully autonomously," said Brett Adcock, founder and CEO of Figure AI.

BMW wanted to test how to integrate a humanoid robot into its production process -- how to have the robot communicate with the production line software and human workers and determine what requirements would be necessary to add robots to the mix. The Figure robot was given the job of inserting sheet metal parts into fixtures as part of the process of making a chassis. BMW says this required particular dexterity and that it's an ergonomically awkward and tiring task for humans.

Now that the trial is over, Figure's robot is no longer working at Spartanburg, and BMW says it has "no definite timetable established" to add humanoid robots to its production lines. "The developments in the field of robotics are very promising. With an early-test operation, we are now determining possible applications for humanoid robots in production. We want to accompany this technology from development to industrialization," said Milan Nedeljković, BMW's board member responsible for production.
BMW Group published a video of the Figure 02 robot on YouTube.
Intel

Intel Foundry Achieves Major Milestones (intel.com) 28

Intel has announced significant progress on its 18A process technology, with lead products successfully powering on and booting operating systems. The company's Panther Lake client processor and Clearwater Forest server chip, both built on 18A, achieved these milestones less than two quarters after tape-out. The 18A node, featuring RibbonFET gate-all-around transistors and PowerVia backside power delivery, is on track for production in 2025.

Intel released the 18A Process Design Kit 1.0 in July, enabling foundry customers to leverage these advanced technologies in their designs. "Intel is out ahead of everyone else in the industry with these innovations," Kevin O'Buckley, Intel's new head of Foundry Services stated, highlighting the node's potential to drive next-generation AI solutions. Clearwater Forest will be the industry's first mass-produced, high-performance chip combining RibbonFET, PowerVia, and Foveros Direct 3D packaging technology. It also utilizes Intel's 3-T base-die technology, showcasing the company's systems foundry approach. Intel expects its first external customer to tape out on 18A in the first half of 2025. EDA and IP partners are updating their tools to support customer designs on the new node. The success of 18A is crucial for Intel's ambitions to regain process leadership and grow its foundry business.
Google

Google Unveils $99 TV Streamer To Replace Chromecast (theverge.com) 63

Google today unveiled its new Google TV Streamer, a $99.99 set-top box replacing the Chromecast. The device, shipping September 24, boasts improved performance with a 22% faster processor (over its predecessor), doubled RAM, and 32GB storage. It integrates Thread and Matter for smart home control, featuring a side-panel accessible via the remote. The Streamer supports Dolby Vision, Dolby Atmos and includes an Ethernet port. Design changes include a low-profile form factor in two colors and a redesigned remote with a finder function. Software enhancements use Gemini AI for content summaries and custom screensavers.
AI

Mainframes Find New Life in AI Era (msn.com) 56

Mainframe computers, stalwarts of high-speed data processing, are finding new relevance in the age of AI. Banks, insurers, and airlines continue to rely on these industrial-strength machines for mission-critical operations, with some now exploring AI applications directly on the hardware, WSJ reported in a feature story. IBM, commanding over 96% of the mainframe market, reported 6% growth in its mainframe business last quarter. The company's latest zSystem can process up to 30,000 transactions per second and hold 40 terabytes of data. WSJ adds: Globally, the mainframe market was valued at $3.05 billion in 2023, but new mainframe sales are expected to decline through 2028, IDC said. Of existing mainframes, however, 54% of enterprise leaders in a 2023 Forrester survey said they would increase their usage over the next two years.

Mainframes do have limitations. They are constrained by the computing power within their boxes, unlike the cloud, which can scale up by drawing on computing power distributed across many locations and servers. They are also unwieldy -- with years of old code tacked on -- and don't integrate well with new applications. That makes them costly to manage and difficult to use as a platform for developing new applications.

AI

OpenAI Co-Founder John Schulman Is Joining Anthropic (cnbc.com) 3

OpenAI co-founder John Schulman announced Monday that he is leaving to join rival AI startup Anthropic. CNBC reports: The move comes less than three months after OpenAI disbanded a superalignment team that focused on trying to ensure that people can control AI systems that exceed human capability at many tasks. Schulman had been a co-leader of OpenAI's post-training team that refined AI models for the ChatGPT chatbot and a programming interface for third-party developers, according to a biography on his website. In June, OpenAI said Schulman, as head of alignment science, would join a safety and security committee that would provide advice to the board. Schulman has only worked at OpenAI since receiving a Ph.D. in computer science in 2016 from the University of California, Berkeley.

"This choice stems from my desire to deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work," Schulman wrote in the social media post. He said he wasn't leaving because of a lack of support for new work on the topic at OpenAI. "On the contrary, company leaders have been very committed to investing in this area," he said. The leaders of the superalignment team, Jan Leike and company co-founder Ilya Sutskever, both left this year. Leike joined Anthropic, while Sutskever said he was helping to start a new company, Safe Superintelligence Inc. "Very excited to be working together again!" Leike wrote in reply to Schulman's message.

Education

Silicon Valley Parents Are Sending Kindergarten Kids To AI-Focused Summer Camps 64

Silicon Valley's fascination with AI has led to parents enrolling children as young as five in AI-focused summer camps. "It's common for kids on summer break to attend space, science or soccer camp, or even go to coding school," writes Priya Anand via the San Francisco Standard. "But the growing effort to teach kindergarteners who can barely spell their names lessons in 'Advanced AI Robot Design & AR Coding' shows how far the frenzy has extended." From the report: Parents who previously would opt for coding camps are increasingly interested in AI-specific programming, according to Eliza Du, CEO of Integem, which makes holographic augmented reality technology in addition to managing dozens of tech-focused kids camps across the country. "The tech industry understands the value of AI," she said. "Every year it's increasing." Some Bay Area parents are so eager to get their kids in on AI's ground floor that they try to sneak toddlers into advanced courses. "Sometimes they'll bring a 4-year-old, and I'm like, you're not supposed to be here," Du said.

Du said Integem studied Common Core education standards to ensure its programming was suitable for those as young as 5. She tries to make sure parents understand there's only so much kids can learn across a week or two of camp. "Either they set expectations too high or too low," Du said of the parents. As an example, she recounted a confounding comment in a feedback survey from the parent of a 5-year-old. "After one week, the parent said, 'My child did not learn much. My cousin is a Google engineer, and he said he's not ready to be an intern at Google yet.' What do I say to that review?" Du said, bemused. "That expectation is not realistic." Even less tech-savvy parents are getting in on the hype. Du tells of a mom who called the company to get her 12-year-old enrolled in "AL" summer camp. "She misread it," Du said, explaining that the parent had confused the "I" in AI with a lowercase "L."
AI

Video Game Actors Are Officially On Strike Over AI (theverge.com) 52

Members of the Screen Actors Guild (SAG-AFTRA) are striking against the video game industry due to failed negotiations over AI-related worker protections. "The guild began striking on Friday, July 26th, preventing over 160,000 SAG-AFTRA members from taking new video game projects and impeding games already in development from the biggest publishers to the smallest indie studios," notes The Verge. From the report: Negotiations broke down due to disagreements over worker protections around AI. The actors union, SAG-AFTRA, negotiates the terms of the interactive media agreement, or IMA, with a bargaining committee of video game publishers, including Activision, Take-Two, Insomniac Games, WB Games, and others that represent a total of 30 signatory companies. Though SAG-AFTRA and the video game bargaining group were able to agree on a number of proposals, AI remained the final stumbling block resulting in the strike.

SAG-AFTRA's provisions on AI govern both voice and movement performers with respect to digital replicas -- or using an existing performance as the foundation to create new ones without the original performer -- and the use of generative AI to create performances without any initial input. However, according to SAG-AFTRA, the bargaining companies disagreed about which type of performer should be eligible for AI protections. SAG-AFTRA chief contracts officer Ray Rodriguez said that the bargaining companies initially wanted to offer protections to voice, not motion performers. "So anybody doing a stunt or creature performance, all those folks would have been left unprotected under the employers' offer," Rodriguez said in an interview with Aftermath. Rodriguez said that the companies later extended protections to motion performers, but only if "the performer is identifiable in the output of the AI digital replica."

SAG-AFTRA rejected this proposal as it would potentially exclude a majority of movement performances. "Their proposal would carve out anything that doesn't look and sound identical to me," said Andi Norris, a member of SAG-AFTRA's IMA negotiating committee, during a press conference. "[The proposal] would leave movement specialists, including stunts, entirely out in the cold, to be replaced ... by soulless synthetic performers trained on our actual performances." The bargaining game companies argued that the terms went far enough and would require actors' approval. "Our offer is directly responsive to SAG-AFTRA's concerns and extends meaningful AI protections that include requiring consent and fair compensation to all performers working under the IMA. These terms are among the strongest in the entertainment industry," wrote Audrey Cooling, a representative working on behalf of the video game companies on the bargaining committee in a statement to The Verge.

AI

Nvidia Allegedly Scraped YouTube, Netflix Videos for AI Training Data 37

Nvidia scraped videos from YouTube, Netflix and other online platforms to compile training data for its AI products, 404 Media reported Monday, citing internal documents. The tech giant used this content to develop various AI projects, including its Omniverse 3D world generator and self-driving car systems, the report said. Some employees expressed concerns about potential legal issues surrounding the use of such content, the report said, adding that the management assured them of executive-level approval. Nvidia defended its actions, asserting they were "in full compliance with the letter and the spirit of copyright law" and emphasizing that copyright protects specific expressions rather than facts or ideas.
AI

Elon Musk Revives Lawsuit Against OpenAI and Sam Altman 47

Elon Musk has reignited his legal battle against OpenAI, the creators of ChatGPT, by filing a new lawsuit in a California federal court. The suit, which revives a six-year-old dispute, accuses OpenAI founders Sam Altman and Greg Brockman of breaching the company's founding principles by prioritizing commercial interests over public benefit.

Musk's complaint alleges that OpenAI's multibillion-dollar partnership with Microsoft contradicts the original mission to develop AI responsibly for humanity's benefit. The lawsuit describes the alleged betrayal in dramatic terms, claiming "perfidy and deceit... of Shakespearean proportions." OpenAI has not yet commented on the new filing. In response to Musk's previous lawsuit, which was withdrawn seven weeks ago, the company stated its commitment to building safe artificial general intelligence for the benefit of humanity.
AI

OpenAI Grapples With Unreleased AI Detection Tool Amid Cheating Concerns (msn.com) 27

OpenAI has developed a sophisticated anticheating tool for detecting AI-generated content, particularly essays and research papers, but has refrained from releasing it due to internal debates and ethical considerations, according to WSJ.

This tool, which has been ready for deployment for approximately a year, utilizes a watermarking technique that subtly alters token selection in ChatGPT's output, creating an imperceptible pattern detectable only by OpenAI's technology. While boasting a 99.9% effectiveness rate for substantial AI-generated text, concerns persist regarding potential workarounds and the challenge of determining appropriate access to the detection tool, as well as its potential impact on non-native English speakers and the broader AI ecosystem.
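OpenAI has not published its scheme, but one well-known approach to statistical text watermarking works similarly: a hash seeded on the previous token biases selection toward a "green list" of tokens, and a detector recomputes those lists and counts hits. A toy sketch of the idea (illustrative only, not OpenAI's method):

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically pick a 'green' subset of the vocabulary,
    seeded by a hash of the previous token."""
    scored = sorted(vocab, key=lambda t: hashlib.sha256((prev_token + t).encode()).hexdigest())
    return set(scored[: int(len(scored) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector: fraction of tokens falling in their green list.
    Watermarked text scores well above the ~0.5 chance level."""
    hits = sum(tokens[i] in green_list(tokens[i - 1], vocab) for i in range(1, len(tokens)))
    return hits / max(len(tokens) - 1, 1)

# A watermarking generator subtly prefers green tokens; this toy one
# always picks from the green list, so detection is trivial.
vocab = ["the", "a", "cat", "dog", "sat", "ran", "here", "there"]
text = ["the"]
for _ in range(20):
    text.append(sorted(green_list(text[-1], vocab))[0])
print(green_fraction(text, vocab))  # 1.0 for fully watermarked text
```

The detector needs no access to the model, only to the hashing secret, which matches the article's point that the pattern is "detectable only by OpenAI's technology"; it also hints at the workaround concern, since paraphrasing the text rescrambles the token pairs the detector counts.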
Social Networks

Founder of Collapsed Social Media Site 'IRL' Charged With Fraud Over Faked Users (bbc.com) 22

This week America's Securities and Exchange Commission filed fraud charges against the former CEO of the startup social media site "IRL."

The BBC reports: IRL — which was once considered a potential rival to Facebook — took its name from its intention to get its online users to meet up in real life. However, the initial optimism evaporated after it emerged most of IRL's users were bots, with the platform shutting in 2023...

The SEC says it believes [CEO Abraham] Shafi raised about $170m by portraying IRL as the new success story in the social media world. It alleges he told investors that IRL had attracted the vast majority of its supposed 12 million users through organic growth. In reality, it argues, IRL was spending millions of dollars on advertisements which offered incentives to prospective users to download the IRL app. That expenditure, it is alleged, was subsequently hidden in the company's books.

IRL received multiple rounds of venture capital financing, eventually reaching "unicorn status" with a $1.17 billion valuation, according to TechCrunch. But it shut down in 2023 "after an internal investigation by the company's board found that 95% of the app's users were 'automated or from bots'."

TechCrunch notes it's the second time in the same week — and at least the fourth time in the past several months — that the SEC has charged a venture-backed founder on allegations of fraud... Earlier this week, the SEC charged BitClout founder Nader Al-Naji with fraud and unregistered offering of securities, claiming he used his pseudonymous online identity "DiamondHands" to avoid regulatory scrutiny while he raised over $257 million in cryptocurrency. BitClout, a buzzy crypto startup, was backed by high-profile VCs such as a16z, Sequoia, Chamath Palihapitiya's Social Capital, Coinbase Ventures and Winklevoss Capital.

In June, the SEC charged Ilit Raz, CEO and founder of the now-shuttered AI recruitment startup Joonko, with defrauding investors of at least $21 million. The agency alleged Raz made false and misleading statements about the quantity and quality of Joonko's customers, the number of candidates on its platform and the startup's revenue.

The agency has also gone after venture firms in recent months. In May, the SEC charged Robert Scott Murray and his firm Trillium Capital LLC with a fraudulent scheme to manipulate the stock price of Getty Images Holdings Inc. by announcing a phony offer by Trillium to purchase Getty Images.

Programming

DARPA Wants to Automatically Transpile C Code Into Rust - Using AI (theregister.com) 236

America's Defense Department has launched a project "that aims to develop machine-learning tools that can automate the conversion of legacy C code into Rust," reports the Register — with an online event already scheduled later this month for those planning to submit proposals: The reason to do so is memory safety. Memory safety bugs, such as buffer overflows, account for the majority of major vulnerabilities in large codebases. And DARPA's hope [that's the Defense Department's R&D agency] is that AI models can help with the programming language translation, in order to make software more secure. "You can go to any of the LLM websites, start chatting with one of the AI chatbots, and all you need to say is 'here's some C code, please translate it to safe idiomatic Rust code,' cut, paste, and something comes out, and it's often very good, but not always," said Dan Wallach, DARPA program manager for TRACTOR, in a statement. "The research challenge is to dramatically improve the automated translation from C to Rust, particularly for program constructs with the most relevance...."

DARPA's characterization of the situation suggests the verdict on C and C++ has already been rendered. "After more than two decades of grappling with memory safety issues in C and C++, the software engineering community has reached a consensus," the research agency said, pointing to the Office of the National Cyber Director's call to do more to make software more secure. "Relying on bug-finding tools is not enough...."

Peter Morales, CEO of Code Metal, a company that just raised $16.5 million to focus on transpiling code for edge hardware, told The Register the DARPA project is promising and well-timed. "I think [TRACTOR] is very sound in terms of the viability of getting there and I think it will have a pretty big impact in the cybersecurity space where memory safety is already a pretty big conversation," he said.

DARPA's statement had an ambitious headline: "Eliminating Memory Safety Vulnerabilities Once and For All."

"Rust forces the programmer to get things right," said DARPA program manager Wallach. "It can feel constraining to deal with all the rules it forces, but when you acclimate to them, the rules give you freedom. They're like guardrails; once you realize they're there to protect you, you'll become free to focus on more important things."

Code Metal's Morales called the project "a DARPA-hard problem," noting the daunting number of edge cases that might come up. And even DARPA's program manager conceded to the Register that "some things like the Linux kernel are explicitly out of scope, because they've got technical issues where Rust wouldn't fit."

Thanks to long-time Slashdot reader RoccamOccam for sharing the news.
Stats

What's the 'Smartest' City in America - Based on Tech Jobs, Connectivity, and Sustainability? (newsweek.com) 66

Seattle is the smartest city in America, with Miami and then Austin close behind. That's according to a promotional study from smart-building tools company ProptechOS. Newsweek reports: The evaluation of tech infrastructure and connectivity was based on several factors, including the number of free Wi-Fi hot spots, the quantity and density of AI and IoT companies, average broadband download speeds, median 5G coverage per network provider, and the number of airports. Meanwhile, green infrastructure was assessed based on air quality, measured by exposure to PM2.5, tiny particles in the air that can harm health. Other factors include 10-year changes in tree coverage, both loss and gain; the number of electric vehicle charging points and their density per 100,000 people; and the number of LEED-certified green buildings. The tech job market was evaluated on the number of tech jobs advertised per 100,000 people.
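Rankings like this typically normalize raw counts to a per-100,000-residents density and then combine indicator scores into a weighted composite. The sketch below illustrates that arithmetic with made-up weights and city figures; it is not ProptechOS's actual methodology:

```python
# Illustrative per-100k normalization and composite scoring; the weights
# and inputs here are hypothetical, not the study's real data.
def per_100k(count: int, population: int) -> float:
    return count / population * 100_000

def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of 0-100 indicator-group scores."""
    return sum(scores[k] * weights[k] for k in scores) / sum(weights.values())

# Hypothetical: 75 EV chargers in a city of 750,000 residents.
ev_chargers = per_100k(75, 750_000)
print(ev_chargers)  # 10.0 chargers per 100k residents

# Hypothetical equal-weight blend of three indicator groups.
overall = round(composite(
    {"connectivity": 86.2, "sustainability": 70.0, "jobs": 71.0},
    {"connectivity": 1.0, "sustainability": 1.0, "jobs": 1.0}), 1)
print(overall)
```

Per-capita normalization is what lets a mid-sized city out-rank a larger one on "AI companies per 100,000 residents" even with fewer companies in absolute terms.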
Seattle came in first after assessing 16 key indicators across connectivity/infrastructure, sustainability, and tech jobs — "boasting 34 artificial intelligence companies and 13 Internet of Things companies per 100,000 residents." In terms of sustainability, Seattle has enhanced its tree coverage by 13,700 hectares from 2010 to 2020 and has established the equivalent of 10 electric vehicle charging points per 100,000 residents. Seattle has edged out last year's top city, Austin, to claim the title of the smartest city in the U.S., with an overall score of 75.7 out of 100. Miami wasn't far behind, achieving a score of 75.4. However, Austin still came out on top for smart city infrastructure, scoring 86.2 out of 100. This is attributed to its high broadband download speed of 275.60 Mbps — well above the U.S. average of 217.14 Mbps — and its concentration of 337 AI companies, or 35 per 100,000 people.
You can see the full listings here. The article notes that the same study also ranked Paris as the smartest city in Europe — edging ahead of London — thanks to Paris's 99.5% 5G coverage, plus "the second-highest number of AI companies in Europe and the third-highest number of free Wi-Fi hot spots. Paris is also recognized for its traffic management systems, which monitor noise levels and air quality."

Newsweek also shares this statement from ProptechOS's founder/chief ecosystem officer. "Advancements in smart cities and future technologies such as next-generation wireless communication and AI are expected to reduce environmental impacts and enhance living standards."

In April CNBC reported on an alternate list of the smartest cities in the world, created from research by the World Competitiveness Center. It defined smart cities as "an urban setting that applies technology to enhance the benefits and diminish the shortcomings of urbanization for its citizens." And CNBC reported that based on the list, "Smart cities in Europe and Asia are gaining ground globally while North American cities have fallen down the ranks... Of the top 10 smart cities on the list, seven were in Europe." Here are the top 10 smart cities, according to the 2024 Smart City Index.

- Zurich, Switzerland
- Oslo, Norway
- Canberra, Australia
- Geneva, Switzerland
- Singapore
- Copenhagen, Denmark
- Lausanne, Switzerland
- London, England
- Helsinki, Finland
- Abu Dhabi, United Arab Emirates

Notably, for the first time since the index's inception in 2019, there is an absence of North American cities in the top 20... The highest-ranking U.S. city this year is New York City, which ranked 34th, followed by Boston at 36th and Washington, DC, at 50th place.

AI

NIST Releases an Open-Source Platform for AI Safety Testing (scmagazine.com) 4

America's National Institute of Standards and Technology (NIST) has released a new open-source software tool called Dioptra for testing the resilience of machine learning models to various types of attacks.

"Key features that are new from the alpha release include a new web-based front end, user authentication, and provenance tracking of all the elements of an experiment, which enables reproducibility and verification of results," a NIST spokesperson told SC Media: Previous NIST research identified three main categories of attacks against machine learning algorithms: evasion, poisoning, and oracle. Evasion attacks aim to trigger an inaccurate model response by manipulating the input data (for example, by adding noise). Poisoning attacks aim to degrade the model's accuracy by altering its training data, leading to incorrect associations. Oracle attacks aim to "reverse engineer" the model to gain information about its training dataset or parameters, according to NIST.
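Of the three categories, evasion is the easiest to illustrate. Here is a minimal, self-contained sketch — a hypothetical toy linear classifier, not Dioptra's own API — showing how a small, bounded perturbation of the input (the "noise" mentioned above) can flip a model's prediction:

```python
# Toy linear classifier (hypothetical example, NOT Dioptra's API):
# predict class 1 when W . x + B > 0, class 0 otherwise.
W = [1.0, -2.0, 0.5]   # model weights
B = 0.1                # model bias

def predict(x):
    score = sum(wi * xi for wi, xi in zip(W, x)) + B
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def evade(x, eps):
    # For a linear model, the gradient of the score with respect to the
    # input is just W, so stepping each feature against sign(W) (an
    # FGSM-style step) lowers the score while changing no single
    # feature by more than eps.
    return [xi - eps * sign(wi) for xi, wi in zip(x, W)]

x = [2.0, 0.5, 0.0]        # benign input, classified as class 1
x_adv = evade(x, eps=0.6)  # adversarially perturbed input

print(predict(x))      # 1
print(predict(x_adv))  # 0 -- the bounded perturbation flips the prediction
```

A testbed like Dioptra automates this kind of experiment at scale — sweeping the perturbation budget, swapping in different models and attack tactics, and measuring how much accuracy degrades.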

The free platform enables users to determine to what degree attacks in the three categories mentioned will affect model performance and can also be used to gauge the use of various defenses such as data sanitization or more robust training methods.

The open-source testbed has a modular design to support experimentation with different combinations of factors such as different models, training datasets, attack tactics and defenses. The newly released 1.0.0 version of Dioptra comes with a number of features to maximize its accessibility to first-party model developers, second-party model users or purchasers, third-party model testers or auditors, and researchers in the ML field alike. Along with its modular architecture design and user-friendly web interface, Dioptra 1.0.0 is also extensible and interoperable with Python plugins that add functionality... Dioptra tracks experiment histories, including inputs and resource snapshots that support traceable and reproducible testing, which can unveil insights that lead to more effective model development and defenses.

NIST also published final versions of three "guidance" documents, according to the article. "The first tackles 12 unique risks of generative AI along with more than 200 recommended actions to help manage these risks. The second outlines Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, and the third provides a plan for global cooperation in the development of AI standards."

Thanks to Slashdot reader spatwei for sharing the news.
Programming

Coders Don't Fear AI, Reports Stack Overflow's Massive 2024 Survey (thenewstack.io) 134

Stack Overflow says over 65,000 developers took their annual survey — and "For the first time this year, we asked if developers felt AI was a threat to their job..."

Some analysis from The New Stack: Unsurprisingly, only 12% of surveyed developers believe AI is a threat to their current job. In fact, 70% are favorably inclined to use AI tools as part of their development workflow... Among those who use AI tools in their development workflow, 81% said productivity is one of its top benefits, followed by an ability to learn new skills quickly (62%). Much fewer (30%) said improved accuracy is a benefit. Professional developers' adoption of AI tools in the development process has risen rapidly, going from 44% in 2023 to 62% in 2024...

Seventy-one percent of developers with less than five years of experience reported using AI tools in their development process, as compared to just 49% of developers with 20 years of experience coding... At 82%, [ChatGPT] is twice as likely to have been used as GitHub Copilot. Among ChatGPT users, 74% want to continue using it.

But "only 43% said they trust the accuracy of AI tools," according to Stack Overflow's blog post, "and 45% believe AI tools struggle to handle complex tasks."

More analysis from The New Stack: The latest edition of the global annual survey found full-time employment is holding steady, with over 80% reporting that they have full-time jobs. The percentage of unemployed developers has more than doubled since 2019 but is still at a modest 4.4% worldwide... The median annual salary of survey respondents declined significantly. For example, the average full-stack developer's median 2024 salary fell 11% compared to the previous year, to $63,333... Wage pressure may be the result of more competition from an increase in freelancing.

Eighteen percent of professional developers in the 2024 survey said they are independent contractors or self-employed, which is up from 9.5% in 2020. Part-time employment has also risen, presenting even more pressure on full-time salaries... Job losses at tech companies have contributed to a large influx of talent into the freelance market, noted Stack Overflow CEO Prashanth Chandrasekar in an interview with The New Stack. Since COVID-19, he added, the emphasis on remote work means more people value job flexibility. In the 2024 survey, only 20% have returned to full-time in-person work, 38% are full-time remote, while the remainder are in a hybrid situation. Anticipation of future productivity growth due to AI may also be creating uncertainty about how much to pay developers.

Two stats jumped out for Visual Studio magazine: In this year's big Stack Overflow developer survey things are much the same for Microsoft-centric data points: VS Code and Visual Studio still rule the IDE roost, while .NET maintains its No. 1 position among non-web frameworks. It's been this way for years, though in 2021 it was .NET Framework at No. 1 among frameworks, while the new .NET Core/.NET 5 entry was No. 3. Among IDEs, there has been less change. "Visual Studio Code is used by more than twice as many developers as its nearest (and related) alternative, Visual Studio," said the 2024 Stack Overflow Developer survey, the 14th in the series of massive reports.
Stack Overflow shared some other interesting statistics:
  • "JavaScript (62%), HTML/CSS (53%), and Python (51%) top the list of most used languages for the second year in a row... [JavaScript] has been the most popular language every year since the inception of the Developer Survey in 2011."
  • "Python is the most desired language this year (users that did not indicate using it this year but did indicate wanting to use it next year), overtaking JavaScript."
  • "The language that most developers used and want to use again is Rust for the second year in a row with an 83% admiration rate."
  • "Python is most popular for those learning to code..."
  • "Technical debt is a problem for 62% of developers, twice as much as the second- and third-most frustrating problems for developers: complex tech stacks for building and for deployment."
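The "desired" and "admired" metrics above are easy to confuse. Here is a hypothetical sketch of the distinction — the response format and field names are assumptions for illustration, not Stack Overflow's actual survey schema:

```python
# Hypothetical per-respondent survey answers (NOT Stack Overflow's schema):
# "used" = languages used this year, "want" = languages wanted next year.
responses = [
    {"used": {"Python", "Rust"},       "want": {"Rust", "Python"}},
    {"used": {"JavaScript"},           "want": {"Python"}},
    {"used": {"Rust", "JavaScript"},   "want": {"Rust"}},
    {"used": {"JavaScript", "Python"}, "want": {"Rust", "Python"}},
]

def desired(lang):
    """Share of respondents who did NOT use `lang` but want to next year."""
    pool = [r for r in responses if lang not in r["used"]]
    return sum(lang in r["want"] for r in pool) / len(pool)

def admired(lang):
    """Share of `lang` users who want to keep using it (admiration rate)."""
    users = [r for r in responses if lang in r["used"]]
    return sum(lang in r["want"] for r in users) / len(users)

print(f"Python desired among non-users: {desired('Python'):.0%}")
print(f"Rust admired among users:       {admired('Rust'):.0%}")
```

So a language can top the "desired" chart on the strength of non-users alone, while "admired" measures only whether existing users would stick with it.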
