×
AI

Anthropic Releases New Version of Claude That Beats GPT-4 and Gemini Ultra in Some Benchmark Tests (venturebeat.com) 33

Anthropic, a leading artificial intelligence startup, unveiled its Claude 3 series of AI models today, designed to meet the diverse needs of enterprise customers with a balance of intelligence, speed, and cost efficiency. The lineup includes three models: Opus, Sonnet, and the upcoming Haiku. From a report: The star of the lineup is Opus, which Anthropic claims is more capable than any other openly available AI system on the market, even outperforming leading models from rivals OpenAI and Google. "Opus is capable of the widest range of tasks and performs them exceptionally well," said Anthropic cofounder and CEO Dario Amodei in an interview with VentureBeat. Amodei explained that Opus outperforms top AI models like GPT-4, GPT-3.5 and Gemini Ultra on a wide range of benchmarks. This includes topping the leaderboard on academic benchmarks like GSM-8k for mathematical reasoning and MMLU for expert-level knowledge.

"It seems to outperform everyone and get scores that we haven't seen before on some tasks," Amodei said. While companies like Anthropic and Google have not disclosed the full parameters of their leading models, the reported benchmark results from both companies imply Opus either matches or surpasses major alternatives like GPT-4 and Gemini in core capabilities. This, at least on paper, establishes a new high watermark for commercially available conversational AI. Engineered for complex tasks requiring advanced reasoning, Opus stands out in Anthropic's lineup for its superior performance. Sonnet, the mid-range model, offers businesses a more cost-effective solution for routine data analysis and knowledge work, maintaining high performance without the premium price tag of the flagship model. Meanwhile, Haiku is designed to be swift and economical, suited for applications such as consumer-facing chatbots, where responsiveness and cost are crucial factors. Amodei told VentureBeat he expects Haiku to launch publicly in a matter of "weeks, not months."

Portables (Apple)

Apple Unveils New MacBook Air, Powered By M3 Chip (apple.com) 150

Apple has announced the launch of its new MacBook Air laptops powered by the company's latest M3 chip, offering up to 60% faster performance compared to the previous generation (M1-powered MacBook Air). The new 13-inch and 15-inch models feature a thin and light design, up to 18 hours of battery life, and a Liquid Retina display. The M3 chip, built using 3-nanometer technology, boasts an 8-core CPU, up to a 10-core GPU, and supports up to 24GB of unified memory.

The laptops also offer enhanced AI capabilities, with a faster 16-core Neural Engine and accelerators in the CPU and GPU for improved on-device machine learning performance. This enables features such as real-time speech-to-text, translation, and visual understanding. The 13-inch MacBook Air with M3 starts at $1,099, while the 15-inch model starts at $1,299. Both models are available for order starting Monday and will begin arriving to customers and be available in stores on Friday, March 8. Apple also reduced the starting price of the 13-inch MacBook Air with M2 chip to $999.
AI

India Reverses AI Stance, Requires Government Approval For Model Launches (techcrunch.com) 19

An anonymous reader shares a report: India has waded into global AI debate by issuing an advisory that requires "significant" tech firms to get government permission before launching new models. India's Ministry of Electronics and IT issued the advisory to firms on Friday. The advisory -- not published on public domain but a copy of which TechCrunch has reviewed -- also asks tech firms to ensure that their services or products "do not permit any bias or discrimination or threaten the integrity of the electoral process."

Though the ministry admits the advisory is not legally binding, India's IT Deputy Minister Rajeev Chandrasekhar says the notice is "signalling that this is the future of regulation." He adds: "We are doing it as an advisory today asking you to comply with it." In a tweet Monday, Chandrasekhar said the advisory is aimed at "untested AI platforms deploying on the India internet" and doesn't apply to startups.
About-face from India's position on AI a year ago.
AI

Homeless Man Tries to Steal Waymo Robotaxi in Los Angeles (msn.com) 93

A homeless man "was taken into custody on suspicion of grand theft auto," reports the Los Angeles Times, "after police said he tried to steal a Waymo self-driving car in downtown Los Angeles on Saturday night." The man entered and tried to operate a Waymo vehicle that had stopped to let out a passenger at the corner of 1st and Main at 10:30 p.m., Los Angeles Police Department detective Meghan Aguilar said. After the man, whom a Waymo spokesman described as an "unauthorized pedestrian," entered the vehicle, the company's Rider Support team instructed him to exit the car. When he did not, the company contacted the police, "who were then able to remove and arrest" the man, said Chris Bonelli, a Waymo spokesman...

No injuries were reported by the rider, and there was no damage to the vehicle, Bonelli said. The car was stationary during the entire incident because an unauthorized person was identified by the company to be in the vehicle, according to Waymo.

Social Networks

How Will Reddit's IPO Change the Service? (bbc.co.uk) 86

"Reddit users have been reacting with deep gloom to the firm saying it plans to sell shares to the public..." the BBC recently reported: The company has said its plans are "exciting" and will offer the business opportunities for growth. However many users worry the move will fundamentally change the website... "When the most important customers shift from [users] to shareholders, the product always [suffers]," said one person. "It becomes 'what can we do this quarter to squeak out an additional point of revenue', instead of 'how can we make this product better'...."

[T]he company has recorded losses every year since its start, including more than $90m last year. In the filing, Reddit said it had not started trying to make money seriously until 2018. It reported $804m in revenue last year, up more than 20% from 2022. Advertising accounted for nearly all of the revenue, but in a note to prospective investors chief executive Steve Huffman said he was excited about opportunities to make the platform a venue for commerce and license its content to AI companies.

AI

Researchers Create AI Worms That Can Spread From One System to Another (arstechnica.com) 46

Long-time Slashdot reader Greymane shared this article from Wired: [I]n a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms — which can spread from one system to another, potentially stealing data or deploying malware in the process. "It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before," says Ben Nassi, a Cornell Tech researcher behind the research. Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the Internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages — breaking some security protections in ChatGPT and Gemini in the process...in test environments [and not against a publicly available email assistant]...

To create the generative AI worm, the researchers turned to a so-called "adversarial self-replicating prompt." This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies... To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system — by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.

In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which "poisons" the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it "jailbreaks the GenAI service" and ultimately steals data from the emails, Nassi says. "The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client," Nassi says. In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others. "By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent," Nassi says.

In a video demonstrating the research, the email system can be seen forwarding a message multiple times. The researchers also say they could extract data from emails. "It can be names, it can be telephone numbers, credit card numbers, SSN, anything that is considered confidential," Nassi says.

The researchers reported their findings to Google and OpenAI, according to the article, with OpenAI confirming "They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn't been checked or filtered." OpenAI says they're now working to make their systems "more resilient."

Google declined to comment on the research.
AI

How AI is Taking Water From the Desert (msn.com) 108

Microsoft built two datacenters west of Phoenix, with plans for seven more (serving, among other companies, OpenAI). "Microsoft has been adding data centers at a stupendous rate, spending more than $10 billion on cloud-computing capacity in every quarter of late," writes the Atlantic. "One semiconductor analyst called this "the largest infrastructure buildout that humanity has ever seen."

But is this part of a concerning trend? Microsoft plans to absorb its excess heat with a steady flow of air and, as needed, evaporated drinking water. Use of the latter is projected to reach more than 50 million gallons every year. That might be a burden in the best of times. As of 2023, it seemed absurd. Phoenix had just endured its hottest summer ever, with 55 days of temperatures above 110 degrees. The weather strained electrical grids and compounded the effects of the worst drought the region has faced in more than a millennium. The Colorado River, which provides drinking water and hydropower throughout the region, has been dwindling. Farmers have already had to fallow fields, and a community on the eastern outskirts of Phoenix went without tap water for most of the year... [T]here were dozens of other facilities I could visit in the area, including those run by Apple, Amazon, Meta, and, soon, Google. Not too far from California, and with plenty of cheap land, Greater Phoenix is among the fastest-growing hubs in the U.S. for data centers....

Microsoft, the biggest tech firm on the planet, has made ambitious plans to tackle climate change. In 2020, it pledged to be carbon-negative (removing more carbon than it emits each year) and water-positive (replenishing more clean water than it consumes) by the end of the decade. But the company also made an all-encompassing commitment to OpenAI, the most important maker of large-scale AI models. In so doing, it helped kick off a global race to build and deploy one of the world's most resource-intensive digital technologies. Microsoft operates more than 300 data centers around the world, and in 2021 declared itself "on pace to build between 50 and 100 new datacenters each year for the foreseeable future...."

Researchers at UC Riverside estimated last year... that global AI demand could cause data centers to suck up 1.1 trillion to 1.7 trillion gallons of freshwater by 2027. A separate study from a university in the Netherlands, this one peer-reviewed, found that AI servers' electricity demand could grow, over the same period, to be on the order of 100 terawatt hours per year, about as much as the entire annual consumption of Argentina or Sweden... [T]ensions over data centers' water use are cropping up not just in Arizona but also in Oregon, Uruguay, and England, among other places in the world.

The article points out that Microsoft "is transitioning some data centers, including those in Arizona, to designs that use less or no water, cooling themselves instead with giant fans." And an analysis (commissioned by Microsoft) on the impact of one building said it would use about 56 million gallons of drinking water each year, equivalent to the amount used by 670 families, according to the article. "In other words, a campus of servers pumping out ChatGPT replies from the Arizona desert is not about to make anyone go thirsty."
Windows

Microsoft Begins Adding 'Copilot' Icon to Windows 11 Taskbars (techrepublic.com) 81

Microsoft is "delighted to introduce some useful new features" for its "Copilot Preview for Windows 11," according to a recent blog post.

TechRepublic adds that "most features will be enabled by default... rolling out from today until April 2024." Windows 11 users will be able to change system settings through prompts typed directly into Copilot in Windows, currently accessible in the Copilot Preview via an icon on the taskbar, or by pressing Windows + C. Microsoft Copilot will be able to perform the following actions:

- Turn on/off battery saver.
- Show device information.
- Show system information.
- Show battery information.
- Open storage page.
- Launch Live Captions.
- Launch Narrator.
- Launch Screen Magnifier.
- Open Voice Access page.
- Open Text size page.
- Open contrast themes page.
- Launch Voice input.
- Show available Wi-Fi network.
- Display IP Address.
- Show Available Storage.

The new third-party app integrations for Copilot will give Windows 11 users new ways to interact with various applications. For example, making business lunch reservations through OpenTable...

Other new AI features for Windows 11 rolling out today include a new, AI-powered Generative Erase tool, which sounds reminiscent of Google's Magic Eraser tool for Google Photos. Generative Erase allows users to remove unwanted objects or artifacts from their photos in the Photos app.

Likewise, Microsoft's video editing tool Clipchamp is receiving a Silence Removal tool, which functions much as the name implies  — it allows users to remove gaps in conversation or audio from a video clip.

Voice access is another focal point of Microsoft's latest Windows 11 update, detailed in a separate blog post by Windows Commercial Product Marketing Manager Harjit Dhaliwal. Users can now use voice controls to navigate between multiple displays, aided by number and grid overlays that provide easy switching between screens.

A Copilot icon has already started appearing in the taskbar of some Windows systems. If you Google "microsoft installs copilot preview windows," Google adds these helpful suggestions.

People also ask: Why is Copilot preview on my computer?

How do I get rid of Copilot preview on Windows 10?


"Apparently there was some sort of update..." writes one Windows users. "Anyway, there is a logo at the bottom of the screen that is distracting and I'd like to get rid of it."

Lifehacker has already published an article titled "How to Hide (or Disable) Copilot in Windows 11."

"Artificial intelligence is feeling harder and harder to avoid," it begins, "but you still have options."
AI

Copilot For OneDrive Will Fetch Your Files and Summarize Them (theverge.com) 39

An upcoming April release of Copilot for OneDrive will be able to find, summarize, and extract information from a wide range of files, including text documents, presentations, spreadsheets, HTML pages and PDF files. "Users can ask Copilot to tailor summaries to their liking, such as only including key points or highlights from a specific section," reports The Verge. From the report: The chatbot will also be able to respond to natural language prompts and answer highly specific questions about the contents of a user's files. Some examples given by Microsoft included asking Copilot to tabulate a week's worth of beverage sales and throw the data in a table view by day. Or, asking it to list the pros and cons of a project, or display the most recent or relevant files. Users can even ask Copilot for advice on how to make their documents better. Copilot on OneDrive will also be able to create outlines, tables, and lists for users, based on existing files. A few examples given were:

- Using the /sales-enablement.docx as reference, create an outline of a sales pitch to a new customer.
- For these selected resumes, create a table with names, current title, years of experience, educational qualifications, and current location.
- Create a list of frequently asked questions about project Moonshot.

Programming

Stack Overflow To Charge LLM Developers For Access To Its Coding Content (theregister.com) 32

Stack Overflow has launched an API that will require all AI models trained on its coding question-and-answer content to attribute sources linking back to its posts. And it will cost money to use the site's content. From a report: "All products based on models that consume public Stack Overflow data are required to provide attribution back to the highest relevance posts that influenced the summary given by the model," it confirmed in a statement. The Overflow API is designed to act as a knowledge database to help developers build more accurate and helpful code-generation models. Google announced it was using the service to access relevant information from Stack Overflow via the API and integrate the data with its latest Gemini models, and for its cloud storage console.
AI

Elon Musk Sues OpenAI and Sam Altman (techcrunch.com) 179

Elon Musk has sued OpenAI, its co-founders Sam Altman and Greg Brockman and affiliated entities, alleging the ChatGPT makers have breached their original contractual agreements by pursuing profits instead of the non-profit's founding mission to develop AI that benefits humanity. TechCrunch: Musk, a co-founder and early backer of OpenAI, claims Altman and Brockman convinced him to help found and bankroll the startup in 2015 with promises it would be a non-profit focused on countering the competitive threat from Google. The founding agreement required OpenAI to make its technology "freely available" to the public, the lawsuit alleges.

The lawsuit, filed in a court in San Francisco late Thursday, says that OpenAI, the world's most valuable AI startup, has shifted to a for-profit model focused on commercializing its AGI research after partnering with Microsoft, the world's most valuable company that has invested about $13 billion into the startup. "In reality, however, OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft. Under its new board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity," the lawsuit adds. "This was a stark betrayal of the Founding Agreement."

AI

AI-Generated Articles Prompt Wikipedia To Downgrade CNET's Reliability Rating (arstechnica.com) 54

Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness. "The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022," adds Ars Technica. Futurism first reported the news. From the report: Wikipedia maintains a page called "Reliable sources/Perennial sources" that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia's perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication. "CNET, usually regarded as an ordinary tech RS [reliable source], has started experimentally running AI-generated articles, which are riddled with errors," wrote a Wikipedia editor named David Gerard. "So far the experiment is not going down well, as it shouldn't. I haven't found any yet, but any of these articles that make it into a Wikipedia article need to be removed." After other editors agreed in the discussion, they began the process of downgrading CNET's reliability rating.

As of this writing, Wikipedia's Perennial Sources list currently features three entries for CNET broken into three time periods: (1) before October 2020, when Wikipedia considered CNET a "generally reliable" source; (2) between October 2020 and present, when Wikipedia notes that the site was acquired by Red Ventures in October 2020, "leading to a deterioration in editorial standards" and saying there is no consensus about reliability; and (3) between November 2022 and January 2023, when Wikipedia considers CNET "generally unreliable" because the site began using an AI tool "to rapidly generate articles riddled with factual inaccuracies and affiliate links."

Futurism reports that the issue with CNET's AI-generated content also sparked a broader debate within the Wikipedia community about the reliability of sources owned by Red Ventures, such as Bankrate and CreditCards.com. Those sites published AI-generated content around the same period of time as CNET. The editors also criticized Red Ventures for not being forthcoming about where and how AI was being implemented, further eroding trust in the company's publications. This lack of transparency was a key factor in the decision to downgrade CNET's reliability rating.
A CNET spokesperson said in a statement: "CNET is the world's largest provider of unbiased tech-focused news and advice. We have been trusted for nearly 30 years because of our rigorous editorial and product review standards. It is important to clarify that CNET is not actively using AI to create new content. While we have no specific plans to restart, any future initiatives would follow our public AI policy."
AI

BC Lawyer Reprimanded For Citing Fake Cases Invented By ChatGPT 42

A B.C. lawyer has been ordered to pay costs for opposing counsel for the time they took to discover that two cases she cited as precedent were created by ChatGPT. CBC News reports: The cases would have provided compelling precedent for a divorced dad to take his children to China -- had they been real. But instead of savouring courtroom victory, the Vancouver lawyer for a millionaire embroiled in an acrimonious split has been told to personally compensate her client's ex-wife's lawyers for the time it took them to learn the cases she hoped to cite were conjured up by ChatGPT. In a decision released Monday, a B.C. Supreme Court judge reprimanded lawyer Chong Ke for including two AI "hallucinations" in an application filed last December. The cases never made it into Ke's arguments; they were withdrawn once she learned they were non-existent.

Justice David Masuhara said he didn't think the lawyer intended to deceive the court -- but he was troubled all the same. "As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers," Masuhara wrote in a "final comment" appended to his ruling. "Competence in the selection and use of any technology tools, including those powered by AI, is critical."
AI

Apple Wants You To Know It's Working On AI (reuters.com) 47

Apple plans to disclose more about its plans to put generative AI to use later this year, Chief Executive Officer Tim Cook said during the company's annual shareholder meeting on Wednesday. From a report: Cook said that the iPhone maker sees "incredible breakthrough potential for generative AI, which is why we're currently investing significantly in this area. We believe that will unlock transformative opportunities for users when it comes to productivity, problem solving and more."

Apple has been slower in rolling out generative AI, which can generate human-like responses to written prompts, than rivals such as Microsoftand Alphabet's Google, which are weaving them into products. On Wednesday, Cook argued that AI is already at work behind the scenes in Apple's products but said there would be more news on explicit AI features later this year. Bloomberg previously reported Apple plans to use AI to improve the ability to search through data stored on Apple devices. "Every Mac that is powered by Apple silicon is an extraordinarily capable AI machine. In fact, there's no better computer for AI on the market today," Cook said.

Microsoft

Microsoft is Working With Nvidia, AMD and Intel To Improve Upscaling Support in PC Games (theverge.com) 22

Microsoft has outlined a new Windows API designed to offer a seamless way for game developers to integrate super resolution AI-upscaling features from Nvidia, AMD, and Intel. From a report: In a new blog post, program manager Joshua Tucker describes Microsoft's new DirectSR API as the "missing link" between games and super resolution technologies, and says it should provide "a smoother, more efficient experience that scales across hardware."

"This API enables multi-vendor SR [super resolution] through a common set of inputs and outputs, allowing a single code path to activate a variety of solutions including Nvidia DLSS Super Resolution, AMD FidelityFX Super Resolution, and Intel XeSS," the post reads. The pitch seems to be that developers will be able to support this DirectSR API, rather than having to write code for each and every upscaling technology.

The blog post comes a couple of weeks after an "Automatic Super Resolution" feature was spotted in a test version of Windows 11, which promised to "use AI to make supported games play more smoothly with enhanced details." Now, it seems the feature will plug into existing super resolution technologies like DLSS, FSR, and XeSS rather than offering a Windows-level alternative.

AI

Adobe's New Prototype Generative AI Tool Is the 'Photoshop' of Music-Making and Editing (theverge.com) 51

Adobe has announced a new prototype tool called Project Music GenAI Control that allows users to create original music by inputting text prompts, then edit the audio without switching to separate software. Users can specify musical styles in their prompts to produce tracks like "happy dance" or "sad jazz."

Adobe says integrated editing controls let users tweak patterns, tempo, intensity and structure of the AI-generated music. Sections can be remixed and looped as backing tracks or background music. The tool can also adjust audio "based on a reference melody" and extend clip length for set animations or podcasts. Details on editing interface and upload options for custom reference tracks are unclear.
AI

The Intercept, Raw Story, and AlterNet Sue OpenAI and Microsoft (theverge.com) 58

The Intercept, Raw Story, and AlterNet have filed separate lawsuits against OpenAI and Microsoft, alleging copyright infringement and the removal of copyright information while training AI models. The Verge reports: The publications said ChatGPT "at least some of the time" reproduces "verbatim or nearly verbatim copyright-protected works of journalism without providing author, title, copyright or terms of use information contained in those works." According to the plaintiffs, if ChatGPT trained on material that included copyright information, the chatbot "would have learned to communicate that information when providing responses."

Raw Story and AlterNet's lawsuit goes further (PDF), saying OpenAI and Microsoft "had reason to know that ChatGPT would be less popular and generate less revenue if users believed that ChatGPT responses violated third-party copyrights." Both Microsoft and OpenAI offer legal cover to paying customers in case they get sued for violating copyright for using Copilot or ChatGPT Enterprise. The lawsuits say that OpenAI and Microsoft are aware of potential copyright infringement. As evidence, the publications point to how OpenAI offers an opt-out system so website owners can block content from its web crawlers.
The New York Times also filed a lawsuit in December against OpenAI, claiming ChatGPT faithfully reproduces journalistic work. OpenAI claims the publication exploited a bug on the chatbot to regurgitate its articles.
AI

StarCoder 2 Is a Code-Generating AI That Runs On Most GPUs (techcrunch.com) 44

An anonymous reader quotes a report from TechCrunch: Perceiving the demand for alternatives, AI startup Hugging Face several years ago teamed up with ServiceNow, the workflow automation platform, to create StarCoder, an open source code generator with a less restrictive license than some of the others out there. The original came online early last year, and work has been underway on a follow-up, StarCoder 2, ever since. StarCoder 2 isn't a single code-generating model, but rather a family. Released today, it comes in three variants, the first two of which can run on most modern consumer GPUs: A 3-billion-parameter (3B) model trained by ServiceNow; A 7-billion-parameter (7B) model trained by Hugging Face; and A 15-billion-parameter (15B) model trained by Nvidia, the newest supporter of the StarCoder project. (Note that "parameters" are the parts of a model learned from training data and essentially define the skill of the model on a problem, in this case generating code.)a

Like most other code generators, StarCoder 2 can suggest ways to complete unfinished lines of code as well as summarize and retrieve snippets of code when asked in natural language. Trained with 4x more data than the original StarCoder (67.5 terabytes versus 6.4 terabytes), StarCoder 2 delivers what Hugging Face, ServiceNow and Nvidia characterize as "significantly" improved performance at lower costs to operate. StarCoder 2 can be fine-tuned "in a few hours" using a GPU like the Nvidia A100 on first- or third-party data to create apps such as chatbots and personal coding assistants. And, because it was trained on a larger and more diverse data set than the original StarCoder (~619 programming languages), StarCoder 2 can make more accurate, context-aware predictions -- at least hypothetically.

[I]s StarCoder 2 really superior to the other code generators out there -- free or paid? Depending on the benchmark, it appears to be more efficient than one of the versions of Code Llama, Code Llama 33B. Hugging Face says that StarCoder 2 15B matches Code Llama 33B on a subset of code completion tasks at twice the speed. It's not clear which tasks; Hugging Face didn't specify. StarCoder 2, as an open source collection of models, also has the advantage of being able to deploy locally and "learn" a developer's source code or codebase -- an attractive prospect to devs and companies wary of exposing code to a cloud-hosted AI. Hugging Face, ServiceNow and Nvidia also make the case that StarCoder 2 is more ethical -- and less legally fraught -- than its rivals. [...] As opposed to code generators trained using copyrighted code (GitHub Copilot, among others), StarCoder 2 was trained only on data under license from the Software Heritage, the nonprofit organization providing archival services for code. Ahead of StarCoder 2's training, BigCode, the cross-organizational team behind much of StarCoder 2's roadmap, gave code owners a chance to opt out of the training set if they wanted. As with the original StarCoder, StarCoder 2's training data is available for developers to fork, reproduce or audit as they please.
StarCoder 2's license may still be a roadblock for some. "StarCoder 2 is licensed under the BigCode Open RAIL-M 1.0, which aims to promote responsible use by imposing 'light touch' restrictions on both model licensees and downstream users," writes TechCrunch's Kyle Wiggers. "While less constraining than many other licenses, RAIL-M isn't truly 'open' in the sense that it doesn't permit developers to use StarCoder 2 for every conceivable application (medical advice-giving apps are strictly off limits, for example). Some commentators say RAIL-M's requirements may be too vague to comply with in any case -- and that RAIL-M could conflict with AI-related regulations like the EU AI Act."
Intel

Intel Puts 1nm Process (10A) on the Roadmap For 2027 (tomshardware.com) 35

Intel's previously-unannounced Intel 10A (analogous to 1nm) will enter production/development in late 2027, marking the arrival of the company's first 1nm node, and its 14A (1.4nm) node will enter production in 2026. The company is also working to create fully autonomous AI-powered fabs in the future. Tom's Hardware: Intel's Keyvan Esfarjani, the company's EVP and GM and Foundry Manufacturing and Supply, held a very insightful session that covered the company's latest developments and showed how the roadmap unfolds over the coming years. Here, we can see two charts, with the first outlining the company's K-WSPW (thousands of wafer starts per week) capacity for Intel's various process nodes. Notably, capacity typically indicates how many wafers can be started, but not the total output -- output varies based on yields. You'll notice there isn't a label for the Y-axis, which would give us a direct read on Intel's production volumes. However, this does give us a solid idea of the proportionality of Intel's planned node production over the next several years.

Intel did not specify the arrival date of its coming 14A node in its previous announcements, but here, the company indicates it will begin production of the Intel 14A node in 2026. Even more importantly, Intel will begin production/development of its as-yet-unannounced 10A node in late 2027, filling out its roster of nodes produced with EUV technology. Intel's 'A' suffix in its node naming convention represents Angstroms, and 10 Angstroms converts to 1nm, meaning this is the company's first 1nm-class node. Intel hasn't shared any details about the 10A/1nm node but has told us that it classifies a new node as at least having a double-digit power/performance improvement. Intel CEO Pat Gelsinger has told us the cutoff for a new node is around a 14% to 15% improvement, so we can expect that 10A will have at least that level of improvement over the 14A node. (For example, the difference between Intel 7 and Intel 4 was a 15% improvement.)

United States

AI, Drones, Security Cameras: San Francisco Mayor's Arsenal To Fight Crime (reuters.com) 65

San Francisco will vote next week on a divisive ballot measure that would authorize police to use surveillance cameras, drones and AI-powered facial recognition as the city struggles to restore a reputation tarnished by street crime and drugs. From a report: The Safer San Francisco initiative, formally called Proposition E, is championed by Mayor London Breed who believes disgruntled citizens will approve the proposal on Tuesday. Although technology fueled the Silicon Valley-adjacent city's decades-long boom, residents have a history of being deeply suspicious. In 2019, San Francisco, known for its progressive politics, became the first large U.S. city to ban government use of facial recognition due to concerns about privacy and misuse.

Breed, who is running for re-election in November, played down the potential for abuse under the ballot measure, saying safeguards are in place. "I get that people are concerned about privacy rights and other things, but technology is all around us," she said in an interview. "It's coming whether we want it to or not. And everyone is walking around with AI in their hands with their phones, recording, videotaping," Breed said. Critics of the proposal contend it could hurt disadvantaged communities and lead to false arrests, arguing surveillance technology requires greater oversight.

Slashdot Top Deals