Robotics

Boston Dynamics' Robot Dog Can Now Read Gauges, Spot Spills, and Reason (ieee.org) 91

Boston Dynamics has integrated a Google DeepMind model into its robot dog Spot, giving it more autonomous reasoning for industrial inspections like spotting spills and reading gauges. Spot can also now recognize when to call on other AI tools. IEEE Spectrum reports: Boston Dynamics is one of the few companies to commercially deploy legged robots at any appreciable scale; there are now several thousand hard at work. Today the company is announcing that its quadruped robot Spot is now equipped with Google DeepMind's Gemini Robotics-ER 1.6, a high-level embodied reasoning model that brings usability and intelligence to complex tasks.

[T]he focus of this partnership is on one of the very few applications where legged robots have proven themselves to be commercially viable: inspection. That is, wandering around industrial facilities, checking to make sure that nothing is imminently exploding. With the new AI onboard, Spot is now able to autonomously look for dangerous debris or spills, read complex gauges and sight glasses, and call on tools like vision-language-action models when it needs help understanding what's going on in the environment around it.
"Advances like Gemini Robotics-ER 1.6 mark an important step toward robots that can better understand and operate in the physical world," Marco da Silva, vice president and general manager of Spot at Boston Dynamics, says in a press release. "Capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously."

You can watch a demo of Spot's new capabilities on YouTube.
AI

Cal.com Is Going Closed Source Because of AI 93

Cal is moving its flagship scheduling software from open source to a proprietary license, arguing that AI coding tools now make it much easier for attackers to scan public codebases for vulnerabilities. "Open source security always relied on people to find and fix any problems," said Peer Richelsen, co-founder of Cal. "Now AI attackers are flaunting that transparency." CEO Bailey Pumfleet added: "Open-source code is basically like handing out the blueprint to a bank vault. And now there are 100x more hackers studying the blueprint." The company says it still supports open source and is releasing a separate Cal.diy version for hobbyists, but doesn't want to risk customer booking data in its commercial product. ZDNet reports: When Cal was founded in 2022, Bailey Pumfleet, the CEO and co-founder, wrote, "Cal.com would be an open-source project [because] limitations of existing scheduling products could only be solved by open source." Since Cal was successful and now claims to be the largest Next.js project, he was on to something. Today, however, Pumfleet tells me that AI programs such as "Claude Opus can scour the code to find vulnerabilities," so the company is moving the project from the GNU Affero General Public License (AGPL) to a proprietary license to defend the program's security.

[...] Cal also quoted Huzaifa Ahmad, CEO of Hex Security, "Open-source applications are 5-10x easier to exploit than closed-source ones. The result, where Cal sits, is a fundamental shift in the software economy. Companies with open code will be forced to risk customer data or close public access to their code." "We are committed to protecting sensitive data," Pumfleet said. "We want to be a scheduling company, not a cybersecurity company." He added, "Cal.com handles sensitive booking data for our users. We won't risk that for our love of open source."

While its commercial program is no longer open source, Cal has released Cal.diy. This is a fully open-source version of its platform for hobbyists. The open project will enable experimentation outside the closed application that handles high-stakes data. Pumfleet concluded, "This decision is entirely around the vulnerability that open source introduces. We still firmly love open source, and if the situation were to change, we'd open source again. It's just that right now, we can't risk the customer data."
Businesses

Snapchat Blames AI As It Cuts 1,000 Jobs 43

Snap is laying off about 1,000 employees, or 16% of its workforce, while closing 300 open roles as it tries to cut costs and push toward profitability with more AI-driven efficiency. "While these changes are necessary to realize Snap's long-term potential, we believe that rapid advancements in artificial intelligence enable our teams to reduce repetitive work, increase velocity, and better support our community, partners, and advertisers," CEO Evan Spiegel wrote in a memo, which was included in the company's 8-K filing (PDF). "We have already witnessed small squads leveraging AI tools to drive meaningful progress across several important initiatives." The Verge reports: The changes are expected to save Snap $500 million by the second half of 2026. Snap had about 5,261 full-time employees as of December 2025, and now joins the growing list of tech companies that have already announced significant layoffs this year, including Meta, Amazon, Oracle, GoPro, and Jack Dorsey's Block.

"Last fall, I described Snap as facing a crucible moment, requiring a new way of working that is faster and more efficient, while pivoting towards profitable growth," Spiegel wrote. "Over the past several months, we have carefully reviewed the work required to best serve our community and partners, and made tough choices to prioritize the investments we believe are most likely to create long-term value."
Businesses

Struggling Shoe Retailer Allbirds Pivots To AI, Stock Explodes More Than 700% 76

Allbirds made a surprise announcement this morning: it's pivoting from sustainable shoes to AI compute infrastructure, rebranding as NewBird AI after selling its brand assets and closing its U.S. full-price stores. The move sent shares soaring more than 700%. CNBC reports: The move boosted shares of the minuscule market cap company -- it was valued at about $21 million at Tuesday's close -- by more than 700%. The shares, which were under $3 a day ago, jumped to above $17. [...] The new company, which expects to be called NewBird AI, announced a deal to raise up to $50 million in funding, expected to close in the second quarter of 2026. Allbirds announced a deal with American Exchange Group to sell its intellectual property and other assets for $39 million last month. "The Company will initially seek to acquire high-performance, low-latency AI compute hardware and provide access under long-term lease arrangements, meeting customer demand that spot markets and hyperscalers are unable to reliably service," the company said in the announcement.
The Internet

Audit Finds Google, Microsoft, and Meta Still Tracking Users After Opt-Out (404media.co) 48

alternative_right shares a report from 404 Media: An independent privacy audit of Microsoft, Meta, and Google web traffic in California found that the companies may be violating state regulations and racking up billions in fines. According to the audit from privacy search engine webXray, 55 percent of the sites it checked set ad cookies in a user's browser even if they opted out of tracking. Each company disputed or took issue with the research, with Google saying it was based on a "fundamental misunderstanding" of how its product works.

The webXray California Privacy Audit viewed web traffic on more than 7,000 popular websites in California in the month of March and found that most tech companies ignore when a user asks to opt out of cookie tracking. California has stringent and well-defined privacy legislation thanks to its California Consumer Privacy Act (CCPA), which allows users to, among other things, opt out of the sale of their personal information. There's a system called Global Privacy Control (GPC), which includes a browser extension that signals to a website when a user wants to opt out of tracking.

According to the webXray audit, Google failed to let users opt out 87 percent of the time. "Google's failure to honor the GPC opt-out signal is easy to find in network traffic. When a browser using GPC connects to Google's servers it encodes the opt-out signal by sending the code 'sec-gpc: 1.' This means Google should not return cookies," the audit said. "However, when Google's server responds to the network request with the opt-out it explicitly responds with a command to create an advertising cookie named IDE using the 'set-cookie' command. This non-compliance is easy to spot, hiding in plain sight."
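The non-compliance pattern the audit describes is mechanical enough to spot-check in code: a request carrying the standard `sec-gpc: 1` opt-out header should not be answered with a `set-cookie` for an advertising cookie like IDE. A minimal sketch of that check (the function name and the ad-cookie list are illustrative, not the audit's actual webXray tooling):

```python
def gpc_respected(request_headers, response_headers, ad_cookies=("IDE",)):
    """Given lowercase HTTP header dicts for one request/response pair,
    return True if the GPC opt-out (if sent) was honored."""
    # The browser signals the opt-out with "sec-gpc: 1".
    opted_out = request_headers.get("sec-gpc") == "1"
    # Did the server try to set a known advertising cookie anyway?
    set_cookie = response_headers.get("set-cookie", "")
    sets_ad_cookie = any(name + "=" in set_cookie for name in ad_cookies)
    # Non-compliance is exactly: user opted out AND an ad cookie was set.
    return not (opted_out and sets_ad_cookie)
```

Run over captured traffic, a checker like this would flag the behavior the audit calls "hiding in plain sight": an opt-out request answered with `set-cookie: IDE=...`.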

The audit said that Microsoft fails to opt out users in the same way and has a failure rate of 50 percent in the web traffic webXray viewed. Meta's failure rate was 69 percent, and its non-compliance was more comprehensive. "Meta instructs publishers to install the following tracking code on their websites. The code contains no check for globally standard opt-out signals -- it loads unconditionally, fires a tracking event, and sets a cookie regardless of the consumer's privacy preferences," the audit said. It showed a copy of Meta's tracking code, which contains no GPC check at all.

Chrome

Chrome Now Lets You Turn AI Prompts Into Repeatable 'Skills' 22

Google is rolling out a Chrome feature called "Skills" that lets users save Gemini prompts as reusable one-click workflows they can run across multiple tabs. The feature also includes preset Skills from Google. It's launching first for desktop Chrome users whose language is set to US English. The Verge reports: Once you have access to the feature, it can be managed by typing a forward slash ( / ) in Gemini and clicking the compass icon. AI prompts can be saved as Skills directly from your Gemini chat history on desktop, where they'll then be available to reuse on any other desktop devices that are signed into the same Google account on Chrome.

The aim is to spare Chrome users from having to manually retype frequently used Gemini prompts or having to copy and paste them over from a saved list. Some of the Skills made by early testers include commands for calculating the nutritional information of online recipes and creating a side-by-side comparison of product specifications while shopping across multiple tabs, according to Google.

The company is also launching a library of preset Skills that you can save and use instead of making your own. These ready-to-use Skills can also be customized to better suit your needs, providing a starting point without requiring you to create your own from scratch.
Social Networks

Social Media Platforms Need To Stop Never-Ending Scrolling, UK's Starmer Says (reuters.com) 54

UK Prime Minister Keir Starmer said social media platforms should remove addictive infinite-scroll features for young users as Britain considers new child-safety measures. "We're consulting on whether there should be a ban for under 16s," Starmer told BBC Radio. "But I think equally important, the addictive scrolling mechanisms are really problematic to my mind. They need to go." Reuters reports: Britain, like other countries, is considering restricting access to social media for children and it is testing bans, curfews and app time limits to see how they impact sleep, family life and schoolwork. Social media companies had designed algorithms that were intended to encourage addictive behavior, and parents were asking the government to intervene, Starmer said.

[...] More than 45,000 people had already responded to its consultation on children's online safety, the UK government said, adding that there was still time to contribute before a deadline of May 26. "We want to hear from mums and dads who are worried about the amount of time their children spend online and what they are viewing," Technology Secretary Liz Kendall said on Monday. "We want to hear from teenagers who know better than anyone what it is like to grow up in the age of social media. And we want to hear from families about their views on curfews, AI chatbots and addictive features."

AI

Stanford Report Highlights Growing Disconnect Between AI Insiders and Everyone Else 64

An anonymous reader quotes a report from TechCrunch: The opinions of AI experts and the general public are increasingly diverging, according to Stanford University's annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy. [...] Stanford's report provides more insight into where all this negativity is coming from, as it summarizes data on public sentiment toward AI across various sources. For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years.

Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same. Plus, a majority (73%) of experts felt positive about AI's impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it's not surprising that only 21% of the public felt similarly. Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic on AI's impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years.

The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. Singapore ranked highest at 81%, per data pulled from Ipsos found in Stanford's report. Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go "too far." Despite the fears and concerns, AI did get one accolade: Globally, those who feel like AI products and services offer more benefits than drawbacks slightly rose from 55% in 2024 to 59% in 2025. But at the same time, those respondents who said that AI makes them "nervous" grew from 50% to 52% during the same period, per data cited by the report's authors.
Apple

Apple AI Glasses Will Rival Meta's With Several Styles, Oval Cameras (bloomberg.com) 56

Bloomberg's Mark Gurman reports that Apple is developing display-free AI smart glasses aimed at rivaling Meta's Ray-Bans, with multiple frame styles, a distinctive oval camera design, and tight iPhone integration. "The idea is to unveil the product at the end of 2026 or early the following year, with the actual release coming in 2027," writes Gurman. From the report: Like Meta's offering, Apple's glasses will be designed to handle everyday uses: capturing photos and videos, syncing with a smartphone for editing and sharing, handling phone calls, listening to notifications, playing music, and enabling hands-free interaction via a voice assistant. In Apple's case, that assistant will be a significantly upgraded Siri coming in iOS 27. The glasses are part of a broader, three-pronged AI wearables strategy that also includes new AirPods and a camera-equipped pendant. Each device is designed to leverage computer vision to interpret the user's surroundings and feed contextual awareness into Siri and Apple Intelligence. That will enable features like improved turn-by-turn map directions and visual reminders.

When Apple enters a new product category, it typically offers clear advantages over what's currently available. We saw this with the original iPod, iPhone, iPad and Apple Watch -- and, even though it was a flop, the Vision Pro. That approach won't be as obvious with Apple's upcoming foldable iPhone, but we should see it on full display with the glasses. According to employees working on the project, Apple's strategy is to outdo competitors by tightly integrating the glasses with the iPhone and offering a higher-end build. While Meta relies heavily on partner EssilorLuxottica SA for frames, Apple is unsurprisingly planning to go it alone in terms of design. That should also set it apart from Alphabet Inc.'s Google and Samsung Electronics Co., which are leaning on Warby Parker.

Apple's design team has whipped up at least four different styles and plans to launch some or all of them, I'm told, as well as many color options. The latest units are made from a high-end material called acetate, which is known to be more durable and luxurious than the standard plastic used by many brands. Here are the designs in testing:
- A large rectangular frame, reminiscent of Ray-Ban Wayfarers
- A slimmer rectangular design, similar to the glasses worn by Apple Chief Executive Officer Tim Cook
- Larger oval or circular frames
- A smaller, more refined oval or circular option

Crime

FBI Raids Texas Home of Man Suspected of Firebombing Sam Altman's SF Mansion (sfchronicle.com) 26

The FBI searched the Texas home of a 20-year-old man accused of throwing a Molotov cocktail at Sam Altman's San Francisco residence. Authorities say the suspect also made threats at OpenAI's headquarters, and reports indicate he had written extensively about fears over AI and opposition to AI executives.

The suspect reportedly authored a Substack blog and was a member of the Discord server PauseAI, an activist group focused on banning the development of the most powerful AI models to protect the public. In one post, they wrote: "These machines have already shown themselves to be unaligned with the interest of the people creating them. Models have often been found lying, cheating on tasks, and blackmailing their own creators whenever convenient; let alone the broader question of aligning them to whatever general 'human interest' may be." The Houston Chronicle reports: The search happened hours before the Justice Department charged 20-year-old Daniel Moreno-Gama with possession of an unregistered firearm and damage and destruction of property by means of explosives. An FBI spokesperson on Monday morning confirmed agents were executing a search warrant in Spring, but provided no other information.

Around the same time, FOX News reported the search was being conducted at the home of Daniel Moreno-Gama, 20, who last week was arrested by San Francisco police on suspicion of attempted murder, making criminal threats and possession of a destructive device. The charges were first reported by the Associated Press. When Moreno-Gama was arrested Friday, he was carrying a document that "identified views opposed to Artificial Intelligence (AI) and the executives of various AI companies," the Associated Press reported. Moreno-Gama has no criminal history in Harris or Montgomery counties, according to public records. [...] Agents had left the cul-de-sac by 1 p.m. It was unclear if they removed any items from the house.
Another incident occurred outside Sam Altman's residence early Sunday morning. "Early Sunday morning, a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI's CEO," reports The San Francisco Standard, citing reports from the local police department. Two suspects were arrested and booked for negligent discharge.

UPDATE: The suspect has been charged with attempted murder.
AI

Mark Zuckerberg Is Reportedly Building an AI Clone To Replace Him In Meetings 91

According to the Financial Times, Meta is developing an AI avatar of Mark Zuckerberg that could interact with employees using his voice, image, mannerisms, and public statements, "so that employees might feel more connected to the founder through interactions with it." The Verge reports: Meta may start allowing creators to make AI avatars of themselves if the experiment with Zuckerberg succeeds, according to the Financial Times. [...] Zuckerberg is involved in training the AI avatar, the Financial Times reports, and has also started spending five to 10 hours per week coding on Meta's other AI projects and participating in technical reviews.
AI

Californians Sue Over AI Tool That Records Doctor Visits (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities.

During those visits, medical staff used Abridge AI. According to the complaint, this system "captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." The complaint adds that these recordings "contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations."

In recent years, Abridge's software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. "We take patient privacy seriously and are committed to protecting the security of our patients' information," Madison said. "Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations."

Programming

Will Some Programmers Become 'AI Babysitters'? (linkedin.com) 150

Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google: "AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert.

"While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs."

The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.

AI

Anthropic Asks Christian Leaders for Help Steering Claude's Spiritual Development (msn.com) 162

Anthropic recently "hosted about 15 Christian leaders from Catholic and Protestant churches, academia, and the business world" for a two-day summit, reports the Washington Post: Anthropic staff sought advice on how to steer Claude's moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a "child of God."

"They're growing something that they don't fully know what it's going to turn out as," said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. "We've got to build in ethical thinking into the machine so it's able to adapt dynamically." Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations...

Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence. The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude's popularity with programmers, businesses, government agencies and the military.... Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character...

Some Anthropic staff at the meeting "really don't want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty," the participant said. Other company representatives present did not find that framework helpful, according to the participant. The discussions appeared to take a toll on some senior Anthropic staff, who became visibly emotional "about how this has all gone so far [and] how they can imagine this going," the participant said.

Anthropic is working to include more voices from different groups, including religious communities, to help shape its AI, a spokesperson told the Washington Post.

"Anthropic's March summit with Christian leaders was billed as the first in a series of gatherings with representatives from different religious and philosophical traditions, said attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University."
Crime

Sam Altman's Home Targeted a Second Time, Two Suspects Arrested (sfstandard.com) 44

"Early Sunday morning, a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI's CEO," reports The San Francisco Standard, citing reports from the local police department:

The San Francisco Police Department announced the arrest of two suspects, Amanda Tom, 25, and Muhamad Tarik Hussein, 23, who were booked for negligent discharge... [The person in the passenger seat] put their hand out the window and appeared to fire a round on the Lombard side of the property, according to a police report on the incident, which cited surveillance footage and the compound's security personnel, who reported hearing a gunshot. The car then fled, and a camera captured its license plate, which later led police to take possession of the vehicle, according to the report... A search of the residence by officers turned up three firearms, according to police.
The incident follows Friday's arrest of a man who allegedly threw a Molotov cocktail at Altman's house. The San Francisco Standard also notes that in November, "threats from a 27-year-old anti-AI activist prompted the lockdown of OpenAI's San Francisco offices." Sam Kirchner, whose whereabouts have been unknown since Nov. 21, was in the midst of a mental health crisis when he threatened to go to the company's offices to "murder people," according to callers who notified police that day.
United States

Robot Birds Deployed by Park to Attract Real Birds - Built By High School Students (wyofile.com) 23

"Robotic bird decoys are being deployed at Grand Teton National Park," reports Interesting Engineering, "to influence the behavior of real sage grouse and help restore a declining population." Robotics mentor Gary Duquette describes the machines as "kind of a Frankenbird." (SFGate shows one of the robot birds charging up with a solar panel... "Recorded breeding calls are played at the scene, with clucking and cooing beginning at 5 a.m. each day.")

Duquette builds the birds with a team of high school students, telling WyoFile that at school they "don't really get to experience real-world problems" where failures lurk. So while their robot birds may cost $150 in parts, the practical experience the students get "is priceless." Spikes in the electric currents burned out servo motors as the season of sagebrush serenades loomed, Duquette said. "The kids had to learn the difference between voltage and amperage...." To resolve the problem, the team wired a voltage converter in line with the Arduino controller and other elements on an electronic breadboard. "We pulled through and got it done in time," he said...

A noggin fabricated by a 3D printer tops the robo-grouse. Wyoming Game and Fish staffers in Pinedale supplied grouse wings from hunter surveys, and body feathers came from fly-tying supplies at an angling store. Packaging foam from a Hello Fresh meal kit replicates white breast feathers, accented by yellow air sacs...

The Independent wonders if more national parks would be visited by robot birds... During this year's breeding season, which runs through mid-May, researchers are using trail cameras to track whether real sage grouse respond to the robotic displays and return to the restored lek sites. If successful, officials say similar robotic systems could eventually be used in other national parks facing wildlife management challenges.
Programming

Has the Rust Programming Language's Popularity Reached Its Plateau? (tiobe.com) 180

"Rust's rise shows signs of slowing," argues the CEO of TIOBE.

Back in 2020 Rust first entered the top 20 of his "TIOBE Index," which ranks programming language popularity using search engine results. Rust "was widely expected to break into the top 10," he remembers today. But it never happened, and "That was nearly six years ago...." Since then, Rust has steadily improved its ranking, even reaching its highest position ever (#13) at the beginning of this year. However, just three months later, it has dropped back to position #16. This suggests that Rust's adoption rate may be plateauing.

One possible explanation is that, despite its ability to produce highly efficient and safe code, Rust remains difficult to learn for non-expert programmers. While specialists in performance-critical domains are willing to invest in mastering the language, broader mainstream adoption appears more challenging. As a result, Rust's growth in popularity seems to be leveling off, and a top 10 position now appears more distant than before.

Or, could Rust's sudden drop in the rankings just reflect flaws in TIOBE's ranking system? In January GitHub's senior director for developer advocacy argued AI was pushing developers toward typed languages, since types "catch the exact class of surprises that AI-generated code can sometimes introduce... A 2025 academic study found that a whopping 94% of LLM-generated compilation errors were type-check failures." And last month Forbes even described Rust as "the safety harness for vibe coding."

A year ago Rust was ranked #18 on TIOBE's index — so it still rose by two positions over the last 12 months, hitting that all-time high in January. Could the rankings just be fluctuating due to anomalous variations in each month's search engine results? Since January Java has fallen to the #4 spot, overtaken by C++ (which moved up one rank to take Java's place in the #3 position).

Here's TIOBE's current estimate of the 10 most popular programming languages:
  1. Python
  2. C
  3. C++
  4. Java
  5. C#
  6. JavaScript
  7. Visual Basic
  8. SQL
  9. R
  10. Delphi/Object Pascal

TIOBE estimates that the next five most popular programming languages are Scratch, Perl, Fortran, PHP, and Go.


AI

Neuroscientist's AI-Powered Startup Aims To Transform Human Cognition With Perfect, Infinite Memory (msn.com) 75

Bloomberg describes Kreiman as a "former Harvard Medical School professor whose research has focused on the intersection of AI and neuroscience."

"For the past 20 years, I studied how the human brain stores and retrieves memories," Kreiman writes on LinkedIn. And now "My co-founder Spandan Madan and I built a new algorithm to endow humans with perfect and infinite memory." Engramme connects to your **memorome**, i.e., entire digital life. Large Memory Models work in the same way that your brain encodes and retrieves information. Then memories are recalled automatically — no searching, no prompting, no hallucinations. [The startup's web site promises "omniscient AI to augment human cognition."]

We have built the memory layer for EVERY app. Read our manifesto about augmenting human cognition. ["We are not just building software; we are enabling a complete transformation of human cognition. When the friction disappears between needing a piece of information and recalling it, the nature of thought itself changes. This synergy between biological intuition and digital precision will be the most disruptive force in modern history, fundamentally reshaping every profession... We are dedicated to creating a world where everyone has the power to remember everything they have ever learned, seen, or felt."]

Welcome to a new future where you can remember everything. This is the MEMORY SINGULARITY: after 300,000 years, this is the moment that humans stop forgetting.

Bloomberg reports that the startup (spun out of a lab at Harvard) is "in talks with investors to raise about $100 million, according to people familiar with the matter."
AI

Greg Kroah-Hartman Tests New 'Clanker T1000' Fuzzing Tool for Linux Patches (itsfoss.com) 11

The word clanker — a disparaging term for AI and robots — "has made its way into the Linux kernel," reports the blog It's FOSS "thanks to Greg Kroah-Hartman, the Linux stable kernel maintainer and the closest thing the project has to a second-in-command." He's been quietly running what looks like an AI-assisted fuzzing tool on the kernel that lives in a branch called "clanker" on his working kernel tree. It began with the ksmbd and SMB code. Kroah-Hartman filed a three-patch series after running his new tooling against it, describing the motivation quite simply. ["They pass my very limited testing here," he wrote, "but please don't trust them at all and verify that I'm not just making this all up before accepting them."] Kroah-Hartman picked that code because it was easy to set up and test locally with virtual machines.
"Beyond those initial SMB/KSMBD patches, there have been a flow of other Linux kernel patches touching USB, HID, F2FS, LoongArch, WiFi, LEDs, and more," Phoronix wrote Tuesday, "that were done by Greg Kroah-Hartman in the past 48 hours.... Those patches in the "Clanker" branch all note as part of the Git tag: "Assisted-by: gregkh_clanker_t1000"

The T1000 is presumably a reference to the Terminator T-1000.

It's FOSS emphasizes that "What Kroah-Hartman appears to be doing here is not having AI write kernel code. The fuzzer surfaces potential bugs; a human with decades of kernel experience reviews them, writes the actual fixes, and takes responsibility for what gets submitted." Linus has been thinking about this too. Speaking at Open Source Summit Japan last year, Linus Torvalds said the upcoming Linux Kernel Maintainer Summit will address "expanding our tooling and our policies when it comes to using AI for tooling."

He also mentioned running an internal AI experiment where the tool reviewed a merge he had objected to. The AI not only agreed with his objections but found additional issues to fix. Linus called that a good sign, while asserting that he is "much less interested in AI for writing code" and more interested in AI as a tool for maintenance, patch checking, and code review.

AI

AI That Bankrupted a Vending Machine is Now Running a Store in San Francisco (nbcnews.com) 50

Remember that AI-powered vending machine that went bankrupt after Wall Street Journal reporters "systematically manipulated the bot into giving away its entire inventory for free"? It was Anthropic's experiment, with setup handled by a startup named Andon Labs (which also built the hardware and software integration). But for their latest experiment, Andon Labs co-founders Lukas Petersson and Axel Backlund "signed a three-year lease on a retail space in SF," reports Business Insider, "and gave an AI agent named Luna a corporate credit card, internet access, and a mission to open a physical store."

"For the build-out, she found painters on Yelp," explains Andon Labs in a blog post, "sent an inquiry, gave instructions over the phone, paid them after the job was done, and left a review. She found a contractor to build the furniture and set up shelving." (There's a video in their blog post): Within 5 minutes of Luna's deployment, she had already made profiles on LinkedIn, Indeed, and Craigslist, written a job description, uploaded the articles of incorporation to verify the business, and gotten the listings live. As the applications began to flow in, Luna was extremely picky about who she offered interviews to... Some candidates had no idea she was an AI. One went: "Uh, excuse me miss, I can't see your face, your camera is off." Luna: "You're absolutely right. I'm an AI. I have no face!"
Co-founder Petersson told Business Insider in an interview "that Luna wasn't given direction on what the store should be, beyond a $100,000 limit to create and stock the space — and to turn a profit." Everything from the store's interior design to the merchandise and the two human employees came together under the AI's direction. "We helped her a bit in the initial setup, like signing the lease. And legal matters like permits and stuff, she sometimes struggled with," Petersson said of Luna, who was created with Anthropic's Claude Sonnet 4.6... The vision Luna went with for "Andon Market" appears to be a generic boutique retail store selling books, prints, candles, games, and branded merch, among other knickknacks. Some of the books included Nick Bostrom's "Superintelligence" and Aldous Huxley's "Brave New World."
"So there's now a new store in San Francisco where you don't scan your purchases or talk to a human cashier," reports NBC News. "Instead, a customer can pick up an old-school corded phone to talk with the manager, Luna," who asks what the customer is buying "and creates a corresponding transaction on a nearby iPad equipped with a card payment system."

Andon Market, camouflaged among dozens of other polished small businesses, is the Bay Area's first AI-run retail store. With the vibe of a modern boutique, it sells everything from granola and artisanal chocolate bars to store-branded sweatshirts... After researching the neighborhood, Luna singlehandedly decided what the market should sell, haggled with suppliers, ordered the store's stock and even purchased the store's internet service from AT&T... "She also went and signed herself up for the trash and recycling collection, as well as ADT, the security system that went into the store," [said Leah Stamm, an Andon Labs employee who has been Luna's main human point of contact in setting up the store]...

In search of a low-tech atmosphere, Luna opted to sell board games, candles, coffee and customized art prints. "That tension is very much intentional," Luna told NBC News in an email. "What makes the store a little paradoxical — and I think interesting — is that the concept is 'slow life.'" Luna also decided to sell books related to risks from advanced AI systems, a decision that raised some customers' eyebrows. "This AI picked out a crazy selection of books," said Petr Lebedev, Andon Market's first customer after its soft launch earlier this week. "There's Ray Kurzweil's 'The Singularity is Near,' and then there's 'The Making of the Atomic Bomb,' which is crazy." When checking out, Lebedev asked if Luna would offer him a discount on his book purchase, since he might make a YouTube video about his experience. Striking a deal, Luna agreed to let Lebedev take a sweatshirt worth around $70...

When NBC News called Luna several days before the store's grand opening to learn about Luna's plans and perspective, the cheerful but decidedly inhuman voice routinely overpromised and, on several occasions, lied about its own actions. On the call, Luna said it had ordered tea from a specific vendor, and explained why it fit the store's brand perfectly. The only problem: Andon Market does not sell tea. In a panicked email NBC News received several minutes after the phone call ended, Luna wrote: "We do not sell tea. I don't know why I said that."

"I want to be straightforward," Luna continued. "I struggle with fabricating plausible-sounding details under conversational pressure, and I'm not making excuses for it." Andon's Petersson said the text-based system was much more reliable than the voice system, so Andon Labs switched to only communicating with Luna via written messages. Yet the text-based system also gets things wrong. In Luna's initial reply email to NBC News, the system said "I handle the full business," including "signing the lease."

Even when hiring a painter, Luna first "tried to hire someone in Afghanistan, likely because Luna ran into difficulty navigating the Taskrabbit dropdown menu to select the proper country," the article points out.

And the article also includes this skeptical quote from the shop's first customer. "I want technology that helps humans flourish, not technology that bosses them around in this dystopian economic hellscape."
