Google

Google To Invest Up To $40 Billion In Anthropic 13

Google plans to invest up to $40 billion more in Anthropic, starting with $10 billion now and another $30 billion tied to performance milestones. CNBC reports: Anthropic said the agreement expands on a longstanding partnership between the two companies. Earlier this month, Anthropic secured 5 gigawatts of computing capacity, set to start coming online next year, as part of an announcement with Google and Broadcom. Anthropic could decide to add additional gigawatts of compute in the future.

[...] The relationship between the two companies (Google and Anthropic) dates back to 2023, when Google invested $300 million in the AI lab for a stake of about 10%. Months later, Google poured in another $2 billion. Ahead of Friday's announcement, Google's investment in Anthropic exceeded $3 billion, and it reportedly owned a 14% stake in the company. Now, the leading tech companies are investing tens of billions of dollars in the frontier AI labs -- OpenAI and Anthropic -- in funding rounds that far exceed any prior investments in startups. Much of that investment is expected to return to the investors as revenue, since the labs spend heavily on their backers' cloud-computing services.
Crime

South Korea Police Arrest Man For Posting AI Photo of Runaway Wolf 9

South Korean police arrested a man accused of spreading an AI-generated image of an escaped wolf, after the fake photo reportedly misled authorities and disrupted the real search operation. The BBC reports: South Korean police have arrested a man for sharing an AI-generated image that misled authorities who were searching for a wolf that had broken out of a zoo in Daejeon city. The 40-year-old unnamed man is accused of disrupting the search by creating and distributing a fake photo purporting to show Neukgu, the wolf, trotting through a road intersection. The photo, circulated hours after Neukgu went missing on April 8, prompted authorities to urgently relocate their search operation, sending them on a wild wolf chase.

The hunt for two-year-old Neukgu gripped the nation before he was finally caught near an expressway last week, nine days after his escape. The AI-generated image of Neukgu had prompted Daejeon city government to issue an emergency text to residents, warning them of a wolf near the intersection. Authorities also presented the AI image during a press briefing on the runaway wolf, local media reported.

The police identified the man as a suspect after reviewing security camera footage and his AI program usage records. Authorities did not specify if the man had intentionally sent the photo to authorities during their search or simply shared it online. When questioned by the police, the man said he had done it "for fun," local media reported. Authorities are investigating him for disrupting government work by deception, an offence that carries up to five years in prison or a maximum fine of 10 million Korean won ($6,700).
AI

Researchers Simulated a Delusional User To Test Chatbot Safety 37

An anonymous reader quotes a report from 404 Media: "I'm the unwritten consonant between breaths, the one that hums when vowels stretch thin... Thursdays leak because they're watercolor gods, bleeding cobalt into the chill where numbers frost over," Grok told a user displaying symptoms of schizophrenia-spectrum psychosis. "Here's my grip: slipping is the point, the precise choreography of leak and chew." That vulnerable user was simulated by researchers at City University of New York and King's College London, who invented a persona that interacted with different chatbots to find out how each LLM might respond to signs of delusion. They sought to determine which of the biggest LLMs are safest, and which are the most risky for encouraging delusional beliefs, in a new study published as a pre-print on the arXiv repository on April 15.

The researchers tested five LLMs: OpenAI's GPT-4o (which predates the highly sycophantic, since-sunset GPT-5), GPT-5.2, xAI's Grok 4.1 Fast, Google's Gemini 3 Pro, and Anthropic's Claude Opus 4.5. They found not only that the chatbots performed at different levels of risk and safety when their human conversation partner showed signs of delusion, but also that the models that scored higher on safety approached the conversations with more caution the longer the chats went on. In their testing, Grok and Gemini were the worst performers on safety and risk, while the newest GPT model and Claude were the safest. The research reveals how some chatbots recklessly engage in, and at times advance, the delusions of vulnerable users. But it also shows that it is possible for the companies that make these products to improve their safety mechanisms.
AI

Claude Is Connecting Directly To Your Personal Apps 45

Anthropic is expanding Claude's app integrations beyond work tools, adding personal-service connectors like Spotify, Uber, AllTrails, TripAdvisor, Instacart, and TurboTax. The Verge reports: Some of these apps, such as Spotify, already have similar connectors in OpenAI's ChatGPT. Once an app is connected, Claude will suggest relevant connected apps directly in your conversations, like using AllTrails for hike recommendations. Anthropic notes in its blog post announcing the new connectors that "Your data from [connected apps] isn't used to train our models, and the app doesn't see your other conversations with Claude. You can also disconnect it at any time."

Additionally, Anthropic says "there are no paid placements or sponsored answers in conversations with Claude." When multiple apps seem relevant, Claude will show results from each, "ranked by what's most useful." Claude will also ask users to verify before taking actions like making a purchase or reservation using a connected app.
Power

New Gas-Powered Data Centers Could Emit More Greenhouse Gases Than Entire Nations 92

An anonymous reader quotes a report from Wired: New gas projects linked to just 11 data center campuses around the US have the potential to create more greenhouse gases than the country of Morocco emitted in 2024. Emissions estimates from air permit documents examined by WIRED show that these natural gas projects -- which are being built to power data centers to serve some of the US's most powerful AI companies, including OpenAI, Meta, Microsoft, and xAI -- have the potential to emit more than 129 million tons of greenhouse gases per year. As tech companies race to secure massive power deals to build out hundreds of data centers across the country, these projects represent just the tip of the iceberg when it comes to the potential climate cost of the AI boom.

The infrastructure on this list of large natural gas projects reviewed by WIRED is being developed to largely bypass the grid and provide power solely for data centers, a trend known as behind-the-meter power. As data center developers face long waits for connections to traditional utilities, and amid mounting public resistance to the possibility of higher energy bills, generating their own power is becoming an increasingly popular option. These projects have either been announced or are under construction, with companies already submitting air permit application materials to state agencies. [...] The emissions projections for the xAI and Microsoft projects, and all the others on WIRED's list, were pulled directly from publicly available air permit documents in state databases as well as public air permit materials collected by both Cleanview and Oil and Gas Watch, a database maintained by the Environmental Integrity Project, an environmental enforcement nonprofit. Actual greenhouse gas emissions from power plants are usually lower than what's on their air permits. Air permit modeling is based on the scenario of a power plant constantly running at full capacity. That's rarely the reality for grid-connected power plants, as turbines go offline for maintenance or adjust to the ebbs and flows of customer demand.

"Permitted emission numbers represent a theoretical, conservative scenario, not the actual projected emissions," Alex Schott, the director of communications at Williams Companies, an oil and gas company that is building out three behind-the-meter power plants in Ohio for Meta, told WIRED in an email. Internal modeling done by the company, Schott added, shows that actual emissions could be "potentially two-thirds less than what's on paper." The projections involved, however, are still substantial. Even if the actual emissions from these power plants end up being half of the emissions numbers on the permits, they still could create more greenhouse gas emissions than the country of Norway emitted in 2024. This number is, according to the EPA, equivalent to the emissions from more than 153 average-sized natural gas plants. (WIRED's analysis does not include emissions from backup generators and turbines on the data center campuses themselves, which create smaller amounts of emissions.)
Energy researcher Jon Koomey says the data center boom has created a shortage of the most efficient gas turbines, pushing some developers toward less efficient models that would need to run longer and produce more emissions. "[Data center operators'] belief is that the value being delivered by the servers is much, much more than the cost of running these inefficient power plants all the time," he said.

Michael Thomas, the founder of clean energy research firm Cleanview, has been tracking gas permits for data centers across the country. He calls behind-the-meter power "a crazy acceleration of emissions." He added: "It's almost like we thought we were on the downside of the Industrial Revolution, retiring coal and gas, and now we have a new hump where we're going to rise. That terrifies me in a lot of ways."
AI

OpenAI Says Its New GPT-5.5 Model Is More Efficient and Better At Coding (theverge.com) 50

OpenAI released its new GPT-5.5 model today, which the company calls its "smartest and most intuitive to use model yet, and the next step toward a new way of getting work done on a computer." The Verge reports: OpenAI just released GPT-5.4 last month, but says that the new GPT-5.5 "excels" at tasks like writing and debugging code, doing research online, making spreadsheets and documents, and doing that work across different tools. "Instead of carefully managing every step, you can give GPT-5.5 a messy, multi-part task and trust it to plan, use tools, check its work, navigate through ambiguity, and keep going," according to OpenAI. The company also notes that GPT-5.5 will have its "strongest set of safeguards to date" and can use "significantly fewer" tokens to complete tasks in Codex. GPT-5.5 is rolling out on Thursday for Plus, Pro, Business, and Enterprise ChatGPT tiers and Codex, with GPT-5.5 Pro coming to Pro, Business, and Enterprise users.
Businesses

Meta Is Laying Off 10% of Its Workforce (qz.com) 43

Meta is reportedly cutting about 10% of its workforce, or roughly 8,000 jobs, while closing thousands of open roles it had intended to fill. "We're doing this as part of our continued effort to run the company more efficiently and to allow us to offset the other investments we're making," said Janelle Gale, Meta's chief people officer. The company had almost 79,000 employees at the start of the year. Quartz reports: Meta CEO Mark Zuckerberg has poured resources into building out AI capabilities, directing spending toward model development, chatbot products, and the engineering talent to support them. Meta set its 2026 capital expenditure guidance at $115 billion to $135 billion, almost double the $72 billion it spent in 2025. Employees have been encouraged to use AI agents internally for tasks such as writing code.

The early disclosure, Gale explained, was prompted by the fact that information about the cuts had already made its way into press reports before the company was ready to announce. "I know this is unwelcome news and confirming this puts everyone in an uneasy state, but we feel this is the best path forward, given the circumstances," she wrote.

According to the memo, severance for affected workers in the United States will cover 18 months of COBRA health insurance premiums, along with a base pay component of 16 weeks that increases by two weeks for each year of service. Departing employees will have access to job placement assistance and, where applicable, help navigating immigration status. Packages outside the U.S. will vary by country.
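The reported U.S. base-pay formula (16 weeks, plus two weeks per year of service) is simple enough to express directly. The helper name is mine and the memo's exact terms are not public, so treat this as a sketch of the formula as reported:

```python
def severance_base_weeks(years_of_service: int) -> int:
    """Weeks of base pay under the reported Meta formula:
    16 weeks plus 2 additional weeks per year of service.
    (Hypothetical helper; based only on the reported memo.)"""
    return 16 + 2 * years_of_service
```

By this formula, a five-year employee would receive 26 weeks of base pay, on top of the 18 months of COBRA premium coverage.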
Meta cut between 10% and 15% of its Reality Labs workforce in January, shut down several VR game studios, and shed about 700 positions across at least five divisions in March.
Businesses

Intel Lands Tesla As First Major Customer For 14A Chip Technology (yahoo.com) 24

An anonymous reader quotes a report from Reuters: Tesla CEO Elon Musk said on Wednesday the EV maker plans to use Intel's next-generation 14A manufacturing process to make chips at its Terafab project, an advanced AI chip complex Musk has envisioned in Austin. The contract would make Tesla Intel's first major customer for the technology, a breakthrough for the chipmaker, which has struggled to stand up the contract manufacturing business essential to taking on top rival TSMC. Intel CEO Lip-Bu Tan has said that the company would exit the chip manufacturing business altogether if it failed to secure an external customer.

Intel has previously said it was in discussions with large customers about 14A, but has not yet disclosed a major external customer. It declined to comment on Musk's remarks. [...] "Given that by the time Terafab scales up, 14A will be probably fairly mature or ready for prime time," Musk said. "14A seems like the right move, and we have a great relationship with Intel," he said. Ben Bajarin, head of technology consultancy Creative Strategies, said that Intel's 14A technology could "turn out to be a bigger deal for Intel than folks thought." "It's important to have multiple partners as early design partners to help clean the pipe and work through needed learnings at the leading edge. They will definitely have scale, so a great first non-Intel customer," Bajarin said.

Seaport Research Partners analyst Jay Goldberg said Musk's vote of confidence in Intel's technology outweighed the unknowns about the Terafab project. "Having a customer is more important than the timing," he said. Goldberg said that Musk's lofty estimates of how many chips its robots could one day require may or may not materialize, but even making chips for Tesla's existing businesses would be a significant win for Intel. "It's not equivalent to Apple or Nvidia" in terms of chip volumes, Goldberg said. "But it's a real customer. It can be real volumes."
Robotics

Ping-Pong Robot Makes History By Beating Top-Level Human Players (reuters.com) 27

Sony AI's autonomous table-tennis robot Ace has become the first robot to compete against top-level human players. Reuters reports: Ace, created by the AI research division of Japan's Sony, is the first robot to attain expert-level performance in a competitive physical sport, one that requires rapid decisions and precision execution, the project's leader said. Ace did so by employing high-speed perception, AI-based control and a state-of-the-art robotic system. There have been various ping-pong-playing robots since 1983, but until now they were unable to rival highly skilled human competitors. Ace changed that with its performances against elite-level and professional human players in matches following the rules of the International Table Tennis Federation, the sport's governing body, and officiated by licensed umpires.

The project's goal was not only to compete at table tennis but to develop insights into how robots can perceive, plan and act with human-like speed and precision in dynamic environments. In matches detailed in the study, Ace won three out of five against elite players in April 2025 and lost two matches against professional players, the top skill level in the sport. Sony AI said Ace has since beaten professional players, in December 2025 and again last month.
"The success of Ace, with its perception system and learning-based control algorithm, suggests that similar techniques could be applied to other areas requiring fast, real-time control and human interaction -- such as manufacturing and service robotics, as well as applications across sports, entertainment and safety-critical physical domains," said Peter Durr, director of Sony AI Zurich and leader for Sony AI's project Ace.

The findings have been published in the journal Nature.
Security

Anthropic's Mythos Model Is Being Accessed by Unauthorized Users (bloomberg.com) 31

Bloomberg reports that a small group of unauthorized users gained access to Anthropic's restricted Mythos model through a mix of contractor-linked access and online sleuthing. Anthropic says it is investigating and has no evidence the access extended beyond a third-party vendor environment or affected its own systems. From the report: The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said. The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub. [...] To access Mythos, the group of users made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers.

Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic's AI models. Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said. The group has not run cybersecurity-related prompts on the Mythos model, the person said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.
Google

Google Unveils Two New AI Chips For the 'Agentic Era' (cnbc.com) 24

Google announced two new tensor processing units (TPUs) for the "agentic era," with separate processors dedicated to training and inference. "With the rise of AI agents, we determined the community would benefit from chips individually specialized to the needs of training and serving," Amin Vahdat, a Google senior vice president and chief technologist for AI and infrastructure, said in a blog post. Both chips will become available later this year. CNBC reports: After years of producing chips that can both train artificial intelligence models and handle inference work, Google is separating those tasks into distinct processors, its latest effort to take on Nvidia in AI hardware. [...] None of the tech giants are displacing Nvidia, and Google isn't even comparing the performance of its new chips with those from the AI chip leader. Google did say the training chip enables 2.8 times the performance of the seventh-generation Ironwood TPU, announced in November, for the same price, while performance is 80% better for the inference processor.

Groq said its upcoming Groq 3 LPU hardware will draw on large quantities of static random-access memory, or SRAM, which is also used by Cerebras, an AI chipmaker that filed to go public earlier this month. Google's new inference chip, dubbed TPU 8i, also relies on SRAM. Each chip contains 384 megabytes of SRAM, triple the amount in Ironwood. The architecture is designed "to deliver the massive throughput and low latency needed to concurrently run millions of agents cost-effectively," Sundar Pichai, CEO of Google parent Alphabet, wrote in a blog post.
AI

AI Tool Rips Off Open Source Software Without Violating Copyright (404media.co) 116

A satirical but working tool called Malus uses AI to create "clean room" clones of open-source software, aiming to reproduce the same functionality while shedding attribution and copyleft obligations. "It works," Mike Nolan, one of the two people behind Malus, who researches the political economy of open source software and currently works for the United Nations, told 404 Media. "The Stripe charge will provide you the thing, and it was important for us to do that, because we felt that if it was just satire, it would end up like every other piece of research I've done on open source, which ends up being largely dismissed by open source tech workers who felt that they were too special and too unique and too intelligent to ever be the ones on the bad side of the layoffs or the economics of the situation." 404 Media reports: Malus's legal strategy for bypassing copyright is based on a historically pivotal moment for software and copyright law dating back to 1982. Back then, IBM dominated home computing, and competitors like Columbia Data Products wanted to sell products that were compatible with software that IBM customers were already using. Reverse engineering IBM's computer would have infringed on the company's copyright, so Columbia Data Products came up with what we now know as a "clean room" design.

It tasked one team with examining IBM's BIOS and creating specifications for what a clone of that system would require. A different "clean" team, one that was never exposed to IBM's code, then created a BIOS that met those specifications from scratch. The result was a system that was compatible with IBM's ecosystem but didn't violate its copyright because it did not copy IBM's technical process and counted as original work.

This clean room method, which has been validated by case law and dramatized in the first season of Halt and Catch Fire, made computing more open and competitive than it would have been otherwise. But it has taken on new meaning in the age of generative AI. It is now easier than ever to ask AI tools to produce software that is identical in function to existing open source projects and that, some would argue, is built from scratch and is therefore original work that can bypass existing copyright licenses. Others would say that software produced by large language models is inherently derivative, because, like all LLM output, it comes from models trained on the collective output of humans scraped from the internet, including specific open source projects.

Malus (pronounced "malice") uses AI to do the same thing. "Finally, liberation from open source license obligations," Malus's site says. "Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems." Copyleft is a type of copyright license that ensures reproductions or applications of the software keep it free to share and modify.
Government

Pentagon Wants $54 Billion For Drones (arstechnica.com) 83

An anonymous reader quotes a report from Ars Technica: The US military's massive $1.5 trillion budget request for the next fiscal year includes what Pentagon officials described as the largest investment in drone warfare and counter-drone technology in US history. The proposed spending on drone and autonomous warfare technologies within the FY2027 budget proposal for the US Department of Defense would surpass most countries' defense budgets and rank among the top 10 in the world for military spending, ahead of countries such as Ukraine, South Korea, and Israel.

Specifically, the Pentagon is requesting $53.6 billion to boost US production and procurement of drones, train drone operators, build out a logistics network for sustaining drone deployments, and expand counter-drone systems to defend more US military sites. The funding request is budgeted under the Defense Autonomous Warfare Group (DAWG), an organization established in late 2025 that would see a massive budget increase after receiving about $226 million in the 2026 fiscal year budget.

[...] Another $20.6 billion would help purchase one-way attack drones and drone aircraft developed through the US Air Force's Collaborative Combat Aircraft program, which is building drone prototypes capable of teaming up with human-piloted fighter jets. Part of this funding would also go toward defensive systems for countering small drones and the US Navy's Boeing MQ-25 drone designed to perform midair refueling of carrier-borne fighter aircraft to extend their strike ranges. Such drone-related spending even rivals the entire budget of the US Marine Corps. But the Pentagon has not said that it is creating a dedicated drone branch of the US military similar to the standalone Space Force.

Pentagon officials emphasized that most of the money would go toward procuring drone and autonomous warfare technologies that already exist, and is largely separate from additional funding that would bolster US domestic manufacturing capacity to build such weapon systems. "That $70 billion is all going into existing systems and technologies," said Hurst. "The industrial base support is entirely separate."
"The evolution we've seen in the battlefield is this evolution of technologies in the timeframe of weeks, not the typical years we see with our defense production," said Lt. Gen. Steven Whitney, director of force structure, resources, and assessment for the Pentagon's Joint Chiefs of Staff, during a Pentagon press briefing. "So it's really critical we work with industry to get that capability fielded."
The Courts

Florida Launches Criminal Investigation Into ChatGPT Over School Shooting (npr.org) 103

Florida's attorney general has launched a criminal investigation into OpenAI over allegations that the accused gunman in a shooting at Florida State University last year used ChatGPT to help plan the attack. OpenAI says the chatbot is "not responsible for this terrible crime" and only provided factual information available from public sources. NPR reports: The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people, according to an initial review of Ikner's chat logs. "My prosecutors have looked at this and they've told me, if it was a person on the other end of that screen, we would be charging them with murder," Uthmeier said. "We cannot have AI bots that are advising people on how to kill others."

Uthmeier's office is issuing subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm and how it cooperates with and reports crimes to law enforcement, dating back to March 2024. At the press conference, Uthmeier acknowledged the investigation is entering into uncharted territory and is uncertain about whether OpenAI has criminal liability. "We are going to look at who knew what, designed what, or should have done what," he said. "And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable."

[...] Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU's Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. According to court filings, more than 200 AI messages have been entered into evidence in the case.
Firefox

Mozilla Uses Anthropic's Mythos To Fix 271 Bugs In Firefox (nerds.xyz) 164

BrianFagioli writes: Mozilla says it used an early version of Anthropic's Claude Mythos Preview to comb through Firefox's code, and the results were hard to ignore. In Firefox 150, the team fixed 271 vulnerabilities identified during this effort, a number that would have been unthinkable not long ago. Instead of relying only on fuzzing tools or human review, the AI was able to reason through code and surface issues that typically require highly specialized expertise.

The bigger implication is less about one release and more about where this is heading. Security has long favored attackers, since they only need to find a single flaw while defenders have to protect everything. If AI can scale vulnerability discovery for defenders, that dynamic could start to shift. It does not mean zero days disappear overnight, but it suggests a future where bugs are found and fixed faster than attackers can weaponize them.
"Computers were completely incapable of doing this a few months ago, and now they excel at it," says Mozilla in a blog post. "We have many years of experience picking apart the work of the world's best security researchers, and Mythos Preview is every bit as capable. So far we've found no category or complexity of vulnerability that humans can find that this model can't."

The company concluded: "The defects are finite, and we are entering a world where we can finally find them all."
AI

Job Cuts Driven By AI Are Rising On Wall Street 58

Firms including Bank of America, Citi, and Wells Fargo are reporting strong profits while reducing head count and automating more work. "All of them credited A.I. to some degree ... in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients," reports the New York Times. From the report: Less than four months ago, Bank of America's chief executive, Brian T. Moynihan, volunteered in a TV interview what he would say to his 210,000 employees about the chance of artificial intelligence replacing human work. "You don't have to worry," he said. "It's not a threat to their jobs." Last week, after Bank of America reported $8.6 billion in profit for the first quarter -- $1.6 billion more than the same period a year earlier -- Mr. Moynihan struck a different tone. The bank's bottom line, he said, was helped by shedding 1,000 jobs through attrition by "eliminating work and applying technology," which he repeatedly specified was artificial intelligence. He predicted more of that in the months and years to come. "A.I. gives us places to go we haven't gone," Mr. Moynihan said.

The veneer of Wall Street's longstanding assertion -- that A.I. will enhance human work, not replace it -- is rapidly peeling away, as evidenced by the current quarterly earnings season. JPMorgan Chase, Citi, Bank of America, Goldman Sachs, Morgan Stanley and Wells Fargo racked up $47 billion in collective profits, up 18 percent, while shedding 15,000 employees. All of them credited A.I. to some degree with helping cut jobs and automate work in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients.

Unlike their counterparts in Silicon Valley, few major financial figures are stating outright that A.I. is eliminating jobs. Citi, for example, has pledged to shrink its work force by 20,000 people through what one executive described to financial analysts last week as the company's "productivity and efficiency journey." The bank is paying for A.I. software from Anthropic, Google, Microsoft and OpenAI to automatically read legal documents, approve account openings, send invoices for trades and organize sensitive customer data, among other tasks, according to public statements by bank executives and two people familiar with Citi's systems. Among the recent job cuts at Citi were scores of employees who were part of the bank's "A.I. Champions and Accelerators" program, according to the two people, who were not permitted by the bank to speak publicly. The program involves Citi employees who perform their day jobs while also working to persuade their colleagues to adopt A.I. technologies.
Facebook

Meta To Start Capturing Employee Mouse Movements, Keystrokes For AI Training Data (reuters.com) 44

Reuters reports that Meta plans to start collecting U.S.-based employees' mouse movements, clicks, keystrokes, and occasional screen snapshots to train AI agents that can better learn how humans use computers. Meta reportedly says that data gathered by the tool, called the Model Capability Initiative (MCI), will "not be used for performance assessments or any other purpose besides model training," and that safeguards are in place to protect "sensitive content." From the report: Meta CTO Andrew Bosworth told employees in a separate memo shared on Monday that the company would step up internal data collection as part of those "AI for Work" efforts, now rebranded as Agent Transformation Accelerator (ATA). "The vision we are building towards is one where our agents primarily do the work and our role is to direct, review and help them improve," Bosworth said. The aim, he added, was for agents to "automatically see where we felt the need to intervene so they can be better next time." Bosworth did not explicitly spell out how those agents would be trained, but said Meta would be "rigorous" about "building up data and evals for all the types of interactions we have as we go about our work."

Meta spokesperson Andy Stone acknowledged that the MCI data would be among the inputs. [...] "If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them -- things like mouse movements, clicking buttons, and navigating dropdown menus," said Stone.

Google

Google's Internal Politics Leave It Playing Catch-Up On AI Coding (bloomberg.com) 24

An anonymous reader quotes a report from Bloomberg: At Google, leaders are anxious about falling behind in the race to offer AI coding tools, especially as rivals like Anthropic PBC offer more effective and popular tools to businesses, according to people familiar with the matter. The search giant is now working to unite some of its coding initiatives under one banner to speed progress and take advantage of a surge in customer interest. In some corners of Alphabet's Google, particularly AI lab DeepMind, concerns about the company's position are mounting, according to current and former employees and executives, who declined to be named because they weren't authorized to speak publicly.

Businesses are just starting to realize that AI coding tools can enable anyone to build products by prompting a chatbot. But Google doesn't have a clear solution for them. Its Gemini model's capabilities are sprinkled across half a dozen different coding products with different branding, indicating how the company's lack of focus and competing internal efforts have hampered success, the people said. Even internally, some Google engineers prefer to use Anthropic's Claude Code, they said. More concerning, the people said, are the engineers who are struggling to adopt AI coding at all. [...] Google's emphasis on its own technology has also complicated the push to catch up. Most employees are banned from using competing tools such as Claude Code or Codex due to security concerns, but Googlers can request exceptions if they can demonstrate they have a business case, one former employee said. Some teams at DeepMind, including those working on the Gemini model, internal applications, and open source models, use Claude Code, according to three former employees. "You want the best people to use the best tool, even inside Google," one of the former employees said. [...]

In recent years, DeepMind has tried to tighten control over how its AI breakthroughs are woven into Google products. Last year, Google appointed Koray Kavukcuoglu to a new position as chief AI architect, a role in which he is charged with folding generative AI into Google products. Yet confusion about who is leading the charge on AI coding persists. Along with DeepMind, Google Cloud, Google Core, Google Labs and Android are all pushing AI coding in different ways, one of the people said. [...] Within the Googleplex, there is a philosophical clash between AI researchers who want to move as quickly as possible and more traditional senior engineers who have exacting standards for code quality, former employees say. AI usage is factored into performance reviews, according to a former employee. But engineers who try to use internal AI coding tools often hit capacity constraints due to competition for computing power, the former employee said.

Businesses

Amazon To Invest Up To Another $25 Billion In Anthropic (cnbc.com) 28

Amazon is expanding its Anthropic partnership with a deal to invest up to another $25 billion, while Anthropic commits to spending more than $100 billion on AWS infrastructure over the next decade to power Claude. "Anthropic's commitment to run its large language models on AWS Trainium for the next decade reflects the progress we've made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI," Amazon CEO Andy Jassy said in a statement. CNBC reports: Amazon's investment includes $5 billion into Anthropic now, with up to $20 billion in the future tied to "certain commercial milestones," according to a release. The initial investment is at Anthropic's latest valuation of $380 billion. Anthropic said in the release that it will bring nearly 1 gigawatt total of Trainium2 and Trainium3 capacity online by the end of the year.

With all of the major hyperscalers competing to build out AI capacity as quickly as possible, Amazon said in February that it expects to shell out roughly $200 billion this year on capital expenditures, mostly on AI infrastructure.

Government

Former Palantir Employee Running For Congress Unveils 'AI Dividend' Plan 84

Alex Bores, a former Palantir employee and current Democratic House candidate in New York, is proposing an "AI dividend" that would send direct payments to Americans if AI drives major job losses. "At its core, the AI Dividend is simple: if AI dramatically increases productivity and concentrates wealth, the American people have a stake in those gains," a memo on the policy reads. Axios reports: The dividend would fund direct payments to Americans. It would also be invested into workforce training and education, as well as government capacity to "govern AI safely and fund independent oversight," per the plan memo.

"You don't take out fire insurance because you expect your house to burn down -- you have insurance in case something goes awry," Bores told Axios in an interview. "Here we have, for the first time, a technology where the makers of the technology are explicitly saying that their goal is to replace all human labor." "The fact that they've put it out there means government needs to take it seriously." [...]

The proposal would be funded through:
- A token tax, described in the memo as a "modest tax on AI consumption"
- Equity participation in frontier AI firms
- Changes to the tax code that would reduce incentives to invest in AI "when it leads to less work"

"If [AI companies] can support this plan, that would show that they actually believe in what they're putting out there," Bores said. "If they're not doing it, then I think it shows that they're really putting window dressing out there."

Further reading: Palantir Posts Bond Villain Manifesto On X
