AI

America's Peace Corps Announces 'Tech Corps' Volunteers to Help Bring AI to Foreign Countries (engadget.com) 49

Over 240,000 Americans have volunteered for Peace Corps projects in 142 countries since the program began more than half a century ago.

But now the agency is launching a new initiative — called Tech Corps. "It's the Peace Corps, but make it AI," explains Engadget: The Peace Corps' latest proposal will recruit STEM graduates or those with professional experience in the artificial intelligence sector and send them to participating host countries.

According to the press release, volunteers will be placed in Peace Corps countries that are part of the American AI Exports Program, which was created last year by an executive order from President Trump as a way to bolster the US' grip on the AI market abroad. Tech Corps members will be tasked with using AI to resolve issues related to agriculture, education, health and economic development. The program will offer its members 12- to 27-month in-person assignments or virtual placements, which will include housing, healthcare, a living stipend and a volunteer service award if the corps member is placed overseas.

"American technology to power prosperity," reads the headline on the Tech Corps website. ("Build the tech nations depend on... See the world. Be the future.")

The site says they're recruiting "service-minded technologists to serve in the Peace Corps to help countries around the world harness American AI to enhance opportunity and prosperity for their citizens." (And experienced technology professionals can donate 5-15 hours a week "to mentor and support projects on-the-ground.")
The Courts

US Supreme Court Rejects Trump's Global Tariffs (reuters.com) 228

The U.S. Supreme Court struck down on Friday President Donald Trump's sweeping tariffs that he pursued under a law meant for use in national emergencies, rejecting one of his most contentious assertions of his authority in a ruling with major implications for the global economy. From a report: The justices, in a 6-3 ruling authored by conservative Chief Justice John Roberts, upheld a lower court's decision that the Republican president's use of this 1977 law exceeded his authority.

The court ruled that the Trump administration's interpretation that the law at issue - the International Emergency Economic Powers Act, or IEEPA - grants Trump the power he claims to impose tariffs would intrude on the powers of Congress and violate a legal principle called the "major questions" doctrine. The doctrine, embraced by the conservative justices, requires actions by the government's executive branch of "vast economic and political significance" to be clearly authorized by Congress. The court used the doctrine to stymie some of Democratic former President Joe Biden's key executive actions.

Security

How Private Equity Debt Left a Leading VPN Open To Chinese Hackers (financialpost.com) 26

An anonymous reader quotes a report from Bloomberg: In early 2024, the agency that oversees cybersecurity for much of the US government issued a rare emergency order -- disconnect your Connect Secure virtual private network software immediately. Chinese spies had hacked the code and infiltrated nearly two dozen organizations. The directive applied to all civilian federal agencies, but given the product's customer base, its impact was more widely felt. The software, which is made by Ivanti Inc., was something of an industry standard across government and much of the corporate world. Clients included the US Air Force, Army, Navy and other parts of the Defense Department, the Department of State, the Federal Aviation Administration, the Federal Reserve, the National Aeronautics and Space Administration, thousands of companies and more than 2,000 banks including Wells Fargo & Co. and Deutsche Bank AG, according to federal procurement records, internal documents, interviews and the accounts of former Ivanti employees who requested anonymity because they were not authorized to disclose customer information.

Soon after sending out their order, which instructed agencies to install an Ivanti-issued fix, staffers at the Cybersecurity and Infrastructure Security Agency discovered that the threat was also inside their own house. Two sensitive CISA databases -- one containing information about personnel at chemical facilities, another assessing the vulnerabilities of critical infrastructure operators -- had been compromised via the agency's own Connect Secure software. CISA had followed all its own guidance. Ivanti's fix had failed. This was a breaking point for some American national security officials, who had long expressed concerns about Connect Secure VPNs. CISA subsequently published a letter with the Federal Bureau of Investigation and the national cybersecurity agencies of the UK, Canada, Australia and New Zealand warning customers of the "significant risk" associated with continuing to use the software. According to Laura Galante, then the top cyber official in the Office of the Director of National Intelligence, the government came to a simple conclusion about the technology. "You should not be using it," she said. "There really is no other way to put it."

That attack, along with several others that successfully targeted the Ivanti software, illustrates how private equity's push into the cybersecurity market ended up compromising the quality and safety of some critical VPN products, Bloomberg has found. Last year, Bloomberg reported that Citrix Systems Inc., another top VPN maker, experienced several major hacks after its private equity owners, Elliott Investment Management and Vista Equity Partners, cut most of the company's 70-member product security team following their acquisition of the company in 2022. Some government officials and private-sector executives are now reconsidering their approach to evaluating cybersecurity software. In addition to excising private equity-owned VPNs from their networks, some factor private equity ownership into their risk assessments of key technologies.

Transportation

New York Drops Plan To Legalize Robotaxis Outside NYC (theverge.com) 25

New York Governor Kathy Hochul has dropped a proposal that would have allowed limited commercial robotaxi deployments outside New York City, citing a lack of support among state legislators. "The move is a blow to Waymo and other robotaxi companies who saw New York, and especially New York City, as a potential goldmine," reports The Verge. From the report: The plan, which was introduced by Hochul as part of the state's budget proposal last month, would have allowed limited robotaxi deployment in cities other than the Big Apple -- while leaving whether New York City would get autonomous vehicles up to the mayor and the City Council. But now that plan is DOA, as support in the legislature never materialized. "Based on conversations with stakeholders, including in the legislature, it was clear that the support was not there to advance this proposal," Sean Butler, a Hochul spokesperson, said in a statement. "While we are disappointed by the Governor's decision, we're committed to bringing our service to New York and will work with the State Legislature to advance this issue," Waymo spokesperson Ethan Teicher said in a statement. "The path forward requires a collaborative approach that prioritizes transparency and public safety."
Censorship

US Plans Online Portal To Bypass Content Bans In Europe and Elsewhere 55

The U.S. State Department is reportedly developing a site called freedom.gov that would let users in Europe and elsewhere access content restricted under local laws, "including alleged hate speech and terrorist propaganda," reports Reuters. Washington views the move as a way to counter censorship. Reuters reports: One source said officials had discussed including a virtual private network function to make a user's traffic appear to originate in the U.S. and added that user activity on the site will not be tracked. Headed by Undersecretary for Public Diplomacy Sarah Rogers, the project was expected to be unveiled at last week's Munich Security Conference but was delayed, the sources said. Reuters could not determine why the launch did not happen, but some State Department officials, including lawyers, have raised concerns about the plan, two of the sources said, without detailing the concerns.

The project could further strain ties between the Trump administration and traditional U.S. allies in Europe, already heightened by disputes over trade, Russia's war in Ukraine and President Donald Trump's push to assert control over Greenland. The portal could also put Washington in the unfamiliar position of appearing to encourage citizens to flout local laws.
Printer

California's New Bill Requires DOJ-Approved 3D Printers That Report on Themselves (adafruit.com) 123

California's recently proposed AB-2047 would require 3D printers sold in the state to be DOJ-approved models equipped with "firearm blocking technology," banning non-certified machines after 2029 and criminalizing efforts to bypass the software. Adafruit notes that unlike similar legislation proposed in Washington State and New York, California's version "adds a certification bureaucracy on top: state-approved algorithms, state-approved software control processes, state-approved printer models, quarterly list updates, and civil penalties up to $25,000 per violation." From the report: Assembly Member Bauer-Kahan introduced AB-2047, the "California Firearm Printing Prevention Act," on February 17th. The bill would ban the sale or transfer of any 3D printer in California unless it appears on a state-maintained roster of approved makes and models... certified by the Department of Justice as equipped with "firearm blocking technology." Manufacturers would need to submit attestations for every make and model. The DOJ would publish a list. If your printer isn't on the list by March 1, 2029, it can't be sold. In addition, knowingly disabling or circumventing the blocking software is a misdemeanor.

[...] As Michael Weinberg wrote after the New York and Washington proposals dropped... accurately identifying gun parts from geometry alone is incredibly hard, desktop printers lack the processing power to run this kind of analysis, and the open-source firmware that runs most machines makes any blocking requirement trivially easy to bypass. The Firearms Policy Coalition flagged AB-2047 on X, and the reactions tell you everything. Jon Lareau called it "stupidity on steroids," pointing out that a simple spring-shaped part has no way of revealing its intended use. The Foundry put it plainly: "Regulating general-purpose machines is another. AB-2047 would require 3D printers to run state-approved surveillance software and criminalize modifying your own hardware."

Security

OpenClaw Security Fears Lead Meta, Other AI Firms To Restrict Its Use (wired.com) 7

An anonymous reader quotes a report from Wired: Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. "You've likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment," he wrote in a Slack message with a red siren emoji. "Please keep Clawdbot off all company hardware and away from work-linked accounts." Grad isn't the only tech executive who has raised concerns to staff about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments. He spoke on the condition of anonymity to speak frankly.

[...] Some cybersecurity professionals have publicly urged companies to take measures to strictly control how their workforces use OpenClaw. And the recent bans show how companies are moving quickly to ensure security is prioritized ahead of their desire to experiment with emerging AI technologies. "Our policy is, 'mitigate first, investigate second' when we come across anything that could be harmful to our company, users, or clients," says Grad, who is cofounder and CEO of Massive, which provides Internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says. At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on January 29 on an internal Slack channel for sharing new tech to potentially try out. The company's president quickly responded that use of OpenClaw was strictly banned, Valere CEO Guy Pistone tells WIRED. "If it got access to one of our developer's machines, it could get access to our cloud services and our clients' sensitive information, including credit card information and GitHub codebases," Pistone says. "It's pretty good at cleaning up some of its actions, which also scares me."

A week later, Pistone did allow Valere's research team to run OpenClaw on an employee's old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised limiting who can give orders to OpenClaw and exposing it to the Internet only with a password in place for its control panel to prevent unwanted access. In a report shared with WIRED, the Valere researchers added that users have to "accept that the bot can be tricked." For instance, if OpenClaw is set up to summarize a user's email, a hacker could send a malicious email to the person instructing the AI to share copies of files on the person's computer. But Pistone is confident that safeguards can be put in place to make OpenClaw more secure. He has given a team at Valere 60 days to investigate. "If we don't think we can do it in a reasonable time, we'll forgo it," he says. "Whoever figures out how to make it secure for businesses is definitely going to have a winner."

Government

IRS Loses 40% of IT Staff, 80% of Tech Leaders In 'Efficiency' Shakeup (theregister.com) 87

The IRS's IT division has reportedly lost 40% of its staff and nearly 80% of its tech leadership amid a federal "efficiency" overhaul, the agency's CIO revealed yesterday. The Register reports: Kaschit Pandya detailed the extent of the tech reorganization during a panel at the Association of Government Accountants yesterday, describing it as the biggest in two decades. ... The IRS lost a quarter of its workforce overall in 2025. But the tech team was clearly affected more deeply. At the start of the year, the team encompassed around 8,500 employees.

As reported by Federal News Network (FNN), Pandya said: "Last year, we lost approximately 40 percent of the IT staff and nearly 80 percent of the execs." "So clearly there was an opportunity, and I thought the opportunity that we needed to really execute was reorganizing." That included breaking up silos within the organization, he said. "Everyone was operating in their own department or area."

It is not entirely clear where all those staff have gone. According to a report by the US Treasury Inspector General for Tax Administration, the IT department had 8,504 workers as of October 2024. As of October 2025, it had 7,135. However, reports say that as part of the reorganization, 1,000 techies were detailed to work on delivering frontline services during the US tax season. According to FNN, those employees have questioned the wisdom of this move and its implementation.

Facebook

Mark Zuckerberg Grilled On Usage Goals and Underage Users At California Trial (wsj.com) 20

An anonymous reader quotes a report from the Wall Street Journal: Meta Chief Executive Mark Zuckerberg faced a barrage of questions about his social-media company's efforts to secure ever more of its users' time and attention at a landmark trial in Los Angeles on Wednesday. In sworn testimony, Zuckerberg said Meta's growth targets reflect an aim to give users something useful, not addict them, and that the company doesn't seek to attract children as users. [...] Mark Lanier, a lawyer for the plaintiff, repeatedly asked Zuckerberg about internal company communications discussing targets for how much time users spend with Meta's products. Lanier showed an email from 2015 in which the CEO stated his goal for 2016 was to increase users' time spent by 12%. "We used to give teams goals on time spent and we don't do that anymore because I don't think that's the best way to do it," Zuckerberg said on the witness stand in sworn testimony.

Lanier also asked Zuckerberg about documents showing Meta employees were aware of children under 13 using Meta's apps. Zuckerberg said the company's policy was that children under 13 aren't allowed on the platform and that they are removed when identified. Lanier showed an internal Meta email from 2015 that estimated 4 million children under 13 were using Instagram. He estimated that figure would represent approximately 30% of all kids aged 10 to 12 in the U.S. In response to a question about his ownership stake in Meta, which amounts to more than $200 billion, Zuckerberg said he has pledged to donate most of his money to charity. "The better that Meta does, the more money I will be able to invest in science research," he said.

[...] On the stand, Zuckerberg was also asked about his decision to continue to allow beauty filters on the apps after 18 experts said they were harmful to teenage girls. The company temporarily banned the filters on Instagram in 2019 and commissioned a panel of experts to review the feature. All 18 said they were damaging. Meta later lifted the ban but said it didn't create any filters of its own or recommend the filters to users on Instagram after that. "We shouldn't create that content ourselves and we shouldn't recommend it to people," Zuckerberg said. But at the same time, he continued, "I think oftentimes telling people that they can't express themselves like that is overbearing." He also argued that other experts had thought such bans were a suppression of free speech. By focusing on the design of Meta's apps rather than the content posted in them, the case seeks to get around longstanding legal doctrine that largely shields social-media companies from litigation. At times, the case has veered into questions of content, prompting Meta's lawyers to object.

The Courts

EPA Faces First Lawsuit Over Its Killing of Major Climate Rule (nytimes.com) 34

An anonymous reader quotes a report from the New York Times: The first shot has been fired in the legal war over the Environmental Protection Agency's rollback of its "endangerment finding," which had been the foundation for federal climate regulations. Environmental and health groups filed a lawsuit on Wednesday morning in the U.S. Court of Appeals for the District of Columbia Circuit, arguing that the E.P.A.'s move to eliminate limits on greenhouse gases from vehicles, and potentially other sources, was illegal. The suit was triggered by last week's decision by the E.P.A. to kill one of its key scientific conclusions, the endangerment finding, which says that greenhouse gases harm public health. The finding had formed the basis for climate regulations in the United States.

The lawsuit claims that the agency is rehashing arguments that the Supreme Court already considered, and rejected, in a landmark 2007 case, Massachusetts v. E.P.A. The issue is likely to end up back before the Supreme Court, which is now far more conservative. In the 2007 case, the justices ruled that the E.P.A. was required to issue a scientific determination as to whether greenhouse gases were a threat to public health under the 1970 Clean Air Act and to regulate them if they were. As a result, two years later, in 2009, the E.P.A. issued the endangerment finding, allowing the government to limit greenhouse gas emissions, which cause climate change. "With this action, E.P.A. flips its mission on its head," said Hana Vizcarra, a senior lawyer at the nonprofit Earthjustice, which is representing six groups in the lawsuit. "It abandons its core mandate to protect human health and the environment to boost polluting industries and attempts to rewrite the law in order to do so."

[...] Also on Wednesday, two other nonprofit law firms filed their own lawsuit against the E.P.A. over the endangerment finding, on behalf of 18 youth plaintiffs. That suit, by Our Children's Trust and Public Justice, argues that the E.P.A.'s move was unconstitutional. Separate legal challenges to E.P.A. rules are generally consolidated into one case at the D.C. Circuit Court, which is where disputes involving the Clean Air Act are required to be heard. But the sheer number of groups involved could make the legal battle lengthy and complicated to manage. A three-judge panel at the Circuit Court is expected to pore over several rounds of legal briefs before oral arguments begin. Those may not take place until next year.

Advertising

Meta Begins $65 Million Election Push To Advance AI Agenda (nytimes.com) 33

An anonymous reader quotes a report from the New York Times: Meta is preparing to spend $65 million this year to boost state politicians who are friendly to the artificial intelligence industry, beginning this week in Texas and Illinois, according to company representatives. The sum is the biggest election investment by Meta, which owns Facebook, Instagram and WhatsApp. The company was previously cautious about campaign engagements, making small donations out of a corporate political action committee and contributing to presidential inaugurations. It also let executives like Sheryl Sandberg, who was chief operating officer, support candidates in their personal capacities.

Now Meta is betting bigger on politics, driven by concerns over the regulatory threat to the artificial intelligence industry as it aims to beat back legislation in states that it fears could inhibit A.I. development, company representatives said. To do that, Meta is quietly starting two new super PACs, according to federal filings surfaced by The New York Times. One group, Forge the Future Project, is backing Republicans. Another, Making Our Tomorrow, is backing Democrats. The new PACs join two others already started by Meta, one of which is focused on California while the other is an umbrella organization that finances the company's spending in other states. In total, the four super PACs have an initial budget of $65 million, according to federal and state filings. Meta's spending is set to start this week in Illinois and Texas, where the company generally favors backing Democratic and Republican incumbents or engaging in open races rather than deposing existing officials, company representatives said in interviews.

[...] Last year, Meta's public policy vice president, Brian Rice, said the company would start spending in politics because of "inconsistent regulations that threaten homegrown innovation and investments in A.I." The company started its first two super PACs, American Technology Excellence Project and Mobilizing Economic Transformation Across California. Meta put $45 million into American Technology Excellence Project in September. That money is expected, in turn, to flow to Forge the Future Project, Making Our Tomorrow and potentially to other entities. [...] In California, which has some of the country's most onerous campaign-finance disclosures, Meta in August put $20 million into Mobilizing Economic Transformation Across California, which shortens to META California. State laws require the sponsoring company to be disclosed in the name of the entity. In December, Meta put $5 million into another California committee called California Leads, which is focused on promoting moderate business policy and not A.I., according to state records.

The Courts

Mark Zuckerberg Testifies During Landmark Trial On Social Media Addiction (nbcnews.com) 31

Mark Zuckerberg is testifying in a landmark Los Angeles trial examining whether Meta and other social media firms can be held liable for designing platforms that allegedly addict and harm children. NBC News reports: It's the first of a consolidated group of cases -- from more than 1,600 plaintiffs, including over 350 families and over 250 school districts -- scheduled to be argued before a jury in Los Angeles County Superior Court. Plaintiffs accuse the owners of Instagram, YouTube, TikTok and Snap of knowingly designing addictive products harmful to young users' mental health. Historically, social media platforms have been largely shielded by Section 230, a provision added to the Communications Act of 1934 by the 1996 Communications Decency Act, which says internet companies are not liable for content users post. TikTok and Snap reached settlements with the first plaintiff, a 20-year-old woman identified in court as K.G.M., ahead of the trial. The companies remain defendants in a series of similar lawsuits expected to go to trial this year.

[...] Matt Bergman, founding attorney of Social Media Victims Law Center -- which is representing about 750 plaintiffs in the California proceeding and about 500 in the federal proceeding -- called Wednesday's testimony "more than a legal milestone -- it is a moment that families across this country have been waiting for." "For the first time, a Meta CEO will have to sit before a jury, under oath, and explain why the company released a product its own safety teams warned were addictive and harmful to children," Bergman said in a statement Tuesday, adding that the moment "carries profound weight" for parents "who have spent years fighting to be heard." "They deserve the truth about what company executives knew," he said. "And they deserve accountability from the people who chose growth and engagement over the safety of their children."

United States

Texas Sues TP-Link Over China Links and Security Vulnerabilities (theregister.com) 46

TP-Link is facing legal action from the state of Texas for allegedly misleading consumers with "Made in Vietnam" claims despite China-dominated manufacturing and supply chains, and for marketing its devices as secure despite reported firmware vulnerabilities exploited by Chinese state-sponsored actors. The Register: The Lone Star State's Attorney General, Ken Paxton, is filing the lawsuit against California-based TP-Link Systems Inc., which was originally founded in China, accusing it of deceptively marketing its networking devices and alleging that its security practices and China-based affiliations allowed Chinese state-sponsored actors to access devices in the homes of American consumers.

It is understood that this is just the first of several lawsuits that the Office of the Attorney General intends to file this week against "China-aligned companies," as part of a coordinated effort to hold China accountable under Texas law. The lawsuit claims that TP-Link is the dominant player in the US networking and smart home market, controlling 65 percent of the American market for network devices.

It also alleges that TP-Link represents to American consumers that the devices it markets and sells within the US are manufactured in Vietnam, and that consistent with this, the devices it sells in the American market carry a "Made in Vietnam" sticker.

Privacy

Leaked Email Suggests Ring Plans To Expand 'Search Party' Surveillance Beyond Dogs (404media.co) 47

Ring's AI-powered "Search Party" feature, which links neighborhood cameras into a networked surveillance system to find lost dogs, was never intended to stop at pets, according to an internal email from founder Jamie Siminoff obtained by 404 Media.

Siminoff told employees in early October, shortly after the feature launched, that Search Party was introduced "first for finding dogs" and that the technology would eventually help "zero out crime in neighborhoods." The on-by-default feature faced intense backlash after Ring promoted it during a Super Bowl ad. Ring has since also rolled out "Familiar Faces," a facial recognition tool that identifies friends and family on a user's camera, and "Fire Watch," an AI-based fire alert system.

A Ring spokesperson told the publication Search Party does not process human biometrics or track people.
The Courts

Bayer Agrees To $7.25 Billion Proposed Settlement Over Thousands of Roundup Cancer Lawsuits (apnews.com) 42

An anonymous reader quotes a report from the Associated Press: Agrochemical maker Bayer and attorneys for cancer patients announced a proposed $7.25 billion settlement Tuesday to resolve thousands of U.S. lawsuits alleging the company failed to warn people that its popular weedkiller Roundup could cause cancer. The proposed settlement comes as the U.S. Supreme Court is preparing to hear arguments in April on Bayer's assertion that the U.S. Environmental Protection Agency's approval of Roundup without a cancer warning should invalidate claims filed in state courts. That case would not be affected by the proposed settlement.

But the settlement would eliminate some of the risk from an eventual Supreme Court ruling. Patients would be assured of receiving settlement money even if the Supreme Court rules in Bayer's favor. And Bayer would be protected from potentially larger costs if the high court rules against it. Germany-based Bayer, which acquired Roundup maker Monsanto in 2018, disputes the assertion that Roundup's key ingredient, glyphosate, can cause non-Hodgkin lymphoma. But the company has warned that mounting legal costs are threatening its ability to continue selling the product in U.S. agricultural markets. "Litigation uncertainty has plagued the company for years, and this settlement gives the company a road to closure," Bayer CEO Bill Anderson said Tuesday.
The proposed settlement could total up to $7.25 billion over 21 years and resolve most of the remaining U.S. lawsuits surrounding the cancer-related harms of Roundup. The report notes that more than 125,000 claims have been filed since 2015, and while many have already been settled, this deal aims to cover most outstanding and future claims tied to past exposure.

Individual payouts would vary widely based on exposure type, age at diagnosis, and cancer severity. Bayer can also cancel the deal if too many plaintiffs opt out.
The Courts

NPR's Radio Host David Greene Says Google's NotebookLM Tool Stole His Voice 24

An anonymous reader quotes a report from the Washington Post: David Greene had never heard of NotebookLM, Google's buzzy artificial intelligence tool that spins up podcasts on demand, until a former colleague emailed him to ask if he'd lent it his voice. "So... I'm probably the 148th person to ask this, but did you license your voice to Google?" the former co-worker asked in a fall 2024 email. "It sounds very much like you!"

Greene, a public radio veteran who has hosted NPR's "Morning Edition" and KCRW's political podcast "Left, Right & Center," looked up the tool, listening to the two virtual co-hosts -- one male and one female -- engage in light banter. "I was, like, completely freaked out," Greene said. "It's this eerie moment where you feel like you're listening to yourself." Greene felt the male voice sounded just like him -- from the cadence and intonation to the occasional "uhhs" and "likes" that Greene had worked over the years to minimize but never eliminated. He said he played it for his wife and her eyes popped.

As emails and texts rolled in from friends, family members and co-workers, asking if the AI podcast voice was his, Greene became convinced he'd been ripped off. Now he's suing Google, alleging that it violated his rights by building a product that replicated his voice without payment or permission, giving users the power to make it say things Greene would never say. Google told The Washington Post in a statement on Thursday that NotebookLM's male podcast voice has nothing to do with Greene. Now a Santa Clara County, California, court may be asked to determine whether the resemblance is uncanny enough that ordinary people hearing the voice would assume it's his -- and if so, what to do about it.

Greene's lawsuit cites an unnamed AI forensic firm that used its software to compare the artificial voice to Greene's. It gave a confidence rating of 53-60% that Greene's voice was used to train the model, which it considers "relatively high" confidence.

"If I was David Greene I would be upset, not just because they stole my voice," but because they used it to make the podcasting equivalent of AI "slop," said Mike Pesca, host of "The Gist" podcast and a former colleague of Greene's at NPR. "They have banter, but it's very surface-level, un-insightful banter, and they're always saying, 'Yeah, that's so interesting.' It's really bad, because what do we as show hosts have except our taste in commentary and pointing our audience to that which is interesting?"
Privacy

US Lawyers Fire Up Privacy Class Action Accusing Lenovo of Bulk Data Transfers To China (theregister.com) 8

A US law firm has accused Lenovo of violating Justice Department strictures about the bulk transfer of data to foreign adversaries, namely China. From a report: The case filed by Almeida Law Group on behalf of San Francisco-based "Spencer Christy, individually and on behalf of all others similarly situated" centers on the Data Security Program regulations implemented by the DOJ last year. According to the suit, these were "implemented to prevent adversarial countries from acquiring large quantities of behavioral data which could be used to surveil, analyze, or exploit American citizens' behavior."

The complaint states the DOJ rule "makes clear that sending American consumers' information to Chinese entities through automated advertising systems and associated databases without the requisite controls is prohibited." The case states the threshold for "covered personal identifiers" is 100,000 US persons or more and lists a range of potential identifiers, from government and financial account numbers to IMEIs, MAC, and SIM numbers, demographic data, and advertising IDs.

EU

EU Parliament Blocks AI Features Over Cyber, Privacy Fears (politico.eu) 47

An anonymous reader shares a report: The European Parliament has disabled AI features on the work devices of lawmakers and their staff over cybersecurity and data protection concerns, according to an internal email seen by POLITICO. The chamber emailed its members on Monday to say it had disabled "built-in artificial intelligence features" on corporate tablets after its IT department assessed it couldn't guarantee the security of the tools' data.

"Some of these features use cloud services to carry out tasks that could be handled locally, sending data off the device," the Parliament's e-MEP tech support desk said in the email. "As these features continue to evolve and become available on more devices, the full extent of data shared with service providers is still being assessed. Until this is fully clarified, it is considered safer to keep such features disabled."

Privacy

Samsung Ad Confirms Rumors of a Useful S26 'Privacy Display' (theverge.com) 23

Samsung has all but confirmed that its upcoming Galaxy S26 will feature a built-in privacy display, releasing an ad that demonstrates a "Zero-peeking privacy" toggle capable of blacking out on-screen content for anyone peering over the user's shoulder.

The underlying technology is reportedly Samsung Display's Flex Magic Pixel OLED panel, first shown at MWC 2024, which adjusts viewing angles on a pixel-by-pixel basis -- and leaker Ice Universe has shared a video of the feature selectively hiding content in banking and messaging apps using AI. Samsung's Unpacked event is scheduled for February 25th.
Social Networks

India's New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically (bbc.com) 23

Bloomberg reports: India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material...

Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India's digital policy... The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.

The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports. It adds that India's new three-hour deadline is "a sharp tightening of the existing 36-hour deadline." [C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy with more than a billion internet users... According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests...

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.

DW reports that India has also "joined the growing list of countries considering a social media ban for children under 16."

"Young Indians are not happy and are already plotting workarounds."
