The Internet

Audit Finds Google, Microsoft, and Meta Still Tracking Users After Opt-Out (404media.co)

alternative_right shares a report from 404 Media: An independent privacy audit of Microsoft, Meta, and Google web traffic in California found that the companies may be violating state privacy regulations, potentially racking up billions in fines. According to the audit from privacy search engine webXray, 55 percent of the sites it checked set ad cookies in a user's browser even if they opted out of tracking. Each company disputed or took issue with the research, with Google saying it was based on a "fundamental misunderstanding" of how its product works.

The webXray California Privacy Audit viewed web traffic on more than 7,000 popular websites in California in the month of March and found that most tech companies ignore when a user asks to opt out of cookie tracking. California has stringent, well-defined privacy legislation thanks to its California Consumer Privacy Act (CCPA), which allows users to, among other things, opt out of the sale of their personal information. There's a system called Global Privacy Control (GPC), which includes a browser extension that indicates to a website when a user wants to opt out of tracking.

According to the webXray audit, Google failed to let users opt out 87 percent of the time. "Google's failure to honor the GPC opt-out signal is easy to find in network traffic. When a browser using GPC connects to Google's servers it encodes the opt-out signal by sending the code 'sec-gpc: 1.' This means Google should not return cookies," the audit said. "However, when Google's server responds to the network request with the opt-out it explicitly responds with a command to create an advertising cookie named IDE using the 'set-cookie' command. This non-compliance is easy to spot, hiding in plain sight."
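The check the audit describes is mechanical enough to sketch. Below is a minimal Python sketch of the logic an auditor (or a compliant server) could apply: treat a `Sec-GPC: 1` request header as an opt-out, then flag any response that still sets a known advertising cookie such as `IDE`. The header normalization and the cookie list are simplifying assumptions for illustration, not webXray's actual code.

```python
def gpc_opt_out(request_headers):
    """Return True if the browser sent the Global Privacy Control opt-out signal."""
    # HTTP header names are case-insensitive; normalize before comparing.
    headers = {k.lower(): v.strip() for k, v in request_headers.items()}
    return headers.get("sec-gpc") == "1"


def audit_response(request_headers, response_headers, ad_cookie_names=("IDE",)):
    """Flag advertising cookies set despite a GPC opt-out, as the audit describes.

    response_headers maps a lower-cased header name to a list of values,
    since Set-Cookie may legitimately appear multiple times in one response.
    """
    if not gpc_opt_out(request_headers):
        return []  # no opt-out signal was sent, so there is nothing to flag
    violations = []
    for value in response_headers.get("set-cookie", []):
        cookie_name = value.split("=", 1)[0].strip()
        if cookie_name in ad_cookie_names:
            violations.append(cookie_name)
    return violations
```

In the audit's terms, a server that answers a `sec-gpc: 1` request with `set-cookie: IDE=...` would produce a non-empty violations list.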

The audit said that Microsoft fails to opt out users in the same way and has a failure rate of 50 percent in the web traffic webXray viewed. Meta's failure rate was 69 percent, and its non-compliance was a bit more comprehensive. "Meta instructs publishers to install the following tracking code on their websites. The code contains no check for globally standard opt-out signals -- it loads unconditionally, fires a tracking event, and sets a cookie regardless of the consumer's privacy preferences," the audit said. It showed a copy of Meta's tracking code, which contains no GPC check at all.
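The claim that a snippet "contains no check for globally standard opt-out signals" can be tested with a crude static scan for any reference to those signals. A hedged Python sketch follows; the token list is an illustrative assumption, not the auditors' actual method, and a permissive check like this would still come back False for a snippet with no opt-out logic at all.

```python
# Tokens a GPC-aware tracking snippet would plausibly mention (illustrative list):
# the JavaScript property and the request header defined by the GPC proposal.
GPC_TOKENS = ("globalprivacycontrol", "sec-gpc")


def references_gpc(script_text):
    """Crude static check: does a tracking snippet mention a standard opt-out signal?"""
    lowered = script_text.lower()
    return any(token in lowered for token in GPC_TOKENS)
```

A snippet that guards its tracking call with `if (navigator.globalPrivacyControl) return;` would pass this scan; one that fires unconditionally would not.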

Privacy

Meta Is Warned That Facial Recognition Glasses Will Arm Sexual Predators (wired.com)

An anonymous reader quotes a report from Wired: More than 70 civil liberties, domestic violence, reproductive rights, LGBTQ+, labor, and immigrant advocacy organizations are demanding that Meta abandon plans to deploy face recognition on its Ray-Ban and Oakley smart glasses, warning that the feature -- reportedly known inside the company as "Name Tag" -- would hand stalkers, abusers, and federal agents the ability to silently identify strangers in public. The coalition, which includes the ACLU, the Electronic Privacy Information Center, Fight for the Future, Access Now, and the Leadership Conference on Civil and Human Rights, is demanding Meta kill the feature before launch, after internal documents surfaced showing the company hoped to use the current "dynamic political environment" as cover for the rollout, betting that civil society groups would have their resources "focused on other concerns."

Name Tag, as revealed in February by The New York Times, would work through the artificial intelligence assistant built into Meta's smart glasses, allowing wearers to pull up information about people in their field of view. Engineers have reportedly been weighing two versions of the feature: one that would only identify people the wearer is already connected to on a Meta platform, and a broader version that could recognize anyone with a public account on a Meta service such as Instagram. The coalition wants Meta to scrap the feature entirely. In a letter to CEO Mark Zuckerberg on Monday, it argues that face recognition in inconspicuous consumer eyewear "cannot be resolved through product design changes, opt-out mechanisms, or incremental safeguards." Bystanders in public have no meaningful way to consent to being identified, it says.

Meta is also urged to disclose any known instances of its wearables being used in stalking, harassment, or domestic violence cases; disclose any past or ongoing discussions with federal law enforcement agencies, including Immigration and Customs Enforcement and Customs and Border Protection, about the use of Meta wearables or data from them; and commit to consulting civil society and independent privacy experts before integrating biometric identification into any consumer device. "People should be able to move through their daily lives without fear that stalkers, scammers, abusers, federal agents, and activists across the political spectrum are silently and invisibly verifying their identities and potentially matching their names to a wealth of readily available data about their habits, hobbies, relationships, health, and behaviors," write the groups, which also include Common Cause, Jane Doe Inc., UltraViolet, the National Organization for Women, the New York State Coalition Against Domestic Violence, the Library Freedom Project, and Old Dykes Against Billionaire Tech Bros, among others.

AI

Californians Sue Over AI Tool That Records Doctor Visits (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities.

During those visits, medical staff used Abridge AI. According to the complaint, this system "captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." The complaint adds that these recordings "contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations."

In recent years, Abridge's software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. "We take patient privacy seriously and are committed to protecting the security of our patients' information," Madison said. "Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations."

EU

EU Parliament Fails To Renew Loophole Allowing Tech Firms To Report Abuse (theguardian.com)

Bruce66423 shares a report from the Guardian: The European parliament has blocked the extension of a law that permits big tech firms to scan for child sexual exploitation on their platforms, creating a legal gap that child safety experts say will lead to crimes going undetected. The law, a temporary carve-out from the EU's ePrivacy rules, was put in place in 2021 to allow companies to use automated detection technologies to scan messages for harms, including child sexual abuse material (CSAM), grooming and sextortion. However, it expired on April 3, and the EU parliament declined to extend it, amid privacy concerns from some lawmakers.

The regulatory gap has created uncertainty for big tech companies, because while scanning for harms on their platforms is now illegal, they still remain liable to remove any illegal content hosted on their platforms under a different law, the Digital Services Act. Google, Meta, Snap and Microsoft said they would continue to voluntarily scan their platforms for CSAM, in a joint statement posted on a Google blog.
Bruce66423 adds: "Child abuse as the excuse for avoiding privacy protections. Who would have thought it?"
Encryption

Google Rolls Out Gmail End-To-End Encryption On Mobile Devices (bleepingcomputer.com)

Gmail's end-to-end encryption is now available on all Android and iOS devices, letting enterprise users send and read encrypted emails directly in the app without any extra tools. "This launch combines the highest level of privacy and data encryption with a user-friendly experience for all users, enabling simple encrypted email for all customers from small businesses to enterprises and public sector," Google announced in a blog post. BleepingComputer reports: Starting this week, encrypted messages will be delivered as regular emails to Gmail recipients' inboxes if they use the Gmail app. Recipients who don't have the Gmail mobile app and use other email services can read them in a web browser, regardless of the device and service they're using.

[...] This feature is now available for all client-side encryption (CSE) users with Enterprise Plus licenses and the Assured Controls or Assured Controls Plus add-on, after admins enable the Android and iOS clients in the CSE admin interface via the Admin Console. Gmail's end-to-end encryption (E2EE) feature is powered by the client-side encryption (CSE) technical control, which allows Google Workspace organizations to protect sensitive documents and emails with encryption keys that they control and that are stored outside Google's servers.

Bitcoin

NYT Claims Adam Back Is Bitcoin Creator Satoshi Nakamoto (nytimes.com)

A New York Times investigation by John Carreyrou claims a British cryptographer named Adam Back is the strongest circumstantial candidate yet for being Satoshi Nakamoto. The report cites overlaps in writing style, ideology, technical background, and old posts that outlined key parts of Bitcoin years before its launch. Carreyrou is a renowned investigative journalist and author, best known for exposing the massive fraud at Theranos while at the Wall Street Journal. Here's an excerpt from the report: ... As anyone steeped in Bitcoin lore will tell you, Satoshi was a master at the art of maintaining anonymity on the internet, leaving few, if any, digital footprints behind. But Satoshi did leave behind a corpus of texts, including a nine-page white paper (PDF) outlining his invention and his many posts on the Bitcointalk forum, an online message board where users gathered to discuss the digital currency's software, economics and philosophy. And that corpus, it turned out, had expanded significantly during the impostor's civil trial when Martti Malmi, a Finnish programmer who collaborated with Satoshi in Bitcoin's early days, released a trove of hundreds of emails he had exchanged with him. Emails Satoshi sent to other early Bitcoin adopters had surfaced before, but none came close in volume to the Malmi dump. If Satoshi was ever going to be found, I was convinced the key lay somewhere in these texts.

Then again, others must have gone down this road before me. Journalists, academics and internet sleuths had been trying to identify Satoshi for 16 years. During that span, more than 100 names had been put forward, including those of an Irish cryptography student, an unemployed Japanese American engineer, a South African criminal mastermind and the mathematician portrayed in the movie "A Beautiful Mind." The most alluring theories had focused on coincidences that aligned with what little was known about Satoshi: a particular code-writing style, a mysterious work history, an expertise in Bitcoin's key technical concepts, an anti-government worldview. But they had run aground under the weight of an alibi or some other piece of inconsistent or contrary evidence. Each failure had been met with glee by many members of the Bitcoin community. As they liked to point out, only Satoshi could definitively prove his identity by moving some of his coins. Any evidence short of that would be circumstantial.

It seemed foolish to think that I could somehow crack a case that had confounded so many others. But I craved the thrill of a big, challenging story. So I decided to try once more to unmask Bitcoin's mysterious creator.
Back, for his part, denies being Satoshi, writing in a post on X: "i'm not satoshi, but I was early in laser focus on the positive societal implications of cryptography, online privacy and electronic cash, hence my ~1992 onwards active interest in applied research on ecash, privacy tech on cypherpunks list which led to hashcash and other ideas."
Privacy

LinkedIn Faces Spying Allegations Over Browser Extension Scanning (pcmag.com)

LinkedIn is facing allegations that it quietly scans users' browsers for installed Chrome extensions. The German group Fairlinked e.V. goes so far as to claim that the site is "running one of the largest corporate espionage operations in modern history."

"The program runs silently, without any visible indicator to the user," the group says. "It does not ask for consent. It does not disclose what it is doing. It reports the results to LinkedIn's servers. This is not a one-time check. The scan runs on every page load, for every visitor." PCMag reports: This browser extension "fingerprinting" technique has been spotted before, but it was previously found to probe only 2,000 to 3,000 extensions. Fairlinked alleges that LinkedIn is now scanning for 6,222 extensions that could indicate a user's political opinions or religious views. For example, the extensions LinkedIn will look for include one that flags companies as too "woke," one that can add an "anti-Zionist" tag to LinkedIn profiles, and two others that can block content forbidden under Islamic teachings.
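For background on the fingerprinting technique referenced here: it generally works by having in-page script attempt to fetch web-accessible resources at `chrome-extension://<id>/<path>` URLs, where a successful fetch implies the extension is installed. A small Python sketch of the bookkeeping side follows; the actual probing has to run as browser JavaScript, and the extension IDs and resource paths below are placeholders, not LinkedIn's list.

```python
def build_probe_urls(extension_resources):
    """Build the chrome-extension:// URLs a page script could try to fetch.

    extension_resources maps a Chrome extension ID to one of its
    web-accessible resource paths (both placeholders here). In a browser,
    a successful fetch of such a URL implies that extension is installed.
    """
    return {
        ext_id: f"chrome-extension://{ext_id}/{path.lstrip('/')}"
        for ext_id, path in extension_resources.items()
    }


def installed_extensions(probe_results):
    """Given {extension_id: fetch_succeeded}, return IDs detected as installed."""
    return sorted(ext_id for ext_id, ok in probe_results.items() if ok)
```

Scanning for 6,222 extensions in this style is just 6,222 such probes, which is why the group could observe the scan repeating on every page load.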

It would also be a cakewalk to tie the collected extension data to specific users, since LinkedIn operates as a vast professional social network that covers people's work history. Fairlinked's concern is that Microsoft and LinkedIn can allegedly use the data to identify which companies use competing products. "LinkedIn has already sent enforcement threats to users of third-party tools, using data obtained through this covert scanning to identify its targets," the group claims. However, LinkedIn claims that Fairlinked mischaracterizes a LinkedIn safeguard designed to prevent web scraping by browser extensions. "We do not use this data to infer sensitive information about members," the company says. "To protect the privacy of our members, their data, and to ensure site stability, we do look for extensions that scrape data without members' consent or otherwise violate LinkedIn's Terms of Service," LinkedIn adds.

[...] The statement goes on to allege that Fairlinked is from a developer whose account was previously suspended for web scraping. One of the group's board members is listed as "S.Morell," which appears to be Steven Morell, the founder of Teamfluence, a tool that helps businesses monitor LinkedIn activity. [...] Still, the Microsoft-owned site is facing some blowback for not clearly disclosing the browser extension scanning in LinkedIn's privacy policy. Fairlinked is soliciting donations for a legal fund to take on Microsoft and is urging the public to encourage local regulators to intervene.

The Internet

Fan Fiction Website AO3 Exits Beta After 17 Years

Archive of Our Own (AO3) is officially dropping its "beta" label after 17 years. The Organization for Transformative Works, the nonprofit behind the fanfiction site, said the site will keep evolving with new improvements even though it's no longer technically in beta.

"As the AO3 software has been stable for a long time, the change is mostly cosmetic and does not indicate that everything is finalized or perfectly working," the organization says. "Exiting beta doesn't mean we'll stop continuing to improve AO3 -- our volunteer coders and community contributors will still be working to add to and improve AO3 every day."

Some of the features it's introduced over the years include a tag system, offline fanworks downloads, privacy settings that let creators restrict access to their work, and new modes for multi-chapter works. As it stands, the site says it has more than 10 million registered users and 17 million fanworks.
The Courts

Perplexity's 'Incognito Mode' Is a 'Sham,' Lawsuit Says

An anonymous reader quotes a report from Ars Technica: Perplexity's AI search engine encourages users to go deeper with their prompts by engaging in chat sessions that a lawsuit has alleged are often shared in their entirety with Google and Meta without users' knowledge or consent. "This happened to every user regardless of whether or not they signed up for a Perplexity account," the lawsuit alleged, while stressing that "enormous volumes of sensitive information from both subscribed and non-subscribed users" are shared.

Using developer tools, the lawsuit found that opening prompts are always shared, as are any follow-up questions the search engine asks that a user clicks on. Privacy concerns are seemingly worse for non-subscribed users, the complaint alleged. Their initial prompts are shared along with "a URL through which the entire conversation may be accessed by third parties like Meta and Google." Disturbingly, the lawsuit alleged, chats are also shared with personally identifiable information (PII), even when users who want to stay anonymous opt to use Perplexity's "Incognito Mode." That mode, the lawsuit charged, is a "sham."
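The developer-tools methodology described here boils down to capturing the page's outgoing requests and checking which hosts they go to. A simplified Python sketch of that classification step follows; the tracker domain list is an assumption for illustration (the complaint names Meta and Google as recipients but its exact endpoint list isn't quoted here).

```python
from urllib.parse import urlparse

# Hypothetical third-party endpoints to watch for; not the complaint's list.
TRACKER_DOMAINS = {"facebook.com", "google-analytics.com", "doubleclick.net"}


def third_party_trackers(request_urls):
    """Given URLs captured in a browser's network panel, return tracker domains hit."""
    hits = set()
    for url in request_urls:
        host = urlparse(url).hostname or ""
        # Match the registered domain or any subdomain of it,
        # e.g. www.facebook.com -> facebook.com.
        for domain in TRACKER_DOMAINS:
            if host == domain or host.endswith("." + domain):
                hits.add(domain)
    return sorted(hits)
```

Any non-empty result for a chat session would be the kind of observation the complaint relies on.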

"'Incognito' mode does nothing to protect users from having their conversations shared with Meta and Google," the complaint said. "Even paid users who turned on the 'Incognito' feature still had their conversations shared with Meta and Google, along with their email addresses and other identifiers that allowed Meta and Google to personally identify them."
"Perplexity's failure to inform its users that their personal information has been disclosed to Meta and Google or to take any steps to halt the continued disclosure of users' information is malicious, oppressive, and in reckless disregard" of users' rights, the lawsuit alleged.

"Nothing on Perplexity's website warns users that their conversations with its AI Machine will be shared with Meta and Google," Doe alleged. "Much less does Perplexity warn subscribed users that its 'Incognito Mode' does not function to protect users' private conversations from disclosure to companies like Meta and Google."
The Internet

Cloudflare Announces EmDash As Open-Source 'Spiritual Successor' To WordPress (phoronix.com)

In classic Cloudflare fashion, the CDN provider used April Fool's Day to unveil an actual, "not a joke" product. Today, the company announced EmDash -- an open-source "spiritual successor" to WordPress that aims to solve plugin security. Phoronix reports: With the help of AI coding agents, Cloudflare engineers have been rebuilding the WordPress open-source project "from the ground up." EmDash is written entirely in TypeScript and uses a serverless design. To make plug-ins more secure than in the WordPress architecture, EmDash plug-ins are sandboxed and each runs in its own isolate. EmDash builds upon the Astro web framework. EmDash doesn't rely on any WordPress code but is designed to be compatible with WordPress functionality. EmDash is open source now under the MIT license. The EmDash code is available on GitHub.
The Courts

OkCupid Settles FTC Case On Alleged Misuse of Its Users' Personal Data (engadget.com)

OkCupid and parent company Match Group settled an FTC case dating back to 2014 over allegations that the dating app shared users' photos and other personal data with a third party without proper disclosure or opt-out rights. Engadget reports: According to the FTC, OkCupid's privacy policy at the time noted that the company wouldn't share a user's personal information with others, except for some cases including "service providers, business partners, other entities within its family of businesses." However, the lawsuit accused OkCupid of sharing three million of its users' photos with Clarifai, which the FTC claims is an "unrelated third party" that didn't fall under the allowed entities. On top of that, the lawsuit alleged that OkCupid didn't inform its users of this data sharing, nor give them a chance to opt out.

Moving forward, the settlement would "permanently prohibit" Match Group, which owns OkCupid, and Humor Rainbow, which operates OkCupid, from misrepresenting what kind of personal information it collects, the purpose for collecting the data and any consumer choices to prevent data collection. Even after the 2014 incident, OkCupid was found to have security flaws that could've exposed user account info; these were quickly patched in 2020.

Social Networks

Will Social Media Change After YouTube and Meta's Court Defeat? (theverge.com)

Yes, this week YouTube and Meta were found negligent in a landmark case about social media addiction.

But "it's still far from certain what this defeat will change," argues The Verge's senior tech and policy editor, "and what the collateral damage could be." If these decisions survive appeal — which isn't certain — the direct outcome would be multimillion-dollar penalties. Depending on the outcome of several more "bellwether" cases in Los Angeles, a much larger group settlement could be reached down the road... For many activists, the overall goal is to make clear that lawsuits will keep piling up if companies don't change their business practices...

The best-case outcome of all this has been laid out by people like Julie Angwin, who wrote in The New York Times that companies should be pushed to change "toxic" features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize "shocking and crude" content. The worst-case scenario falls along the lines of a piece from Mike Masnick at Techdirt, who argued the rulings spell disaster for smaller social networks that could be sued for letting users post and see First Amendment-protected speech under a vague standard of harm. He noted that the New Mexico case hinged partly on arguing that Meta had harmed kids by providing end-to-end encryption in private messaging, creating an incentive to discontinue a feature that protects users' privacy — and indeed, Meta discontinued end-to-end encryption on Instagram earlier this month.

Blake Reid, a professor at Colorado Law, is more circumspect. "It's hard right now to forecast what's going to happen," Reid told The Verge in an interview. On Bluesky, he noted that companies will likely look for "cold, calculated" ways to avoid legal liability with the minimum possible disruption, not fundamentally rethink their business models. "There are obviously harms here and it's pretty important that the tort system clocked those harms" in the recent cases, he told The Verge. "It's just that what comes in the wake of them is less clear to me".

The article also includes this prediction from legal blogger/Section 230 expert Eric Goldman: "There will be even stronger pushes to restrict or ban children from social media." Goldman argues: "This hurts many subpopulations of minors, ranging from LGBTQ teens who will be isolated from communities that can help them navigate their identities to minors on the autism spectrum who can express themselves better online than they can in face-to-face conversations."
Desktops (Apple)

MacOS 26.4 Adds Warnings For ClickFix Attacks to Its Terminal App (macrumors.com)

An anonymous Slashdot reader writes: ClickFix attacks are ramping up. These attacks have users copy a string and paste it into something that can execute commands, like the Windows Run dialog or a shell prompt.

But MacRumors reports that macOS 26.4 Tahoe (updated earlier this week) introduces a new feature to its Terminal app where it will detect ClickFix attempts and stop them by prompting the user if they really wanted to run those commands.

According to MacRumors, the warning reads: "Possible malware, Paste blocked."

"Your Mac has not been harmed. Scammers often encourage pasting text into Terminal to try and harm your Mac or compromise your privacy...."

There is also a "Paste Anyway" option if users still wish to proceed.
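Apple hasn't published how Terminal decides a paste looks suspicious, but ClickFix lures share recognizable shapes, such as piping a download straight into a shell or decoding a hidden payload before running it. A purely illustrative Python heuristic is sketched below; these patterns are assumptions for the sake of example, not Apple's actual detection logic.

```python
import re

# Illustrative red flags only; Apple's real detection criteria are not public.
SUSPICIOUS_PATTERNS = [
    re.compile(r"curl\s+[^|;]*\|\s*(ba)?sh"),      # pipe a download straight into a shell
    re.compile(r"base64\s+(-d|--decode)[^|]*\|"),  # decode a hidden payload, pipe it onward
    re.compile(r"\bosascript\b.*\bdo shell script\b"),  # AppleScript shelling out
]


def looks_like_clickfix(pasted_text):
    """Heuristic check: does pasted text resemble a ClickFix-style command?"""
    return any(p.search(pasted_text) for p in SUSPICIOUS_PATTERNS)
```

A terminal wrapper applying such a check could prompt before executing a flagged paste, which is essentially the user experience macOS 26.4 now provides.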
Advertising

'Ads Are Popping Up On the Fridge and It Isn't Going Over Well' (msn.com)

The Wall Street Journal reports: Walking into his kitchen, Tim Yoder recoiled at a message on his refrigerator door: "Shop Samsung water filters." Yoder, a supply-chain manager in Chicago, owns a Samsung Electronics Family Hub fridge. He paid $1,400 for an appliance that came with a 32-inch screen on the door that allows him to control other Samsung gadgets, pull up recipes or stream music. But since last fall, it's been intermittently serving up ads, part of a pilot program being tested on some of Samsung's smart fridges sold in the U.S. The response? Not warm. "I guess this is another place for somebody to shove an ad in your face," said the 47-year-old Yoder, recalling the first time he noticed one...

The ads are only on certain Family Hub fridges that have screens and internet connectivity. They run as a rectangular banner at the bottom — part of a widget that also shows news, the weather and a calendar. Samsung declined to say how long the pilot might last or whether it would end. The firm recently unveiled a "Screens Everywhere" initiative that also includes washers, dryers and ovens.... Samsung launched the banner-type fridge ads that come as part of the widget via an October software update. In a footnote of a news release at the time, Samsung pledged to "serve contextual or non-personal ads" and respect data privacy. The banner ads can be turned off in settings.

Samsung said the purpose of the pilot is to explore whether ads relevant to home chores can be useful to owners, and that overall pushback has been negligible. The "turn-off" rate for the pilot ad program remains in the bottom single-digit range, it said... While owners can turn off the banner ads, doing so eliminates the widget altogether, a bummer for Brian Bosworth, a media-industry engineer who liked the feature. Bosworth thinks it's wrong to take away the new feature as a condition. Wanting to keep the widget but not the ads, the 49-year-old in Edgewater, Md., made sure his home router's ad-blocking software extended to his fridge. He hasn't seen another since.

One 27-year-old plans to return his refrigerator after the entire display "lit up with a full-screen ad for Apple TV's sci-fi show Pluribus," according to the article. The all-caps ad beckoned him "with an oft-used refrain directed at protagonist Carol Sturka: 'We're Sorry We Upset You, Carol.'"

Thanks to Slashdot reader fjo3 for sharing the article.
Desktops (Apple)

Windows PCs Crash Three Times As Often As Macs, Report Says (techspot.com)

A workplace-device study says Windows PCs crash significantly more often than Macs, lag further behind on patching and encryption in some sectors, and are typically replaced sooner. TechSpot reports: Omnissa's 2026 State of Digital Workspace report outlines the IT challenges that various organizations face from the growing use of AI and the heterogeneous deployment of enterprise devices. The relative instability of Windows and Android is a recurring theme throughout the report. The company gathered telemetry from clients located across the globe in retail, healthcare, finance, education, government, and other sectors throughout 2025. The data suggests that IT administrators face frustrating security gaps due to inconsistent patching across a diverse mosaic of devices and operating systems.

Employee workflow disruption, often due to software issues, is one area of concern. The report found that Windows devices were forced to shut down 3.1 times more often than Macs. Windows programs also froze 7.5 times more often than macOS apps and needed to be restarted more than twice as often. Certain industries were also alarmingly lax in securing Windows and Android devices. More than half of Windows and Android devices in healthcare and pharma were five major operating system updates behind, likely leaving them more vulnerable to errors and malware. More than half of the desktops and mobile devices used for education were also unencrypted, putting students' privacy at risk.

Macs also last longer, being replaced every five years on average, compared to every three years for Windows PCs. Despite a recent backlash against Windows, driven by a push for digital sovereignty in countries such as Germany, Windows use on government devices actually doubled last year. Meanwhile, Macs using Apple's M-series chips showcase a significant thermal advantage, with an average temperature of 40.1 degrees Celsius, while Intel processors run at 65.2 degrees.

Social Networks

California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media (latimes.com)

A California bill would let adults demand the removal of social media posts about them that were created by paid family content creators when they were minors. Supporters say Senate Bill 1247 addresses privacy, dignity, and safety harms caused when parents monetize their children's lives online. The Los Angeles Times reports: The legislation would require the parent or other relative to delete or edit the content within 10 business days of receiving the notification. Petitioners could take civil action against those who fail to comply and statutory damages would be set at $3,000 for each day the content remained online. Sen. Steve Padilla (D-San Diego), who introduced the bill last month, said it would help protect the dignity and mental health of those who had their childhood shared on social media. The measure was referred to the Senate Privacy, Digital Technologies and Consumer Protection Committee and is slated for a hearing on April 6.

"The evolution of these applications and technology is incredible," Padilla said. "But it's changing our social dynamic and it's creating situations that, while very productive for some folks, also need some guardrails." The bill would build upon previous legislation from Padilla that was signed into law two years ago and requires content creators that feature minors in at least 30% of their material to place some of their earnings into a trust the children can access when they turn 18.

Government

Senators Demand to Know How Much Energy Data Centers Use (wired.com)

Elizabeth Warren and Josh Hawley are pressing the Energy Information Administration (EIA) to provide better information on how much electricity data centers actually use. In a joint letter sent to the EIA on Thursday, the two senators press the agency to publicly collect "comprehensive, annual energy-use disclosures" on data centers, saying it's "essential for accurate grid planning and will support policymaking to prevent large companies from increasing electricity costs for American families." Wired reports: In December, EIA administrator Tristan Abbey said at a roundtable that he expects the EIA "is going to be an essential player in providing objective data and analysis to policymakers" with respect to data centers. The agency announced on Wednesday that it would be conducting a voluntary pilot program to collect energy consumption information from nearly 200 companies operating data centers in Texas, Washington, and Virginia, which will cover "energy sources, electricity consumption, site characteristics, server metrics, and cooling systems."

While the senators praise the EIA pilot program, their letter includes several questions about how the agency plans to move forward with more data collection, such as whether the energy surveys will be mandatory and whether the EIA will collect information on behind-the-meter power. That information will be especially crucial, the senators say, to ensure that the big tech companies that signed a White House agreement earlier this month, pledging that consumers won't bear the costs of data center electricity use, stick to their promises. "Without this data, policymakers, utility companies, and local communities are operating in the dark," the senators write.

The EIA mandates that other industries, including oil and gas and manufacturing, provide regular data to the agency; Hawley and Warren assert that the EIA should be able to collect similar information from data centers under the same provision. The provision is broad enough, Peskoe says, that it could absolutely be interpreted to encompass data centers.

Yesterday, Senator Bernie Sanders and Rep. Alexandria Ocasio-Cortez announced a bill that would "enact a reasonable pause to the development of AI to ensure the safety of humanity." It calls for a federal moratorium on AI data centers until stronger national safeguards are in place around safety, jobs, privacy, energy costs, and environmental impact.

Television

Vizio TVs Now Require Walmart Accounts For Smart Features (arstechnica.com) 79

An anonymous reader quotes a report from Ars Technica: Prospective Vizio TV buyers should know there's a good chance the set won't work properly without a Walmart account. In an attempt to better serve advertisers, Walmart, which bought Vizio in December 2024, announced this week that select newly purchased Vizio TVs now require a Walmart account for setup and accessing smart TV features. Since 2024, Vizio TVs have required a Vizio account, which a Vizio OS website says is necessary for accessing "exclusive offers, subscription management, and tailored support." Accounts are also central to Vizio's business, which is largely driven by ads and tracking tied to its OS.

A Walmart spokesperson confirmed to Ars Technica that Walmart accounts will be mandatory on "select new Vizio OS TVs" for owners to complete onboarding and to use smart TV features. The representative added: "Customers who already have an existing Vizio account are being given the option to merge their Vizio account with their Walmart account. Customers with an existing Vizio account can opt out by deleting their Vizio account." The representative wouldn't confirm which TV models are affected. Walmart's representative said the Walmart account integration is "designed to respect consumer choice and privacy, with data used in aggregated, permissioned, and compliant ways" but didn't specify how.

Mozilla

Mozilla and Mila Team Up On Open Source AI Push 31

BrianFagioli writes: Mozilla just teamed up with Mila, the Quebec Artificial Intelligence Institute, to push open source AI -- and it feels like a direct response to Big Tech tightening its grip on the space. Instead of relying on closed models, the goal here is to build "sovereign AI" that's more transparent, privacy-focused, and actually under the control of developers and even governments. They're starting with things like private memory for AI agents, which sounds niche but matters if you care about where your data goes. Big question is whether open source can realistically keep up with the billions being poured into proprietary AI, but at least someone's trying to give folks an alternative. "Canada has what it takes to lead on frontier AI that the world can actually trust: the research depth, the values, and the will to do it differently. The next frontier in AI isn't just capability, it is trustworthiness, and Canada is uniquely positioned to lead on both. This partnership is a concrete step in that direction. Open, trustworthy AI isn't a compromise on ambition. It's the higher bar," said Valerie Pisano, president and CEO of Mila.

Privacy

Reddit Takes On Bots With 'Human Verification' Requirements (techcrunch.com) 75

Reddit is rolling out human-verification checks for accounts that show signs of bot-like behavior, while also labeling approved automated accounts that provide useful services. The social media company stressed that these checks will only happen if something appears "fishy," and that it is "not conducting sitewide human verification." TechCrunch reports: To identify potential bots, Reddit is using specialized tooling that looks at account-level signals and other factors -- like how quickly the account is attempting to write or post content. Using AI to write posts or comments, however, is not against its policies (though community moderators may set their own rules).
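The rate-based account signal described above can be sketched as a simple sliding-window check. This is a hypothetical illustration only, not Reddit's actual tooling; the threshold, window size, and class name are invented for the example:

```python
from collections import deque
import time

# Hypothetical rate signal: flag an account as "fishy" if it attempts
# more than max_posts posts within any sliding window of `window` seconds.
class PostRateSignal:
    def __init__(self, max_posts=5, window=60.0):
        self.max_posts = max_posts
        self.window = window
        self.timestamps = deque()

    def record_post(self, now=None):
        """Record a post attempt; return True if the rate looks bot-like."""
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)
        # Drop timestamps that have fallen outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_posts

signal = PostRateSignal()
# Six post attempts a tenth of a second apart: only the sixth exceeds
# the threshold of five posts per minute.
results = [signal.record_post(now=i * 0.1) for i in range(6)]
```

A real system would combine many such account-level signals rather than act on posting rate alone, and would route flagged accounts to a verification challenge instead of blocking them outright.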

To verify that an account is human, Reddit will leverage third-party tools such as passkeys from Apple, Google, and YubiKey, along with biometric services like Face ID or even Sam Altman's World ID -- or, in some countries, government IDs. Reddit notes this last category may be required in some countries, like the U.K. and Australia, and in some U.S. states because of local regulations on age verification, but it's not the company's preferred method.

"If we need to verify an account is human, we'll do it in a privacy-first way," Reddit co-founder and CEO Steve Huffman wrote in the announcement Wednesday. "Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn't have to sacrifice one for the other."
