Encryption

WhatsApp Moves To Support Apple Against UK Government's Data Access Demands (bbc.com)

WhatsApp has applied to submit evidence in Apple's legal battle against the UK Home Office over government demands for access to encrypted user data. The messaging platform's boss Will Cathcart told the BBC the case "could set a dangerous precedent" by "emboldening other nations" to seek to break encryption protections.

The confrontation began when Apple received a secret Technical Capability Notice from the Home Office earlier this year demanding the right to access data from its global customers for national security purposes. Apple responded by first pulling its Advanced Data Protection system from the UK, then taking the government to court to overturn the request.

Cathcart said WhatsApp "would challenge any law or government request that seeks to weaken the encryption of our services." US Director of National Intelligence Tulsi Gabbard has called the UK's demands an "egregious violation" of American citizens' privacy rights.
The Internet

40,000 IoT Cameras Worldwide Stream Secrets To Anyone With a Browser

Connor Jones reports via The Register: Security researchers managed to access the live feeds of 40,000 internet-connected cameras worldwide and they may have only scratched the surface of what's possible. Supporting the bulletin issued by the Department of Homeland Security (DHS) earlier this year, which warned of exposed cameras potentially being used in Chinese espionage campaigns, the team at Bitsight was able to tap into feeds of sensitive locations. The US was the most affected region, with around 14,000 of the total feeds streaming from the country, allowing access to the inside of datacenters, healthcare facilities, factories, and more. Bitsight said these feeds could potentially be used for espionage, mapping blind spots, and gleaning trade secrets, among other things.

Aside from the potential national security implications, cameras were also accessed in hotels, gyms, construction sites, retail premises, and residential areas, which the researchers said could prove useful to petty criminals. Monitoring the typical patterns of activity in a retail store, for example, could inform a robbery, while feeds from residences could serve similar ends, to say nothing of the privacy implications.
"It should be obvious to everyone that leaving a camera exposed on the internet is a bad idea, and yet thousands of them are still accessible," said Bitsight in a report. "Some don't even require sophisticated hacking techniques or special tools to access their live footage in unintended ways. In many cases, all it takes is opening a web browser and navigating to the exposed camera's interface."

HTTP-based cameras accounted for 78.5 percent of the 40,000-camera sample, while RTSP feeds were comparatively less exposed, accounting for the remaining 21.5 percent.

To protect yourself or your company, Bitsight says you should secure your surveillance cameras by changing default passwords, disabling unnecessary remote access, updating firmware, and restricting access with VPNs or firewalls. Regularly monitoring for unusual activity also helps to prevent your footage from being exposed online.
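A first step in that audit is simply checking, from your own network, whether the ports cameras commonly serve on answer at all. The sketch below does that with plain TCP connects; the port list and example address are assumptions for illustration, not drawn from Bitsight's report.

```python
import socket

# Ports commonly served by IP cameras (illustrative, not exhaustive)
CAMERA_PORTS = {80: "HTTP", 8080: "HTTP-alt", 554: "RTSP"}

def exposed_services(host, ports, timeout=2.0):
    """Return the names of services on `host` that accept a TCP connection."""
    open_services = []
    for port, name in sorted(ports.items()):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_services.append(name)
        except OSError:
            pass  # closed, filtered, or unreachable -- which is what you want
    return open_services

# Example, against a camera you administer on your own LAN:
#   exposed_services("192.168.1.50", CAMERA_PORTS)
```

Note this only shows reachability, not whether the interface demands credentials; anything it reports open that isn't behind authentication belongs behind a VPN or firewall rule.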
Android

Android 16 Is Here (blog.google)

An anonymous reader shares a blog post from Google: Today, we're bringing you Android 16, rolling out first to supported Pixel devices with more phone brands to come later this year. This is the earliest Android has launched a major release in the last few years, which ensures you get the latest updates as soon as possible on your devices. Android 16 lays the foundation for our new Material 3 Expressive design, with features that make Android more accessible and easy to use.
AI

Apple Lets Developers Tap Into Its Offline AI Models (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: Apple is launching what it calls the Foundation Models framework, which the company says will let developers tap into its AI models in an offline, on-device fashion. Onstage at WWDC 2025 on Monday, Apple VP of software engineering Craig Federighi said that the Foundation Models framework will let apps use on-device AI models created by Apple to drive experiences. These models ship as a part of Apple Intelligence, Apple's family of models that power a number of iOS features and capabilities.

"For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," Federighi said. "And because it happens using on-device models, this happens without cloud API costs [...] We couldn't be more excited about how developers can build on Apple Intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."

In a blog post, Apple says that the Foundation Models framework has native support for Swift, Apple's programming language for building apps for its various platforms. The company claims developers can access Apple Intelligence models with as few as three lines of code. Guided generation, tool calling, and more are all built into the Foundation Models framework, according to Apple. Automattic is already using the framework in its Day One journaling app, Apple says, while mapping app AllTrails is tapping the framework to recommend different hiking routes.

Security

A Researcher Figured Out How To Reveal Any Phone Number Linked To a Google Account (wired.com)

A cybersecurity researcher was able to figure out the phone number linked to any Google account, information that is usually not public and is often sensitive, according to the researcher, Google, and 404 Media's own tests. From a report: The issue has since been fixed but at the time presented a privacy issue in which even hackers with relatively few resources could have brute forced their way to people's personal information. "I think this exploit is pretty bad since it's basically a gold mine for SIM swappers," the independent security researcher who found the issue, who goes by the handle brutecat, wrote in an email.

[...] In mid-April, we provided brutecat with one of our personal Gmail addresses in order to test the vulnerability. About six hours later, brutecat replied with the correct and full phone number linked to that account. "Essentially, it's bruting the number," brutecat said of their process. Brute forcing is when a hacker rapidly tries different combinations of digits or characters until finding the ones they're after. Typically that's in the context of finding someone's password, but here brutecat is doing something similar to determine a Google user's phone number.

Brutecat said in an email the brute forcing takes around one hour for a U.S. number, or 8 minutes for a UK one. For other countries, it can take less than a minute, they said. In an accompanying video demonstrating the exploit, brutecat explains an attacker needs the target's Google display name. They find this by first transferring ownership of a document from Google's Looker Studio product to the target, the video says. They say they modified the document's name to be millions of characters, which ends up with the target not being notified of the ownership switch. Using some custom code, which they detailed in their write-up, brutecat then barrages Google with guesses of the phone number until getting a hit.
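The write-up's exact endpoint and hint format aren't reproduced here, but the arithmetic behind those per-country timing differences is simple: every digit the attacker doesn't know multiplies the search space by ten, and whatever account-recovery hints and number-plan structure reveal shrinks it accordingly. A hypothetical sketch (the `?`-mask format is an invention for illustration):

```python
from itertools import product

def candidates(mask):
    """Yield every number matching `mask`, where '?' marks an unknown digit.
    The mask stands in for whatever partial hint an attacker has recovered
    (say, a country prefix plus the last two digits)."""
    unknown = [i for i, ch in enumerate(mask) if ch == "?"]
    for digits in product("0123456789", repeat=len(unknown)):
        guess = list(mask)
        for pos, d in zip(unknown, digits):
            guess[pos] = d
        yield "".join(guess)

def search_space(mask):
    """Brute-force work factor: 10 ** (number of unknown digits)."""
    return 10 ** mask.count("?")
```

For instance, a hint leaving seven digits unknown means ten million candidates, while one leaving five unknown means a hundred thousand, a hundredfold difference in wall-clock time at the same guess rate.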

Facebook

Mozilla Criticizes Meta's 'Invasive' Feed of Users' AI Prompts, Demands Its Shutdown (mozillafoundation.org)

In late April Meta introduced its Meta AI app, which included something called a Discover feed. ("You can see the best prompts people are sharing, or remix them to make them your own.")

But while Meta insisted "you're in control: nothing is shared to your feed unless you choose to post it," just two days later Business Insider noticed that "clearly, some people don't realize they're sharing personal stuff": To be clear, your AI chats are not public by default — you have to choose to share them individually by tapping a share button. Even so, I get the sense that some people don't really understand what they're sharing, or what's going on.

Like the woman with the sick pet turtle. Or another person who was asking for advice about what legal measures he could take against his former employer after getting laid off. Or a woman asking about the effects of folic acid for a woman in her 60s who has already gone through menopause. Or someone asking for help with their Blue Cross health insurance bill... Perhaps these people knew they were sharing on a public feed and wanted to do so. Perhaps not. This leaves us with an obvious question: What's the point of this, anyway? Even if you put aside the potential accidental oversharing, what's the point of seeing a feed of people's AI prompts at all?

Now Mozilla has issued their own warning. "Meta is quietly turning private AI chats into public content," warns a new post this week from the Mozilla Foundation, "and too many people don't realize it's happening." That's why the Mozilla community is demanding that Meta:

- Shut down the Discover feed until real privacy protections are in place.

- Make all AI interactions private by default with no public sharing option unless explicitly enabled through informed consent.

- Provide full transparency about how many users have unknowingly shared private information.

- Create a universal, easy-to-use opt-out system for all Meta platforms that prevents user data from being used for AI training.

- Notify all users whose conversations may have been made public, and allow them to delete their content permanently.

Meta is blurring the line between private and public — and it's happening at the cost of our privacy. People have the right to know when they're speaking in public, especially when they believe they're speaking in private.

If you agree, add your name to demand Meta shut down its invasive AI feed — and guarantee that no private conversations are made public without clear, explicit, and informed opt-in consent.

AI

'Welcome to Campus. Here's Your ChatGPT.' (nytimes.com)

The New York Times reports: California State University announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for "California's future A.I.-driven economy." Cal State said the effort would help make the school "the nation's first and largest A.I.-empowered university system..." Some faculty members have already built custom chatbots for their students by uploading course materials like their lecture notes, slides, videos and quizzes into ChatGPT.
And other U.S. campuses including the University of Maryland are also "working to make A.I. tools part of students' everyday experiences," according to the article. It's all part of an OpenAI initiative "to overhaul college education — by embedding its artificial intelligence tools in every facet of campus life."

The Times calls it "a national experiment on millions of students." If the company's strategy succeeds, universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized A.I. study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud ahead of a test. OpenAI dubs its sales pitch "A.I.-native universities..." To spread chatbots on campuses, OpenAI is selling premium A.I. services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT...

OpenAI's campus marketing effort comes as unemployment has increased among recent college graduates — particularly in fields like software engineering, where A.I. is now automating some tasks previously done by humans. In hopes of boosting students' career prospects, some universities are racing to provide A.I. tools and training...

[Leah Belsky, OpenAI's vice president of education] said a new "memory" feature, which retains and can refer to previous interactions with a user, would help ChatGPT tailor its responses to students over time and make the A.I. "more valuable as you grow and learn." Privacy experts warn that this kind of tracking feature raises concerns about long-term tech company surveillance. In the same way that many students today convert their school-issued Gmail accounts into personal accounts when they graduate, Ms. Belsky envisions graduating students bringing their A.I. chatbots into their workplaces and using them for life.

"It would be their gateway to learning — and career life thereafter," Ms. Belsky said.

Government

ACLU Accuses California Local Government's Drones of 'Runaway Spying Operation' (sfgate.com)

An anonymous reader shared this report from SFGate about a lawsuit alleging a "warrantless drone surveillance program" that's "trampling residents' right to privacy": Sonoma County has been accused of deploying hundreds of drone flights over residents in a "runaway spying operation"... according to a lawsuit filed Wednesday by the American Civil Liberties Union. The North Bay county of Sonoma initially started the 6-year-old drone program to track illegal cannabis cultivation, but the lawsuit alleges that officials have since turned it into a widespread program to catch unrelated code violations at residential properties and levy millions of dollars in fines. The program has captured 5,600 images during more than 700 flights, the lawsuit said...

Matt Cagle, a senior staff attorney with the ACLU Foundation of Northern California, said in a Wednesday news release that the county "has hidden these unlawful searches from the people they have spied on, the community, and the media...." The lawsuit says the county employees used the drones to spy on private homes without first receiving a warrant, including photographing private areas like hot tubs and outdoor baths, and through curtainless windows.

One plaintiff "said the county secretly used the drone program to photograph her Sonoma County horse stable and issue code violations," according to the article. She only discovered the use of the drones after a county employee mentioned they had photos of her property, according to the lawsuit. She then filed a public records request for the images, which left her "stunned" after seeing that the county employees were monitoring her private property including photographing her outdoor bathtub and shower, the lawsuit said.
Advertising

Washington Post's Privacy Tip: Stop Using Chrome, Delete Meta's Apps (and Yandex) (msn.com)

Meta's Facebook and Instagram apps "were siphoning people's data through a digital back door for months," writes a Washington Post tech columnist, citing researchers who found no privacy setting could've stopped what Meta and Yandex were doing, since those two companies "circumvented privacy and security protections that Google set up for Android devices."

"But their tactics underscored some privacy vulnerabilities in web browsers or apps. These steps can reduce your risks." Stop using the Chrome browser. Mozilla's Firefox, the Brave browser and DuckDuckGo's browser block many common methods of tracking you from site to site. Chrome, the most popular web browser, does not... For iPhone and Mac folks, Safari also has strong privacy protections. It's not perfect, though. No browser protections are foolproof. The researchers said Firefox on Android devices was partly susceptible to the data harvesting tactics they identified, in addition to Chrome. (DuckDuckGo and Brave largely did block the tactics, the researchers said....)

Delete Meta and Yandex apps on your phone, if you have them. The tactics described by the European researchers showed that Meta and Yandex are unworthy of your trust. (Yandex is not popular in the United States.) It might be wise to delete their apps, which give the companies more latitude to collect information that websites generally cannot easily obtain, including your approximate location, your phone's battery level and what other devices, like an Xbox, are connected to your home WiFi.

Know, too, that even if you don't have Meta apps on your phone, and even if you don't use Facebook or Instagram at all, Meta might still harvest information on your activity across the web.

Australia

Apple Warns Australia Against Joining EU In Mandating iPhone App Sideloading (neowin.net)

Apple has urged Australia not to follow the European Union in mandating iPhone app sideloading, warning that such policies pose serious privacy and security risks. "This communication comes as the Australian federal government considers new rules that could force Apple to open up its iOS ecosystem, much like what happened in Europe with recent legislation," notes Neowin. Apple claims that allowing alternative app stores has led to increased exposure to malware, scams, and harmful content. From the report: Apple, in its response to this Australian paper (PDF), stated that Australia should not use the EU's Digital Markets Act "as a blueprint". The company's core argument is that the changes mandated by the EU's DMA, which came into full effect in March 2024, introduce serious security and privacy risks for users. Apple claims that allowing sideloading and alternative app stores effectively opens the door for malware, fraud, scams, and other harmful content. The tech company also highlighted specific concerns from its European experience, alleging that its compliance there has led to users being able to install pornography apps and apps that facilitate copyright infringement, things its curated App Store aims to prevent. Apple maintains that its current review process is vital for user protection, and that its often criticized 30% commission applies mainly to the highest earning apps, with most developers paying a lower 15% rate or nothing.
Encryption

Lawmakers Vote To Stop NYPD's Attempt To Encrypt Their Radios (nypost.com)

alternative_right shares a report: New York state lawmakers voted Thursday to stop the NYPD's attempt to block its radio communications from the public, with the bill expected to head to Gov. Kathy Hochul's desk. The "Keep Police Radio Public Act" passed both the state Senate and state Assembly, with a sponsor of the legislation arguing the proposal strikes the "proper balance" in the battle between transparency and sensitive information.

"Preserving access to police radio is critical for a free press and to preserve the freedoms and protections afforded by the public availability of this information," state Sen. Michael Gianaris (D-Queens) said in a statement. "As encrypted radio usage grows, my proposal strikes the proper balance between legitimate law enforcement needs and the rights and interests of New Yorkers."

The bill, which was sponsored in the Assembly by lawmaker Karines Reyes (D-Bronx), is meant to make real-time police radio communications accessible to emergency services organizations and reporters. "Sensitive information" would still be kept private, according to the legislation.
In late 2023, the NYPD began encrypting its radio communications to increase officer safety and "protect the privacy interests of victims and witnesses." However, it led to outcry from press advocates and local officials concerned about reduced transparency and limited access to real-time information.

A bill to address the issue has passed both chambers of New York's legislature, but Governor Hochul has not yet indicated whether she will sign it.
Nintendo

Nintendo Warns Switch 2 GameChat Users: 'Your Chat Is Recorded' (arstechnica.com)

Ars Technica's Kyle Orland reports: Last month, ahead of the launch of the Switch 2 and its GameChat communication features, Nintendo updated its privacy policy to note that the company "may also monitor and record your video and audio interactions with other users." Now that the Switch 2 has officially launched, we have a clearer understanding of how the console handles audio and video recorded during GameChat sessions, as well as when that footage may be sent to Nintendo or shared with partners, including law enforcement. Before using GameChat on Switch 2 for the first time, you must consent to a set of GameChat Terms displayed on the system itself. These terms warn that chat content is "recorded and stored temporarily" both on your system and the system of those you chat with. But those stored recordings are only shared with Nintendo if a user reports a violation of Nintendo's Community Guidelines, the company writes.

That reporting feature lets a user "review a recording of the last three minutes of the latest three GameChat sessions" to highlight a particular section for review, suggesting that chat sessions are not being captured and stored in full. The terms also lay out that "these recordings are available only if the report is submitted within 24 hours," suggesting that recordings are deleted from local storage after a full day. If a report is submitted to Nintendo, the company warns that it "may disclose certain information to third parties, such as authorities, courts, lawyers, or subcontractors reviewing the reported chats." If you don't consent to the potential for such recording and sharing, you're prevented from using GameChat altogether.
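The retention rules described there, only the latest three sessions kept, each reportable for 24 hours, amount to a bounded ring buffer with a time window. The model below is purely illustrative; the class and method names are hypothetical, not Nintendo's implementation.

```python
import time
from collections import deque

class ChatSessionStore:
    """Hypothetical model of the retention rules in the GameChat Terms:
    only the latest 3 sessions are kept, and each is reportable for
    24 hours after it was recorded."""

    MAX_SESSIONS = 3
    REPORT_WINDOW = 24 * 60 * 60  # seconds

    def __init__(self):
        # deque(maxlen=3) drops the oldest recording automatically
        self._sessions = deque(maxlen=self.MAX_SESSIONS)

    def record(self, recording, now=None):
        self._sessions.append((time.time() if now is None else now, recording))

    def reportable(self, now=None):
        """Recordings still inside the 24-hour report window."""
        t = time.time() if now is None else now
        return [rec for ts, rec in self._sessions if t - ts < self.REPORT_WINDOW]
```

The `maxlen` bound gives the "latest three sessions" behavior for free: a fourth recording silently evicts the first, with nothing older ever persisting on the device.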

Nintendo is extremely clear that the purpose of its recording and review system is "to protect GameChat users, especially minors" and "to support our ability to uphold our Community Guidelines." This kind of human moderator review of chats is pretty common in the gaming world and can even apply to voice recordings made by various smart home assistants. [...] Overall, the time-limited, local-unless-reported recordings Nintendo makes here seem like a minimal intrusion on the average GameChat user's privacy. Still, if you're paranoid about Nintendo potentially seeing and hearing what's going on in your living room, it's good to at least be aware of it.

The Courts

OpenAI Slams Court Order To Save All ChatGPT Logs, Including Deleted Chats (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: OpenAI is now fighting a court order (PDF) to preserve all ChatGPT user logs, including deleted chats and sensitive chats logged through its API business offering, after news organizations suing over copyright claims accused the AI company of destroying evidence. "Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),'" OpenAI explained in a court filing (PDF) demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users' privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), OpenAI said. The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls "might be more likely to 'delete all [their] searches' to cover their tracks," OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs' concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs' request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated until, "at a minimum," news organizations can establish a substantial need for OpenAI to preserve all chat logs. The company warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the "sweeping, unprecedented" order continues to be enforced. "As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained," OpenAI argued. Meanwhile, there is still no evidence beyond speculation supporting claims that "OpenAI had intentionally deleted data," OpenAI alleged. And supposedly there is not "a single piece of evidence supporting" claims that copyright-infringing ChatGPT users are more likely to delete their chats. "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events," OpenAI argued. "The Order appears to have incorrectly assumed the contrary."
One tech worker on LinkedIn suggested the order created "a serious breach of contract for every company that uses OpenAI," while privacy advocates on X warned, "every single AI service 'powered by' OpenAI should be concerned."

Also on LinkedIn, a consultant rushed to warn clients to be "extra careful" sharing sensitive data "with ChatGPT or through OpenAI's API for now," warning, "your outputs could eventually be read by others, even if you opted out of training data sharing or used 'temporary chat'!"
Cloud

AWS Forms EU-Based Cloud Unit As Customers Fret (theregister.com)

An anonymous reader quotes a report from The Register: In a nod to European customers' growing mistrust of American hyperscalers, Amazon Web Services says it is establishing a new organization in the region "backed by strong technical controls, sovereign assurances, and legal protections." Ever since the Trump 2.0 administration assumed office and implemented an erratic and unprecedented foreign policy stance, including aggressive tariffs and threats to the national sovereignty of Greenland and Canada, customers in Europe have voiced unease about placing their data in the hands of big U.S. tech companies. The Register understands that data sovereignty is now one of the primary questions that customers at European businesses ask sales reps at hyperscalers when they have conversations about new services.

[...] AWS is forming a new European organization with a locally controlled parent company and three subsidiaries incorporated in Germany, as part of its European Sovereign Cloud (ESC) rollout, set to launch by the end of 2025. Kathrin Renz, an AWS Industries VP based in Munich, will lead the operation as the first managing director of the AWS ESC. The other leaders, we're told, include a government security official and a privacy official – all EU citizens. The cloud giant stated: "AWS will establish an independent advisory board for the AWS European Sovereign Cloud, legally obligated to act in the best interest of the AWS European Sovereign Cloud. Reinforcing the sovereign control of the AWS European Sovereign Cloud, the advisory board will consist of four members, all EU citizens residing in the EU, including at least one independent board member who is not affiliated with Amazon. The advisory board will act as a source of expertise and provide accountability for AWS European Sovereign Cloud operations, including strong security and access controls and the ability to operate independently in the event of disruption."

The AWS ESC allows the business to continue operations indefinitely, "even in the event of a connectivity interruption between the AWS European Sovereign Cloud and the rest of the world." Authorized ESC staff who are EU residents will have independent access to a replica of the source code needed to maintain services under "extreme circumstances." The services will have "no critical dependencies on non-EU infrastructure," with staff, tech, and leadership all based on the continent, AWS said. "The AWS European Sovereign Cloud will have its own dedicated Amazon Route 53, providing customers with a highly available and scalable Domain Name System (DNS), domain name registration, and health-checking web services," the company said.
"The Route 53 name servers for the AWS European Sovereign Cloud will use only European Top Level Domains (TLDs) for their own names," added AWS. "AWS will also launch a dedicated 'root' European Certificate Authority, so that the key material, certificates, and identity verification needed for Secure Sockets Layer/Transport Layer Security certificates can all run autonomously within the AWS European Sovereign Cloud."

The Register also notes that the sovereign cloud will be "supported by a dedicated European Security Operations Center (SOC), led by an EU citizen residing in the EU." That said, the parent company "remains under American ownership and may be subject to the Cloud Act, which requires U.S. companies to turn over data to law enforcement authorities with the proper warrants, no matter where that data is stored."
Privacy

Meta and Yandex Are De-Anonymizing Android Users' Web Browsing Identifiers (github.io)

"It appears as though Meta (aka: Facebook's parent company) and Yandex have found a way to sidestep the Android Sandbox," writes Slashdot reader TheWho79. Researchers disclose the novel tracking method in a report: We found that native Android apps -- including Facebook, Instagram, and several Yandex apps including Maps and Browser -- silently listen on fixed local ports for tracking purposes.

These native Android apps receive browsers' metadata, cookies, and commands from the Meta Pixel and Yandex Metrica scripts embedded on thousands of websites. These scripts load in users' mobile browsers and silently connect to native apps running on the same device through localhost sockets. Because native apps can programmatically access device identifiers like the Android Advertising ID (AAID), or handle user identities as in the case of Meta's apps, this method effectively allows these organizations to link mobile browsing sessions and web cookies to user identities, de-anonymizing users who visit sites embedding their scripts.

This web-to-app ID sharing method bypasses typical privacy protections such as clearing cookies, Incognito Mode and Android's permission controls. Worse, it opens the door for potentially malicious apps eavesdropping on users' web activity.

While there are subtle differences in the way Meta and Yandex bridge web and mobile contexts and identifiers, both of them essentially misuse the unvetted access to localhost sockets. The Android OS allows any installed app with the INTERNET permission to open a listening socket on the loopback interface (127.0.0.1). Browsers running on the same device also access this interface without user consent or platform mediation. This allows JavaScript embedded on web pages to communicate with native Android apps and share identifiers and browsing habits, bridging ephemeral web identifiers to long-lived mobile app IDs using standard Web APIs.
This technique circumvents privacy protections like Incognito Mode, cookie deletion, and Android's permission model, with Meta Pixel and Yandex Metrica scripts silently communicating with apps across over 6 million websites combined.
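The channel itself is nothing exotic, which is what makes it hard to police. The sketch below reproduces the two halves with plain Python sockets: a "native app" listens on loopback and joins whatever web cookie arrives to its long-lived device ID. The port, cookie name, and ID value are illustrative; on a real device the listener is a native app and the sender is page JavaScript, but the loopback hop is the same idea.

```python
import socket
import threading

DEVICE_ID = "AAID-example"  # stands in for the Android Advertising ID

def native_app(server, identity_graph):
    """The 'native app' half: accept one loopback connection and link the
    received web cookie to the app's long-lived device identifier."""
    conn, _ = server.accept()
    with conn:
        chunks = []
        while True:
            data = conn.recv(1024)
            if not data:
                break
            chunks.append(data)
        web_cookie = b"".join(chunks).decode()
        identity_graph[web_cookie] = DEVICE_ID  # ephemeral web ID -> stable ID

def page_script(cookie, port):
    """The 'tracking script' half: any process on the device can reach the
    listener -- no permission prompt, no platform mediation."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(cookie.encode())

if __name__ == "__main__":
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # a real tracker listens on a fixed, known port
    srv.listen(1)
    graph = {}
    t = threading.Thread(target=native_app, args=(srv, graph))
    t.start()
    page_script("_fbp=web-cookie-123", srv.getsockname()[1])
    t.join()
    srv.close()
    print(graph)  # {'_fbp=web-cookie-123': 'AAID-example'}
```

Nothing in this exchange touches cookies stores or permission-gated APIs, which is why clearing cookies or browsing in Incognito doesn't break the link: the stable identifier lives on the app side of the socket.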

Following public disclosure, Meta ceased using this method on June 3, 2025. Browser vendors like Chrome, Brave, Firefox, and DuckDuckGo have implemented or are developing mitigations, but a full resolution may require OS-level changes and stricter enforcement of platform policies to prevent further abuse.
Open Source

'Ladybird' Browser's Nonprofit Becomes Public Charity, Now Officially Tax-Exempt (ladybird.org)

The Ladybird browser project is now officially tax-exempt as a U.S. 501(c)(3) nonprofit.

Started two years ago (by the original creator of SerenityOS), Ladybird will be "an independent, fast and secure browser that respects user privacy and fosters an open web." They're targeting Summer 2026 for the first Alpha version on Linux and macOS, and in May enjoyed "a pleasantly productive month" with 261 merged PRs from 53 contributors — and seven new sponsors (including coding livestreamer "ThePrimeagen").

And they're now recognized as a public charity: This is retroactive to March 2024, so donations made since then may be eligible for tax exemption (depending on country-specific rules). You can find all the relevant information on our new Organization page. ["Our mission is to create an independent, fast and secure browser that respects user privacy and fosters an open web. We are tax-exempt and rely on donations and sponsorships to fund our development efforts."]
Other announcements for May:
  • "We've been making solid progress on Web Platform Tests... This month, we added 15,961 new passing tests for a total of 1,815,223."
  • "We've also done a fair bit of performance work this month, targeting Speedometer and various websites that are slower than we'd like." [The optimizations led to a 10% speed-up on Speedometer 2.1.]

Government

Brazil Tests Letting Citizens Earn Money From Data in Their Digital Footprint (restofworld.org) 15

With over 200 million people, Brazil is the world's seventh most populous country. Now it's testing a program that will allow Brazilians "to manage, own, and profit from their digital footprint," according to RestOfWorld.org — "the first such nationwide initiative in the world."

The government says it's partnering with California-based data valuation/monetization firm DrumWave to create "data savings accounts" to "transform data into economic assets, with potential for monetization and participation in the benefits generated by investing in technologies such as AI LLMs." But all based on "conscious and authorized use of personal information." RestOfWorld reports: Today, "people get nothing from the data they share," Brittany Kaiser, co-founder of the Own Your Data Foundation and board adviser for DrumWave, told Rest of World. "Brazil has decided its citizens should have ownership rights over their data...." After a user accepts a company's offer on their data, payment is cashed in the data wallet, and can be immediately moved to a bank account. The project will be "a correction in the historical imbalance of the digital economy," said Kaiser. Through data monetization, the personal data that companies aggregate, classify, and filter to inform many aspects of their operations will become an asset for those providing the data...

Brazil's project stands out because it brings the private sector and the government together, "so it has a better chance of catching on," said Kaiser. In 2023, Brazil's Congress drafted a bill that classifies data as personal property. The country's current data protection law classifies data as a personal, inalienable right. The new legislation gives people full rights over their personal data — especially data created "through use and access of online platforms, apps, marketplaces, sites and devices of any kind connected to the web." The bill seeks to ensure companies offer their clients benefits and financial rewards, including payment as "compensation for the collecting, processing or sharing of data." It has garnered bipartisan support, and is currently being evaluated in Congress...

If approved, the bill will allow companies to collect data more quickly and precisely, while giving users more clarity over how their data will be used, according to Antonielle Freitas, data protection officer at Viseu Advogados, a law firm that specializes in digital and consumer laws. As data collection becomes centralized through regulated data brokers, the government can benefit by paying the public to gather anonymized, large-scale data, Freitas told Rest of World. These databases are the basis for more personalized public services, especially in sectors such as health care, urban transportation, public security, and education, she said.

This first pilot program involves "a small group of Brazilians who will use data wallets for payroll loans," according to the article — although Pedro Bastos, a researcher at Data Privacy Brazil, sees downsides. "Once you treat data as an economic asset, you are subverting the logic behind the protection of personal data," he told RestOfWorld. The data ecosystem "will no longer be defined by who can create more trust and integrity in their relationships, but instead, it will be defined by who's the richest."

Thanks to Slashdot reader applique for sharing the news.
Privacy

Developer Builds Tool That Scrapes YouTube Comments, Uses AI To Predict Where Users Live (404media.co) 34

An anonymous reader quotes a report from 404 Media: If you've left a comment on a YouTube video, a new website claims it might be able to find every comment you've ever left on any video you've ever watched. Then an AI can build a profile of the commenter and guess where you live, what languages you speak, and what your politics might be. The service is called YouTube-Tools and is just the latest in a suite of web-based tools that started life as a site to investigate League of Legends usernames. Now it uses a modified large language model created by the company Mistral to generate a background report on YouTube commenters based on their conversations. Its developer claims it's meant to be used by the cops, but anyone can sign up. It costs about $20 a month to use and all you need to get started is a credit card and an email address.

The tool presents a significant privacy risk, and shows that people may not be as anonymous in the YouTube comments sections as they may think. The site's report is ready in seconds and provides enough data for an AI to flag identifying details about a commenter. The tool could be a boon for harassers attempting to build profiles of their targets, and 404 Media has seen evidence that harassment-focused communities have used the developers' other tools. YouTube-Tools also appears to be a violation of YouTube's privacy policies, and raises questions about what YouTube is doing to stop the scraping and repurposing of people's data like this. "Public search engines may scrape data only in accordance with YouTube's robots.txt file or with YouTube's prior written permission," YouTube's terms of service state.
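The robots.txt mechanism YouTube's terms point to is machine-readable, and a compliant scraper is expected to check it before fetching each URL. A minimal sketch using Python's standard `urllib.robotparser` — note the rules below are illustrative stand-ins, not YouTube's actual robots.txt file:

```python
from urllib.robotparser import RobotFileParser

# Illustrative rules only -- not YouTube's real robots.txt.
robots_txt = """\
User-agent: *
Disallow: /comment
Disallow: /get_video
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant scraper checks each URL before requesting it.
print(rp.can_fetch("*", "https://www.youtube.com/watch?v=abc"))  # True
print(rp.can_fetch("*", "https://www.youtube.com/comment?x=1"))  # False
```

Of course, robots.txt is purely advisory: nothing technically stops a tool like the one described here from ignoring it, which is why enforcement falls back to terms-of-service and legal pressure.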

Security

ASUS Router Backdoors Affect 9,000 Devices, Persist After Firmware Updates 23

An anonymous reader quotes a report from SC Media: Thousands of ASUS routers have been compromised with malware-free backdoors in an ongoing campaign to potentially build a future botnet, GreyNoise reported Wednesday. The threat actors abuse security vulnerabilities and legitimate router features to establish persistent access without the use of malware, and these backdoors survive both reboots and firmware updates, making them difficult to remove.

The attacks, which researchers suspect are conducted by highly sophisticated threat actors, were first detected by GreyNoise's AI-powered Sift tool in mid-March and disclosed Thursday after coordination with government officials and industry partners. Sekoia.io also reported the compromise of thousands of ASUS routers in its investigation of a broader campaign, dubbed ViciousTrap, in which edge devices from other brands were also compromised to create a honeypot network. Sekoia.io found that the ASUS routers were not used to create honeypots, and that the threat actors gained SSH access using the same port, TCP/53282, identified by GreyNoise in their report.
The backdoor campaign affects multiple ASUS router models, including the RT-AC3200, RT-AC3100, GT-AC2900, and Lyra Mini.

GreyNoise advises users to perform a full factory reset and manually reconfigure any potentially compromised device. To identify a breach, users should check for SSH access on TCP port 53282 and inspect the authorized_keys file for unauthorized entries.
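The first check GreyNoise describes — whether anything is listening on TCP/53282 — can be done with a basic TCP connection attempt from a machine on the same LAN. A minimal sketch; the router address shown is a common ASUS default and an assumption, not necessarily your network's:

```python
import socket

BACKDOOR_PORT = 53282  # SSH port identified by GreyNoise

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno otherwise
        return s.connect_ex((host, port)) == 0

# Point this at your router's LAN address (192.168.50.1 is a
# common ASUS default, used here only as a placeholder).
if port_open("192.168.50.1", BACKDOOR_PORT):
    print("WARNING: TCP/53282 is listening -- inspect authorized_keys")
else:
    print("TCP/53282 closed or filtered")
```

An open port here is only an indicator, not proof of compromise, and a closed port does not rule it out — inspecting the router's `authorized_keys` file for unknown entries, as the advisory recommends, is the more direct check.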
Security

Data Broker Giant LexisNexis Says Breach Exposed Personal Information of Over 364,000 People (techcrunch.com) 48

An anonymous reader quotes a report from TechCrunch: LexisNexis Risk Solutions, a data broker that collects and uses consumers' personal data to help its paying corporate customers detect possible risk and fraud, has disclosed a data breach affecting more than 364,000 people. The company said in a filing with Maine's attorney general that the breach, dating back to December 25, 2024, allowed a hacker to obtain consumers' sensitive personal data from a third-party platform used by the company for software development.

Jennifer Richman, a spokesperson for LexisNexis, told TechCrunch that an unknown hacker accessed the company's GitHub account. The stolen data varies, but includes names, dates of birth, phone numbers, postal and email addresses, Social Security numbers, and driver's license numbers. It's not immediately clear what circumstances led to the breach. Richman said LexisNexis received a report on April 1, 2025 "from an unknown third party claiming to have accessed certain information." The company would not say if it had received a ransom demand from the hacker.

Slashdot Top Deals