Security

Mysterious Database of 184 Million Records Exposes Vast Array of Login Credentials (wired.com) 15

A security researcher has discovered an exposed database containing 184 million login credentials for major services including Apple, Facebook, and Google accounts, along with credentials linked to government agencies across 29 countries. Jeremiah Fowler found the 47-gigabyte trove in early May, but the database contained no identifying information about its owner or origins.

The records included plaintext passwords and usernames for accounts spanning Netflix, PayPal, Discord, and other major platforms. A sample analysis revealed 220 email addresses with government domains from countries including the United States, China, and Israel. Fowler told Wired he suspects the data was compiled by cybercriminals using infostealer malware. World Host Group, which hosted the database, shut down access after Fowler's report and described it as content uploaded by a "fraudulent user." The company said it would cooperate with law enforcement authorities.
Privacy

Texas Adopts Online Child-Safety Bill Opposed by Apple's CEO (msn.com) 89

Texas Governor Greg Abbott signed an online child safety bill, bucking a lobbying push from big tech companies that included a personal phone call from Apple CEO Tim Cook. From a report: The measure requires app stores to verify users' ages and secure parental approval before minors can download most apps or make in-app purchases. The bill drew fire from app store operators such as Google and Apple, which have argued that the legislation threatens the privacy of all users.

The bill was a big enough priority for Apple that Cook called Abbott to emphasize the company's opposition to it, said a person familiar with their discussion, which was first reported by the Wall Street Journal.

Privacy

Adidas Warns of Data Breach After Customer Service Provider Hack (bleepingcomputer.com) 10

German sportswear giant Adidas disclosed a data breach after attackers hacked a customer service provider and stole some customers' data. From a report: "adidas recently became aware that an unauthorized external party obtained certain consumer data through a third-party customer service provider," the company said. "We immediately took steps to contain the incident and launched a comprehensive investigation, collaborating with leading information security experts."

Adidas added that the stolen information did not include the affected customers' payment-related information or passwords, as the threat actors behind the breach only gained access to contact information. The company has also notified the relevant authorities regarding this security incident and will alert those affected by the data breach.

Government

Does the World Need Publicly-Owned Social Networks? (elpais.com) 122

"Do we need publicly-owned social networks to escape Silicon Valley?" asks an opinion piece in Spain's El Pais newspaper.

It argues it's necessary because social media platforms "have consolidated themselves as quasi-monopolies, with a business model that consists of violating our privacy in search of data to sell ads..." Among the proposals and alternatives to these platforms, the idea of public social media networks has often been mentioned. Imagine, for example, a Twitter for the European Union, or a Facebook managed by media outlets like the BBC. In February, Spanish Prime Minister Pedro Sánchez called for "the development of our own browsers, European public and private social networks and messaging services that use transparent protocols." Former Spanish prime minister José Luis Rodríguez Zapatero — who governed from 2004 until 2011 — and the left-wing Sumar bloc in the Spanish Parliament have also proposed this. And, back in 2021, former British Labour Party leader Jeremy Corbyn made a similar suggestion.

At first glance, this may seem like a good idea: a public platform wouldn't require algorithms — which are designed to stimulate addiction and confrontation — nor would it have to collect private information to sell ads. Such a platform could even facilitate public conversations, as pointed out by James Muldoon, a professor at Essex Business School and author of Platform Socialism: How to Reclaim our Digital Future from Big Tech (2022)... This could be an alternative that would contribute to platform pluralism and ensure we're not dependent on a handful of billionaires. This is especially important at a time when we're increasingly aware that technology isn't neutral and that private platforms respond to both economic and political interests.

There are other possibilities. Further down they write that "it makes much more sense for the state to invest in, or collaborate with, decentralized social media networks based on free and interoperable software" that "allow for the portability of information and content." They even spoke to Cory Doctorow, who they say "proposes that the state cooperate with the software systems, developers, or servers for existing open-source platforms, such as the U.S. network Bluesky or the German firm Mastodon." (Doctorow adds that reclaiming digital independence "is incredibly important, it's incredibly difficult, and it's incredibly urgent.")

The article also acknowledges the option of "legislative initiatives — such as antitrust laws, or even stricter regulations than those imposed in Europe — that limit or prevent surveillance capitalism." (Though they also cite figures showing U.S. tech giants have one of the largest lobbying groups in the EU, with Meta being the top spender...)
Privacy

Ask Slashdot: Do We Need Opt-Out-By-Default Privacy Laws? 92

"By and large, companies have failed to self-regulate," writes long-time Slashdot reader BrendaEM: They have not respected the individual's right to privacy. In software and web interfaces, companies have buried their privacy settings so deep that they cannot be found in a reasonable amount of time, or an unreasonable number of steps are needed to attempt to retain data. These companies have taken away the individual's right to privacy -- by default.

Are laws needed that protect a person's privacy by default -- unless specific steps are taken by that user or purchaser to relinquish it? Should the required notice be written so briefly and plainly that it explains what privacy is being forfeited and where that data might go? Should a company selling a product be required to state, before purchase, which rights must be given up to use it? And should a legal owner who purchased a product expect it to stop functioning merely because a newer user contract is not agreed to?

Share your own thoughts and experiences in the comments. What's your ideal privacy policy?

And do we need opt-out-by-default privacy laws?
Privacy

Destructive Malware Available In NPM Repo Went Unnoticed For 2 Years (arstechnica.com) 6

An anonymous reader quotes a report from Ars Technica: Researchers have found malicious software that received more than 6,000 downloads from the NPM repository over a two-year span, in yet another discovery showing the hidden threats users of such open source archives face. Eight packages using names that closely mimicked those of widely used legitimate packages contained destructive payloads designed to corrupt or delete important data and crash systems, Kush Pandya, a researcher at security firm Socket, reported Thursday. The packages were available for download for more than two years and accrued roughly 6,200 downloads over that time.

"What makes this campaign particularly concerning is the diversity of attack vectors -- from subtle data corruption to aggressive system shutdowns and file deletion," Pandya wrote. "The packages were designed to target different parts of the JavaScript ecosystem with varied tactics." [...] Some of the payloads were limited to detonate only on specific dates in 2023, but in some cases a phase that was scheduled to begin in July of that year was given no termination date. Pandya said that means the threat remains persistent, although in an email he also wrote: "Since all activation dates have passed (June 2023-August 2024), any developer following normal package usage today would immediately trigger destructive payloads including system shutdowns, file deletion, and JavaScript prototype corruption."
The list of malicious packages included js-bomb, js-hood, vite-plugin-bomb-extend, vite-plugin-bomb, vite-plugin-react-extend, vite-plugin-vue-extend, vue-plugin-bomb, and quill-image-downloader.
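None of the packages listed above belongs in any project's manifest. A quick sanity check is to compare a project's declared dependencies against the names from Socket's report; the sketch below is illustrative only (the function name and manifest-parsing details are mine, not part of Socket's tooling, and a real audit should also cover lockfiles and transitive dependencies):

```python
import json
from pathlib import Path

# The eight known-bad package names from Socket's report.
MALICIOUS_PACKAGES = {
    "js-bomb", "js-hood", "vite-plugin-bomb-extend", "vite-plugin-bomb",
    "vite-plugin-react-extend", "vite-plugin-vue-extend",
    "vue-plugin-bomb", "quill-image-downloader",
}

def find_malicious(package_json_path):
    """Return any direct dependency names that match the known-bad list."""
    manifest = json.loads(Path(package_json_path).read_text())
    deps = {}
    for section in ("dependencies", "devDependencies"):
        deps.update(manifest.get(section, {}))
    return sorted(set(deps) & MALICIOUS_PACKAGES)
```

Because the campaign relied on typosquatted names rather than compromised legitimate packages, a simple name match like this catches it; it would not catch a malicious update pushed to a trusted package.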
Privacy

Russia To Enforce Location Tracking App On All Foreigners in Moscow (bleepingcomputer.com) 81

The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region. From a report: The new proposal was announced by the chairman of the State Duma, Vyacheslav Volodin, who presented it as a measure to tackle migrant crimes. "The adopted mechanism will allow, using modern technologies, to strengthen control in the field of migration and will also contribute to reducing the number of violations and crimes in this area," stated Volodin.

Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information: residence location, fingerprints, a facial photograph, and real-time geolocation monitoring.

Privacy

Signal Deploys DRM To Block Microsoft Recall's Invasive Screenshot Collection (betanews.com) 69

BrianFagioli writes: Signal has officially had enough, folks. You see, the privacy-first messaging app is going on the offensive, declaring war on Microsoft's invasive Recall feature by enabling a new "Screen security" setting by default on Windows 11. This move is designed to block Microsoft's AI-powered screenshot tool from capturing your private chats.

If you aren't aware, Recall was first unveiled a year ago as part of Microsoft's Copilot+ PC push. The feature quietly took screenshots of everything happening on your computer, every few seconds, storing them in a searchable timeline. Microsoft claimed it would help users "remember" what they've done. Critics called it creepy. Security experts called it dangerous. The backlash was so fierce that Microsoft pulled the feature before launch.

But now, in a move nobody asked for, Recall is sadly back. And thankfully, Signal isn't waiting around this time. The team has activated a Windows 11-specific DRM flag that completely blacks out Signal's chat window when a screenshot is attempted. If you've ever tried to screen grab a streaming movie, you'll know the result: nothing but black.
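Signal describes the setting only as a DRM flag, but the documented Windows mechanism that produces exactly this black-window behavior is the Win32 call SetWindowDisplayAffinity: a window marked WDA_EXCLUDEFROMCAPTURE is left out of screenshots and screen recordings, including Recall's. Assuming this is the mechanism Signal relies on, here is a minimal sketch of how any Windows application can opt a window out of capture (the helper function name is mine):

```python
import ctypes
import sys

# Documented display-affinity values from winuser.h:
WDA_NONE = 0x00000000                # window contents can be captured normally
WDA_MONITOR = 0x00000001             # captures show a black rectangle instead
WDA_EXCLUDEFROMCAPTURE = 0x00000011  # Windows 10 2004+: window omitted from capture

def exclude_window_from_capture(hwnd):
    """Ask Windows to keep this window out of screenshots and recordings."""
    if sys.platform != "win32":
        raise OSError("SetWindowDisplayAffinity is a Windows-only API")
    ok = ctypes.windll.user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE)
    if not ok:
        raise ctypes.WinError()
```

The same affinity flag is what streaming apps use to black out protected video, which is why a capture of a protected window comes back as nothing but black.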

Google

Denver Detectives Crack Deadly Arson Case Using Teens' Google Search Histories (wired.com) 92

Three teenagers nearly escaped prosecution for a 2020 house fire that killed five people until Denver police discovered a novel investigative technique: requesting Google search histories for specific terms. Kevin Bui, Gavin Seymour, and Dillon Siebert had burned down a house in Green Valley Ranch, mistakenly targeting innocent Senegalese immigrants after Bui used Apple's Find My feature to track his stolen phone to the wrong address.

The August 2020 arson killed a family of five, including a toddler and an infant. For months, detectives Neil Baker and Ernest Sandoval had no viable leads despite security footage showing three masked figures. Traditional methods -- cell tower data, geofence warrants, and hundreds of tips -- yielded nothing concrete. The breakthrough came when another detective suggested Google might have records of anyone searching the address beforehand.

Police obtained a reverse keyword search warrant requesting all users who had searched variations of "5312 Truckee Street" in the 15 days before the fire. Google provided 61 matching devices. Cross-referencing with earlier cell tower data revealed the three suspects, who had collectively searched the address dozens of times, including floor plans on Zillow.
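The cross-referencing step is, at its core, a set intersection: of the 61 devices Google returned for the keyword warrant, only those also placed near the scene by the earlier cell tower data were of interest. A sketch with entirely made-up identifiers (real warrant returns are far richer records, not bare device IDs):

```python
# Hypothetical device IDs from the reverse keyword warrant (61 devices).
keyword_warrant_hits = {f"device-{n:02d}" for n in range(61)}

# Hypothetical device IDs placed near the scene by cell tower records.
cell_tower_devices = {"device-07", "device-23", "device-41", "device-99"}

# Devices that both searched the address and were near the fire.
suspects = keyword_warrant_hits & cell_tower_devices
```

Narrowing 61 candidates this way is exactly why the technique is controversial: the warrant sweeps in everyone who searched the address, and the filtering happens only afterward.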
Security

Most AI Chatbots Easily Tricked Into Giving Dangerous Responses, Study Finds (theguardian.com) 46

An anonymous reader quotes a report from The Guardian: Hacked AI-powered chatbots threaten to make dangerous knowledge readily available by churning out illicit information the programs absorb during training, researchers say. [...] In a report on the threat, the researchers conclude that it is easy to trick most AI-driven chatbots into generating harmful and illegal information, showing that the risk is "immediate, tangible and deeply concerning." "What was once restricted to state actors or organised crime groups may soon be in the hands of anyone with a laptop or even a mobile phone," the authors warn.

The research, led by Prof Lior Rokach and Dr Michael Fire at Ben Gurion University of the Negev in Israel, identified a growing threat from "dark LLMs", AI models that are either deliberately designed without safety controls or modified through jailbreaks. Some are openly advertised online as having "no ethical guardrails" and being willing to assist with illegal activities such as cybercrime and fraud. [...] To demonstrate the problem, the researchers developed a universal jailbreak that compromised multiple leading chatbots, enabling them to answer questions that should normally be refused. Once compromised, the LLMs consistently generated responses to almost any query, the report states.

"It was shocking to see what this system of knowledge consists of," Fire said. Examples included how to hack computer networks or make drugs, and step-by-step instructions for other criminal activities. "What sets this threat apart from previous technological risks is its unprecedented combination of accessibility, scalability and adaptability," Rokach added. The researchers contacted leading providers of LLMs to alert them to the universal jailbreak but said the response was "underwhelming." Several companies failed to respond, while others said jailbreak attacks fell outside the scope of bounty programs, which reward ethical hackers for flagging software vulnerabilities.

Android

Android XR Glasses Get I/O 2025 Demo (9to5google.com) 20

At I/O 2025, Google revealed new details about Android XR glasses, which will integrate with your phone to deliver context-aware support via Gemini AI. 9to5Google reports: Following the December announcement, Google today shared how all Android XR glasses will have a camera, microphones, and speakers, while an "in-lens display" that "privately provides helpful information right when you need it" is described as being "optional." The glasses will "work in tandem with your phone, giving you access to your apps without ever having to reach in your pocket." Gemini can "see and hear what you do" to "understand your context, remember what's important to you and provide information right when you need it." We see it accessing Google Calendar, Maps, Messages, Photos, Tasks, and Translate.

Google is "working with brands and partners to bring this technology to life," specifically Warby Parker and Gentle Monster. "Stylish glasses" are the goal for Android XR since they "can only truly be helpful if you want to wear them all day." Meanwhile, Google is officially "advancing" the Samsung partnership from headsets to Android XR glasses. They are making a software and reference hardware platform "that will enable the ecosystem to make great glasses." Notably, "developers will be able to start building for this platform later this year." On the privacy front, Google is now "gathering feedback on our prototypes with trusted testers."
Further reading: Google's Brin: 'I Made a Lot of Mistakes With Google Glass'
Google

Google's Brin: 'I Made a Lot of Mistakes With Google Glass' 34

Google co-founder Sergey Brin candidly addressed the failure of Google Glass during an unscheduled appearance at Tuesday's Google I/O conference, where the company announced a new smart glasses partnership with Warby Parker. "I definitely feel like I made a lot of mistakes with Google Glass, I'll be honest," Brin said.

He noted several key issues that doomed the $1,500 device launched in 2013, including a conspicuous front-facing camera that sparked privacy concerns. "Now it looks like normal glasses without that thing in front," Brin said of the new design. He also blamed the "technology gap" that existed a decade ago and his own inexperience with supply chains that prevented pricing the original Glass competitively.
Privacy

Coinbase Data Breach Will 'Lead To People Dying,' TechCrunch Founder Says (decrypt.co) 56

An anonymous reader quotes a report from Decrypt: The founder of online news publication TechCrunch has claimed that Coinbase's recent data breach "will lead to people dying," amid a wave of kidnap attempts targeting high-net-worth crypto holders. TechCrunch founder and venture capitalist Michael Arrington added that this should be a point of reflection for regulators to re-think the importance of know-your-customer (KYC), a process that requires users to confirm their identity to a platform. He also called for prison time for executives who fail to "adequately protect" customer information.

"This hack -- which includes home addresses and account balances -- will lead to people dying. It probably has already," he tweeted. "The human cost, denominated in misery, is much larger than the $400 million or so they think it will actually cost the company to reimburse people." [...] He believes that people are in immediate physical danger following the breach, which exposed data including names, addresses, phone numbers, emails, government-ID images, and more.

Arrington believes that in the wake of these attacks, crypto companies that handle user data need to be much more careful than they currently are. "Combining these KYC laws with corporate profit maximization and lax laws on penalties for hacks like these means these issues will continue to happen," he tweeted. "Both governments and corporations need to step up to stop this. As I said, the cost can only be measured in human suffering." Former Coinbase chief technology officer Balaji Srinivasan pushed back on Arrington's position that executives should be punished, arguing that regulators are forcing KYC onto unwilling companies. "When enough people die, the laws may change," Arrington hit back.

Privacy

France Barred Telegram Founder Pavel Durov From Traveling To US 18

French authorities have denied Telegram founder Pavel Durov's request to travel to the U.S. for "negotiations with investment funds." From a report: The Paris prosecutor's office told POLITICO that it rendered its decision on May 12 "on the grounds that such a trip abroad did not appear imperative or justified."

Durov was arrested in August 2024 at a French airport and has been under strict legal control since last September, when he was indicted on six charges related to illicit activity on the messaging app he operates. He is forbidden to leave France without authorization -- which he obtained to travel to Dubai from March 15 to April 7, the prosecutor's office said. Russian-born Durov is a citizen, among other countries, of France and the United Arab Emirates.
Businesses

Regeneron Pharmaceuticals To Buy 23andMe and Its Data For $256 Million (cnbc.com) 22

Regeneron Pharmaceuticals is acquiring most of 23andMe's assets for $256 million. The sale includes 23andMe's Personal Genome Service, Total Health and Research Services business lines. What's not included is 23andMe's telehealth unit, Lemonaid Health, which the company acquired for around $400 million in 2021. It'll be shut down, but all staffers will remain employed. CNBC reports: The deal is still subject to approval by the U.S. Bankruptcy Court for the Eastern District of Missouri. Pending approval, it's expected to close in the third quarter of this year, according to the release. In its bankruptcy proceedings, 23andMe required all bidders to comply with its privacy policies, and a court-appointed, independent "Consumer Privacy Ombudsman" will assess the deal, the companies said.

Several lawmakers and officials, including the Federal Trade Commission, had expressed concerns about the safety of consumers' genetic data through 23andMe's sale process. The privacy ombudsman will present a report on the acquisition to the court by June 10. "We are pleased to have reached a transaction that maximizes the value of the business and enables the mission of 23andMe to live on, while maintaining critical protections around customer privacy, choice and consent with respect to their genetic data," Mark Jensen, 23andMe's board chair, said in a statement.
"At its peak, 23andMe was valued at around $6 billion," notes the report.
AI

How Miami Schools Are Leading 100,000 Students Into the A.I. Future 63

Miami-Dade County Public Schools, the nation's third-largest school district, is now deploying Google's Gemini chatbots to more than 105,000 high school students -- marking the largest U.S. school district AI deployment to date. This represents a dramatic reversal from just two years ago when the district blocked such tools over cheating and misinformation concerns.

The initiative follows President Trump's recent executive order promoting AI integration "in all subject areas" from kindergarten through 12th grade. District officials spent months testing various chatbots for accuracy, privacy, and safety before selecting Google's platform.
Australia

New South Wales Education Department Caught Unaware After Microsoft Teams Began Collecting Students' Biometric Data (theguardian.com) 47

New submitter optical_phiber writes: In March 2025, the New South Wales (NSW) Department of Education discovered that Microsoft Teams had begun collecting students' voice and facial biometric data without their prior knowledge. This occurred after Microsoft enabled a Teams feature called 'voice and face enrollment' by default, which creates biometric profiles to enhance meeting experiences and transcriptions via its CoPilot AI tool.

The NSW department learned of the data collection a month after it began and promptly disabled the feature and deleted the data within 24 hours. However, the department did not disclose how many individuals were affected or whether they were notified. Despite Microsoft's policy of retaining data only while the user is enrolled and deleting it within 90 days of account deletion, privacy experts have raised serious concerns. Rys Farthing of Reset Tech Australia criticized the unnecessary collection of children's data, warning of the long-term risks and calling for stronger protections.

AI

MIT Asks arXiv To Take Down Preprint Paper On AI and Scientific Discovery 19

MIT has formally requested the withdrawal of a preprint paper on AI and scientific discovery due to serious concerns about the integrity and validity of its data and findings. It didn't provide specific details on what it believes is wrong with the paper. From a post: "Earlier this year, the COD conducted a confidential internal review based upon allegations it received regarding certain aspects of this paper. While student privacy laws and MIT policy prohibit the disclosure of the outcome of this review, we are writing to inform you that MIT has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper. Based upon this finding, we also believe that the inclusion of this paper in arXiv may violate arXiv's Code of Conduct.

"Our understanding is that only authors of papers appearing on arXiv can submit withdrawal requests. We have directed the author to submit such a request, but to date, the author has not done so. Therefore, in an effort to clarify the research record, MIT respectfully requests that the paper be marked as withdrawn from arXiv as soon as possible." Preprints, by definition, have not yet undergone peer review. MIT took this step in light of the publication's prominence in the research conversation and because it was a formal step it could take to mitigate the effects of misconduct. The author is no longer at MIT. [...]

"We are making this information public because we are concerned that, even in its non-published form, the paper is having an impact on discussions and projections about the effects of AI on science. Ensuring an accurate research record is important to MIT. We therefore would like to set the record straight and share our view that at this point the findings reported in this paper should not be relied on in academic or public discussions of these topics."
The paper in question, titled "Artificial Intelligence, Scientific Discovery, and Product Innovation" and authored by Aidan Toner-Rodgers, investigated the effects of introducing an AI-driven materials discovery tool to 1,018 scientists in a U.S. R&D lab. The study reported that AI-assisted researchers discovered 44% more materials, filed 39% more patents, and achieved a 17% increase in product innovation. These gains were primarily attributed to AI automating 57% of idea-generation tasks, allowing top-performing scientists to focus on evaluating AI-generated suggestions effectively. However, the benefits were unevenly distributed; lower-performing scientists saw minimal improvements, and 82% of participants reported decreased job satisfaction due to reduced creativity and skill utilization.

The Wall Street Journal reported on MIT's statement.
Facebook

Meta Argues Enshittification Isn't Real (arstechnica.com) 67

An anonymous reader quotes a report from Ars Technica: Meta thinks there's no reason to carry on with its defense after the Federal Trade Commission closed its monopoly case, and the company has moved to end the trial early by claiming that the FTC utterly failed to prove its case. "The FTC has no proof that Meta has monopoly power," Meta's motion for judgment (PDF) filed Thursday said, "and therefore the court should rule in favor of Meta." According to Meta, the FTC failed to show evidence that "the overall quality of Meta's apps has declined" or that the company shows too many ads to users. Meta says that's "fatal" to the FTC's case that the company wielded monopoly power to pursue more ad revenue while degrading user experience over time (an Internet trend known as "enshittification"). And on top of allegedly showing no evidence of "ad load, privacy, integrity, and features" degradation on Meta apps, Meta argued there's no precedent for an antitrust claim rooted in this alleged harm.

"Meta knows of no case finding monopoly power based solely on a claimed degradation in product quality, and the FTC has cited none," Meta argued. Meta has maintained throughout the trial that its users actually like seeing ads. In the company's recent motion, Meta argued that the FTC provided no insights into what "the right number of ads" should be, "let alone" provide proof that "Meta showed more ads" than it would in a competitive market where users could easily switch services if ad load became overwhelming. Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it "does not profit by showing more ads to users who do not click on them," so it only shows more ads to users who click ads.

Meta also insisted that there's "nothing but speculation" showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them. The company claimed that without Meta's resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was "pretty broken and duct-taped" together, making it "vulnerable to spam" before Meta bought it. Rather than enshittification, what Meta did to Instagram could be considered "a consumer-welfare bonanza," Meta argued, while dismissing "smoking gun" emails from Mark Zuckerberg discussing buying Instagram to bury it as "legally irrelevant." Dismissing these as "a few dated emails," Meta argued that "efforts to litigate Mr. Zuckerberg's state of mind before the acquisition in 2012 are pointless."

"What matters is what Meta did," Meta argued, which was pump Instagram with resources that allowed it "to 'thrive' -- adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success." In the case of WhatsApp, Meta argued that nobody thinks WhatsApp had any intention to pivot to social media when the founders testified that their goal was to never add social features, preferring to offer a simple, clean messaging app. And Meta disputed any claim that it feared Google might buy WhatsApp as the basis for creating a Facebook rival, arguing that "the sole Meta witness to (supposedly) learn of Google's acquisition efforts testified that he did not have that worry."
In sum: A ruling in Meta's favor could prevent a breakup of its apps, while a denial would push the trial toward a possible order to divest Instagram and WhatsApp.
United States

Montana Becomes First State To Close the Law Enforcement Data Broker Loophole (eff.org) 31

Montana has enacted SB 282, becoming the first state to prohibit law enforcement from purchasing personal data they would otherwise need a warrant to obtain. The landmark legislation closes what privacy advocates call the "data broker loophole," which previously allowed police to buy geolocation data, electronic communications, and other sensitive information from third-party vendors without judicial oversight.

The new law specifically restricts government access to precise geolocation data, communications content, electronic funds transfers, and "sensitive data" including health status, religious affiliation, and biometric information. Police can still access this information through traditional means: warrants, investigative subpoenas, or device owner consent.
