Google

'Read the Manual': Misconfigured Google Analytics Led to a Data Breach Affecting 4.7M (csoonline.com) 16

Slashdot reader itwbennett writes: Personal health information on 4.7 million Blue Shield of California subscribers was unintentionally shared from Google Analytics to Google Ads between April 2021 and January 2025 due to a misconfiguration. Security consultant and SANS Institute instructor Brandon Evans points to two lessons to take from this debacle:

- Read the documentation of any third-party service you sign up for to understand its security and privacy controls;
- Know what data is being collected from your organization, and what you don't want shared.

"If there is a concern by the organization that Google Ads would use this information, they should really consider whether or not they should be using a platform like Google Analytics in the first place," Evans says in the article. "Because from a technical perspective, there is nothing stopping Google from sharing the information across its platform...

"Google definitely gives you a great bunch of controls, but technically speaking, that data is within the walls of that organization, and it's impossible to know from the outside how that data is being used."

Google

What Happens When You Pay People Not to Use Google Search? (yahoo.com) 51

"A group of researchers says it has identified a hidden reason we use Google for nearly all web searches," reports the Washington Post. "We've never given other options a real shot." Their research experiment suggests that Google is overwhelmingly popular partly because we believe it's the best, whether that's true or not. It's like a preference for your favorite soda. And their research suggested that our mass devotion to googling can be altered with habit-changing techniques, including by bribing people to try search alternatives to see what they are like...

[A] group of academics — from Stanford University, the University of Pennsylvania and MIT — designed a novel experiment to try to figure out what might shake up Google's popularity. They recruited nearly 2,500 participants and remotely monitored their web searches on computers for months. The core of the experiment was paying some participants — most received $10 — to use Bing rather than Google for two weeks. After that period, the money stopped, and the participants had to pick either Bing or Google. The vast majority in the group of people who were paid to use Bing for 14 days chose to go back to Google once the payments stopped, suggesting a strong preference for Google even after trying an alternative. But a healthy number in that group — about 22 percent — chose Bing and were still using it many weeks later.

"I realized Bing was not as bad as I thought it was...." one study participant said — which an assistant professor in business economics and public policy at the University of Pennsylvania says is a nice summation of the study's findings.

"The researchers did not test other search engines," the article notes. But it also points out that more importantly: the research caught the attention of some government officials: Colorado Attorney General Phil Weiser (D), who is leading the group of states that sued Google alongside the Justice Department, said the research helped inspire a demand by the states to fix Google's search monopoly. They asked a judge to require Google to bankroll a consumer information campaign about web search alternatives, including "short-term incentive payments."
On the basis of that, the article suggests "you could soon be paid to try Microsoft Bing or another alternative."

And in the meantime, the reporter writes, "I encourage you to join me in a two-week (unpaid) experiment mirroring the research: Change your standard search engine to something other than Google and see whether you like it. (And drop me a line to let me know how it went.) I'm going with DuckDuckGo, a privacy-focused web search engine that uses Bing's technology."
Privacy

Employee Monitoring App Leaks 21 Million Screenshots In Real Time (cybernews.com) 31

An anonymous reader quotes a report from Cybernews: Researchers at Cybernews have uncovered a major privacy breach involving WorkComposer, a workplace surveillance app used by over 200,000 people across countless companies. The app, designed to track productivity by logging activity and snapping regular screenshots of employees' screens, left over 21 million images exposed in an unsecured Amazon S3 bucket, broadcasting how workers go about their day frame by frame. The leaked data is extremely sensitive, as millions of screenshots from employees' devices could not only expose full-screen captures of emails, internal chats, and confidential business documents, but also contain login pages, credentials, API keys, and other sensitive information that could be exploited to attack businesses worldwide. After Cybernews contacted the company, access to the exposed bucket was secured, but WorkComposer has yet to issue an official comment.
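The root cause, as described, is simply object storage reachable without authentication. As a rough illustration of the fix on the AWS side, the sketch below uses the AWS SDK for JavaScript v3 to switch on S3's "Block Public Access" settings for a bucket and read them back; the bucket name is a placeholder and this is a generic hardening step, not WorkComposer's actual configuration.

```typescript
import {
  S3Client,
  PutPublicAccessBlockCommand,
  GetPublicAccessBlockCommand,
} from "@aws-sdk/client-s3";

// Hypothetical bucket name, used purely for illustration.
const BUCKET = "example-screenshot-archive";

const s3 = new S3Client({ region: "us-east-1" });

async function lockDownBucket(): Promise<void> {
  // Turn on all four "Block Public Access" switches for the bucket, which
  // stops public ACLs and public bucket policies from exposing objects
  // to unauthenticated readers.
  await s3.send(
    new PutPublicAccessBlockCommand({
      Bucket: BUCKET,
      PublicAccessBlockConfiguration: {
        BlockPublicAcls: true,
        IgnorePublicAcls: true,
        BlockPublicPolicy: true,
        RestrictPublicBuckets: true,
      },
    })
  );

  // Read the configuration back so a deployment script can fail loudly
  // if any of the switches is still off.
  const { PublicAccessBlockConfiguration } = await s3.send(
    new GetPublicAccessBlockCommand({ Bucket: BUCKET })
  );
  console.log("Public access block now:", PublicAccessBlockConfiguration);
}

lockDownBucket().catch((err) => {
  console.error("Failed to lock down bucket:", err);
  process.exit(1);
});
```

Enforcing something like this in a deployment script (or at the account level) keeps a later policy or ACL change from quietly re-exposing the bucket.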
AI

South Korea Says DeepSeek Transferred User Data, Prompts Without Consent (reuters.com) 9

South Korea's data protection authority said on Thursday that Chinese artificial intelligence startup DeepSeek transferred user information and prompts without permission when the service was still available for download in the country's app market. From a report: The Personal Information Protection Commission said in a statement that Hangzhou DeepSeek Artificial Intelligence Co Ltd did not obtain user consent while transferring personal information to a number of companies in China and the United States at the time of its South Korean launch in January.
Privacy

WhatsApp Blocks People From Exporting Your Entire Chat History (theverge.com) 14

WhatsApp is rolling out a new "Advanced Chat Privacy" feature that blocks others from exporting chat histories or automatically downloading media. While it doesn't stop screenshots or manual downloads, it marks the first step in WhatsApp's plan to enhance in-chat privacy protections. The Verge reports: By default, WhatsApp saves photos and videos in a chat to your phone's local storage. It also lets you and your recipients export chats (with or without media) to your messages, email, or notes app. The Advanced Chat Privacy setting will prevent this in group and individual chats. [...] WhatsApp says this is its "first version" of the feature, and that it plans to add more protections down the line.

"We think this feature is best used when talking with groups where you may not know everyone closely but are nevertheless sensitive in nature," WhatsApp says in its announcement. WABetaInfo first spotted this feature earlier this month, and now it's rolling out to the latest version of the app. You can turn on the setting by tapping the name of your chat and selecting Advanced Chat Privacy.

The Courts

Shopify Must Face Data Privacy Lawsuit In US (reuters.com) 42

An anonymous reader quotes a report from Reuters: A U.S. appeals court on Monday revived a proposed data privacy class action against Shopify, a decision that could make it easier for American courts to assert jurisdiction over internet-based platforms. In a 10-1 decision, the 9th U.S. Circuit Court of Appeals in San Francisco said the Canadian e-commerce company can be sued in California for collecting personal identifying data from people who make purchases on websites of retailers from that state.

Brandon Briskin, a California resident, said Shopify installed tracking software known as cookies on his iPhone without his consent when he bought athletic wear from the retailer I Am Becoming, and used his data to create a profile it could sell to other merchants. Shopify said it should not be sued in California because it operates nationwide and did not aim its conduct toward that state. The Ottawa-based company said Briskin could sue in Delaware, New York or Canada. A lower court judge and a three-judge 9th Circuit panel had agreed the case should be dismissed, but the full appeals court said Shopify "expressly aimed" its conduct toward California.

"Shopify deliberately reached out ... by knowingly installing tracking software onto unsuspecting Californians' phones so that it could later sell the data it obtained, in a manner that was neither random, isolated, or fortuitous," Circuit Judge Kim McLane Wardlaw wrote for the majority. A spokesman for Shopify said the decision "attacks the basics of how the internet works," and drags entrepreneurs who run online businesses into distant courtrooms regardless of where they operate. Shopify's next legal steps are unclear.

Google

Google Chrome To Continue To Use Third-Party Cookies in Major Reversal (digiday.com) 27

An anonymous reader shares a report: In a shocking development, Google won't roll out a new standalone prompt for third-party cookies in Chrome. It's a move that amounts to a U-turn on the Chrome team's earlier updated approach to deprecating third-party cookies, announced in July last year, with the latest development bound to cause ructions across the ad tech ecosystem. "We've made the decision to maintain our current approach to offering users third-party cookie choice in Chrome, and will not be rolling out a new standalone prompt for third-party cookies," wrote Anthony Chavez, vp Privacy Sandbox at Google, in a blog post published earlier today (April 22). "Users can continue to choose the best option for themselves in Chrome's Privacy and Security Settings." However, it's not the end of Privacy Sandbox, according to Google, as certain initiatives incubated within the project are set to continue, such as its IP Protection for Chrome Incognito users, which will be rolled out in Q3.
Privacy

Judge Rules Blanket Search of Cell Tower Data Unconstitutional (404media.co) 34

An anonymous reader quotes a report from 404 Media: A judge in Nevada has ruled that "tower dumps" -- the law enforcement practice of grabbing vast troves of private personal data from cell towers -- are unconstitutional. The judge also ruled that the cops could, this one time, still use the evidence they obtained through this unconstitutional search. Cell towers record the location of phones near them about every seven seconds. When the cops request a tower dump, they ask a telecom for the numbers and personal information of every single phone connected to a tower during a set time period. Depending on the area, these tower dumps can return tens of thousands of numbers. Cops have been able to sift through this data to solve crimes. But tower dumps are also a massive privacy violation that flies in the face of the Fourth Amendment, which protects people from unlawful search and seizure. When the cops get a tower dump, they're not just searching and seizing the data of a suspected criminal, they're sifting through the information of everyone who was in the location. The ruling stems from a court case involving Cory Spurlock, a Nevada man charged with drug offenses and a murder-for-hire plot. He was implicated through a cellphone tower dump that law enforcement used to place his device near the scenes of the alleged crimes.

A federal judge ruled that the tower dump constituted an unconstitutional general search under the Fourth Amendment but declined to suppress the evidence, citing officers' good faith in obtaining a warrant. It marks the first time a court in the Ninth Circuit has ruled on the constitutionality of tower dumps, which in Spurlock's case captured location data from over 1,600 users -- many of whom had no way to opt out.
Privacy

ChatGPT Models Are Surprisingly Good At Geoguessing (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: There's a somewhat concerning new trend going viral: People are using ChatGPT to figure out the location shown in pictures. This week, OpenAI released its newest AI models, o3 and o4-mini, both of which can uniquely "reason" through uploaded images. In practice, the models can crop, rotate, and zoom in on photos -- even blurry and distorted ones -- to thoroughly analyze them. These image-analyzing capabilities, paired with the models' ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.

In many cases, the models don't appear to be drawing on "memories" of past ChatGPT conversations or on EXIF data, the metadata attached to photos that reveals details such as where the photo was taken. X is filled with examples of users giving ChatGPT restaurant menus, neighborhood snaps, facades, and self-portraits, and instructing o3 to imagine it's playing "GeoGuessr," an online game that challenges players to guess locations from Google Street View images. It's an obvious potential privacy issue. There's nothing preventing a bad actor from screenshotting, say, a person's Instagram Story and using ChatGPT to try to doxx them.

Facebook

Meta Blocks Apple Intelligence in iOS Apps (9to5mac.com) 22

Meta has disabled Apple Intelligence features across its iOS applications, including Facebook, WhatsApp, and Threads, according to Brazilian tech blog Sorcererhat Tech. The block affects Writing Tools, which enable text creation and editing via Apple's AI, as well as Genmoji generation. Users cannot access these features via the standard text field interface in Meta apps. Instagram Stories have also lost previously available keyboard stickers and Memoji functionality.

While Meta hasn't explained the decision, it likely aims to drive users toward Meta AI, its own artificial intelligence service that offers similar text and image generation capabilities. The move follows failed negotiations between Apple and Meta regarding Llama integration into Apple Intelligence, which reportedly collapsed over privacy disagreements. The companies also maintain ongoing disputes regarding App Store policies.
AI

Apple To Analyze User Data on Devices To Bolster AI Technology 38

Apple will begin analyzing data on customers' devices in a bid to improve its AI platform, a move designed to safeguard user information while still helping it catch up with AI rivals. From a report: Today, Apple typically trains AI models using synthetic data -- information that's meant to mimic real-world inputs without any personal details. But that synthetic information isn't always representative of actual customer data, making it harder for its AI systems to work properly.

The new approach will address that problem while ensuring that user data remains on customers' devices and isn't directly used to train AI models. The idea is to help Apple catch up with competitors such as OpenAI and Alphabet, which have fewer privacy restrictions. The technology works like this: It takes the synthetic data that Apple has created and compares it to a recent sample of user emails within the iPhone, iPad and Mac email app. By using actual emails to check the fake inputs, Apple can then determine which items within its synthetic dataset are most in line with real-world messages.
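Apple hasn't published code for this, but the matching step the report describes -- scoring synthetic messages against a small on-device sample and keeping only an aggregate signal about which synthetic items fit best -- can be sketched in toy form. Everything below (the hashing "embedding," the vote counting, the function names) is an assumption made for illustration, not Apple's implementation.

```typescript
// Toy sketch of on-device matching between synthetic messages and a local
// sample, under the assumptions stated above. Not Apple's implementation.

// Hypothetical embedding: hash words into a small fixed-size vector.
function embed(text: string, dims = 64): number[] {
  const v = new Array<number>(dims).fill(0);
  for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    let h = 0;
    for (const ch of word) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
    v[h % dims] += 1;
  }
  return v;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// For each local email, vote for the synthetic message it most resembles.
// Only the vote counts over the synthetic set would ever leave the device;
// the emails themselves stay local.
function voteForSynthetic(syntheticSet: string[], localEmails: string[]): number[] {
  const syntheticVecs = syntheticSet.map((s) => embed(s));
  const votes = new Array<number>(syntheticSet.length).fill(0);
  for (const email of localEmails) {
    const e = embed(email);
    let best = 0;
    let bestScore = -Infinity;
    syntheticVecs.forEach((sv, i) => {
      const score = cosine(e, sv);
      if (score > bestScore) {
        bestScore = score;
        best = i;
      }
    });
    votes[best] += 1;
  }
  return votes;
}
```

The point of this shape is that only the tallies over the synthetic set need to be reported; the real messages never leave the device, which matches the report's claim that user data isn't directly used to train the models.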
Chrome

Chrome To Patch Decades-Old 'Browser History Sniffing' Flaw That Let Sites Peek At Your History (theregister.com) 34

Slashdot reader king*jojo shared this article from The Register: A 23-year-old side-channel attack for spying on people's web browsing histories will get shut down in the forthcoming Chrome 136, released last Thursday to the Chrome beta channel. At least that's the hope.

The privacy attack, referred to as browser history sniffing, involves reading the color values of web links on a page to see if the linked pages have been visited previously... Web publishers and third parties capable of running scripts have used this technique to present links on a web page to a visitor and then check how the visitor's browser set the color for those links on the rendered web page... The attack was mitigated about 15 years ago, though not effectively. Other ways to check link color information beyond the getComputedStyle method were developed... Chrome 136, due to see stable channel release on April 23, 2025, "is the first major browser to render these attacks obsolete," explained Google software engineer Kyra Seevers in a blog post.
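For readers who haven't seen it, the original form of the attack needed only a few lines of script: create links to pages of interest and read back the style the browser applied to them. The sketch below shows that classic getComputedStyle probe; on current browsers the mitigation added around 2010 makes getComputedStyle report the unvisited color regardless of history, which is why later variants moved to the other side channels Chrome 136 now closes off.

```typescript
// Minimal sketch of classic :visited sniffing. On modern browsers this
// returns the *unvisited* color regardless of history, so it no longer
// works as written; it only illustrates the original technique.
function probeVisited(urls: string[]): Record<string, boolean> {
  const results: Record<string, boolean> = {};

  // Baseline: the color the browser reports for a link it cannot have visited.
  const baseline = document.createElement("a");
  baseline.href = "https://example.invalid/never-visited";
  document.body.appendChild(baseline);
  const unvisitedColor = getComputedStyle(baseline).color;
  baseline.remove();

  for (const url of urls) {
    const a = document.createElement("a");
    a.href = url;
    document.body.appendChild(a);
    // If the reported color differs from the unvisited baseline, the page
    // was (on pre-mitigation browsers) present in the user's history.
    results[url] = getComputedStyle(a).color !== unvisitedColor;
    a.remove();
  }
  return results;
}
```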

This is something of a turnabout for the Chrome team, which twice marked Chromium bug reports for the issue as "won't fix." David Baron, presently a Google software engineer who worked for Mozilla at the time, filed a Firefox bug report about the issue back on May 28, 2002... On March 9, 2010, Baron published a blog post outlining the issue and proposing some mitigations...

AI

AI Industry Tells US Congress: 'We Need Energy' (msn.com) 98

The Washington Post reports: The United States urgently needs more energy to fuel an artificial intelligence race with China that the country can't afford to lose, industry leaders told lawmakers at a House hearing on Wednesday. "We need energy in all forms," said Eric Schmidt, former CEO of Google, who now leads the Special Competitive Studies Project, a think tank focused on technology and security. "Renewable, nonrenewable, whatever. It needs to be there, and it needs to be there quickly." It was a nearly unanimous sentiment at the four-hour-plus hearing of the House Energy and Commerce Committee, which revealed bipartisan support for ramping up U.S. energy production to meet skyrocketing demand for energy-thirsty AI data centers.

The hearing showed how the country's AI policy priorities have changed under President Donald Trump. President Joe Biden's wide-ranging 2023 executive order on AI had sought to balance the technology's potential rewards with the risks it poses to workers, civil rights and national security. Trump rescinded that order within days of taking office, saying its "onerous" requirements would "threaten American technological leadership...." [Data center power consumption] is already straining power grids, as residential consumers compete with data centers that can use as much electricity as an entire city. And those energy demands are projected to grow dramatically in the coming years... [Former Google CEO Eric] Schmidt, whom the committee's Republicans called as a witness on Wednesday, told [committee chairman Brett] Guthrie that winning the AI race is too important to let environmental considerations get in the way...

Once the United States beats China to develop superintelligence, Schmidt said, AI will solve the climate crisis. And if it doesn't, he went on, China will become the world's sole superpower. (Schmidt's view that AI will become superintelligent within a decade is controversial among experts, some of whom predict the technology will remain limited by fundamental shortcomings in its ability to plan and reason.)

The industry's wish list also included "light touch" federal regulation, high-skill immigration and continued subsidies for chip development. Alexandr Wang, the young billionaire CEO of San Francisco-based Scale AI, said a growing patchwork of state privacy laws is hampering AI companies' access to the data needed to train their models. He called for a federal privacy law that would preempt state regulations and prioritize innovation.

Some committee Democrats argued that cuts to scientific research and renewable energy will actually hamper America's AI competitiveness, according to the article. "But few questioned the premise that the U.S. is locked in an existential struggle with China for AI supremacy."

"That stark outlook has nearly coalesced into a consensus on Capitol Hill since China's DeepSeek chatbot stunned the AI industry with its reasoning skills earlier this year."
Transportation

Air Travel Set for Biggest Overhaul in 50 Years With UN-Backed Digital Credentials (theguardian.com) 103

The International Civil Aviation Organization plans to eliminate boarding passes and check-ins within three years through a new "digital travel credential" system. Passengers will store passport data on their phones and use facial recognition to move through airports, while airlines will automatically detect arrivals via biometric scanning.

The system will dynamically update "journey passes" for flight changes and delays, potentially streamlining connections. "The last upgrade of great scale was the adoption of e-ticketing in the early 2000s," said Valerie Viale from travel technology company Amadeus, who noted passenger data will be deleted within 15 seconds at each checkpoint to address privacy concerns.
AI

Waymo May Use Interior Camera Data To Train Generative AI Models, Sell Ads (techcrunch.com) 35

An anonymous reader shares a report: Waymo is preparing to use data from its robotaxis, including video from interior cameras tied to rider identities, to train generative AI models, according to an unreleased version of its privacy policy found by researcher Jane Manchun Wong.

The draft language reveals Waymo may also share this data to personalize ads, raising fresh questions about how much of a rider's behavior inside autonomous vehicles could be repurposed for AI training and marketing. The privacy page states: "Waymo may share data to improve and analyze its functionality and to tailor products, services, ads, and offers to your interests. You can opt out of sharing your information with third parties, unless it's necessary to the functioning of the service."

AI

Google's NotebookLM AI Can Now 'Discover Sources' For You 6

Google's NotebookLM has added a new "Discover sources" feature that allows users to describe a topic and have the AI find and curate relevant sources from the web -- eliminating the need to upload documents manually. "When you tap the Discover button in NotebookLM, you can describe the topic you're interested in, and NotebookLM will bring back a curated collection of relevant sources from the web," says Google software engineer Adam Bignell. Click to add those sources to your notebook; "it's a fast and easy way to quickly grasp a new concept or gather essential reading on a topic." PCMag reports: You can still add your files. NotebookLM can ingest PDFs, websites, YouTube videos, audio files, Google Docs, or Google Slides and summarize, transcribe, narrate, or convert into FAQs and study guides. "Discover sources" helps incorporate information you may not have saved. [...] The imported sources stay within the notebook you created. You can read the entire original document, ask questions about it via chat, or apply other NotebookLM features to it.

Google started rolling out both features on Wednesday. It should be available for all users in about "a week or so." For those concerned about privacy, Google says, "NotebookLM does not use your personal data, including your source uploads, queries, and the responses from the model for training."
There's also an "I'm Feeling Curious" button (a reference to its iconic "I'm feeling lucky" search button) that generates sources on a random topic you might find interesting.
Piracy

Massive Expansion of Italy's Piracy Shield Underway (techdirt.com) 21

An anonymous reader quotes a report from Techdirt: Walled Culture has been closely following Italy's poorly designed Piracy Shield system. Back in December we reported how copyright companies used their access to the Piracy Shield system to order Italian Internet service providers (ISPs) to block access to all of Google Drive for the entire country, and how malicious actors could similarly use that unchecked power to shut down critical national infrastructure. Since then, the Computer & Communications Industry Association (CCIA), an international, not-for-profit association representing computer, communications, and Internet industry firms, has added its voice to the chorus of disapproval. In a letter (PDF) to the European Commission, it warned about the dangers of the Piracy Shield system to the EU economy [...]. It also raised an important new issue: the fact that Italy brought in this extreme legislation without notifying the European Commission under the so-called "TRIS" procedure, which allows others to comment on possible problems [...].

As well as Italy's failure to notify the Commission about its new legislation in advance, the CCIA believes that this anti-piracy mechanism is in breach of several other EU laws. That includes the Open Internet Regulation, which prohibits ISPs from blocking or slowing internet traffic unless required by a legal order. Blocking carried out under the Piracy Shield also contradicts the Digital Services Act (DSA) in several respects, notably Article 9, which requires certain elements to be included in orders to act against illegal content. More broadly, the Piracy Shield is aligned with neither the Charter of Fundamental Rights nor the Treaty on the Functioning of the EU -- as it hinders freedom of expression, freedom to provide internet services, the principle of proportionality, and the right to an effective remedy and a fair trial.

Far from taking these criticisms to heart, or acknowledging that Piracy Shield has failed to convert people to paying subscribers, the Italian government has decided to double down, and to make Piracy Shield even worse. Massimiliano Capitanio, Commissioner at AGCOM, the Italian Authority for Communications Guarantees, explained on LinkedIn how Piracy Shield was being extended in far-reaching ways (translation by Google Translate, original in Italian). [...] That is, Piracy Shield will apply to live content far beyond sports events, its original justification, and to streaming services. Even DNS and VPN providers will be required to block sites, a serious technical interference in the way the Internet operates, and a threat to people's privacy. Search engines, too, will be forced to de-index material. The only minor concession to ISPs is to unblock domain names and IP addresses that are no longer allegedly being used to disseminate unauthorized material. There are, of course, no concessions to ordinary Internet users affected by Piracy Shield blunders.
In the future, Italy's Piracy Shield will add:
- 30-minute blackout orders not only for pirate sports events, but also for other live content;
- the extension of blackout orders to VPNs and public DNS providers;
- the obligation for search engines to de-index pirate sites;
- the procedures for unblocking domain names and IP addresses obscured by Piracy Shield that are no longer used to spread pirate content;
- the new procedure to combat piracy of linear and "on demand" television, for example to protect films (#film) and TV series (#serietv).
AI

Anthropic Launches an AI Chatbot Plan For Colleges and Universities (techcrunch.com) 9

An anonymous reader quotes a report from TechCrunch: Anthropic announced on Wednesday that it's launching a new Claude for Education tier, an answer to OpenAI's ChatGPT Edu plan. The new tier is aimed at higher education, and gives students, faculty, and other staff access to Anthropic's AI chatbot, Claude, with a few additional capabilities. One piece of Claude for Education is "Learning Mode," a new feature within Claude Projects to help students develop their own critical thinking skills, rather than simply obtain answers to questions. With Learning Mode enabled, Claude will ask questions to test understanding, highlight fundamental principles behind specific problems, and provide potentially useful templates for research papers, outlines, and study guides.

Anthropic says Claude for Education comes with its standard chat interface, as well as "enterprise-grade" security and privacy controls. In a press release shared with TechCrunch ahead of launch, Anthropic said university administrators can use Claude to analyze enrollment trends and automate repetitive email responses to common inquiries. Meanwhile, students can use Claude for Education in their studies, the company suggested, such as working through calculus problems with step-by-step guidance from the AI chatbot. To help universities integrate Claude into their systems, Anthropic says it's partnering with the company Instructure, which offers the popular education software platform Canvas. The AI startup is also teaming up with Internet2, a nonprofit organization that delivers cloud solutions for colleges.

Anthropic says that it has already struck "full campus agreements" with Northeastern University, the London School of Economics and Political Science, and Champlain College to make Claude for Education available to all students. Northeastern is a design partner -- Anthropic says it's working with the institution's students, faculty, and staff to build best practices for AI integration, AI-powered education tools, and frameworks. Anthropic hopes to strike more of these contracts, in part through new student ambassador and AI "builder" programs, to capitalize on the growing number of students using AI in their studies.

United States

Cybersecurity Professor Faced China Funding Inquiry Before Disappearing (wired.com) 21

The FBI searched two homes of Indiana University Bloomington data privacy professor Xiaofeng Wang last week, following months of university inquiries into whether he received unreported research funding from China, WIRED reported Wednesday.

Wang, who leads the Center for Distributed Confidential Computing established with a $3 million National Science Foundation grant, was terminated on March 28 via email from the university provost. The university had contacted Wang in December regarding a 2017-2018 grant in China that listed him as a researcher, questioning whether he properly disclosed the funding to IU and in applications for U.S. federal research grants.

Jason Covert, Wang's attorney, said Wang and his wife Nianli Ma, whose employee profile was also removed, are "safe" and neither has been arrested. The couple's legal team has viewed a search warrant but received no affidavit establishing probable cause.
Privacy

Alleged Deel Spy Confesses To Coordinating with Deel CEO Alex Bouaziz (newcomer.co) 8

Newcomer: Keith O'Brien, the man who allegedly spied for Deel while working at Rippling, is apparently clearing his conscience, according to a sworn Irish affidavit. O'Brien says in the affidavit that Deel paid him to spy on Rippling and that he coordinated directly with Deel's CEO, Alex Bouaziz.

For some background, Alex Bouaziz is Deel's CEO and Philippe Bouaziz is his father, Deel's CFO. Rippling, which competes directly with Deel, has sued Deel over the alleged spying.
O'Brien says in the affidavit: I decided to cooperate after I got a text from a friend on March 25, 2025 saying, "the truth will set you free." I was also driving with a family member to meet my solicitors and she told me that if I had done something wrong that I should "just tell the truth." I was having bad thoughts at the time; it was a horrible time for me. I was getting sick concealing this lie. I realised that I was harming myself and my family to protect Deel. I was concerned, and I am still concerned, about how wealthy and powerful Alex and Philippe are, but I know that what I was doing was wrong. After I spoke with my solicitors at Fenecas Law, I started to feel a sense of relief. I want to do what I can to start making amends and righting these wrongs. Deel's CEO allegedly agreed to pay O'Brien 5,000 euros a month.
