Security

Dead Google Apps Domains Can Be Compromised By New Owners (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Lots of startups use Google's productivity suite, known as Workspace, to handle email, documents, and other back-office matters. Relatedly, lots of business-minded webapps use Google's OAuth, i.e. "Sign in with Google." It's a low-friction feedback loop -- up until the startup fails, the domain goes up for sale, and somebody forgot to close down all the Google stuff. Dylan Ayrey, of Truffle Security Co., suggests in a report that this problem is more serious than anyone, especially Google, is acknowledging. Many startups make the critical mistake of not properly closing their accounts -- on both Google and other web-based apps -- before letting their domains expire.

Given the number of people working for tech startups (6 million), the failure rate of said startups (90 percent), their usage of Google Workspaces (50 percent, all by Ayrey's numbers), and the speed at which startups tend to fall apart, there are a lot of Google-auth-connected domains up for sale at any time. That would not be an inherent problem, except that, as Ayrey shows, buying a domain allows you to re-activate the Google accounts for former employees if the site's Google account still exists.

With admin access to those accounts, you can get into many of the services they used Google's OAuth to log into, like Slack, ChatGPT, Zoom, and HR systems. Ayrey writes that he bought a defunct startup domain and got access to each of those through Google account sign-ins. He ended up with tax documents, job interview details, and direct messages, among other sensitive materials.
A Google spokesperson said in a statement: "We appreciate Dylan Ayrey's help identifying the risks stemming from customers forgetting to delete third-party SaaS services as part of turning down their operation. As a best practice, we recommend customers properly close out domains following these instructions to make this type of issue impossible. Additionally, we encourage third-party apps to follow best-practices by using the unique account identifiers (sub) to mitigate this risk."
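The "sub" identifier Google mentions is the stable account ID carried in every Google ID token; unlike an email address at a defunct domain, it cannot be recreated by whoever buys that domain later. A minimal sketch of that mitigation (an illustration of the practice, not code from Ayrey's report), assuming the google-auth Python package and a placeholder client ID:

```python
# Key user accounts on the immutable "sub" claim from the Google ID token
# rather than on the email address or hosted domain, which a new owner of an
# expired domain can recreate. CLIENT_ID and user_store are placeholders.
from google.oauth2 import id_token
from google.auth.transport import requests

CLIENT_ID = "your-app-client-id.apps.googleusercontent.com"  # placeholder

def resolve_account(google_id_token: str, user_store: dict):
    claims = id_token.verify_oauth2_token(google_id_token, requests.Request(), CLIENT_ID)
    # "sub" is a stable, never-reused identifier for the underlying Google account.
    # A recreated alice@failed-startup.com account arrives with a different "sub",
    # so it maps to a brand-new user instead of the former employee's data.
    return user_store.get(claims["sub"])  # no fallback lookup by claims["email"]
```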
Privacy

UnitedHealth Hid Its Change Healthcare Data Breach Notice For Months (techcrunch.com) 24

Change Healthcare has hidden its data breach notification webpage from search engines using "noindex" code, TechCrunch found, making it difficult for affected individuals to find information about the massive healthcare data breach that compromised over 100 million people's medical records last year.

The UnitedHealth subsidiary said Tuesday it had "substantially" completed notifying victims of the February 2024 ransomware attack. The cyberattack caused months of healthcare disruptions and marked the largest known U.S. medical data theft.
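"Noindex" is a standard directive a site can serve either as an X-Robots-Tag response header or a robots meta tag; compliant search engines then drop the page from results, which is how a breach notice can stay technically public yet effectively unfindable. A minimal sketch of checking a page for the directive, assuming the Python requests package and a placeholder URL:

```python
# Check whether a page asks search engines not to index it, via either the
# X-Robots-Tag header or a <meta name="robots"> tag. URL is a placeholder.
import re
import requests

def has_noindex(url: str) -> bool:
    resp = requests.get(url, timeout=10)
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        return True
    meta_tags = re.findall(r'<meta[^>]+name=["\']robots["\'][^>]*>', resp.text, re.I)
    return any("noindex" in tag.lower() for tag in meta_tags)

print(has_noindex("https://example.com/breach-notice"))  # placeholder URL
```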
Privacy

PowerSchool Data Breach Victims Say Hackers Stole 'All' Historical Student and Teacher Data (techcrunch.com) 21

An anonymous reader shares a report: U.S. school districts affected by the recent cyberattack on edtech giant PowerSchool have told TechCrunch that hackers accessed "all" of their historical student and teacher data stored in their student information systems. PowerSchool, whose school records software is used to support more than 50 million students across the United States, was hit by an intrusion in December that compromised the company's customer support portal with stolen credentials, allowing access to reams of personal data belonging to students and teachers in K-12 schools.

The attack has not yet been publicly attributed to a specific hacker or group. PowerSchool hasn't said how many of its school customers are affected. However, two sources at affected school districts -- who asked not to be named -- told TechCrunch that the hackers accessed troves of personal data belonging to both current and former students and teachers.
Further reading: Lawsuit Accuses PowerSchool of Selling Student Data To 3rd Parties.
China

US Finalizes Rule To Effectively Ban Chinese Vehicles (theverge.com) 115

An anonymous reader quotes a report from The Verge: The Biden administration finalized a new rule that would effectively ban all Chinese vehicles from the US under the auspices of blocking the "sale or import" of connected vehicle software from "countries of concern." The rule could have wide-ranging effects on big automakers, like Ford and GM, as well as smaller manufacturers like Polestar -- and even companies that don't produce cars, like Waymo. The rule covers everything that connects a vehicle to the outside world, such as Bluetooth, Wi-Fi, cellular, and satellite components. It also addresses concerns that technology like cameras, sensors, and onboard computers could be exploited by foreign adversaries to collect sensitive data about US citizens and infrastructure. And it would ban China from testing its self-driving cars on US soil.

"Cars today have cameras, microphones, GPS tracking, and other technologies connected to the internet," US Secretary of Commerce Gina Raimondo said in a statement. "It doesn't take much imagination to understand how a foreign adversary with access to this information could pose a serious risk to both our national security and the privacy of U.S. citizens. To address these national security concerns, the Commerce Department is taking targeted, proactive steps to keep [People's Republic of China] and Russian-manufactured technologies off American roads." The rules for prohibited software go into effect for model year 2027 vehicles, while the ban on hardware from China waits until model year 2030 vehicles. According to Reuters, the rules were updated from the original proposal to exempt vehicles weighing over 10,000 pounds, which would allow companies like BYD to continue to assemble electric buses in California.
The Biden administration published a fact sheet with more information about this rule.

"[F]oreign adversary involvement in the supply chains of connected vehicles poses a significant threat in most cars on the road today, granting malign actors unfettered access to these connected systems and the data they collect," the White House said. "As PRC automakers aggressively seek to increase their presence in American and global automotive markets, through this final rule, President Biden is delivering on his commitment to secure critical American supply chains and protect our national security."
Transportation

Texas Sues Allstate For Collecting Driver Data To Raise Premiums (gizmodo.com) 62

An anonymous reader quotes a report from Gizmodo: Texas has sued (PDF) one of the nation's largest car insurance providers alleging that it violated the state's privacy laws by surreptitiously collecting detailed location data on millions of drivers and using that information to justify raising insurance premiums. The state's attorney general, Ken Paxton, said the lawsuit against Allstate and its subsidiary Arity is the first enforcement action ever filed by a state attorney general to enforce a data privacy law. It also follows a deceptive business practice lawsuit he filed against General Motors accusing the car manufacturer of misleading customers by collecting and selling driver data.

In 2015, Allstate developed the Arity Driving Engine software development kit (SDK), a package of code that the company allegedly paid mobile app developers to install in their products in order to collect a variety of sensitive data from consumers' phones. The SDK gathered phone geolocation data, accelerometer, and gyroscopic data, details about where phone owners started and ended their trips, and information about "driving behavior," such as whether phone owners appeared to be speeding or driving while distracted, according to the lawsuit. The apps that installed the SDK included GasBuddy, Fuel Rewards, and Life360, a popular family monitoring app, according to the lawsuit.
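Purely as an illustration of the data described in the complaint (all field names invented here, not Arity's actual SDK), a single trip record assembled from those signals might look something like this:

```python
# Hypothetical sketch of a trip record built from the signals the lawsuit
# describes: location fixes, accelerometer/gyroscope samples, trip start and
# end points, and derived "driving behavior" flags.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TripRecord:
    start_point: Tuple[float, float]          # (lat, lon) where the trip began
    end_point: Tuple[float, float]            # (lat, lon) where it ended
    gps_trace: List[Tuple[float, float]] = field(default_factory=list)            # periodic fixes
    accel_samples: List[Tuple[float, float, float]] = field(default_factory=list)
    gyro_samples: List[Tuple[float, float, float]] = field(default_factory=list)
    speeding_events: int = 0                  # derived "bad driving" signals
    hard_brake_events: int = 0
    distracted_driving: bool = False          # e.g. phone handling while moving
```

As the complaint notes, nothing in such a record distinguishes a driver from a bus or taxi passenger carrying the same phone.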

Paxton's complaint said that Allstate and Arity used the data collected by its SDK to develop and sell products to other insurers like Drivesight, an algorithmic model that assigned a driving risk score to individuals, and ArityIQ, which allowed other insurers to "[a]ccess actual driving behavior collected from mobile phones and connected vehicles to use at time of quote to more precisely price nearly any driver." Allstate and Arity marketed the products as providing "driver behavior" data but because the information was collected via mobile phones the companies had no way of determining whether the owner was actually driving, according to the lawsuit. "For example, if a person was a passenger in a bus, a taxi, or in a friend's car, and that vehicle's driver sped, hard braked, or made a sharp turn, Defendants would conclude that the passenger, not the actual driver, engaged in 'bad' driving behavior," the suit states. Neither Allstate and Arity nor the app developers properly informed customers in their privacy policies about what data the SDK was collecting or how it would be used, according to the lawsuit.
The lawsuit accuses Allstate of violating Texas' Data Privacy and Security Act (DPSA) and insurance code, and of failing to address the violations within the required 30-day cure period. "In its complaint, filed in federal court, Texas requested that Allstate be ordered to pay a penalty of $7,500 per violation of the state's data privacy law and $10,000 per violation of the state's insurance code, which would likely amount to millions of dollars given the number of consumers allegedly affected," adds the report.

"The lawsuit also asks the court to make Allstate delete all the data it obtained through actions that allegedly violated the privacy law and to make full restitution to customers harmed by the companies' actions."
The Internet

Double-keyed Browser Caching Is Hitting Web Performance 88

A Google engineer has warned that a major shift in web browser caching is upending long-standing performance optimization practices. Browsers have overhauled their caching systems in a way that forces websites to maintain separate copies of shared resources instead of reusing them across domains.

The new "double-keyed caching" system, implemented to enhance privacy, is ending the era of shared public content delivery networks, writes Google engineer Addy Osmani. According to Chrome's data, the change has led to a 3.6% increase in cache misses and 4% rise in network bandwidth usage.
Encryption

Ransomware Crew Abuses AWS Native Encryption, Sets Data-Destruct Timer for 7 Days (theregister.com) 18

A new ransomware group called Codefinger targets AWS S3 buckets by exploiting compromised or publicly exposed AWS keys to encrypt victims' data using AWS's own SSE-C encryption, rendering it inaccessible without the attacker-generated AES-256 keys. While other security researchers have documented techniques for encrypting S3 buckets, "this is the first instance we know of leveraging AWS's native secure encryption infrastructure via SSE-C in the wild," Tim West, VP of services with the Halcyon RISE Team, told The Register. "Historically AWS Identity IAM keys are leaked and used for data theft but if this approach gains widespread adoption, it could represent a significant systemic risk to organizations relying on AWS S3 for the storage of critical data," he warned. From the report: ... in addition to encrypting the data, Codefinger marks the compromised files for deletion within seven days using the S3 Object Lifecycle Management API -- the criminals themselves do not threaten to leak or sell the data, we're told. "This is unique in that most ransomware operators and affiliate attackers do not engage in straight up data destruction as part of a double extortion scheme or to otherwise put pressure on the victim to pay the ransom demand," West said. "Data destruction represents an additional risk to targeted organizations."

Codefinger also leaves a ransom note in each affected directory that includes the attacker's Bitcoin address and a client ID associated with the encrypted data. "The note warns that changes to account permissions or files will end negotiations," the Halcyon researchers said in a report about S3 bucket attacks shared with The Register. While West declined to name or provide any additional details about the two Codefinger victims -- including if they paid the ransom demands -- he suggests that AWS customers restrict the use of SSE-C.

"This can be achieved by leveraging the Condition element in IAM policies to prevent unauthorized applications of SSE-C on S3 buckets, ensuring that only approved data and users can utilize this feature," he explained. Plus, it's important to monitor and regularly audit AWS keys, as these make very attractive targets for all types of criminals looking to break into companies' cloud environments and steal data. "Permissions should be reviewed frequently to confirm they align with the principle of least privilege, while unused keys should be disabled, and active ones rotated regularly to minimize exposure," West said.
An AWS spokesperson said it notifies affected customers of exposed keys and "quickly takes any necessary actions, such as applying quarantine policies to minimize risks for customers without disrupting their IT environment."

They also directed users to this post about what to do upon noticing unauthorized activity.
AI

Ministers Mull Allowing Private Firms to Make Profit From NHS Data In AI Push 35

UK ministers are considering allowing private companies to profit from anonymized NHS data as part of a push to leverage AI for medical advancements, despite concerns over privacy and ethical risks. The Guardian reports: Keir Starmer on Monday announced a push to open up the government to AI innovation, including allowing companies to use anonymized patient data to develop new treatments, drugs and diagnostic tools. With the prime minister and the chancellor, Rachel Reeves, under pressure over Britain's economic outlook, Starmer said AI could bolster the country's anaemic growth, as he put concerns over privacy, disinformation and discrimination to one side.

"We are in a unique position in this country, because we've got the National Health Service, and the use of that data has already driven forward advances in medicine, and will continue to do so," he told an audience in east London. "We have to see this as a huge opportunity that will impact on the lives of millions of people really profoundly." Starmer added: "It is important that we keep control of that data. I completely accept that challenge, and we will also do so, but I don't think that we should have a defensive stance here that will inhibit the sort of breakthroughs that we need."

The move to embrace the potential of AI rather than its risks comes at a difficult moment for the prime minister, with financial markets having driven UK borrowing costs to a 30-year high and the pound hitting new lows against the dollar. Starmer said on Monday that AI could help give the UK the economic boost it needed, adding that the technology had the potential "to increase productivity hugely, to do things differently, to provide a better economy that works in a different way in the future." Part of that, as detailed in a report by the technology investor Matt Clifford, will be to create new datasets for startups and researchers to train their AI models.

Data from various sources will be included, such as content from the National Archives and the BBC, as well as anonymized NHS records. Officials are working out the details on how those records will be shared, but said on Monday that they would take into account national security and ethical concerns. Starmer's aides say the public sector will keep "control" of the data, but added that could still allow it to be used for commercial purposes.
Facebook

Meta Is Blocking Links to Decentralized Instagram Competitor Pixelfed (404media.co) 53

Meta is deleting links to Pixelfed, a decentralized, open-source Instagram competitor, labeling them as "spam" on Facebook and removing them immediately. 404 Media reports: Pixelfed is an open-source, community-funded and decentralized image sharing platform that runs on ActivityPub, which is the same technology that supports Mastodon and other federated services. Pixelfed.social is the largest Pixelfed server, which was launched in 2018 but has gained renewed attention over the last week. Bluesky user AJ Sadauskas originally posted that links to Pixelfed were being deleted by Meta; 404 Media then also tried to post a link to Pixelfed on Facebook. It was immediately deleted. Pixelfed has seen a surge in user signups in recent days, after Meta announced it is ending fact-checking and removing restrictions on speech across its platforms.

Daniel Supernault, the creator of Pixelfed, published a "declaration of fundamental rights and principles for ethical digital platforms, ensuring privacy, dignity, and fairness in online spaces." The open source charter contains sections titled "right to privacy," "freedom from surveillance," "safeguards against hate speech," "strong protections for vulnerable communities," and "data portability and user agency."

"Pixelfed is a lot of things, but one thing it is not, is an opportunity for VC or others to ruin the vibe. I've turned down VC funding and will not inject advertising of any form into the project," Supernault wrote on Mastodon. "Pixelfed is for the people, period."
Google

Google Wants to Track Your Digital Fingerprints Again (mashable.com) 54

Google is reintroducing "digital fingerprinting" in five weeks, reports Mashable, describing it as "a data collection process that ingests all of your online signals (from IP address to complex browser information) and pinpoints unique users or devices." Or, to put it another way, Google "is tracking your online behavior in the name of advertising."

The UK's Information Commissioner's Office called Google's decision "irresponsible": it is likely to reduce people's choice and control over how their information is collected. The change to Google's policy means that fingerprinting could now replace the functions of third-party cookies... Google itself has previously said that fingerprinting does not meet users' expectations for privacy, as users cannot easily consent to it as they would cookies. This in turn means they cannot control how their information is collected. To quote Google's own position on fingerprinting from 2019: "We think this subverts user choice and is wrong...." When the new policy comes into force on 16 February 2025, organisations using Google's advertising technology will be able to deploy fingerprinting without being in breach of Google's own policies. Given Google's position and scale in the online advertising ecosystem, this is significant.
Their post ends with a warning that those hoping to use fingerprinting for advertising "will need to demonstrate how they are complying with the requirements of data protection law. These include providing users with transparency, securing freely-given consent, ensuring fair processing and upholding information rights such as the right to erasure."

But security and privacy researcher Lukasz Olejnik asks if Google's move is the biggest privacy erosion in 10 years.... Could this mark the end of nearly a decade of progress in internet and web privacy? It would be unfortunate if the newly developing AI economy started from a decrease of privacy and data protection standards. Some analysts or observers might then be inclined to wonder whether this approach to privacy online might signal similar attitudes in other future Google products, like AI... The shift is rather drastic. Where clear restrictions once existed, the new policy removes the prohibition (so allows such uses) and now only requires disclosure... [I]f the ICO's claims about Google sharing IP addresses within the adtech ecosystem are accurate, this represents a significant policy shift with critical implications for privacy, trust, and the integrity of previously proposed Privacy Sandbox initiatives.
Their post includes a disturbing thought. "Reversing the stance on fingerprinting could open the door to further data collection, including to crafting dynamic, generative AI-powered ads tailored with huge precision. Indeed, such applications would require new data..."

Thanks to long-time Slashdot reader sinij for sharing the news.
United States

Should In-Game Currency Receive Federal Government Banking Protections? (yahoo.com) 91

Friday America's consumer watchdog agency "proposed a rule to give virtual video game currencies protections similar to those of real-world bank accounts..." reports the Washington Post, "so players can receive refunds or compensation for unauthorized transactions, similar to how banks are required to respond to claims of fraudulent activity." The Consumer Financial Protection Bureau is seeking public input on a rule interpretation to clarify which rights are protected and available to video game consumers under the Electronic Fund Transfer Act. It would hold video game companies subject to violations of federal consumer financial law if they fail to address financial issues reported by customers. The public comment period lasts from Friday through March 31. In particular, the independent federal agency wants to hear from gamers about the types of transactions they make, any issues with in-game currencies, and stories about how companies helped or denied help.

The effort is in response to complaints to the bureau and the Federal Trade Commission about unauthorized transactions, scams, hacking attempts and account theft, outlined in an April bureau report that covered banking in video games and virtual worlds. The complaints said consumers "received limited recourse from gaming companies." Companies may ban or lock accounts or shut down a service, according to the report, but they don't generally guarantee refunds to people who lost property... The April report says the bureau and FTC received numerous complaints from players who contacted their banks regarding unauthorized charges on Roblox. "These complaints note that while they received refunds through their financial institutions, Roblox then terminated or locked their account," the report says.

Youtube

CES 'Worst In Show' Devices Mocked In IFixit Video - While YouTube Inserts Ads For Them (worstinshowces.com) 55

While CES wraps up this week, "Not all innovation is good innovation," warns Elizabeth Chamberlain, iFixit's Director of Sustainability (heading their Right to Repair advocacy team). So this year the group held its fourth annual "anti-awards ceremony" to call out CES's "least repairable, least private, and least sustainable products..." (iFixit co-founder Kyle Wiens mocked a $2,200 "smart ring" with a battery that only lasts for 500 charges. "Wanna open it up and change the battery? Well you can't! Trying to open it will completely destroy this device...") There's also a category for the worst in security — plus a special award titled "Who asked for this?" — and then a final inglorious prize declaring "the Overall Worst in Show..."

Thursday their "panel of dystopia experts" livestreamed to iFixit's feed of over 1 million subscribers on YouTube, with the video's description warning about manufacturers "hoping to convince us that they have invented the future. But will their vision make our lives better, or lead humanity down a dark and twisted path?" The video "is a fun and rollicking romp that tries to forestall a future clogged with power-hungry AI and data-collecting sensors," writes The New Stack — though noting one final irony.

"While the ceremony criticized these products, YouTube was displaying ads for them..."

UPDATE: Slashdot reached out to iFixit co-founder Kyle Wiens, who says this teaches us all a lesson. "The gadget industry is insidious and has their tentacles everywhere."

"Of course they injected ads into our video. The beast can't stop feeding, and will keep growing until we knife it in the heart."

Long-time Slashdot reader destinyland summarizes the article: "We're seeing more and more of these things that have basically surveillance technology built into them," iFixit's Chamberlain told The Associated Press... Proving this point was EFF executive director Cindy Cohn, who gave a truly impassioned takedown of "smart" infant products that "end up traumatizing new parents with false reports that their baby has stopped breathing." But worst for privacy was the $1,200 "Revol" baby bassinet — equipped with a camera, a microphone, and a radar sensor. The video also mocks Samsung's "AI Home" initiative which lets you answer phone calls with your washing machine, oven, or refrigerator. (And LG's overpowered "smart" refrigerator won the "Overall Worst in Show" award.)

One of the scariest presentations came from Paul Roberts, founder of SecuRepairs, a group advocating both cybersecurity and the right to repair. Roberts notes that about 65% of the routers sold in the U.S. are from a Chinese company named TP-Link — both wifi routers and the wifi/ethernet routers sold for homes and small offices. Roberts reminded viewers that in October, Microsoft reported "thousands" of compromised routers — most of them manufactured by TP-Link — were found working together in a malicious network trying to crack passwords and penetrate "think tanks, government organizations, non-governmental organizations, law firms, defense industrial base, and others" in North America and in Europe. The U.S. Justice Department soon launched an investigation (as did the U.S. Commerce Department) into TP-Link's ties to China's government and military, according to a SecuRepairs blog post.

The reason? "As a China-based company, TP-Link is required by law to disclose flaws it discovers in its software to China's Ministry of Industry and Information Technology before making them public." Inevitably, this creates a window "to exploit the publicly undisclosed flaw... That fact, and the coincidence of TP-Link devices playing a role in state-sponsored hacking campaigns, raises the prospects of the U.S. government declaring a ban on the sale of TP-Link technology at some point in the next year."

TP-Link won the award for the worst in security.

Privacy

Database Tables of Student, Teacher Info Stolen From PowerSchool In Cyberattack (theregister.com) 18

An anonymous reader quotes a report from The Register: A leading education software maker has admitted its IT environment was compromised in a cyberattack, with students and teachers' personal data -- including some Social Security Numbers and medical info -- stolen. PowerSchool says its cloud-based student information system is used by 18,000 customers around the globe, including the US and Canada, to handle grading, attendance records, and personal information of more than 60 million K-12 students and teachers. On December 28 someone managed to get into its systems and access their contents "using a compromised credential," the California-based biz told its clients in an email seen by The Register this week.

[...] "We believe the unauthorized actor extracted two tables within the student information system database," a spokesperson told us. "These tables primarily include contact information with data elements such as name and address information for families and educators. "For a certain subset of the customers, these tables may also include Social Security Number, other personally identifiable information, and limited medical and grade information. "Not all PowerSchool student information system customers were impacted, and we anticipate that only a subset of impacted customers will have notification obligations."
While the company has tightened security measures and offered identity protection services to affected individuals, cybersecurity firm Cyble suggests the intrusion "may have been more serious and gone on much longer than has been publicly acknowledged so far," reports The Register. The cybersecurity vendor says the intrusion could have begun as far back as June 16, 2011, and ended on January 2 of this year.

"Critical systems and applications such as Oracle Netsuite ERP, HR software UltiPro, Zoom, Slack, Jira, GitLab, and sensitive credentials for platforms like Microsoft login, LogMeIn, Windows AD Azure, and BeyondTrust" may have been compromised, too.
Privacy

See the Thousands of Apps Hijacked To Spy On Your Location (404media.co) 49

An anonymous reader quotes a report from 404 Media: Some of the world's most popular apps are likely being co-opted by rogue members of the advertising industry to harvest sensitive location data on a massive scale, with that data ending up with a location data company whose subsidiary has previously sold global location data to US law enforcement. The thousands of apps, included in hacked files from location data company Gravy Analytics, include everything from games like Candy Crush and dating apps like Tinder to pregnancy tracking and religious prayer apps across both Android and iOS. Because much of the collection is occurring through the advertising ecosystem -- not code developed by the app creators themselves -- this data collection is likely happening without users' or even app developers' knowledge.

"For the first time publicly, we seem to have proof that one of the largest data brokers selling to both commercial and government clients appears to be acquiring their data from the online advertising 'bid stream,'" rather than code embedded into the apps themselves, Zach Edwards, senior threat analyst at cybersecurity firm Silent Push and who has followed the location data industry closely, tells 404 Media after reviewing some of the data. The data provides a rare glimpse inside the world of real-time bidding (RTB). Historically, location data firms paid app developers to include bundles of code that collected the location data of their users. Many companies have turned instead to sourcing location information through the advertising ecosystem, where companies bid to place ads inside apps. But a side effect is that data brokers can listen in on that process and harvest the location of peoples' mobile phones.

"This is a nightmare scenario for privacy, because not only does this data breach contain data scraped from the RTB systems, but there's some company out there acting like a global honey badger, doing whatever it pleases with every piece of data that comes its way," Edwards says. Included in the hacked Gravy data are tens of millions of mobile phone coordinates of devices inside the US, Russia, and Europe. Some of those files also reference an app next to each piece of location data. 404 Media extracted the app names and built a list of mentioned apps. The list includes dating sites Tinder and Grindr; massive games such asCandy Crush,Temple Run,Subway Surfers, andHarry Potter: Puzzles & Spells; transit app Moovit; My Period Calendar & Tracker, a period-tracking app with more than 10 million downloads; popular fitness app MyFitnessPal; social network Tumblr; Yahoo's email client; Microsoft's 365 office app; and flight tracker Flightradar24. The list also mentions multiple religious-focused apps such as Muslim prayer and Christian Bible apps, various pregnancy trackers, and many VPN apps, which some users may download, ironically, in an attempt to protect their privacy.
404 Media's full list of apps included in the data can be found here. There are also other lists available from other security researchers.
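To make the bid-stream exposure concrete, here is a simplified, hypothetical OpenRTB-style bid request (field names follow the OpenRTB convention; the values are invented). Every bidder that receives the request sees the device's advertising ID and location fix, whether or not it wins the auction, which is what lets a listening broker harvest movements at scale.

```python
# Simplified, invented example of the data carried in a programmatic ad auction.
bid_request = {
    "id": "auction-123",
    "app": {"bundle": "com.example.prayer.times"},        # the app showing the ad
    "device": {
        "ifa": "f3a1c2d4-0000-0000-0000-placeholder",     # resettable advertising ID
        "ip": "203.0.113.7",
        "geo": {"lat": 47.6062, "lon": -122.3321, "type": 1},  # 1 = GPS/location services
    },
    "user": {"id": "hashed-user-id"},
}

def harvest(request: dict) -> tuple:
    """What a 'listening' participant could log from every request it sees."""
    d = request["device"]
    return (d["ifa"], d["geo"]["lat"], d["geo"]["lon"], request["app"]["bundle"])

print(harvest(bid_request))
```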
The Courts

Google Faces Trial For Collecting Data On Users Who Opted Out (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: A federal judge this week rejected Google's motion to throw out a class-action lawsuit alleging that it invaded the privacy of users who opted out of functionality that records a user's web and app activities. A jury trial is scheduled for August 2025 in US District Court in San Francisco. The lawsuit concerns Google's Web & App Activity (WAA) settings, with the lead plaintiff representing two subclasses of people with Android and non-Android phones who opted out of tracking. "The WAA button is a Google account setting that purports to give users privacy control of Google's data logging of the user's web app and activity, such as a user's searches and activity from other Google services, information associated with the user's activity, and information about the user's location and device," wrote (PDF) US District Judge Richard Seeborg, the chief judge in the Northern District of California.

Google says that Web & App Activity "saves your activity on Google sites and apps, including associated info like location, to give you faster searches, better recommendations, and more personalized experiences in Maps, Search, and other Google services." Google also has a supplemental Web & App Activity setting that the judge's ruling refers to as "(s)WAA." "The (s)WAA button, which can only be switched on if WAA is also switched on, governs information regarding a user's '[Google] Chrome history and activity from sites, apps, and devices that use Google services.' Disabling WAA also disables the (s)WAA button," Seeborg wrote. But data is still sent to third-party app developers through Google Analytics for Firebase (GA4F), "a free analytical tool that takes user data from the Firebase kit and provides app developers with insight on app usage and user engagement," the ruling said. GA4F "is integrated in 60 percent of the top apps" and "works by automatically sending to Google a user's ad interactions and certain identifiers regardless of a user's (s)WAA settings, and Google will, in turn, provide analysis of that data back to the app developer."

Plaintiffs have brought claims of privacy invasion under California law. Plaintiffs "present evidence that their data has economic value," and "a reasonable juror could find that Plaintiffs suffered damage or loss because Google profited from the misappropriation of their data," Seeborg wrote. The lawsuit was filed in July 2020. The judge notes that summary judgment can be granted when "there is no genuine dispute as to any material fact and the movant is entitled to judgment as a matter of law." Google hasn't met that standard, he ruled.
In a statement provided to Ars, Google said that "privacy controls have long been built into our service and the allegations here are a deliberate attempt to mischaracterize the way our products work. We will continue to make our case in court against these patently false claims."
AI

'Omi' Wants To Boost Your Productivity Using AI and a 'Brain Interface' 46

An anonymous reader quotes a report from TechCrunch: San Francisco startup Based Hardware announced during the Consumer Electronics Show in Las Vegas this week the launch of a new AI wearable, Omi, to boost productivity. The device can be worn as a necklace, where Omi's AI assistant can be activated by saying "Hey Omi." The startup also claims Omi can be attached to the side of your head with medical tape, using a "brain interface" to understand when you're talking to it. The startup's founder, Nik Shevchenko, started marketing this device on Kickstarter as "Friend," but changed the device's name after another San Francisco hardware maker launched his own Friend device and bought the domain name for $1.8 million.

Shevchenko, a Thiel fellow with a history of eye-grabbing stunts, is taking a slightly different approach with Omi. Instead of seeing the device as a smartphone replacement or an AI companion, he wants Omi to be a complementary device to your phone that boosts your productivity. The Omi device itself is a small, round orb that looks like it fell out of a pack of Mentos. The consumer version costs $89 and will start shipping in Q2 of 2025. However, you can order a developer version for delivery today for roughly $70. Based Hardware says the Omi device can answer your questions, summarize your conversations, create to-do lists, and help schedule meetings. The device is constantly listening and running your conversations through GPT-4o, and it also can remember the context about each user to offer personalized advice.

In an interview with TechCrunch, Shevchenko says he understands that there may be privacy concerns with a device that's always listening. That's why he built Omi on an open source platform where users can see where their data is going, or choose to store it locally. Omi's open source platform also allows developers to build their own applications or use the AI model of their choice. Shevchenko says developers have already created more than 250 apps on Omi's app store. [...] It's unclear if the "brain interface" of Omi actually works, but the startup is tackling a fairly simple use case to start. Shevchenko wants his device to understand whether a user is talking to Omi or not, without using one of its wake words.
Privacy

Telegram Hands US Authorities Data On Thousands of Users (404media.co) 13

Telegram's Transparency Report reveals a sharp increase in U.S. government data requests, with 900 fulfilled requests affecting 2,253 users. "The news shows a massive spike in the number of data requests fulfilled by Telegram after French authorities arrested Telegram CEO Pavel Durov in August, in part because of the company's unwillingness to provide user data in a child abuse investigation," notes 404 Media. From the report: Between January 1 and September 30, 2024, Telegram fulfilled 14 requests "for IP addresses and/or phone numbers" from the United States, which affected a total of 108 users, according to Telegram's Transparency Reports bot. But for the entire year of 2024, it fulfilled 900 requests from the U.S. affecting a total of 2,253 users, meaning that the number of fulfilled requests skyrocketed between October and December, according to the newly released data. "Fulfilled requests from the United States of America for IP address and/or phone number: 900," Telegram's Transparency Reports bot said when prompted for the latest report by 404 Media. "Affected users: 2253," it added.

A month after Durov's arrest in August, Telegram updated its privacy policy to say that the company will provide user data, including IP addresses and phone numbers, to law enforcement agencies in response to valid legal orders. Up until then, the privacy policy only said it would do so in terror cases, and that such a disclosure had never happened anyway. Even though the data technically covers all of 2024, the jump from a total of 108 affected users in October to 2,253 as of now indicates that the vast majority of fulfilled data requests came in the last quarter of 2024, showing a huge increase in the number of law enforcement requests that Telegram completed.
You can access the platform's transparency reports here.
Security

Hackers Claim Massive Breach of Location Data Giant, Threaten To Leak Data (404media.co) 42

Hackers claim to have compromised Gravy Analytics, the parent company of Venntel, which has sold masses of smartphone location data to the U.S. government. 404 Media: The hackers said they have stolen a massive amount of data, including customer lists, information on the broader industry, and even location data harvested from smartphones which shows people's precise movements, and they are threatening to publish the data publicly.

The news is a crystalizing moment for the location data industry. For years, companies have harvested location information from smartphones, either through ordinary apps or the advertising ecosystem, and then built products based on that data or sold it to others. In many cases, those customers include the U.S. government, with arms of the military, DHS, the IRS, and FBI using it for various purposes. But collecting that data presents an attractive target to hackers.

Privacy

Online Gift Card Store Exposed Hundreds of Thousands of People's Identity Documents (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: A U.S. online gift card store has secured an online storage server that was publicly exposing hundreds of thousands of customer government-issued identity documents to the internet. A security researcher, who goes by the online handle JayeLTee, found the publicly exposed storage server late last year containing driving licenses, passports, and other identity documents belonging to MyGiftCardSupply, a company that sells digital gift cards for customers to redeem at popular brands and online services.

MyGiftCardSupply's website says it requires customers to upload a copy of their identity documents as part of its compliance efforts with U.S. anti-money laundering rules, often known as "know your customer" checks, or KYC. But the storage server containing the files had no password, allowing anyone on the internet to access the data stored inside. JayeLTee alerted TechCrunch to the exposure last week after MyGiftCardSupply did not respond to the researcher's email about the exposed data. [...]

According to JayeLTee, the exposed data -- hosted on Microsoft's Azure cloud -- contained over 600,000 front and back images of identity documents and selfie photos of around 200,000 customers. It's not uncommon for companies subject to KYC checks to ask their customers to take a selfie while holding a copy of their identity documents to verify that the customer is who they say they are, and to weed out forgeries.
MyGiftCardSupply founder Sam Gastro told TechCrunch: "The files are now secure, and we are doing a full audit of the KYC verification procedure. Going forward, we are going to delete the files promptly after doing the identity verification." It's not known how long the data was exposed or if the company would commit to notifying affected individuals.
Privacy

Cloudflare's VPN App Among Half-Dozen Pulled From Indian App Stores (techcrunch.com) 12

More than half-a-dozen VPN apps, including Cloudflare's widely-used 1.1.1.1, have been pulled from India's Apple App Store and Google Play Store following intervention from government authorities, TechCrunch reported Friday. From the report: The Indian Ministry of Home Affairs issued removal orders for the apps, according to a document reviewed by TechCrunch and a disclosure made by Google to Lumen, Harvard University's database that tracks government takedown requests globally.
