AI

PIN AI Launches Mobile App Letting You Make Your Own Personalized, Private AI Model (venturebeat.com) 13

An anonymous reader quotes a report from VentureBeat: A new startup, PIN AI (not to be confused with the AI Pin, the poorly reviewed hardware device from Humane), has emerged from stealth to launch its first mobile app, which lets users select an underlying open-source AI model that runs directly on their smartphone (Apple iOS and Google Android are both supported) and remains private and fully customized to their preferences. Built with a decentralized infrastructure that prioritizes privacy, PIN AI aims to challenge big tech's dominance over user data by ensuring that personal AI serves individuals -- not corporate interests. Founded by AI and blockchain experts from Columbia, MIT and Stanford, PIN AI is led by Davide Crapis, Ben Wu and Bill Sun, who bring deep experience in AI research, large-scale data infrastructure and blockchain security. [...]

PIN AI introduces an alternative to centralized AI models that collect and monetize user data. Unlike cloud-based AI controlled by large tech firms, PIN AI's personal AI runs locally on user devices, allowing for secure, customized AI experiences without third-party surveillance. At the heart of PIN AI is a user-controlled data bank, which enables individuals to store and manage their personal information while allowing developers access to anonymized, multi-category insights -- ranging from shopping habits to investment strategies. This approach ensures that AI-powered services can benefit from high-quality contextual data without compromising user privacy. [...] The new mobile app, which launched in the U.S. and multiple other regions, also includes key features such as:

- The "God model" (guardian of data): Helps users track how well their AI understands them, ensuring it aligns with their preferences.
- Ask PIN AI: A personalized AI assistant capable of handling tasks like financial planning, travel coordination and product recommendations.
- Open-source integrations: Users can connect apps like Gmail, social media platforms and financial services to their personal AI, training it to better serve them without exposing data to third parties.
- "With our app, you have a personal AI that is your model," Crapis added. "You own the weights, and it's completely private, with privacy-preserving fine-tuning."
Davide Crapis, co-founder of PIN AI, told VentureBeat that the app currently supports several open-source AI models, including small versions of DeepSeek and Meta's Llama. "With our app, you have a personal AI that is your model," Crapis added. "You own the weights, and it's completely private, with privacy-preserving fine-tuning."

You can sign up for early access to the PIN AI app here.
Businesses

AI Licensing Deals With Google and OpenAI Make Up 10% of Reddit's Revenue (adweek.com) 27

Reddit's recent earnings report revealed that AI licensing deals with Google and OpenAI account for about 10% of its $1.3 billion revenue, totaling approximately $130 million. With Google paying $60 million, OpenAI is estimated to be paying Reddit around $70 million annually for content licensing. Adweek reports: "It's a small part of our revenue -- I'll call it 10%. For a business of our size, that's material, because it's valuable revenue," [said the company's COO Jen Wong]. The social platform -- which on Wednesday reported a 71% year-over-year lift in fourth-quarter revenue -- has been "very thoughtful" about the AI developers it chooses to work with, Wong said. To date, the company has inked two content licensing deals: one with Google for a reported $60 million, and one with ChatGPT parent OpenAI.

Reddit has elected to work only with partners who can agree to "specific terms ... that are really important to us." These terms include user privacy protections and conditions regarding "how [Reddit is] represented," Wong said. While licensing agreements with AI firms offer a valuable business opportunity for Reddit, advertising remains the company's core revenue driver. Much of Reddit's $427.7 million in Q4 revenue was generated by the ongoing expansion of its advertising business. And its ad revenue as a whole grew 60% YoY, underscoring the platform's growing appeal to brands. [...]

Helping to accelerate ad revenue growth is Reddit's rising traffic. While Reddit's Q4 user growth came in under Wall Street projections, causing shares to dip, its weekly active uniques grew 42% YoY to over 379 million visitors. Average revenue per unique visitor was $4.21 during the quarter, up 23% from the prior year. While Google is "nicely reinforcing" Reddit's growth in traffic, Wong said, she added that the site's logged-in users, who have grown 27% year-over-year, are "the bedrock of our business."

Apple

German Regulator Charges Apple With Abuse of Power Over App Tracking Tool (yahoo.com) 17

The German antitrust authority has charged Apple with abusing its market power through its app tracking tool and giving itself preferential treatment in a move that could result in daily fines for the iPhone maker if it fails to change its business practices. From a report: The move follows a three-year investigation by the Federal Cartel Office into Apple's App Tracking Transparency feature, which allows users to block advertisers from tracking them across different applications.

The U.S. tech giant has said the feature allows users to control their privacy but has drawn criticism from Meta Platforms, app developers and startups whose business models rely on advertising tracking. "The ATTF (app tracking tool) makes it far more difficult for competing app publishers to access the user data relevant for advertising," Andreas Mundt, cartel office president, said in a statement.

AI

Tech Leaders Hold Back on AI Agents Despite Vendor Push, Survey Shows 24

Most corporate tech leaders are hesitant to deploy AI agents despite vendors' push for rapid adoption, according to a Wall Street Journal CIO Network Summit poll on Tuesday. While 61% of attendees at the Menlo Park summit said they are experimenting with AI agents, which perform automated tasks, 21% reported no usage at all.

Reliability concerns and cybersecurity risks remain key barriers, with 29% citing data privacy as their primary concern. OpenAI, Microsoft and Sierra are urging businesses not to wait for the technology to be perfected. "Accept that it is imperfect," said Bret Taylor, Sierra CEO and OpenAI chairman. "Rather than say, 'Will AI do something wrong', say, 'When it does something wrong, what are the operational mitigations that we've put in place?'" Three-quarters of the polled executives said AI currently delivers minimal value for their investments. Some companies are "having hammers looking for nails," said Jim Siders, Palantir's chief information officer, describing firms that purchase AI solutions before identifying clear use cases.
Google

Google Fixes Flaw That Could Unmask YouTube Users' Email Addresses 5

An anonymous reader shares a report: Google has fixed two vulnerabilities that, when chained together, could expose the email addresses of YouTube accounts, posing a massive privacy risk for those using the site anonymously.

The flaws were discovered by security researchers Brutecat (brutecat.com) and Nathan (schizo.org), who found that YouTube and Pixel Recorder APIs could be used to obtain users' Google Gaia IDs and convert them into their email addresses. The ability to convert a YouTube channel into an owner's email address is a significant privacy risk to content creators, whistleblowers, and activists relying on being anonymous online.
The Internet

Brave Now Lets You Inject Custom JavaScript To Tweak Websites (bleepingcomputer.com) 12

Brave Browser version 1.75 introduces "custom scriptlets," a new feature that allows advanced users to inject their own JavaScript into websites for enhanced customization, privacy, and usability. The feature is similar to the TamperMonkey and GreaseMonkey browser extensions, notes BleepingComputer. From the report: "Starting with desktop version 1.75, advanced Brave users will be able to write and inject their own scriptlets into a page, allowing for better control over their browsing experience," explained Brave in the announcement. Brave says the feature was initially created to debug the browser's adblock component but proved too valuable not to share with users. Brave's custom scriptlets feature can be used to modify webpages for a wide variety of privacy, security, and usability purposes.

For privacy-related changes, users can write scripts that block JavaScript-based trackers, randomize fingerprinting APIs, and substitute Google Analytics scripts with a dummy version. In terms of customization and accessibility, scriptlets can be used to hide sidebars, pop-ups, floating ads, or annoying widgets; force dark mode even on sites that don't support it; expand content areas; force infinite scrolling; adjust text colors and font sizes; and auto-expand hidden content.
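As an illustration of the privacy use case, a scriptlet along these lines could stand in for Google Analytics by replacing its global entry points with no-ops before page scripts call them. This is a minimal sketch of the general technique, not code shipped by Brave:

```javascript
// Minimal sketch of a privacy scriptlet (not Brave-provided code):
// replace the Google Analytics globals with no-ops so page scripts
// can call them without any tracking beacons being sent.
(() => {
  const noop = () => {};
  window.ga = noop;                    // analytics.js entry point
  window.gtag = noop;                  // gtag.js entry point
  window.dataLayer = { push: noop };   // gtag.js command queue
})();
```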

For performance and usability, the scriptlets can block video autoplay, lazy-load images, auto-fill forms with predefined data, enable custom keyboard shortcuts, bypass right-click restrictions, and automatically click confirmation dialogs. The possible actions achievable by injected JavaScript snippets are virtually endless. However, caution is advised: running untrusted custom scriptlets can break pages or introduce security risks.
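As a concrete example of the usability case, a scriptlet that blocks video autoplay might look like the following sketch, which also catches players added after the initial page load (illustrative only, not code shipped by Brave):

```javascript
// Illustrative usability scriptlet: strip the autoplay attribute and
// pause any video that starts on its own, including videos inserted
// into the page after load.
(() => {
  const stop = (video) => {
    video.removeAttribute('autoplay');
    if (!video.paused) video.pause();
  };
  document.querySelectorAll('video').forEach(stop);
  new MutationObserver(() => {
    document.querySelectorAll('video').forEach(stop);
  }).observe(document.documentElement, { childList: true, subtree: true });
})();
```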

AI

DeepSeek IOS App Sends Data Unencrypted To ByteDance-Controlled Servers (arstechnica.com) 68

An anonymous Slashdot reader quotes a new article from Ars Technica: On Thursday, mobile security company NowSecure reported that [DeepSeek] sends sensitive data over unencrypted channels, making the data readable to anyone who can monitor the traffic. More sophisticated attackers could also tamper with the data while it's in transit. Apple strongly encourages iPhone and iPad developers to enforce encryption of data sent over the wire using ATS (App Transport Security). For unknown reasons, that protection is globally disabled in the app, NowSecure said. What's more, the data is sent to servers that are controlled by ByteDance, the Chinese company that owns TikTok...

[DeepSeek] is "not equipped or willing to provide basic security protections of your data and identity," NowSecure co-founder Andrew Hoog told Ars. "There are fundamental security practices that are not being observed, either intentionally or unintentionally. In the end, it puts your and your company's data and identity at risk...." This data, along with a mix of other encrypted information, is sent to DeepSeek over infrastructure provided by Volcengine a cloud platform developed by ByteDance. While the IP address the app connects to geo-locates to the US and is owned by US-based telecom Level 3 Communications, the DeepSeek privacy policy makes clear that the company "store[s] the data we collect in secure servers located in the People's Republic of China...."

US lawmakers began pushing to immediately ban DeepSeek from all government devices, citing national security concerns that the Chinese Communist Party may have built a backdoor into the service to access Americans' sensitive private data. If the legislation passes, DeepSeek could be banned from government devices within 60 days.

Chrome

Google's 7-Year Slog To Improve Chrome Extensions Still Hasn't Satisfied Developers (theregister.com) 30

The Register's Thomas Claburn reports: Google's overhaul of Chrome's extension architecture continues to pose problems for developers of ad blockers, content filters, and privacy tools. [...] While Google's desire to improve the security, privacy, and performance of the Chrome extension platform is reasonable, its approach -- which focuses on code and permissions more than human oversight -- remains a work-in-progress that has left extension developers frustrated.

Alexei Miagkov, senior staff technologist at the Electronic Frontier Foundation, who oversees the organization's Privacy Badger extension, told The Register, "Making extensions under MV3 is much harder than making extensions under MV2. That's just a fact. They made things harder to build and more confusing." Miagkov said with Privacy Badger the problem has been the slowness with which Google addresses gaps in the MV3 platform. "It feels like MV3 is here and the web extensions team at Google is in no rush to fix the frayed ends, to fix what's missing or what's broken still." According to Google's documentation, "There are currently no open issues considered a critical platform gap," and various issues have been addressed through the addition of new API capabilities.

Miagkov described an unresolved problem that means Privacy Badger is unable to strip Google tracking redirects on Google sites. "We can't do it the correct way because when Google engineers designed the [chrome.declarativeNetRequest API], they failed to think of this scenario," he said. "We can do a redirect to get rid of the tracking, but it ends up being a broken redirect for a lot of URLs. Basically, if the URL has any kind of query string parameters -- the question mark and anything beyond that -- we will break the link." Miagkov said a Chrome developer relations engineer had helped identify a workaround, but it's not great. Miagkov thinks these problems are of Google's own making -- the company changed the rules and has been slow to write the new ones. "It was completely predictable because they moved the ability to fix things from extensions to themselves," he said. "And now they need to fix things and they're not doing it."
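For context on the API involved, an MV3 redirect rule for this kind of tracking link looks roughly like the sketch below; the rule ID and regex are illustrative, not Privacy Badger's actual code, and the extension would need the declarativeNetRequest permission plus host permissions for google.com. The capture group has to stop at the next '&', which is exactly why a destination URL carrying its own query string ends up truncated:

```javascript
// Illustrative MV3 dynamic rule (not Privacy Badger's code): send
// Google's /url?q=<target> tracking redirects straight to the target.
// Because ([^&]+) must stop at the next '&', any query string that
// belongs to the destination URL itself gets cut off, which is the
// breakage described above.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [{
    id: 1,
    priority: 1,
    condition: {
      regexFilter: '^https://www\\.google\\.com/url\\?(?:.*&)?q=([^&]+).*',
      resourceTypes: ['main_frame'],
    },
    action: {
      type: 'redirect',
      redirect: { regexSubstitution: '\\1' },
    },
  }],
});
```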

Privacy

OpenAI Investigating Claim of 20 Million Stolen User Credentials 15

OpenAI says it's investigating after a hacker claimed to have stolen login credentials for 20 million OpenAI accounts and advertised the data for sale on a dark web forum. Though security researchers doubt the legitimacy of the breach, the AI company stated that it takes the claims seriously, advising users to enable two-factor authentication and stay vigilant against phishing attempts. Decrypt reports: Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence (suggests) this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."

"We take these claims seriously," the spokesperson said, adding: "We have not seen any evidence that this is connected to a compromise of OpenAI systems to date."
Security

Phishing Tests, the Bane of Work Life, Are Getting Meaner (msn.com) 99

U.S. employers are deploying increasingly aggressive phishing tests to combat cyber threats, sparking backlash from workers who say the simulated scams create unnecessary panic and distrust in the workplace. At the University of California, Santa Cruz, a test email about a fake Ebola outbreak sent staff scrambling before learning it was a security drill. At Lehigh Valley Health Network, employees who fall for phishing tests lose external email access, with termination possible after three failures.

Despite widespread use, recent studies question these tests' effectiveness. Research from ETH Zurich found that phishing tests combined with voluntary training actually made employees more vulnerable, while a University of California, San Diego study showed only a 2% reduction [PDF] in phishing success rates. "These are just an ineffective and inefficient way to educate users," said Grant Ho, who co-authored the UCSD study.
Security

Ransomware Payments Dropped 35% In 2024 (therecord.media) 44

An anonymous reader quotes a report from CyberScoop: Ransomware payments saw a dramatic 35% drop last year compared to 2023, even as the overall frequency of ransomware attacks increased, according to a new report released by blockchain analysis firm Chainalysis. The considerable decline in extortion payments is somewhat surprising, given that other cybersecurity firms have claimed that 2024 saw the most ransomware activity to date. Chainalysis itself warned in its mid-year report that 2024's activity was on pace to reach new heights, but attacks in the second half of the year tailed off. The total amount in payments that Chainalysis tracked in 2024 was $812.55 million, down from 2023's mark of $1.25 billion.

The disruption of major ransomware groups, such as LockBit and ALPHV/BlackCat, was key to the reduction in ransomware payments. Operations spearheaded by agencies like the United Kingdom's National Crime Agency (NCA) and the Federal Bureau of Investigation (FBI) caused significant declines in LockBit activity, while ALPHV/BlackCat essentially rug-pulled its affiliates and disappeared after its attack on Change Healthcare. [...] Additionally, [Chainalysis] says more organizations have become stronger against attacks, with many choosing not to pay a ransom and instead using better cybersecurity practices and backups to recover from these incidents. [...]
Chainalysis also says ransomware operators are letting funds sit in wallets, refraining from moving any money out of fear they are being watched by law enforcement.

You can read the full report here.
Government

Bill Banning Social Media For Youngsters Advances (politico.com) 86

The Senate Commerce Committee approved the Kids Off Social Media Act, banning children under 13 from social media and requiring federally funded schools to restrict access on networks and devices. Politico reports: The panel approved the Kids Off Social Media Act -- sponsored by the panel's chair, Texas Republican Ted Cruz, and a senior Democrat on the panel, Hawaii's Brian Schatz -- by voice vote, clearing the way for consideration by the full Senate. Only Ed Markey (D-Mass.) asked to be recorded as a no on the bill. "When you've got Ted Cruz and myself in agreement on something, you've pretty much captured the ideological spectrum of the whole Congress," Sen. Schatz told POLITICO's Gabby Miller.

[...] "KOSMA comes from very good intentions of lawmakers, and establishing national screen time standards for schools is sensible. However, the bill's in-effect requirements on access to protected information jeopardize all Americans' digital privacy and endanger free speech online," said Amy Bos, NetChoice director of state and federal affairs. The trade association represents big tech firms including Meta and Google. Netchoice has been aggressive in combating social media legislation by arguing that these laws illegally restrict -- and in some cases compel -- speech. [...] A Commerce Committee aide told POLITICO that because social media platforms already voluntarily require users to be at least 13 years old, the bill does not restrict speech currently available to kids.

The Internet

The Enshittification Hall of Shame 249

In 2022, writer and activist Cory Doctorow coined the term "enshittification" to describe the gradual deterioration of a service or product. The term's prevalence has increased to the point that it was the National Dictionary of Australia's word of the year last year. The editors at Ars Technica, having "covered a lot of things that have been enshittified," decided to highlight some of the worst examples they've come across. Here's a summary of each thing mentioned in their report: Smart TVs: Evolved into data-collecting billboards, prioritizing advertising and user tracking over user experience and privacy. Features like convenient input buttons are sacrificed for pushing ads and webOS apps. "This is all likely to get worse as TV companies target software, tracking, and ad sales as ways to monetize customers after their TV purchases -- even at the cost of customer convenience and privacy," writes Scharon Harding. "When budget brands like Roku are selling TV sets at a loss, you know something's up."

Google's Voice Assistant (e.g., Nest Hubs): Functionality has degraded over time, with previously working features becoming unreliable. Users report frequent misunderstandings and unresponsiveness. "I'm fine just saying it now: Google Assistant is worse now than it was soon after it started," writes Kevin Purdy. "Even if Google is turning its entire supertanker toward AI now, it's not clear why 'Start my morning routine,' 'Turn on the garage lights,' and 'Set an alarm for 8 pm' had to suffer."

Portable Document Format (PDF): While initially useful for cross-platform document sharing and preserving formatting, PDFs have become bloated and problematic. Copying text, especially from academic journals, is often garbled or impossible. "Apple, which had given the PDF a reprieve, has now killed its main selling point," writes John Timmer. "Because Apple has added OCR to the MacOS image display system, I can get more reliable results by screenshotting the PDF and then copying the text out of that. This is the true mark of its enshittification: I now wish the journals would just give me a giant PNG."

Televised Sports (specifically cycling and Formula 1): Streaming services have consolidated, leading to significantly increased costs for viewers. Previously affordable and comprehensive options have been replaced by expensive bundles across multiple platforms. "Formula 1 racing has largely gone behind paywalls, and viewership is down significantly over the last 15 years," writes Eric Berger. "Major US sports such as professional and college football had largely been exempt, but even that is now changing, with NFL games being shown on Peacock, Amazon Prime, and Netflix. None of this helps viewers. It enshittifies the experience for us in the name of corporate greed."

Google Search: AI overviews often bury relevant search results under lengthy, sometimes inaccurate AI-generated content. This makes finding specific information, especially primary source documents, more difficult. "Google, like many big tech companies, expects AI to revolutionize search and is seemingly intent on ignoring any criticism of that idea," writes Ashley Belanger.

Email AI Tools (e.g., Gemini in Gmail): Intrusive and difficult to disable, these tools offer questionable value due to their potential for factual inaccuracies. Users report being unable to fully opt-out. "Gmail won't take no for an answer," writes Dan Goodin. "It keeps asking me if I want to use Google's Gemini AI tool to summarize emails or draft responses. As the disclaimer at the bottom of the Gemini tool indicates, I can't count on the output being factual, so no, I definitely don't want it."

Windows: While many complaints about Windows 11 originated with Windows 10, the newer version continues the trend of unwanted features, forced updates, and telemetry data collection. Bugs and performance issues also plague the operating system. "... it sure is easy to resent Windows 11 these days, between the well-documented annoyances, the constant drumbeat of AI stuff (some of it gated to pricey new PCs), and a batch of weird bugs that mostly seem to be related to the under-the-hood overhauls in October's Windows 11 24H2 update," writes Andrew Cunningham. "That list includes broken updates for some users, inoperable scanners, and a few unplayable games. With every release, the list of things you need to do to get rid of and turn off the most annoying stuff gets a little longer."

Web Discourse: The rapid spread of memes, trends, and corporate jargon on social media has led to a homogenization of online communication, making it difficult to distinguish original content and creating a sense of constant noise. "[T]he enshittification of social media, particularly due to its speed and virality, has led to millions vying for their moment in the sun, and all I see is a constant glare that makes everything look indistinguishable," writes Jacob May. "No wonder some companies think AI is the future."
The Internet

Let's Encrypt Is Ending Expiration Notice Emails (arstechnica.com) 50

Let's Encrypt will stop sending expiration notice emails for its free HTTPS certificates starting June 4, 2025. From the report: Let's Encrypt is ending automated emails for four stated reasons, and all of them are pretty sensible. For one thing, lots of customers have been able to automate their certificate renewal. For another, providing the expiration notices costs "tens of thousands of dollars per year" and adds complexity to the nonprofit's infrastructure as they are looking to add new and more useful services.

If those were not enough, there is this particularly notable reason: "Providing expiration notification emails means that we have to retain millions of email addresses connected to issuance records. As an organization that values privacy, removing this requirement is important to us." Let's Encrypt recommends using Red Sift Certificates Lite to monitor certificate expirations, a service that is free for up to 250 certificates. The service also points to other options, including Datadog SSL monitoring and TrackSSL.
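For sites replacing the emails with their own monitoring, a small self-hosted check covers the basics. The hostname, 30-day threshold, and output below are arbitrary examples rather than anything Let's Encrypt provides; this is a minimal sketch using Node's built-in tls module:

```javascript
// Minimal certificate-expiry check (run with: node check-cert.js example.com).
// Connects over TLS, reads the peer certificate, and warns when fewer
// than 30 days remain; thresholds and output are arbitrary examples.
const tls = require('node:tls');

const host = process.argv[2] || 'example.com';
const socket = tls.connect({ host, port: 443, servername: host }, () => {
  const cert = socket.getPeerCertificate();
  const daysLeft = Math.floor((new Date(cert.valid_to) - Date.now()) / 86_400_000);
  console.log(`${host}: certificate expires ${cert.valid_to} (${daysLeft} days left)`);
  if (daysLeft < 30) {
    console.warn(`${host}: renewal due soon (Let's Encrypt certificates last 90 days)`);
  }
  socket.end();
});
socket.on('error', (err) => console.error(`${host}: TLS check failed: ${err.message}`));
```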

China

Researchers Link DeepSeek To Chinese Telecom Banned In US (apnews.com) 86

An anonymous reader quotes a report from the Associated Press: The website of the Chinese artificial intelligence company DeepSeek, whose chatbot became the most downloaded app in the United States, has computer code that could send some user login information to a Chinese state-owned telecommunications company that has been barred from operating in the United States, security researchers say. The web login page of DeepSeek's chatbot contains heavily obfuscated computer script that when deciphered shows connections to computer infrastructure owned by China Mobile, a state-owned telecommunications company. The code appears to be part of the account creation and user login process for DeepSeek.

In its privacy policy, DeepSeek acknowledged storing data on servers inside the People's Republic of China. But its chatbot appears more directly tied to the Chinese state than previously known through the link revealed by researchers to China Mobile. The U.S. has claimed there are close ties between China Mobile and the Chinese military as justification for placing limited sanctions on the company. [...] The code linking DeepSeek to one of China's leading mobile phone providers was first discovered by Feroot Security, a Canadian cybersecurity company, which shared its findings with The Associated Press. The AP took Feroot's findings to a second set of computer experts, who independently confirmed that China Mobile code is present. Neither Feroot nor the other researchers observed data transferred to China Mobile when testing logins in North America, but they could not rule out that data for some users was being transferred to the Chinese telecom.

The analysis applies only to the web version of DeepSeek; the researchers did not analyze the mobile version, which remains one of the most downloaded pieces of software on both the Apple and the Google app stores. The U.S. Federal Communications Commission unanimously denied China Mobile authority to operate in the United States in 2019, citing "substantial" national security concerns about links between the company and the Chinese state. In 2021, the Biden administration also issued sanctions limiting the ability of Americans to invest in China Mobile after the Pentagon linked it to the Chinese military.
"It's mindboggling that we are unknowingly allowing China to survey Americans and we're doing nothing about it," said Ivan Tsarynny, CEO of Feroot. "It's hard to believe that something like this was accidental. There are so many unusual things to this. You know that saying 'Where there's smoke, there's fire'? In this instance, there's a lot of smoke," Tsarynny said.

Further reading: Senator Hawley Proposes Jail Time For People Who Download DeepSeek
Security

First OCR Spyware Breaches Both Apple and Google App Stores To Steal Crypto Wallet Phrases (securelist.com) 24

Kaspersky researchers have discovered malware hiding in both Google Play and Apple's App Store that uses optical character recognition to steal cryptocurrency wallet recovery phrases from users' photo galleries. Dubbed "SparkCat" by security firm ESET, the malware was embedded in several messaging and food delivery apps, with the infected Google Play apps accumulating over 242,000 downloads combined.

This marks the first known instance of such OCR-based spyware making it into Apple's App Store. The malware, active since March 2024, masquerades as an analytics SDK called "Spark" and leverages Google's ML Kit library to scan users' photos for wallet recovery phrases in multiple languages. It requests gallery access under the guise of allowing users to attach images to support chat messages. When granted access, it searches for specific keywords related to crypto wallets and uploads matching images to attacker-controlled servers.

The researchers found both Android and iOS variants using similar techniques, with the iOS version being particularly notable as it circumvented Apple's typically stringent app review process. The malware's creators appear to be Chinese-speaking actors based on code comments and server error messages, though definitive attribution remains unclear.
Privacy

TSA's Airport Facial-Recognition Tech Faces Audit Probe (theregister.com) 14

The Department of Homeland Security's Inspector General has launched an audit of the TSA's use of facial recognition technology at U.S. airports following concerns from lawmakers and privacy advocates. The Register reports: Homeland Security Inspector General Joseph Cuffari notified a bipartisan group of US Senators who had asked for such an investigation last year that his office has announced an audit of TSA facial recognition technology in a letter [PDF] sent to the group Friday. "We have reviewed the concerns raised in your letter as part of our work planning process," said Cuffari, a Trump appointee who survived the recent purge of several Inspectors General. "[The audit] will determine the extent to which TSA's facial recognition and identification technologies enhance security screening to identify persons of interest and authenticate flight traveler information while protecting passenger privacy," Cuffari said.

The letter from the Homeland Security OIG was addressed to Senator Jeff Merkley (D-OR), who co-led the group of 12 Senators who asked for an inspection of TSA facial recognition in November last year. "Americans don't want a national surveillance state, but right now, more Americans than ever before are having their faces scanned at the airport without being able to exercise their right to opt-out," Merkley said in a statement accompanying Cuffari's letter. "I have long sounded the alarm about the TSA's expanding use of facial recognition ... I'll keep pushing for strong Congressional oversight."

[...] While Cuffari's office was light on details of what would be included in the audit, the November letter from the Senators was explicit in its list of requests. They asked for the systems to be evaluated via red team testing, with a specific investigation into effectiveness - whether it reduced screening delays, stopped known terrorists, led to workforce cuts, or amounted to little more than security theater with errors.

The Courts

NetChoice Sues To Block Maryland's Kids Code, Saying It Violates the First Amendment (theverge.com) 27

NetChoice has filed (PDF) its 10th lawsuit challenging state internet regulations, this time opposing Maryland's Age-Appropriate Design Code Act. The Verge's Lauren Feiner reports: NetChoice has become one of the fiercest -- and most successful -- opponents of age verification, moderation, and design code laws, all of which would put new obligations on tech platforms and change how users experience the internet. [...] NetChoice's latest suit opposes the Maryland Age-Appropriate Design Code Act, a rule that echoes a California law of a similar name. In the California litigation, NetChoice notched a partial win in the Ninth Circuit Court of Appeals, which upheld the district court's decision to block a part of the law requiring platforms to file reports about their services' impact on kids. (It sent another part of the law back to the lower court for further review.)

A similar provision in Maryland's law is at the center of NetChoice's complaint. The group says that Maryland's reporting requirement lets regulators subjectively determine the "best interests of children," inviting "discriminatory enforcement." The reporting requirement on tech companies essentially mandates them "to disparage their services and opine on far-ranging and ill-defined harms that could purportedly arise from their services' 'design' and use of information," NetChoice alleges. NetChoice points out that both California and Maryland have passed separate online privacy laws, which NetChoice Litigation Center director Chris Marchese says shows that "lawmakers know how to write laws to protect online privacy when what they want to do is protect online privacy."

Supporters of the Maryland law say legislators learned from California's challenges and "optimized" their law to avoid questions about speech, according to Tech Policy Press. In a blog analyzing Maryland's approach, Future of Privacy Forum points out that the state made some significant changes from California's version -- such as avoiding an "express obligation" to determine users' ages and defining the "best interests of children." The NetChoice challenge will test how well those changes can hold up to First Amendment scrutiny. NetChoice has consistently maintained that even well-intentioned attempts to protect kids online are likely to backfire. Though the Maryland law does not explicitly require the use of specific age verification tools, Marchese says it essentially leaves tech platforms with a no-win decision: collect more data on users to determine their ages and create varied user experiences or cater to the lowest common denominator and self-censor lawful content that might be considered inappropriate for its youngest users. And similar to its arguments in other cases, Marchese worries that collecting more data to identify users as minors could create a "honey pot" of kids' information, creating a different problem in attempting to solve another.

Android

Google Stops Malicious Apps With 'AI-Powered Threat Detection' and Continuous Scanning (googleblog.com) 15

Android and Google Play have billions of users, Google wrote in its security blog this week. "However, like any flourishing ecosystem, it also attracts its share of bad actors... That's why every year, we continue to invest in more ways to protect our community." Google's tactics include industry-wide alliances, stronger privacy policies, and "AI-powered threat detection."

"As a result, we prevented 2.36 million policy-violating apps from being published on Google Play and banned more than 158,000 bad developer accounts that attempted to publish harmful apps. " To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google's advanced AI to improve our systems' ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play. That's enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage.
Starting in 2024 Google also "required apps to be more transparent about how they handle user information by launching new developer requirements and a new 'Data deletion' option for apps that support user accounts and data collection.... We're also constantly working to improve the safety of apps on Play at scale, such as with the Google Play SDK Index. This tool offers insights and data to help developers make more informed decisions about the safety of an SDK."

And once an app is installed, "Google Play Protect, Android's built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior." Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code-level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect's real-time scanning identified more than 13 million new malicious apps from outside Google Play [based on Google Play Protect 2024 internal data]...

According to our research, more than 95 percent of app installations from major malware families that exploit sensitive permissions highly correlated to financial fraud came from Internet-sideloading sources like web browsers, messaging apps, or file managers. To help users stay protected when browsing the web, Chrome will now display a reminder notification to re-enable Google Play Protect if it has been turned off... Scammers may manipulate users into disabling Play Protect during calls to download malicious Internet-sideloaded apps. To prevent this, the Play Protect app scanning toggle is now temporarily disabled during phone or video calls...

Google Play Protect's enhanced fraud protection pilot analyzes and automatically blocks the installation of apps that may use sensitive permissions frequently abused for financial fraud when the user attempts to install the app from an Internet-sideloading source (web browsers, messaging apps, or file managers). Building on the success of our initial pilot in partnership with the Cyber Security Agency of Singapore (CSA), additional enhanced fraud protection pilots are now active in nine regions — Brazil, Hong Kong, India, Kenya, Nigeria, Philippines, South Africa, Thailand, and Vietnam.

In 2024, Google Play Protect's enhanced fraud protection pilots have shielded 10 million devices from over 36 million risky installation attempts, encompassing over 200,000 unique apps.

Windows

After 'Copilot Price Hike' for Microsoft 365, It's Ending Its Free VPN (windowscentral.com) 81

In 2023, Microsoft began including a free VPN feature in its "Microsoft Defender" security app for all Microsoft 365 subscribers ("Personal" and "Family"). Originally Microsoft had "called it a privacy protection feature," writes the blog Windows Central, "designed to let you access sensitive data on the web via a VPN tunnel." But.... Unfortunately, Microsoft has now announced that it's killing the feature later this month, only a couple of years after it first debuted...

To add insult to injury, this announcement comes just days after Microsoft increased subscription prices across the board. Both Personal and Family subscriptions went up by three dollars a month, which the company says is the first price hike Microsoft 365 has seen in over a decade. The increased price does now include Microsoft 365 Copilot, which adds AI features to Word, PowerPoint, Excel, and others.

However, it also comes with the removal of the free VPN in Microsoft Defender, which I've found to be much more useful so far.
