Privacy

What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet (youtube.com) 50

A couple months ago, YouTuber Benn Jordan "found vulnerabilities in some of Flock's license plate reader cameras," reports 404 Media's Jason Koebler. "He reached out to me to tell me he had learned that some of Flock's Condor cameras were left live-streaming to the open internet."

This led to a remarkable article where Koebler confirmed the breach by visiting a Flock surveillance camera mounted on a California traffic signal. ("On my phone, I am watching myself in real time as the camera records and livestreams me — without any password or login — to the open internet... Hundreds of miles away, my colleagues are remotely watching me too through the exposed feed.") Flock left livestreams and administrator control panels for at least 60 of its AI-enabled Condor cameras around the country exposed to the open internet, where anyone could watch them, download 30 days' worth of archived video, change settings, see log files, and run diagnostics. Unlike many of Flock's cameras, which are designed to capture license plates as people drive by, Flock's Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people's faces... The exposure was initially discovered by YouTuber and technologist Benn Jordan and was shared with security researcher Jon "GainSec" Gaines, who recently found numerous vulnerabilities in several other models of Flock's automated license plate reader (ALPR) cameras.
Jordan appeared this week as a guest on Koebler's own YouTube channel, and also released a video of his own about the experience, titled "We Hacked Flock Safety Cameras in under 30 Seconds." (Thanks to Slashdot reader beadon for sharing the link.) Jordan and 404 Media also created another video together three weeks ago, titled "The Flock Camera Leak is Like Netflix for Stalkers," which includes footage he says was "completely accessible at the time Flock Safety was telling cities that the devices are secure after they're deployed."

The video decries cities "too lazy to conduct their own security audit or research the efficacy versus risk," but also calls weak security "an industry-wide problem." Jordan explains in the video how he "very easily found the administration interfaces for dozens of Flock safety cameras..." — but also what happened next: None of the data or video footage was encrypted. There was no username or password required. These were all completely public-facing, for the world to see.... Making any modification to the cameras is illegal, so I didn't do this. But I had the ability to delete any of the video footage or evidence by simply pressing a button. I could see the paths where all of the evidence files were located on the file system...

During and after the process of conducting that research and making that video, I was visited by the police and had what I believed to be private investigators outside my home photographing me and my property and bothering my neighbors. Jon Gaines, or GainSec, the brains behind most of this research, lost employment within 48 hours of the video being released. And the sad reality is that I don't view these things as consequences or punishment for researching security vulnerabilities. I view these as consequences and punishment for doing it ethically and transparently.

I've been contacted by people on or communicating with civic councils who found my videos concerning, and they shared Flock Safety's response with me. The company claimed that the devices in my video did not reflect the security standards of the ones being publicly deployed. The CEO even posted on LinkedIn and boasted about Flock Safety's security policies. So, I formally and publicly offered to personally fund security research into Flock Safety's deployed ecosystem. But the law prevents me from touching their live devices. So, all I needed was their permission so I wouldn't get arrested. And I was even willing to let them supervise this research.

I got no response.

So instead, he read Flock's official response to a security/surveillance industry research group — while standing in front of one of their security cameras, streaming his reading to the public internet.

"Might as well. It's my tax dollars that paid for it."

" 'Flock is committed to continuously improving security...'"
Australia

Nearly 5 Million Accounts Removed Under Australia's New Social Media Ban (nytimes.com) 72

An anonymous reader quotes a report from the New York Times: Nearly five million social media accounts belonging to Australian teenagers have been deactivated or removed, a month after a landmark law barring those younger than 16 from using the services took effect, the government said on Thursday. The announcement was the first reported metric reflecting the rollout of the law, which is being closely watched by several other countries weighing whether the regulation can be a blueprint for protecting children from the harms of social media, or a cautionary tale highlighting the challenges of such attempts.

The law required 10 social media platforms, including Instagram, Facebook, Snapchat and Reddit, to prevent users under 16 from accessing their services. Under the law, which came into force in December, failure by the companies to take "reasonable steps" to remove underage users could lead to fines of up to 49.5 million Australian dollars, about $33 million. [...] The number of removed accounts offered only a limited picture of the ban's impact. Many teenagers have said in the weeks since the law took effect that they were able to get around the ban by lying about their age, or that they could easily bypass verification systems.

The regulator tasked with enforcing and tracking the law, the eSafety Commissioner, did not release a detailed breakdown beyond announcing that the companies had "removed access" to about 4.7 million accounts belonging to children under 16. Meta, the parent company of Instagram and Facebook, said this week that it had removed almost 550,000 accounts of users younger than 16 before the ban came into effect.
"Change doesn't happen overnight," said Prime Minister Anthony Albanese. "But these early signs show it's important we've acted to make this change."
Social Networks

Study Finds Weak Evidence Linking Social Media Use to Teen Mental Health Problems (theguardian.com) 40

An anonymous reader quotes a report from the Guardian: Screen time spent gaming or on social media does not cause mental health problems in teenagers, according to a large-scale study. [...] Researchers at the University of Manchester followed 25,000 11- to 14-year-olds over three school years, tracking their self-reported social media habits, gaming frequency and emotional difficulties to find out whether technology use genuinely predicted later mental health difficulties. Participants were asked how much time on a normal weekday in term time they spent on TikTok, Instagram, Snapchat and other social media, or gaming. They were also asked questions about their feelings, mood and wider mental health.

The study found no evidence for boys or girls that heavier social media use or more frequent gaming increased teenagers' symptoms of anxiety or depression over the following year. Increases in girls' and boys' social media use from year 8 to year 9 and from year 9 to year 10 had zero detrimental impact on their mental health the following year, the authors found. More time spent gaming also had a zero negative effect on pupils' mental health. "We know families are worried, but our results do not support the idea that simply spending time on social media or gaming leads to mental health problems -- the story is far more complex than that," said the lead author Dr Qiqi Cheng.

The research, published in the Journal of Public Health, also examined whether the way pupils use social media makes a difference, with participants asked how much time they spent chatting with others, posting stories, pictures and videos, and browsing feeds and profiles or scrolling through photos and stories. The scientists found that neither actively chatting on social media nor passively scrolling through feeds appeared to drive mental health difficulties. The authors stressed that the findings did not mean online experiences were harmless. Hurtful messages, online pressures and extreme content could have detrimental effects on wellbeing, but focusing on screen time alone was not helpful, they said.

Businesses

'White-Collar Workers Shouldn't Dismiss a Blue-Collar Career Change' (msn.com) 145

White-collar workers stuck in a cycle of layoffs and stagnant wages might want to look past the traditional tech, finance and media job postings to an unexpected source of opportunity: the blue-collar sector, which faces a labor shortage and is seeing rapid transformation through private-equity investment. These jobs are generally less vulnerable to AI, and the earning trajectory can be steep, the WSJ writes.

At Crash Champions, a car-repair chain that has grown from 13 locations in 2019 to about 650 shops across 38 states, service advisers start at roughly $60,000 after a six-month apprenticeship and can double that within 18 months, according to CEO Matt Ebert. Directors overseeing multiple locations earn more than $200,000. Power Home Remodeling, a PE-backed construction company, says tech sales professionals earning $85,000 to $100,000 could make lateral moves after a 10-week training program.

The share of workers in their early 20s employed in blue-collar roles rose from 16.3% in 2019 to 18.4% in 2024, according to ADP -- five times the increase among 35- to 39-year-olds.
Social Networks

Digg Launches Its New Reddit Rival To the Public (techcrunch.com) 44

Digg is officially back under the ownership of its original founder, Kevin Rose, along with Reddit co-founder Alexis Ohanian. "Similar to Reddit, the new Digg offers a website and mobile app where you can browse feeds featuring posts from across a selection of its communities and join other communities that align with your interests," reports TechCrunch. "There, you can post, comment, and upvote (or 'digg') the site's content." From the report: [T]he rise of AI has presented an opportunity to rebuild Digg, Rose and Ohanian believe, leading them to acquire Digg last March through a leveraged buyout by True Ventures, Ohanian's firm Seven Seven Six, Rose and Ohanian themselves, and the venture firm S32. The company has not disclosed its funding. They're betting that AI can help to address some of the messiness and toxicity of today's social media landscape. At the same time, social platforms will need a new set of tools to ensure they're not taken over by AI bots posing as people.

"We obviously don't want to force everyone down some kind of crazy KYC process," said Rose in an interview with TechCrunch, referring to the 'know your customer' verification process used by financial institutions to confirm someone's identity. Instead of simply offering verification checkmarks to designate trust, Digg will try out new technologies, like using zero-knowledge proofs (cryptographic methods that verify information without revealing the underlying data) to verify the people using its platform. It could also do other things, like require that people who join a product-focused community verify they actually own or use the product being discussed there.
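Digg hasn't said which zero-knowledge construction it would use, so as a rough illustration of the primitive Rose names, here is a toy Schnorr identification protocol in Python: a classic zero-knowledge proof in which a user convinces a verifier they hold a secret key without ever revealing it. The parameters are deliberately tiny and offer no real security; all names here are illustrative, not Digg's.

```python
import secrets

# Toy Schnorr identification over a small safe-prime group.
# Real systems use ~256-bit elliptic-curve groups; these numbers
# are for illustration only and offer no security.
P = 2039   # safe prime: P = 2*Q + 1
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup

def keygen():
    """Secret key x, public key y = G^x mod P."""
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def commit():
    """Prover picks a fresh nonce r and sends t = G^r mod P."""
    r = secrets.randbelow(Q - 1) + 1
    return r, pow(G, r, P)

def respond(r, x, c):
    """Prover answers the verifier's challenge without revealing x."""
    return (r + c * x) % Q

def verify(y, t, c, s):
    """Verifier accepts iff G^s == t * y^c (mod P)."""
    return pow(G, s, P) == (t * pow(y, c, P)) % P

# One round of the protocol:
x, y = keygen()                     # user holds x; platform knows only y
r, t = commit()
c = secrets.randbelow(Q - 1) + 1    # verifier's random challenge
s = respond(r, x, c)
print(verify(y, t, c, s))           # → True
```

The verifier learns only that the equation checks out, never the secret `x` itself; that property is what would let a platform confirm "this is a real, distinct person (or a real product owner)" without collecting the underlying identity data.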

As an example, a community for Oura ring owners could verify that everyone who posts has proven they own one of the smart rings. Plus, Rose suggests Digg could use signals acquired from mobile devices to help verify members -- for instance, the app could identify when Digg users attended a meetup in the same location. "I don't think there's going to be any one silver bullet here," said Rose. "It's just going to be us saying ... here's a platter of things that you can add together to create trust."

Communications

Widespread Verizon Outage Prompts Emergency Alerts in Washington, New York City (nbcnews.com) 16

Verizon said on Wednesday that its wireless service was suffering an outage impacting cellular data and voice services. From a report: The nation's largest wireless carrier said that its "engineers are engaged and are working to identify and solve the issue quickly." Verizon's statement came after a swath of social media comments directed at Verizon, with users saying that their mobile devices were showing no bars of service or "SOS," indicating a lack of connection.

Verizon, which has more than 146 million customers, appears to have started experiencing service issues around 12:00 p.m. ET, according to comments on social media site X. Users also reported problems with Verizon competitor T-Mobile, but T-Mobile said it was not having any service issues. "T-Mobile's network is keeping our customers connected, and we've confirmed that our network is operating optimally," a spokesperson told NBC News. "However, due to Verizon's reported outage, our customers may not be able to reach someone with Verizon service at this time."

Microsoft

UK Police Blame Microsoft Copilot for Intelligence Mistake (theverge.com) 60

The chief constable of one of Britain's largest police forces has admitted that Microsoft's Copilot AI assistant made a mistake in a football (soccer) intelligence report. From a report: The report, which led to Israeli football fans being banned from a match last year, included a nonexistent match between West Ham and Maccabi Tel Aviv.

Copilot hallucinated the game and West Midlands Police included the error in its intelligence report without fact checking it. "On Friday afternoon I became aware that the erroneous result concerning the West Ham v Maccabi Tel Aviv match arose as result of a use of Microsoft Co Pilot [sic]," says Craig Guildford, chief constable of West Midlands Police, in a letter to the Home Affairs Committee earlier this week. Guildford previously denied in December that the West Midlands Police had used AI to prepare the report, blaming "social media scraping" for the error.

Science

Doubt Cast On Discovery of Microplastics Throughout Human Body (theguardian.com) 50

An anonymous reader quotes a report from the Guardian: High-profile studies reporting the presence of microplastics throughout the human body have been thrown into doubt by scientists who say the discoveries are probably the result of contamination and false positives. One chemist called the concerns "a bombshell." Studies claiming to have revealed micro and nanoplastics in the brain, testes, placentas, arteries and elsewhere were reported by media across the world, including the Guardian.

There is no doubt that plastic pollution of the natural world is ubiquitous, and present in the food and drink we consume and the air we breathe. But the health damage potentially caused by microplastics and the chemicals they contain is unclear, and an explosion of research has taken off in this area in recent years. However, micro- and nanoplastic particles are tiny and at the limit of today's analytical techniques, especially in human tissue. There is no suggestion of malpractice, but researchers told the Guardian of their concern that the race to publish results, in some cases by groups with limited analytical expertise, has led to rushed results and routine scientific checks sometimes being overlooked.

The Guardian has identified seven studies that have been challenged by researchers publishing criticism in the respective journals, while a recent analysis listed 18 studies that it said had not considered that some human tissue can produce measurements easily confused with the signal given by common plastics. There is an increasing international focus on the need to control plastic pollution but faulty evidence on the level of microplastics in humans could lead to misguided regulations and policies, which is dangerous, researchers say. It could also help lobbyists for the plastics industry to dismiss real concerns by claiming they are unfounded. While researchers say analytical techniques are improving rapidly, the doubts over recent high-profile studies also raise the questions of what is really known today and how concerned people should be about microplastics in their bodies.

Government

Senate Passes a Bill That Would Let Nonconsensual Deepfake Victims Sue (theverge.com) 63

The U.S. Senate unanimously passed the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act), giving victims of sexually explicit AI deepfakes the right to sue the individuals who created them. The Verge reports: The bill passed with unanimous consent -- meaning there was no roll-call vote, and no Senator objected to its passage on the floor Tuesday. It's meant to build on the work of the Take It Down Act, a law that criminalizes the distribution of nonconsensual intimate images (NCII) and requires social media platforms to promptly remove them. [...] Now the ball is again in the House leadership's court; if they decide to bring the bill to the floor, it will have to pass in order to reach the president's desk.
Power

Trump Says Microsoft To Make Changes To Curb Data Center Power Costs For Americans (cnbc.com) 42

An anonymous reader quotes a report from CNBC: President Donald Trump said in a social media post on Monday that Microsoft will announce changes to ensure that Americans won't see rising utility bills as the company builds more data centers to meet rising artificial intelligence demand. "I never want Americans to pay higher Electricity bills because of Data Centers," Trump wrote on Truth Social. "Therefore, my Administration is working with major American Technology Companies to secure their commitment to the American People, and we will have much to announce in the coming weeks."

[...] Trump congratulated Microsoft on its efforts to keep prices in check, suggesting that other companies will make similar commitments. "First up is Microsoft, who my team has been working with, and which will make major changes beginning this week to ensure that Americans don't 'pick up the tab' for their POWER consumption, in the form of paying higher Utility bills," Trump wrote on Monday. Utilities charged U.S. consumers 6% more for electricity in August from a year earlier, including in states with many data centers, CNBC reported in November.

Microsoft is paying close attention to the impact of its data centers on local residents. "I just want you to know we are doing everything we can, and I believe we're succeeding, in managing this issue well, so that you all don't have to pay more for electricity because of our presence," Brad Smith, the company's president and vice chair, said at a September town hall meeting in Wisconsin, where Microsoft is building an AI data center. While Microsoft is moving forward with some facilities, the company withdrew plans for a data center in Caledonia, Wisconsin, amid loud opposition to its efforts there. The project would have been located 20 miles away from a data center in the village of Mount Pleasant.

China

Viral Chinese App 'Are You Dead?' Checks On Those Who Live Alone (cybernews.com) 53

The viral Chinese app Are You Dead? (known as Sileme in Chinese) targets people who live alone by requiring regular check-ins and alerting an emergency contact if the user doesn't respond. It launched in May and is now the most downloaded paid app in China. Cybernews reports: Users need to check in with the app every two days by clicking a large button to confirm that they are alive. Otherwise, the app will inform the user's appointed emergency contact that they may be in trouble, Chinese state-run outlet Global Times reports. The app is marketed as a "safety companion" for those who live far from home or choose a solitary lifestyle.
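The app's internals aren't public, but the mechanism described is a classic dead-man's switch: record the time of the last check-in, and alert a designated contact once the two-day window passes without one. A minimal sketch in Python (the class, method, and contact names are hypothetical, not the app's):

```python
from datetime import datetime, timedelta

CHECK_IN_WINDOW = timedelta(days=2)   # the app's reported two-day interval

class DeadMansSwitch:
    """Tracks check-ins; flags the emergency contact when one is missed."""

    def __init__(self, emergency_contact):
        self.emergency_contact = emergency_contact
        self.last_check_in = datetime.now()

    def check_in(self):
        """Called when the user taps the button confirming they're alive."""
        self.last_check_in = datetime.now()

    def poll(self, now=None):
        """Run periodically by a scheduler; returns an alert when overdue."""
        now = now or datetime.now()
        if now - self.last_check_in > CHECK_IN_WINDOW:
            return f"Notify {self.emergency_contact}: user missed a check-in"
        return None
```

In practice the polling would run server-side on a schedule, with the alert delivered by SMS or push notification rather than a returned string.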

Initially launched as a free app, "Are You Dead?" now costs 8 yuan, equivalent to $1.15. Despite its growing popularity, the app has sparked criticism in China, where some said they were repulsed by the negative connotation of death. Some suggested the app should be renamed to "Are You Alive?" The app's creators told Chinese media that they will focus on improving the product, such as adding SMS notification features or a messaging function. Moreover, they will consider the criticism over the app's name.

The Internet

Cloudflare Threatens Italy Exit After $16.3M Fine For Refusing Piracy Blocks (x.com) 50

Cloudflare CEO Matthew Prince has threatened to withdraw free cybersecurity services from Italy's Milano-Cortina Winter Olympics and potentially exit the country after Italy's telecommunications regulator fined the company approximately 14 million euros for failing to comply with anti-piracy blocking orders. The penalty equals 1% of Cloudflare's global annual revenue, and is more than twice what the company earned from Italy in 2024.

Prince called Italy's Autorità per le Garanzie nelle Comunicazioni a "quasi-judicial body" administering a "scheme to censor the Internet" on behalf of "a shadowy cabal of European media elites." The fine stems from Cloudflare's refusal to comply with Italy's Piracy Shield law, which requires internet service providers and DNS operators to block sites within 30 minutes of receiving blocking requests from copyright holders. Prince said Cloudflare may discontinue free services for Italian users, remove servers from Italian cities and cancel plans to build an Italian office.
AI

Amazon's AI Tool Listed Products from Small Businesses Without Their Knowledge (msn.com) 40

Bloomberg reports on Amazon listings "automatically generated by an experimental AI tool" for stores that don't sell on Amazon.

Bloomberg notes that the listings "didn't always correspond to the correct product", leaving the stores to handle the complaints from angry customers: Between the Christmas and New Year holidays, small shop owners and artisans who had found their products listed on Amazon took to social media to compare notes and warn their peers... In interviews, six small shop owners said they found themselves unwittingly selling their products on Amazon's digital marketplace. Some, especially those who deliberately avoided Amazon, said they should have been asked for their consent. Others said it was ironic that Amazon was scouring the web for products with AI tools despite suing Perplexity AI Inc. for using similar technology to buy products on Amazon... Some retailers say the listings displayed the wrong product image or mistakenly showed wholesale pricing. Users of Shopify Inc.'s e-commerce tools said the system flagged Amazon's automated purchases as potentially fraudulent...

In a statement, Amazon spokesperson Maxine Tagay said sellers are free to opt out. Two Amazon initiatives — Shop Direct, which links out to make purchases on other retailers' sites, and Buy For Me, which duplicates listings and handles purchases without leaving Amazon — "are programs we're testing that help customers discover brands and products not currently sold in Amazon's store, while helping businesses reach new customers and drive incremental sales," she said in an emailed statement. "We have received positive feedback on these programs." Tagay didn't say why the sellers were enrolled without notifying them. She added that the Buy For Me selection features more than 500,000 items, up from about 65,000 at launch in April.

The article includes quotes from the owners of affected businesses.
  • A one-person company complained that "If suddenly there were 100 orders, I couldn't necessarily manage. When someone takes your proprietary, copyrighted works, I should be asked about that. This is my business. It's not their business."
  • One business owner said "I just don't want my products on there... It's like if Airbnb showed up and tried to put your house on the market without your permission."
  • One business owner complained "When things started to go wrong, there was no system set up by Amazon to resolve it. It's just 'We set this up for you, you should be grateful, you fix it.'" One Amazon representative even suggested they try opening a $39-a-month Amazon seller account.

Social Networks

Elon Musk: X's New Algorithm Will Be Made Open Source in Seven Days (msn.com) 90

"We will make the new X algorithm...open source in 7 days," Elon Musk posted Saturday on X.com. Musk says this is "including all code used to determine what organic and advertising posts are recommended to users," and "This will be repeated every 4 weeks, with comprehensive developer notes, to help you understand what changed."

Some context from Engadget: Musk has been making promises of open-sourcing the algorithm since his takeover of Twitter, and in 2023 published the code for the site's "For You" feed on GitHub. But the code wasn't all that revealing, leaving out key details, according to analyses at the time. And it hasn't been kept up to date.
Bloomberg also reported on Saturday's announcement: The billionaire didn't say why X was making its algorithm open source. He and the company have clashed several times with regulators over content being shown to users.

Some X users had previously complained that they were receiving fewer posts on the social media platform from people they follow. In October, Musk confirmed in a post on X that the company had found a "significant bug" in the platform's "For You" algorithm and pledged a fix. The company has also been working to incorporate more artificial intelligence into its recommendation algorithm for X, using Grok, Musk's artificial intelligence chatbot...

In September, Musk wrote that the goal was for X's recommendation engine to "be purely AI" and that the company would share its open source algorithm about every two weeks. "To the degree that people are seeing improvements in their feed, it is not due to the actions of specific individuals changing heuristics, but rather increasing use of Grok and other AI tools," Musk wrote in October. The company was working to have all of the more than 100 million daily posts published to X evaluated by Grok, which would then offer individual users the posts most likely to interest them, Musk wrote. "This will profoundly improve the quality of your feed." He added that the company was planning to roll out the new features by November.

Social Networks

AI-Powered Social Media App Hopes To Build More Purposeful Lives (msn.com) 32

A founder of Twitter and a founder of Pinterest are now working on "social media for people who hate social media," writes a Washington Post columnist.

"When I heard that this platform would harness AI to help us live more meaningful lives, I wanted to know more..." Their bid for redemption is West Co. — the Workshop for Emotional and Spiritual Technology Corporation — and the platform they're testing is called Tangle, a "purpose discovery tool" that uses AI to help users define their life purposes, then encourages them to set intentions toward achieving those purposes, reminds them periodically and builds a community of supporters to encourage steps toward meeting those intentions. "A lot of people, myself included, have been on autopilot," said Twitter co-founder Biz Stone. "If all goes well, we'll introduce a lot of people to the concept of turning off autopilot."

But will all go well? The entrepreneurs have been at it for two years, and they've scrapped three iterations before even testing them. They still don't have a revenue model. "This is a really hard thing to do," Stone admitted. "If we were a traditional start-up, we would have probably been folded by now." But the two men, with a combined net worth of at least hundreds of millions, and possibly billions, had the luxury of self-funding for a year, and now they have $29 million in seed funding led by Spark Capital...

[T]he project revolves around training existing AI models in "what good intentions and helpful purposes look like," explained Long Cheng, the founding designer. When you join Tangle, which is invitation-only until this spring at the earliest, the AI peruses your calendar, examines your photos, asks you questions and then produces "threads," or categories that define your life purpose. You're free to accept, reject or change the suggestions. It then encourages you to make "intentions" toward achieving your threads, and to add "reflections" when you experience something meaningful in your life. Users then receive encouragement from friends, or "supporters." A few of the "threads" on Tangle are about personal satisfaction (traveler, connoisseur), but the vast majority involve causes greater than self: family (partner, parent, sibling), community (caregiver, connector, guardian), service (volunteer, advocate, healer) and spirituality (seeker, believer). Even the work-related threads (mentor, leader) suggest a higher purpose.

The column includes this caveat. "I have no idea whether they will succeed. But as a columnist writing about how to keep our humanity in the 21st century, I believe it's important to focus on people who are at least trying..."

"Quite possibly, West Co. and the various other enterprises trying to nudge technology in a more humane direction will find that it doesn't work socially or economically — they don't yet have a viable product, after all — but it would be a noble failure."
AI

AI Is Intensifying a 'Collapse' of Trust Online, Experts Say (nbcnews.com) 60

Experts interviewed by NBC News warn that the rapid spread of AI-generated images and videos is accelerating an online trust breakdown, especially during fast-moving news events where context is scarce. From the report: President Donald Trump's Venezuela operation almost immediately spurred the spread of AI-generated images, old videos and altered photos across social media. On Wednesday, after an Immigration and Customs Enforcement officer fatally shot a woman in her car, many online circulated a fake, most likely AI-edited image of the scene that appears to be based on real video. Others used AI in attempts to digitally remove the mask of the ICE officer who shot her.

The confusion around AI content comes as many social media platforms, which pay creators for engagement, have given users incentives to recycle old photos and videos to ramp up emotion around viral news moments. The amalgam of misinformation, experts say, is creating a heightened erosion of trust online -- especially when it mixes with authentic evidence. "As we start to worry about AI, it will likely, at least in the short term, undermine our trust default -- that is, that we believe communication until we have some reason to disbelieve," said Jeff Hancock, founding director of the Stanford Social Media Lab. "That's going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces."

Though AI is the latest technology to spark concern about surging misinformation, similar trust breakdowns have cycled through history, from election misinformation in 2016 to the mass production of propaganda after the printing press was invented in the 1400s. Before AI, there was Photoshop, and before Photoshop, there were analog image manipulation techniques. Fast-moving news events are where manipulated media have the biggest effect, because they fill in for the broad lack of information, Hancock said.
"In terms of just looking at an image or a video, it will essentially become impossible to detect if it's fake. I think that we're getting close to that point, if we're not already there," said Hancock. "The old sort of AI literacy ideas of 'let's just look at the number of fingers' and things like that are likely to go away."

Renee Hobbs, a professor of communication studies at the University of Rhode Island, added: "If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response. It's a coping mechanism. And then when people stop caring about whether something's true or not, then the danger is not just deception, but actually it's worse than that. It's the whole collapse of even being motivated to seek truth."
Media

Microsoft Windows Media Player Stops Serving Up CD Album Info (theregister.com) 59

An anonymous reader shares a report: Microsoft is celebrating the resurgence of interest in physical media in the only way it knows how... by halting the Windows Media Player metadata service. Readers of a certain vintage will remember inserting a CD into their PC and watching Windows Media Player populate with track listings and album artwork. No more.

Sometime before Christmas, the metadata servers stopped working, and on Windows 10 and 11 the result is the same: album not found. We tried this out at Vulture Central on some sacrificial Windows devices that had media drives and can confirm that a variety of compact discs were met with stony indifference. Some 90s cheese that was successfully ripped (for personal use, of course) decades ago? No longer recognized. A reissue of something achingly hip? Also not recognized.

Privacy

Illinois Health Department Exposed Over 700,000 Residents' Personal Data For Years (techcrunch.com) 14

The Illinois Department of Human Services disclosed that a misconfigured internal mapping website exposed sensitive personal data for more than 700,000 Illinois residents for over four years, from April 2021 to September 2025. Officials say they can't confirm whether the publicly accessible data was ever viewed. TechCrunch reports: Officials said the exposed data included personal information on 672,616 individuals who are Medicaid and Medicare Savings Program recipients. The data included their addresses, case numbers, and demographic data -- but not individuals' names. The exposed data also included names, addresses, case statuses, and other information relating to 32,401 individuals in receipt of services from the department's Division of Rehabilitation Services.
Social Networks

Iran in 'Digital Blackout' as Tehran Throttles Mobile Internet Access (thenationalnews.com) 45

An anonymous reader shares a report: Internet access available through mobile devices in Iran appears to be limited, according to several social media accounts that routinely track such developments. Cloudflare Radar, which monitors internet traffic on behalf of the internet infrastructure firm Cloudflare, said on Thursday that IPv6 (Internet Protocol version 6), a standard widely used for mobile infrastructure, was affected.

"IPv6 address space in Iran dropped by 98.5 per cent, concurrent with IPv6 traffic share dropping from 12 per cent to 1.8 per cent, as the government selectively blocks internet access amid protests," read Cloudflare Radar's social post. NetBlocks, which tracks internet access and digital rights around the world, also confirmed it was seeing problems with connectivity through various internet providers in Iran. "Live network data show Tehran and other parts of Iran are now entering a digital blackout," NetBlocks posted on X.

AI

An AI-Generated NWS Map Invented Fake Towns In Idaho (washingtonpost.com) 42

The National Weather Service pulled an AI-generated forecast graphic after it hallucinated fake town names in Idaho. "The blunder -- not the first of its kind to be posted by the NWS in the past year -- comes as the agency experiments with a wide range of AI uses, from advanced forecasting to graphic design," reports the Washington Post. "Experts worry that without properly trained officials, mistakes could erode trust in the agency and the technology." From the report: At first glance, there was nothing out of the ordinary about Saturday's wind forecast for Camas Prairie, Idaho. "Hold onto your hats!" said a social media post from the local weather office in Missoula, Montana. "Orangeotild" had a 10 percent chance of high winds, while just south, "Whata Bod" would be spared larger gusts. The problem? Neither of those places exist. Nor do a handful of the other spots marked on the National Weather Service's forecast graphic, riddled with spelling and geographical errors that the agency confirmed were linked to the use of generative AI.

NWS said AI is not commonly used for public-facing content, nor is its use prohibited. The agency said it is exploring ways to employ AI to inform the public and acknowledged mistakes have been made. "Recently, a local office used AI to create a base map to display forecast information, however the map inadvertently displayed illegible city names," said NWS spokeswoman Erica Grow Cei. "The map was quickly corrected and updated social media posts were distributed."

A post with the inaccurate map was deleted Monday, the same day The Washington Post contacted officials with questions about the image. Cei added that "NWS is exploring strategic ways to continue optimizing our service delivery for Americans, including the implementation of AI where it makes sense. NWS will continue to carefully evaluate results in cases where AI is implemented to ensure accuracy and efficiency, and will discontinue use in scenarios where AI is not effective." A Nov. 25 tweet out of the Rapid City, South Dakota, office also had misspelled locations and the Google Gemini logo in its forecast. NWS did not confirm whether the Rapid City image was made with generative AI.
