Privacy

Ukraine Bans Official Use of Telegram App Over Fears of Russian Spying (reuters.com) 33

Ukraine has banned use of Telegram on official devices used by state officials, military personnel and critical workers because it believes its enemy Russia can spy on both messages and users, a top security body said on Friday. Reuters: The National Security and Defence Council announced the restrictions after Kyrylo Budanov, head of Ukraine's GUR military intelligence agency, presented the Council with evidence of Russian special services' ability to snoop on the platform, it said in a statement. But Andriy Kovalenko, head of the security council's centre on countering disinformation, posted on Telegram that the restrictions apply only to official devices, not personal phones.

Telegram is heavily used in both Ukraine and Russia and has become a critical source of information since the Russian invasion of Ukraine in February 2022. But Ukrainian security officials had repeatedly voiced concerns about its use during the war. Based in Dubai, Telegram was founded by Russian-born Pavel Durov, who left Russia in 2014 after refusing to comply with demands to shut down opposition communities on his social media platform VKontakte, which he has sold.

Security

Disney To Stop Using Salesforce-Owned Slack After Hack Exposed Company Data (reuters.com) 22

Disney plans to transition away from using Slack as its companywide collaboration tool after a hacking group leaked over a terabyte of data from the platform. Many teams at Disney have already begun moving to other enterprise-wide tools, with the full transition expected later this year. Reuters reports: Hacking group NullBulge had published data from thousands of Slack channels at the entertainment giant, including computer code and details about unreleased projects, the Journal reported in July. The data spans more than 44 million messages from Disney's Slack workplace communications tool, WSJ reported earlier this month. The company had said in August it was investigating an unauthorized release of over a terabyte of data from one of its communication systems.
Security

Google Passkeys Can Now Sync Across Devices On Multiple Platforms (engadget.com) 29

Google is updating its Password Manager to allow users to sync passkeys across multiple devices, including Windows, macOS, Linux, and Android, with iOS and ChromeOS support coming soon. Engadget reports: Once saved, the passkey automatically syncs across other devices using Google Password Manager. The company says this data is end-to-end encrypted, so it'll be pretty tough for someone to go in and steal credentials. [...] Today's update also brings another layer of security to passkeys on Google Password Manager. The company has introduced a six-digit PIN that will be required when using passkeys on a new device. This would likely stop nefarious actors from logging into an account even if they've somehow gotten ahold of the digital credentials. Just don't leave the PIN lying on a sheet of paper directly next to the computer.
Privacy

FTC Study Finds 'Vast Surveillance' of Social Media Users (nytimes.com) 59

The Federal Trade Commission said on Thursday it found that several social media and streaming services engaged in a "vast surveillance" of consumers, including minors, collecting and sharing more personal information than most users realized. From a report: The findings come from a study of how nine companies -- including Meta, YouTube and TikTok -- collected and used consumer data. The sites, which mostly offer free services, profited off the data by feeding it into advertising that targets specific users by demographics, according to the report. The companies also failed to protect users, especially children and teens.

The F.T.C. said it began its study nearly four years ago to offer the first holistic look into the opaque business practices of some of the biggest online platforms that have created multibillion-dollar ad businesses using consumer data. The agency said the report showed the need for federal privacy legislation and restrictions on how companies collect and use data. "Surveillance practices can endanger people's privacy, threaten their freedoms, and expose them to a host of harms, from identity theft to stalking," said Lina Khan, the F.T.C.'s chair, in a statement.

AI

Snapchat Reserves the Right To Use AI-Generated Images of Your Face In Ads 29

Snapchat's terms of service for its "My Selfie" tool reserve the right to use users' AI-generated images in ads. While users can opt out by disabling the "See My Selfie in Ads" feature, it is enabled by default. 404 Media's Emanuel Maiberg reports: A support page on the Snapchat website titled "What is My Selfie?" explains further: "You'll take selfies with your Snap camera or select images from your camera roll. These images will be used to understand what you look like to enable you, Snap and your friends to generate novel images of you. If you're uploading images from the camera roll, only add images of yourself," Snapchat's site says. "After you've successfully onboarded, you may have access to some features powered by My Selfie, like Cameos stickers and AI Snaps. We are constantly adding features and functionality so stay tuned for more My Selfie features."

After seeing the popup, I searched for instances of people getting ads featuring their own face on Snapchat, and found this thread on the r/Privacy Reddit community where a user claimed exactly this happened to them. In an email to 404 Media, Snapchat said that it couldn't confirm or deny whether this user was served an ad featuring their face, but if they did, the ad was not using My Selfie images. Snapchat also said that it investigated the claim in the Reddit thread and that the advertiser, yourdreamdegree.com, has a history of advertising on Snapchat and that Snapchat believes the ad in question does not violate any of its policies. "The photo that was used in the advertisement is clearly AI, however, it is very clearly me," the Reddit user said. "It has my face, my hair, the clothing I wear, and even has my lamp & part of a painting on my wall in the background. I have no idea how they got photos of me to be able to generate this ad."
Snapchat confirmed the news but emphasized that advertisers do not have access to Snapchat users' generative AI data. "You are correct that our terms do reserve the right, in the future, to offer advertising based on My Selfies in which a Snapchatter can see themselves in a generated image delivered to them," a Snapchat spokesperson said. "As explained in the onboarding modal, Snapchatters have full control over this, and can turn this on and off in My Selfie Settings at any time."
Privacy

Chinese Spies Spent Months Inside Aerospace Engineering Firm's Network Via Legacy IT (theregister.com) 16

The Register's Jessica Lyons reports: Chinese state-sponsored spies have been spotted inside a global engineering firm's network, having gained initial entry using an admin portal's default credentials on an IBM AIX server. In an exclusive interview with The Register, Binary Defense's Director of Security Research John Dwyer said the cyber snoops first compromised one of the victim's three unmanaged AIX servers in March, and remained inside the US-headquartered manufacturer's IT environment for four months while poking around for more boxes to commandeer. It's a tale that should be a warning to those with long- or almost-forgotten machines connected to their networks; those with shadow IT deployments; and those with unmanaged equipment. While the rest of your environment is protected by whatever threat detection you have in place, these legacy services are perfect starting points for miscreants.

This particular company, which Dwyer declined to name, makes components for public and private aerospace organizations and other critical sectors, including oil and gas. The intrusion has been attributed to an unnamed People's Republic of China team, whose motivation appears to be espionage and blueprint theft. It's worth noting the Feds have issued multiple security alerts this year about Beijing's spy crews including APT40 and Volt Typhoon, which has been accused of burrowing into American networks in preparation for destructive cyberattacks.

After discovering China's agents within its network in August, the manufacturer alerted local and federal law enforcement agencies and worked with government cybersecurity officials on attribution and mitigation, we're told. Binary Defense was also called in to investigate. Before being caught and subsequently booted off the network, the Chinese intruders uploaded a web shell and established persistent access, thus giving them full, remote access to the IT network -- putting the spies in a prime position for potential intellectual property theft and supply-chain manipulation. If a compromised component makes it out of the supply chain and into machinery in production, whoever is using that equipment or vehicle will end up feeling the brunt when that component fails, goes rogue, or goes awry.

"The scary side of it is: With our supply chain, we have an assumed risk chain, where whoever is consuming the final product -- whether it is the government, the US Department of the Defense, school systems â" assumes all of the risks of all the interconnected pieces of the supply chain," Dwyer told The Register. Plus, he added, adversarial nations are well aware of this, "and the attacks continually seem to be shifting left." That is to say, attempts to meddle with products are happening earlier and earlier in the supply-chain pipeline, thus affecting more and more victims and being more deep-rooted in systems. Breaking into a classified network to steal designs or cause trouble is not super easy. "But can I get into a piece of the supply chain at a manufacturing center that isn't beholden to the same standards and accomplish my goals and objectives?" Dwyer asked. The answer, of course, is yes. [...]

AI

Ellison Declares Oracle 'All In' On AI Mass Surveillance 114

Oracle cofounder Larry Ellison envisions AI as the backbone of a new era of mass surveillance, positioning Oracle as a key player in AI infrastructure through its unique networking architecture and partnerships with AWS and Microsoft. The Register reports: Ellison made the comments near the end of an hour-long chat at the Oracle financial analyst meeting last week during a question and answer session in which he painted Oracle as the AI infrastructure player to beat in light of its recent deals with AWS and Microsoft. Many companies, Ellison touted, build AI models at Oracle because of its "unique networking architecture," which dates back to the database era.

"AI is hot, and databases are not," he said, making Oracle's part of the puzzle less sexy, but no less important, at least according to the man himself - AI systems have to have well-organized data, or else they won't be that valuable. The fact that some of the biggest names in cloud computing (and Elon Musk's Grok) have turned to Oracle to run their AI infrastructure means it's clear that Oracle is doing something right, claimed now-CTO Ellison. "If Elon and Satya [Nadella] want to pick us, that's a good sign - we have tech that's valuable and differentiated," Ellison said, adding: One of the ideal uses of that differentiated offering? Maximizing AI's pubic security capabilities.

"The police will be on their best behavior because we're constantly watching and recording everything that's going on," Ellison told analysts. He described police body cameras that were constantly on, with no ability for officers to disable the feed to Oracle. Even requesting privacy for a bathroom break or a meal only meant sections of recording would require a subpoena to view - not that the video feed was ever stopped. AI would be trained to monitor officer feeds for anything untoward, which Ellison said could prevent abuse of police power and save lives. [...] "Citizens will be on their best behavior because we're constantly recording and reporting," Ellison added, though it's not clear what he sees as the source of those recordings - police body cams or publicly placed security cameras. "There are so many opportunities to exploit AI," he said.
Privacy

23andMe To Pay $30 Million In Genetics Data Breach Settlement (bleepingcomputer.com) 36

23andMe has agreed to pay $30 million to settle a lawsuit over a data breach that exposed the personal information of 6.4 million customers in 2023. BleepingComputer reports: The proposed class action settlement (PDF), filed Thursday in a San Francisco federal court and awaiting judicial approval, includes cash payments for affected customers, which will be distributed within ten days of final approval. "23andMe believes the settlement is fair, adequate, and reasonable," the company said in a memorandum filed (PDF) Friday.

23andMe has also agreed to strengthen its security protocols, including protections against credential-stuffing attacks, mandatory two-factor authentication for all users, and annual cybersecurity audits. The company must also create and maintain a data breach incident response plan and stop retaining personal data for inactive or deactivated accounts. An updated Information Security Program will also be provided to all employees during annual training sessions.
"23andMe denies the claims and allegations set forth in the Complaint, denies that it failed to properly protect the Personal Information of its consumers and users, and further denies the viability of Settlement Class Representatives' claims for statutory damages," the company said in the filed preliminary settlement.

"23andMe denies any wrongdoing whatsoever, and this Agreement shall in no event be construed or deemed to be evidence of or an admission or concession on the part of 23andMe with respect to any claim of any fault or liability or wrongdoing or damage whatsoever."
Privacy

Apple Vision Pro's Eye Tracking Exposed What People Type 7

An anonymous reader quotes a report from Wired: You can tell a lot about someone from their eyes. They can indicate how tired you are, the type of mood you're in, and potentially provide clues about health problems. But your eyes could also leak more secretive information: your passwords, PINs, and messages you type. Today, a group of six computer scientists are revealing a new attack against Apple's Vision Pro mixed reality headset where exposed eye-tracking data allowed them to decipher what people entered on the device's virtual keyboard. The attack, dubbed GAZEploit and shared exclusively with WIRED, allowed the researchers to successfully reconstruct passwords, PINs, and messages people typed with their eyes. "Based on the direction of the eye movement, the hacker can determine which key the victim is now typing," says Hanqiu Wang, one of the leading researchers involved in the work. They identified the correct letters people typed in passwords 77 percent of the time within five guesses and 92 percent of the time in messages.

To be clear, the researchers did not gain access to Apple's headset to see what they were viewing. Instead, they worked out what people were typing by remotely analyzing the eye movements of a virtual avatar created by the Vision Pro. This avatar can be used in Zoom calls, Teams, Slack, Reddit, Tinder, Twitter, Skype, and FaceTime. The researchers alerted Apple to the vulnerability in April, and the company issued a patch to stop the potential for data to leak at the end of July. It is the first attack to exploit people's "gaze" data in this way, the researchers say. The findings underline how people's biometric data -- information and measurements about your body -- can expose sensitive information and be used as part of the burgeoning surveillance industry.

The GAZEploit attack consists of two parts, says Zhan, one of the lead researchers. First, the researchers created a way to identify when someone wearing the Vision Pro is typing by analyzing the 3D avatar they are sharing. For this, they trained a recurrent neural network, a type of deep learning model, with recordings of 30 people's avatars while they completed a variety of typing tasks. When someone is typing using the Vision Pro, their gaze fixates on the key they are likely to press, the researchers say, before quickly moving to the next key. "When we are typing our gaze will show some regular patterns," Zhan says. Wang says these patterns are more common during typing than if someone is browsing a website or watching a video while wearing the headset. "During tasks like gaze typing, the frequency of your eye blinking decreases because you are more focused," Wang says. In short: Looking at a QWERTY keyboard and moving between the letters is a pretty distinct behavior.

The second part of the research, Zhan explains, uses geometric calculations to work out where someone has positioned the keyboard and the size they've made it. "The only requirement is that as long as we get enough gaze information that can accurately recover the keyboard, then all following keystrokes can be detected." Combining these two elements, they were able to predict the keys someone was likely to be typing. In a series of lab tests, they didn't have any knowledge of the victim's typing habits, speed, or know where the keyboard was placed. However, the researchers could predict the correct letters typed, in a maximum of five guesses, with 92.1 percent accuracy in messages, 77 percent of the time for passwords, 73 percent of the time for PINs, and 86.1 percent of occasions for emails, URLs, and webpages. (On the first guess, the letters would be right between 35 and 59 percent of the time, depending on what kind of information they were trying to work out.) Duplicate letters and typos add extra challenges.
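
The two-stage pipeline described above reduces, at its final step, to a small piece of geometry: cast the estimated gaze ray, intersect it with the recovered virtual keyboard plane, and map the hit point to the nearest key. The sketch below illustrates that idea only and is not the researchers' code; the plane position, key width, and single-row QWERTY layout are hypothetical stand-ins for the values GAZEploit estimates from the avatar footage.

```python
"""Minimal ray/plane sketch of gaze-to-key mapping (illustrative assumptions only)."""
import numpy as np

# Hypothetical keyboard model: one QWERTY row on a flat plane facing the user.
KEY_ROW = "QWERTYUIOP"
KEY_WIDTH = 0.04                                  # metres per key (assumed)
PLANE_ORIGIN = np.array([-0.18, -0.10, 0.60])     # left edge of the row (assumed)
PLANE_NORMAL = np.array([0.0, 0.0, -1.0])         # keyboard faces the viewer
ROW_DIRECTION = np.array([1.0, 0.0, 0.0])         # keys run left to right


def gaze_to_key(eye_pos, gaze_dir):
    """Intersect the gaze ray with the keyboard plane and return the key it lands on."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = gaze_dir @ PLANE_NORMAL
    if abs(denom) < 1e-6:
        return None                               # gaze parallel to the keyboard
    t = ((PLANE_ORIGIN - eye_pos) @ PLANE_NORMAL) / denom
    if t <= 0:
        return None                               # keyboard is behind the viewer
    hit = eye_pos + t * gaze_dir
    offset = (hit - PLANE_ORIGIN) @ ROW_DIRECTION  # distance along the key row
    index = int(offset // KEY_WIDTH)
    return KEY_ROW[index] if 0 <= index < len(KEY_ROW) else None


if __name__ == "__main__":
    eye = np.array([0.0, 0.0, 0.0])
    print(gaze_to_key(eye, np.array([-0.10, -0.10, 0.60])))   # lands on 'E' in this toy setup
```

The hard parts of the real attack -- detecting typing sessions with the recurrent network and recovering the keyboard's actual position and scale from avatar video -- happen before this step; once those estimates exist, keystroke inference is essentially this intersection test repeated per fixation.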
Privacy

How SEC Staffers' Mobile Phones Can Signal an Imminent Stock Price Drop 34

Mobile phone location data has linked site visits by US securities watchdogs to the headquarters of companies with measurable drops in their share prices -- even when no enforcement action is taken. From a report: When insiders sold shares right around a non-public visit by staff from the Securities and Exchange Commission, they avoided average losses of 4.9 per cent in the three months after the visit, according to a study led by researchers at four Midwestern universities. By matching commercially available data with share price moves, the study offers a window into the secretive world of securities enforcement beyond publicly announced cases. It also raises questions about the rules around insider trading.

"Maybe we should be thinking about what the rules are when the SEC shows up," said Marcus Painter, assistant professor of finance at Saint Louis University and one of the authors. The research used geolocation data to identify mobile phones that spent significant amounts of time at the SEC's various offices around the country. They then tracked those phones to corporate headquarters around the world in the 12-month period right before Covid-19 lockdowns led to extensive working from home.
AI

Facebook Admits To Scraping Every Australian Adult User's Public Photos and Posts To Train AI, With No Opt-out Option (abc.net.au) 56

Facebook has admitted that it scrapes the public photos, posts and other data of Australian adult users to train its AI models and provides no opt-out option, even though it allows people in the European Union to refuse consent. From a report: Meta's global privacy director Melinda Claybaugh was pressed at an inquiry as to whether the social media giant was hoovering up the data of all Australians in order to build its generative artificial intelligence tools, and initially rejected that claim. Labor senator Tony Sheldon asked whether Meta had used Australian posts from as far back as 2007 to feed its AI products, to which Ms Claybaugh responded "we have not done that".

But that was quickly challenged by Greens senator David Shoebridge.

Shoebridge: "The truth of the matter is that unless you have consciously set those posts to private since 2007, Meta has just decided that you will scrape all of the photos and all of the texts from every public post on Instagram or Facebook since 2007, unless there was a conscious decision to set them on private. That's the reality, isn't it?
Claybaugh: "Correct."

Ms Claybaugh added that accounts of people under 18 were not scraped, but when asked by Senator Sheldon whether public photos of his own children on his account would be scraped, Ms Claybaugh acknowledged they would.

Australia

Australia Plans Age Limit To Ban Children From Social Media (yahoo.com) 99

An anonymous reader quotes a report from Agence France-Presse: Australia will ban children from using social media with a minimum age limit as high as 16, the prime minister said Tuesday, vowing to get kids off their devices and "onto the footy fields." Federal legislation to keep children off social media will be introduced this year, Anthony Albanese said, describing the impact of the sites on young people as a "scourge." The minimum age for children to log into sites such as Facebook, Instagram, and TikTok has not been decided but is expected to be between 14 and 16 years, Albanese said. The prime minister said his own preference would be a block on users aged below 16. An age verification trial to test various approaches is being conducted over the coming months, the centre-left leader said. [...]

It is not even clear that the technology exists to reliably enforce such bans, said the University of Melbourne's associate professor in computing and information technology, Toby Murray. "The government is currently trialling age assurance technology. But we already know that present age verification methods are unreliable, too easy to circumvent, or risk user privacy," he said. But the prime minister said parents expected a response to online bullying and the access social media gave to harmful material. "These social media companies think they're above everyone," he told a radio interviewer. "Well, they have a social responsibility and at the moment, they're not exercising it. And we're determined to make sure that they do," he said.

Privacy

The NSA Has a Podcast (wired.com) 14

Steven Levy, writing for Wired: My first story for WIRED -- yep, 31 years ago -- looked at a group of "crypto rebels" who were trying to pry strong encryption technology from the government-classified world and send it into the mainstream. Naturally I attempted to speak to someone at the National Security Agency for comment and ideally get a window into its thinking. Unsurprisingly, that was a no-go, because the NSA was famous for its reticence. Eventually we agreed that I could fax (!) a list of questions. In return I got an unsigned response in unhelpful bureaucratese that didn't address my queries. Even that represented a loosening of what once was a total blackout on anything having to do with this ultra-secretive intelligence agency. For decades after its post-World War II founding, the government revealed nothing, not even the name, of this agency and its activities. Those in the know referred to it as "No Such Agency."

In recent years, the widespread adoption of encryption technology and the vital need for cybersecurity have led to more openness. Its directors began to speak in public; in 2012, NSA director Keith Alexander actually keynoted Defcon. I'd spent the entire 1990s lobbying to visit the agency for my book Crypto; in 2013, I finally crossed the threshold of its iconic Fort Meade headquarters for an on-the-record conversation with officials, including Alexander. NSA now has social media accounts on Twitter, Instagram, Facebook. And there is a form on the agency website for podcasters to request guest appearances by an actual NSA-ite.

So it shouldn't be a total shock that NSA is now doing its own podcast. You don't need to be an intelligence agency to know that pods are a unique way to tell stories and hold people's attention. The first two episodes of the seven-part season dropped this week. It's called No Such Podcast, earning some self-irony points from the get-go. In keeping with the openness vibe, the NSA granted me an interview with an official in charge of the project -- one of the de facto podcast producers, a title that apparently is still not an official NSA job posting. Since NSA still gotta NSA, I can't use this person's name. But my source did point out that in the podcast itself, both the hosts and the guests -- who are past and present agency officials -- speak under their actual identities.

Government

Is the Tech World Now 'Central' to Foreign Policy? (wired.com) 41

Wired interviews America's foreign policy chief, Secretary of State Antony Blinken, about U.S. digital policies, starting with a new "cybersecurity bureau" created in 2022 (which Wired previously reported includes "a crash course in cybersecurity, telecommunications, privacy, surveillance, and other digital issues.") Blinken responds: Look, what I've seen since coming back to the State Department three and a half years ago is that everything happening in the technological world and in cyberspace is increasingly central to our foreign policy. There's almost a perfect storm that's come together over the last few years, several major developments that have really brought this to the forefront of what we're doing and what we need to do. First, we have a new generation of foundational technologies that are literally changing the world all at the same time — whether it's AI, quantum, microelectronics, biotech, telecommunications. They're having a profound impact, and increasingly they're converging and feeding off of each other.

Second, we're seeing that the line between the digital and physical worlds is evaporating, erasing. We have cars, ports, hospitals that are, in effect, huge data centers. They're big vulnerabilities. At the same time, we have increasingly rare materials that are critical to technology and fragile supply chains. In each of these areas, the State Department is taking action. We have to look at everything in terms of "stacks" — the hardware, the software, the talent, and the norms, the rules, the standards by which this technology is used.

Besides setting up an entire new Bureau of Cyberspace and Digital Policy — and the bureaus are really the building blocks in our department — we've now trained more than 200 cybersecurity and digital officers, people who are genuinely expert. Every one of our embassies around the world will have at least one person who is truly fluent in tech and digital policy. My goal is to make sure that across the entire department we have basic literacy — ideally fluency — and even, eventually, mastery. All of this to make sure that, as I said, this department is fit for purpose across the entire information and digital space.

Wired notes it was Blinken's Department that discovered China's 2023 breach of Microsoft systems. And on the emerging issue of AI, Blinken cites "incredible work done by the White House to develop basic principles with the foundational companies." The voluntary commitments that they made, the State Department has worked to internationalize those commitments. We have a G7 code of conduct — the leading democratic economies in the world — all agreeing to basic principles with a focus on safety. We managed to get the very first resolution ever on artificial intelligence through the United Nations General Assembly — 192 countries also signing up to basic principles on safety and a focus on using AI to advance sustainable development goals on things like health, education, climate. We also have more than 50 countries that have signed on to basic principles on the responsible military use of AI. The goal here is not to have a world that is bifurcated in any way. It's to try to bring everyone together.
Privacy

Signal is More Than Encrypted Messaging. It Wants to Prove Surveillance Capitalism Is Wrong (wired.com) 70

Slashdot reader echo123 shared a new article from Wired titled "Signal Is More Than Encrypted Messaging. Under Meredith Whittaker, It's Out to Prove Surveillance Capitalism Wrong." ("On its 10th anniversary, Signal's president wants to remind you that the world's most secure communications platform is a nonprofit. It's free. It doesn't track you or serve you ads. It pays its engineers very well. And it's a go-to app for hundreds of millions of people.") Ten years ago, WIRED published a news story about how two little-known, slightly ramshackle encryption apps called RedPhone and TextSecure were merging to form something called Signal. Since that July in 2014, Signal has transformed from a cypherpunk curiosity — created by an anarchist coder, run by a scrappy team working in a single room in San Francisco, spread word-of-mouth by hackers competing for paranoia points — into a full-blown, mainstream, encrypted communications phenomenon... Billions more use Signal's encryption protocols integrated into platforms like WhatsApp...

But Signal is, in many ways, the exact opposite of the Silicon Valley model. It's a nonprofit funded by donations. It has never taken investment, makes its product available for free, has no advertisements, and collects virtually no information on its users — while competing with tech giants and winning... Signal stands as a counterfactual: evidence that venture capitalism and surveillance capitalism — hell, capitalism, period — are not the only paths forward for the future of technology.

Over its past decade, no leader of Signal has embodied that iconoclasm as visibly as Meredith Whittaker. Signal's president since 2022 is one of the world's most prominent tech critics: When she worked at Google, she led walkouts to protest its discriminatory practices and spoke out against its military contracts. She cofounded the AI Now Institute to address ethical implications of artificial intelligence and has become a leading voice for the notion that AI and surveillance are inherently intertwined. Since she took on the presidency at the Signal Foundation, she has come to see her central task as working to find a long-term taproot of funding to keep Signal alive for decades to come — with zero compromises or corporate entanglements — so it can serve as a model for an entirely new kind of tech ecosystem...

Meredith Whittaker: "The Signal model is going to keep growing, and thriving and providing, if we're successful. We're already seeing Proton [a startup that offers end-to-end encrypted email, calendars, note-taking apps, and the like] becoming a nonprofit. It's the paradigm shift that's going to involve a lot of different forces pointing in a similar direction."

Key quotes from the interview:
  • "Given that governments in the U.S. and elsewhere have not always been uncritical of encryption, a future where we have jurisdictional flexibility is something we're looking at."
  • "It's not by accident that WhatsApp and Apple are spending billions of dollars defining themselves as private. Because privacy is incredibly valuable. And who's the gold standard for privacy? It's Signal."
  • "AI is a product of the mass surveillance business model in its current form. It is not a separate technological phenomenon."
  • "...alternative models have not received the capital they need, the support they need. And they've been swimming upstream against a business model that opposes their success. It's not for lack of ideas or possibilities. It's that we actually have to start taking seriously the shifts that are going to be required to do this thing — to build tech that rejects surveillance and centralized control — whose necessity is now obvious to everyone."

Security

SpyAgent Android Malware Steals Your Crypto Recovery Phrases From Images 32

SpyAgent is a new Android malware that uses optical character recognition (OCR) to steal cryptocurrency wallet recovery phrases from screenshots stored on mobile devices, allowing attackers to hijack wallets and steal funds. The malware primarily targets South Korea but poses a growing threat as it expands to other regions and possibly iOS. BleepingComputer reports: A malware operation discovered by McAfee was traced back to at least 280 APKs distributed outside of Google Play using SMS or malicious social media posts. This malware can use OCR to recover cryptocurrency recovery phrases from images stored on an Android device, making it a significant threat. [...] Once it infects a new device, SpyAgent begins sending the following sensitive information to its command and control (C2) server:

- Victim's contact list, likely for distributing the malware via SMS originating from trusted contacts.
- Incoming SMS messages, including those containing one-time passwords (OTPs).
- Images stored on the device to use for OCR scanning.
- Generic device information, likely for optimizing the attacks.

SpyAgent can also receive commands from the C2 to change the sound settings or send SMS messages, likely used to send phishing texts to distribute the malware. McAfee found that the operators of the SpyAgent campaign did not follow proper security practices in configuring their servers, allowing the researchers to gain access to them. Admin panel pages, as well as files and data stolen from victims, were easily accessible, allowing McAfee to confirm that the malware had claimed multiple victims. The stolen images are processed and OCR-scanned on the server side and then organized on the admin panel accordingly to allow easy management and immediate utilization in wallet hijack attacks.
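
The core technique the article describes -- OCR plus a wordlist match -- can be sketched from the defensive side: scanning your own screenshots for anything that looks like a recovery phrase before an infostealer does. This is an assumption-laden illustration, not SpyAgent's code; it presumes the Tesseract OCR engine is installed along with the pytesseract and Pillow packages, and that bip39_words.txt is a local copy of the standard 2048-word BIP-39 English wordlist.

```python
"""Defensive sketch: flag screenshots that appear to contain a BIP-39 recovery phrase."""
import sys
from pathlib import Path

import pytesseract          # requires the Tesseract binary on the system
from PIL import Image

# Load the BIP-39 wordlist (one word per line) into a set for fast membership tests.
BIP39_WORDS = set(Path("bip39_words.txt").read_text().split())


def looks_like_seed_phrase(image_path, threshold=12):
    """Return True if OCR finds at least `threshold` wordlist words in the image,
    roughly the footprint of a 12- or 24-word recovery phrase."""
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    tokens = (token.strip(".,;:()[]") for token in text.split())
    hits = sum(1 for token in tokens if token in BIP39_WORDS)
    return hits >= threshold


if __name__ == "__main__":
    for path in sys.argv[1:]:
        if looks_like_seed_phrase(path):
            print(f"WARNING: {path} may contain a wallet recovery phrase")
```

Running something like this over a screenshots folder highlights exactly the images such malware would find valuable; the safer habit is simply never to photograph or screenshot a recovery phrase at all.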
Technology

Smartphone Firm Born From Essential's Ashes is Shutting Down (androidauthority.com) 3

An anonymous reader shares a report: It's been a rough week for OSOM Products. The company has been embroiled in legal controversy stemming from a lawsuit filed by a former executive. Now, Android Authority has learned that the company is effectively shutting down later this week. OSOM Products was formed in 2020 following the disbanding of Essential, a smartphone startup led by Andy Rubin, the founder of Android.

Essential collapsed following the poor sales of its first smartphone, the Essential Phone, as well as a loss of confidence in Rubin due to allegations of sexual misconduct at his previous stint at Google. Although Essential as a company was on its way out after Rubin's departure, many of its most talented hardware designers and software engineers remained at the company, looking for another opportunity to build something new. In 2020, the former head of R&D at Essential, Jason Keats, along with several other former executives and employees came together to form OSOM, which stands for "Out of Sight, Out of Mind." The name reflected their desire to create privacy-focused products such as the OSOM Privacy Cable, a USB-C cable with a switch to disable data signaling, and the OSOM OV1, an Android smartphone with lots of privacy and security-focused features.

Privacy

Telegram Allows Private Chat Reports After Founder's Arrest (techcrunch.com) 48

An anonymous reader shares a report: Telegram has quietly updated its policy to allow users to report private chats to its moderators following the arrest of founder Pavel Durov in France over "crimes committed by third parties" on the platform. [...] The Dubai-headquartered company has additionally edited its FAQ page, removing two sentences that previously emphasized its privacy stance on private chats. The earlier version had stated: "All Telegram chats and group chats are private amongst their participants. We do not process any requests related to them."
Privacy

Leaked Disney Data Reveals Financial and Strategy Secrets (msn.com) 48

An anonymous reader shares a report: Passport numbers for a group of Disney cruise line workers. Disney+ streaming revenue. Sales of Genie+ theme park passes. The trove of data from Disney that was leaked online by hackers earlier this summer includes a range of financial and strategy information that sheds light on the entertainment giant's operations, according to files viewed by The Wall Street Journal. It also includes personally identifiable information of some staff and customers.

The leaked files include granular details about revenue generated by such products as Disney+ and ESPN+; park pricing offers the company has modeled; and what appear to be login credentials for some of Disney's cloud infrastructure. (The Journal didn't attempt to access any Disney systems.) "We decline to comment on unverified information The Wall Street Journal has purportedly obtained as a result of a bad actor's illegal activity," a Disney spokesman said. Disney told investors in an August regulatory filing that it is investigating the unauthorized release of "over a terabyte of data" from one of its communications systems. It said the incident hadn't had a material impact on its operations or financial performance and doesn't expect that it will.

Data that a hacking entity calling itself Nullbulge released online spans more than 44 million messages from Disney's Slack workplace communications tool, upward of 18,800 spreadsheets and at least 13,000 PDFs, the Journal found. The scope of the material taken appears to be limited to public and private channels within Disney's Slack that one employee had access to. No private messages between executives appear to be included. Slack is only one online forum in which Disney employees communicate at work.

Movies

The Search For the Face Behind 'Mavis Beacon Teaches Typing' (wired.com) 56

An anonymous reader quotes a report from Wired: Jazmin Jones knows what she did. "If you're online, there's this idea of trolling," Jones, the director behind Seeking Mavis Beacon, said during a recent panel for her new documentary. "For this project, some things we're taking incredibly seriously ... and other things we're trolling. We're trolling this idea of a detective because we're also, like, ACAB." Her trolling, though, was for a good reason. Jones and fellow filmmaker Olivia Mckayla Ross did it in hopes of finding the woman behind Mavis Beacon Teaches Typing. The popular teaching tool was released in 1987 by The Software Toolworks, a video game and software company based in California that produced educational chess, reading, and math games. Mavis, essentially the "mascot" of the game, is a Black woman dressed in professional clothes, her hair in a slicked-back bun. Though Mavis Beacon was not an actual person, Jones and Ross say that she is one of the first examples of Black representation they witnessed in tech. Seeking Mavis Beacon, which opened in New York City on August 30 and is rolling out to other cities in September, is their attempt to uncover the story behind the face, which appeared on the tool's packaging and later as part of its interface.

The film shows the duo setting up a detective room, conversing over FaceTime, running up to people on the street, and even tracking down a relative connected to the ever-elusive Mavis. But the journey of their search turned up a different question they didn't initially expect: What are the impacts of sexism, racism, privacy, and exploitation in a world where you can present yourself any way you want to? Using shots from computer screens, deep dives through archival footage, and sit-down interviews, the noir-style documentary reveals that Mavis Beacon is actually Renee L'Esperance, a Black model from Haiti who was paid $500 for her likeness with no royalties, despite the program selling millions of copies. [...]

In a world where anyone can create images of folks of any race, gender, or sexual orientation without having to fully compensate the real people who inspired them, Jones and Ross are working to preserve not only the data behind Mavis Beacon but also the humanity behind the software. On the panel, hosted by Black Girls in Media, Ross stated that the film's social media has a form where users of Mavis Beacon can share what the game has meant to them, for archival purposes. "On some level, Olivia and I are trolling ideas of worlds that we never felt safe in or protected by," Jones said during the panel. "And in other ways, we are honoring this legacy of cyber feminism, historians, and care workers that we are very seriously indebted to."
You can watch the trailer for "Seeking Mavis Beacon" on YouTube.
