The Internet

Internet Traffic Dipped as Viewers Took in the Eclipse (nytimes.com) 18

As the moon blocked the view of the sun across parts of Mexico, the United States and Canada on Monday, the celestial event managed another magnificent feat: It got people offline. From a report: According to Cloudflare, a cloud-computing service used by about 20 percent of websites globally, internet traffic dipped along the path of totality as spellbound viewers took a break from their phones and computers to catch a glimpse of the real-life spectacle.

The places with the most dramatic views saw the biggest dips in traffic compared with the previous week. In Vermont, Arkansas, Indiana, Maine, New Hampshire and Ohio -- states that were in the path of totality, meaning the moon completely blocked out the sun -- internet traffic dropped by 40 percent to 60 percent around the time of the eclipse, Cloudflare said. States that had partial views also saw drops in internet activity, but to a much lesser extent. At 3:25 p.m. Eastern time, internet traffic in New York dropped by 29 percent compared with the previous week, Cloudflare found.

The path of totality made up a roughly 110-mile-wide belt that stretched from Mazatlan, Mexico, to Montreal. In the Mexican state of Durango, which was in the eclipse zone, internet traffic measured by Cloudflare dipped 57 percent compared with the previous week, while farther south, in Mexico City, traffic was down 22 percent. The duration of the eclipse's totality varied by location, with some places experiencing it for more than four minutes while for others, it was just one to two minutes.

AI

Google's Gemini 1.5 Pro Enters Public Preview on Vertex AI (techcrunch.com) 1

Gemini 1.5 Pro, Google's most capable generative AI model, is now available in public preview on Vertex AI, Google's enterprise-focused AI development platform. From a report: The company announced the news during its annual Cloud Next conference, which is taking place in Las Vegas this week. Gemini 1.5 Pro launched in February, joining Google's Gemini family of generative AI models. Undoubtedly its headlining feature is the amount of context that it can process: from 128,000 tokens up to 1 million tokens, where "tokens" refers to subdivided bits of raw data (like the syllables "fan," "tas" and "tic" in the word "fantastic").

One million tokens is equivalent to around 700,000 words or around 30,000 lines of code. It's about four times the amount of data that Anthropic's flagship model, Claude 3, can take as input and about eight times the maximum context of OpenAI's GPT-4 Turbo. A model's context, or context window, refers to the initial set of data (e.g. text) the model considers before generating output (e.g. additional text). A simple question -- "Who won the 2020 U.S. presidential election?" -- can serve as context, as can a movie script, email, essay or e-book.
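
To make the scale concrete, here is a small back-of-the-envelope sketch in Python. The roughly 1.4 tokens-per-word ratio is derived purely from the article's "1 million tokens is equivalent to around 700,000 words" figure; real tokenizers split text differently, so treat these numbers as rough estimates rather than Gemini's actual accounting.

```python
# Rough context-window arithmetic based on the figures quoted above.
TOKENS_PER_WORD = 1_000_000 / 700_000   # ~1.43, per the article's 1M tokens ~ 700k words

def estimated_tokens(text: str) -> int:
    """Crude token estimate from a whitespace word count."""
    return round(len(text.split()) * TOKENS_PER_WORD)

def fits_in_window(text: str, window_tokens: int = 1_000_000) -> bool:
    """Check whether the estimate fits a given context window."""
    return estimated_tokens(text) <= window_tokens

if __name__ == "__main__":
    sample = "fantastic " * 500_000            # ~500,000 words of filler text
    print(estimated_tokens(sample))            # ~714,000 tokens
    print(fits_in_window(sample))              # True for a 1M-token window
    print(fits_in_window(sample, 128_000))     # False for a 128k-token window
```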

Google

Google Announces Axion, Its First Custom Arm-based Data Center Processor (techcrunch.com) 22

Google Cloud on Tuesday joined AWS and Azure in announcing its first custom-built Arm processor, dubbed Axion. From a report: Based on Arm's Neoverse V2 designs, Google says its Axion instances offer 30% better performance than other Arm-based instances from competitors like AWS and Microsoft and up to 50% better performance and 60% better energy efficiency than comparable x86-based instances. [...] "Technical documentation, including benchmarking and architecture details, will be available later this year," Google spokesperson Amanda Lam said. Maybe the chips aren't even ready yet? After all, it took Google a while to announce Arm chips in the cloud, especially considering that Google has long built its in-house TPU AI chips and, more recently, custom Arm-based mobile chips for its Pixel phones. AWS launched its Graviton chips back in 2018.
United Kingdom

UK Govt Office Admits Ability To Negotiate Billions in Cloud Spending Curbed By Vendor Lock-in (theregister.com) 32

The UK government has admitted its negotiating power over billions of pounds of cloud infrastructure spending has been inhibited by vendor lock-in. From a report: A document from the Cabinet Office's Central Digital & Data Office, circulated within Whitehall, seen by The Register, says the "UK government's current approach to cloud adoption and management across its departments faces several challenges" which combined result "in risk concentration and vendor lock-in that inhibit UK government's negotiating power over the cloud vendors."

The paper also says that if the UK government -- which has spent tens of billions on cloud services in the last decade -- does not change its approach, "the existing dominance of AWS and Azure in the UK Government's cloud services is set to continue." Doing nothing would mean "leaving the government with minimal leverage over pricing and product options.

"This path forecasts a future where, within a decade, the public sector could face the end of its ability to negotiate favourable terms, leading to entrenched vendor lock-in and potential regulatory scrutiny from [UK regulator] the Competition and Markets Authority." The document has been circulated under the heading "UK Public Sector Cloud Marketplace." It is authored by Chris Nesbitt-Smith, a CDDO consultant, and sponsored by CDDO principal technical architect Edward McCutcheon and David Knott, CDDO chief technical officer.

Businesses

Stability AI Reportedly Ran Out of Cash To Pay Its Bills For Rented Cloud GPUs (theregister.com) 45

An anonymous reader writes: The massive GPU clusters needed to train Stability AI's popular text-to-image generation model Stable Diffusion are apparently also at least partially responsible for former CEO Emad Mostaque's downfall -- because he couldn't find a way to pay for them. An extensive exposé citing company documents and dozens of people familiar with the matter indicates that the British model builder's extreme infrastructure costs drained its coffers, leaving the biz with just $4 million in reserve by last October. Stability rented its infrastructure from Amazon Web Services, Google Cloud Platform, and GPU-centric cloud operator CoreWeave, at a reported cost of around $99 million a year. That's on top of the $54 million in wages and operating expenses required to keep the AI upstart afloat.

What's more, it appears that a sizable portion of the cloudy resources Stability AI paid for were being given away to anyone outside the startup interested in experimenting with Stability's models. One external researcher cited in the report estimated that a now-cancelled project was provided with at least $2.5 million worth of compute over the span of four months. Stability AI's infrastructure spending was not matched by revenue or fresh funding. The startup was projected to make just $11 million in sales for the 2023 calendar year. Its financials were apparently so bad that it allegedly underpaid its July 2023 bills to AWS by $1 million and had no intention of paying its August bill for $7 million. Google Cloud and CoreWeave were also not paid in full, with debts to the pair reaching $1.6 million as of October, it's reported.

It's not clear whether those bills were ultimately paid, but it's reported that the company -- once valued at a billion dollars -- weighed delaying tax payments to the UK government rather than skimping on its American payroll and risking legal penalties. The failing was pinned on Mostaque's inability to devise and execute a viable business plan. The company also failed to land deals with clients including Canva, NightCafe, Tome, and the Singaporean government, which contemplated a custom model, the report asserts. Stability's financial predicament spiraled, eroding trust among investors, making it difficult for the generative AI darling to raise additional capital, it is claimed. According to the report, Mostaque hoped to bring in a $95 million lifeline at the end of last year, but only managed to bring in $50 million from Intel. Only $20 million of that sum was disbursed, a significant shortfall given that the processor titan has a vested interest in Stability, with the AI biz slated to be a key customer for a supercomputer powered by 4,000 of its Gaudi2 accelerators.
The report goes on to mention further fundraising challenges, issues retaining employees, and copyright infringement lawsuits challenging the company's future prospects. The full exposé can be read via Forbes (paywalled).
Google

Users Say Google's VPN App Breaks the Windows DNS Settings (arstechnica.com) 37

An anonymous reader shares a report: Google offers a VPN via its "Google One" monthly subscription plan, and while it debuted on phones, a desktop app has been available for Windows and Mac OS for over a year now. Since a lot of people pay for Google One for the cloud storage increase for their Google accounts, you might be tempted to try the VPN on a desktop, but Windows users testing out the app haven't seemed too happy lately. An open bug report on Google's GitHub for the project says the Windows app "breaks" the Windows DNS, and this has been ongoing since at least November.

A VPN would naturally route all your traffic through a secure tunnel, but you've still got to do DNS lookups somewhere. A lot of VPN services also come with a DNS service, and Google is no different. The problem is that Google's VPN app changes the Windows DNS settings of all network adapters to always use Google's DNS, whether the VPN is on or off. Even if you change them, Google's program will change them back. Most VPN apps don't work this way, and even Google's Mac VPN program doesn't work this way. The users in the thread (and the ones emailing us) expect the app, at minimum, to use the original Windows settings when the VPN is off. Since running a VPN is often about privacy and security, users want to be able to change the DNS away from Google even when the VPN is running.
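
For readers who want to check their own machines, below is a rough diagnostic sketch (Python, Windows only) that shells out to the standard netsh tool and flags adapters whose DNS servers point at Google Public DNS (8.8.8.8 / 8.8.4.4). The parsing is deliberately loose because netsh output varies by Windows version and locale, and this only detects the symptom described above; it does not undo any changes the app makes.

```python
# Flag Windows network adapters whose DNS servers are set to Google Public DNS.
import re
import subprocess

GOOGLE_DNS = {"8.8.8.8", "8.8.4.4"}

def adapters_using_google_dns() -> dict:
    # Ask Windows for the per-interface DNS configuration.
    output = subprocess.run(
        ["netsh", "interface", "ipv4", "show", "dnsservers"],
        capture_output=True, text=True, check=True,
    ).stdout
    flagged, current = {}, None
    for line in output.splitlines():
        name = re.search(r'interface "(.+)"', line)
        if name:
            current = name.group(1)   # start of a new adapter's block
            continue
        for ip in re.findall(r"\d{1,3}(?:\.\d{1,3}){3}", line):
            if current and ip in GOOGLE_DNS:
                flagged.setdefault(current, set()).add(ip)
    return flagged

if __name__ == "__main__":
    hits = adapters_using_google_dns()
    for adapter, servers in hits.items():
        print(f"{adapter}: DNS set to {', '.join(sorted(servers))}")
    if not hits:
        print("No adapters are pointed at Google Public DNS.")
```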

United States

Scathing Federal Report Rips Microsoft For Shoddy Security (apnews.com) 81

quonset shares a report: In a scathing indictment of Microsoft corporate security and transparency, a Biden administration-appointed review board issued a report Tuesday saying "a cascade of errors" by the tech giant let state-backed Chinese cyber operators break into email accounts of senior U.S. officials including Commerce Secretary Gina Raimondo.

The Cyber Safety Review Board, created in 2021 by executive order, describes shoddy cybersecurity practices, a lax corporate culture and a lack of sincerity about the company's knowledge of the targeted breach, which affected multiple U.S. agencies that deal with China. It concluded that "Microsoft's security culture was inadequate and requires an overhaul" given the company's ubiquity and critical role in the global technology ecosystem. Microsoft products "underpin essential services that support national security, the foundations of our economy, and public health and safety."

The panel said the intrusion, discovered in June by the State Department and dating to May, "was preventable and should never have occurred," blaming its success on "a cascade of avoidable errors." What's more, the board said, Microsoft still doesn't know how the hackers got in. [...] It said Microsoft's CEO and board should institute "rapid cultural change" including publicly sharing "a plan with specific timelines to make fundamental, security-focused reforms across the company and its full suite of products."

Businesses

VMware By Broadcom Plots Pair of Cloud Foundation Releases (theregister.com) 23

An anonymous reader quotes a report from The Register: VMware by Broadcom will deliver a significant update to its flagship Cloud Foundation bundle in the middle of this year and follow it up with a major update early in 2025. Both releases will show off Broadcom's plan to make the package easier to implement and operate, and hopefully assuage customer concerns about price rises. More on that later. First, the updates. One release is currently scheduled to debut in July, according to Paul Turner, vice-president of product management and the leader of the VMware Cloud Foundation (VCF) team. The release will allow use of a single license key for all the components of Cloud Foundation, improve OAuth support as a step towards single sign-on across the VMware range, and add an NSX overlay that will allow implementation of software-defined networks without requiring IP address changes.

Turner explained those features as exemplifying the sort of simplification VMware by Broadcom thinks is needed to make Cloud Foundation easier to implement. A bigger release Turner hopes will debut in early 2025 -- though he would commit only to an H1 launch -- will be a "unified" release in which more of VCF is better integrated. Today, Turner admitted, VMware customers may have implemented vSphere and the Aria management suite, but might still need or choose discrete storage for each. Future VCF releases will increasingly unify the products so that silos aren't needed. Prashanth Shenoy, vice president for VMware by Broadcom's cloud platform, infrastructure, and solutions marketing, told The Register the release will be called VCF 9 and will represent "the fullest expression of Broadcom's vision for product integration." "When customers deploy VCF there are seams -- when they deploy networking and storage, they feel like they do not have a unified developer or operator experience," Shenoy admitted. VCF 9 will tidy that sort of thing up and make the process "seamless." Buyers can also expect improved log file analysis, the ability to acquire templates from a marketplace and adopt them as PaaS, and plenty more.

Turner and Shenoy told The Register they hope the two releases will make VCF adoption easier, and by doing so demonstrate the value of the bundle. Today, they argue, would-be hybrid cloud adopters using VCF are in reality integrating siloed products -- which doesn't prove the value of the vStack well. VCF 9's planned integrations, they argue, should demonstrate the power of the stack and the wisdom of Broadcom's decision to create a VMware unit dedicated to VCF. That team, they explained, means developers for each of the bundle's components work together on a unified experience, rather than each creating their own product. It may also demonstrate the value of VMware by Broadcom's new licenses – which some users have complained are considerably more expensive now that subscriptions are required, and products are only sold in bundles.
Sylvain Cazard, president of Broadcom Software for Asia-Pacific, told The Register that complaints about higher prices are unwarranted since customers using at least two components of VMware's flagship Cloud Foundation will end up paying less. He also noted that the new pricing includes support, which VMware didn't include previously.
AI

Amazon Offers Free Credits For Startups To Use AI Models Including Anthropic (reuters.com) 9

AWS has expanded its free credits program for startups to cover the costs of using major AI models. From a report: In a move to attract startup customers, Amazon now allows its cloud credits to cover the use of models from other providers including Anthropic, Meta, Mistral AI, and Cohere. "This is another gift that we're making back to the startup ecosystem, in exchange for what we hope is startups continue to choose AWS as their first stop," said Howard Wright, vice president and global head of startups at AWS.

[...] As part of the deal, Anthropic will use AWS as its primary cloud provider, and Trainium and Inferentia chips to build and train its models. Wright said Amazon's free credits will contribute to the revenue of Anthropic, whose models are among the most popular on Bedrock.

AI

Databricks Claims Its Open Source Foundational LLM Outsmarts GPT-3.5 (theregister.com) 17

Lindsay Clark reports via The Register: Analytics platform Databricks has launched an open source foundational large language model, hoping enterprises will opt to use its tools to jump on the LLM bandwagon. The biz, founded around Apache Spark, published a slew of benchmarks claiming its general-purpose LLM -- dubbed DBRX -- beat open source rivals on language understanding, programming, and math. The developer also claimed it beat OpenAI's proprietary GPT-3.5 across the same measures.

DBRX was developed by Mosaic AI, which Databricks acquired for $1.3 billion, and trained on Nvidia DGX Cloud. Databricks claims it optimized DBRX for efficiency with what it calls a mixture-of-experts (MoE) architecture -- where multiple expert networks or learners divide up a problem. Databricks explained that the model possesses 132 billion parameters, but only 36 billion are active on any one input. Joel Minnick, Databricks marketing vice president, told The Register: "That is a big reason why the model is able to run as efficiently as it does, but also runs blazingly fast. In practical terms, if you use any kind of major chatbots that are out there today, you're probably used to waiting and watching the answer get generated. With DBRX it is near instantaneous."
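
As a rough illustration of the mixture-of-experts idea described above, here is a minimal sketch of a routed MoE layer in Python/PyTorch: a router scores the experts for each token and only the top few actually run, which is why far fewer parameters are active per input than exist in total. The layer sizes, expert count, and top-k value are illustrative placeholders, not DBRX's real configuration or code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy mixture-of-experts layer: route each token to its top-k experts."""
    def __init__(self, d_model=512, d_hidden=2048, n_experts=16, top_k=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                     # x: (tokens, d_model)
        scores = self.router(x)               # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # normalize over the chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle,
        # so only a fraction of the layer's parameters are "active" per input.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = MoELayer()
    tokens = torch.randn(8, 512)
    print(layer(tokens).shape)   # torch.Size([8, 512])
```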

But the performance of the model itself is not the point for Databricks. The biz is, after all, making DBRX available for free on GitHub and Hugging Face. Databricks is hoping customers use the model as the basis for their own LLMs. If that happens it might improve customer chatbots or internal question answering, while also showing how DBRX was built using Databricks's proprietary tools. Databricks put together the dataset from which DBRX was developed using Apache Spark and Databricks notebooks for data processing, Unity Catalog for data management and governance, and MLflow for experiment tracking.

Games

Russia Is Making Its Own Gaming Consoles (gamerant.com) 161

Vladimir Putin has ordered Russia's government to explore the development of a series of homegrown consoles to compete with PlayStation and Xbox. Game Rant reports: Russia has taken issue with Western games and developers in recent years, leading the country to threaten the banning of certain titles like Apex Legends and The Last of Us Part 2. This is due to what the Russian government perceives as pro-LGBTQ messaging, which it openly opposes. In February, Russia's Organization for Developing the Video Game Industry (RVI) laid out a long-term plan that ended with the creation of a fully capable gaming console in 2026-2027. It seems that the Russian government may be attempting to follow through with this plan.

Following a meeting on the economic development of Kaliningrad, Putin asked government officials to research the requirements for domestic production of stationary and portable gaming consoles. The Russian president also ordered the planning of an appropriate operating system and cloud system for the consoles. The deadline for these plans is set for June 15, 2024, and Russia's prime minister was designated as the official overseeing these tasks. A Kremlin spokesperson confirmed that the orders are intended to develop Russia's homegrown gaming industry.

Government

Congress Bans Staff Use of Microsoft's AI Copilot (axios.com) 32

The U.S. House has set a strict ban on congressional staffers' use of Microsoft Copilot, the company's AI-based chatbot, Axios reported Friday. From the report: The House last June restricted staffers' use of ChatGPT, allowing limited use of the paid subscription version while banning the free version. The House's Chief Administrative Officer Catherine Szpindor, in guidance to congressional offices obtained by Axios, said Microsoft Copilot is "unauthorized for House use."

"The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," it said. The guidance added that Copilot "will be removed from and blocked on all House Windows devices."

Cloud

Cloud Server Host Vultr Rips User Data Ownership Clause From ToS After Web Outcry (theregister.com) 28

Tobias Mann reports via The Register: Cloud server provider Vultr has rapidly revised its terms-of-service after netizens raised the alarm over broad clauses that demanded the "perpetual, irrevocable, royalty-free" rights to customer "content." The red tape was updated in January, as captured by the Internet Archive, and this month users were asked to agree to the changes by a pop-up that appeared when using their web-based Vultr control panel. That prompted folks to look through the terms, and there they found clauses granting the US outfit a "worldwide license ... to use, reproduce, process, adapt ... modify, prepare derivative works, publish, transmit, and distribute" user content.

It turned out these demands have been in place since before the January update; customers have only just noticed them now. Given Vultr hosts servers and storage in the cloud for its subscribers, some feared the biz was giving itself way too much ownership over their stuff, all in this age of AI training data being put up for sale by platforms. In response to online outcry, largely stemming from Reddit, Vultr in the past few hours rewrote its ToS to delete those asserted content rights. CEO J.J. Kardwell told The Register earlier today it's a case of standard legal boilerplate being taken out of context. The clauses were supposed to apply to customer forum posts, rather than private server content, and while, yes, the terms make more sense with that in mind, one might argue the legalese was overly broad in any case.

"We do not use user data," Kardwell stressed to us. "We never have, and we never will. We take privacy and security very seriously. It's at the core of what we do globally." [...] According to Kardwell, the content clauses are entirely separate to user data deployed in its cloud, and are more aimed at one's use of the Vultr website, emphasizing the last line of the relevant fine print: "... for purposes of providing the services to you." He also pointed out that the wording has been that way for some time, and added the prompt asking users to agree to an updated ToS was actually spurred by unrelated Microsoft licensing changes. In light of the controversy, Vultr vowed to remove the above section to "simplify and further clarify" its ToS, and has indeed done so. In a separate statement, the biz told The Register the removal will be followed by a full review and update to its terms of service.
"It's clearly causing confusion for some portion of users. We recognize that the average user doesn't have a law degree," Kardwell added. "We're very focused on being responsive to the community and the concerns people have and we believe the strongest thing we can do to demonstrate that there is no bad intent here is to remove it."
Open Source

Linux Foundation Launches Valkey As A Redis Fork (phoronix.com) 12

Michael Larabel reports via Phoronix: Given the recent change by Redis to adopt dual source-available licensing for all their releases moving forward (Redis Source Available License v2 and Server Side Public License v1), the Linux Foundation today announced Valkey, a fork of Redis intended as an open-source alternative to the in-memory store. Due to the Redis licensing changes, Valkey forks from Redis 7.2.4 and will maintain a BSD 3-clause license. Google, AWS, Oracle, and others are helping form this new Valkey project.

The Linux Foundation press release shares: "To continue improving on this important technology and allow for unfettered distribution of the project, the community created Valkey, an open source high performance key-value store. Valkey supports the Linux, macOS, OpenBSD, NetBSD, and FreeBSD platforms. In addition, the community will continue working on its existing roadmap including new features such as a more reliable slot migration, dramatic scalability and stability improvements to the clustering system, multi-threaded performance improvements, triggers, new commands, vector search support, and more. Industry participants, including Amazon Web Services (AWS), Google Cloud, Oracle, Ericsson, and Snap Inc. are supporting Valkey. They are focused on making contributions that support the long-term health and viability of the project so that everyone can benefit from it."
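
Since Valkey forks from Redis 7.2.4, existing Redis client libraries can reasonably be expected to work against a Valkey server, though that compatibility is an assumption worth verifying for any given client. A minimal sketch using the redis-py client is below; the host, port, and key names are placeholders.

```python
import redis

# Point an ordinary Redis client at a Valkey server (assumed to be on localhost:6379).
client = redis.Redis(host="localhost", port=6379, decode_responses=True)

client.set("session:42", "alice", ex=300)   # store a value with a 5-minute TTL
print(client.get("session:42"))             # -> "alice"
print(client.incr("page:hits"))             # atomic counter, a typical key-value use
```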

Software

'Software Vendors Dump Open Source, Go For the Cash Grab' (computerworld.com) 120

Steven J. Vaughan-Nichols, writing for ComputerWorld: Essentially, all software is built using open source. By Synopsys' count, 96% of all codebases contain open-source software. Lately, though, there's been a very disturbing trend. A company will make its program using open source, make millions from it, and then -- and only then -- switch licenses, leaving its contributors, customers, and partners in the lurch as it tries to grab billions. I'm sick of it. The latest IT melodrama baddie is Redis. Its program, which goes by the same name, is an extremely popular in-memory database. (Unless you're a developer, chances are you've never heard of it.) One recent valuation shows Redis to be worth about $2 billion -- even without an AI play! That, anyone can understand.

What did it do? To quote Redis: "Beginning today, all future versions of Redis will be released with source-available licenses. Starting with Redis 7.4, Redis will be dual-licensed under the Redis Source Available License (RSALv2) and Server Side Public License (SSPLv1). Consequently, Redis will no longer be distributed under the three-clause Berkeley Software Distribution (BSD)." For those of you who aren't open-source licensing experts, this means developers can no longer use Redis' code. Sure, they can look at it, but they can't export, borrow from, or touch it.

Redis pulled this same kind of trick in 2018 with some of its subsidiary code. Now it's done so with the company's crown jewels. Redis is far from the only company to make such a move. Last year, HashiCorp dumped its main program Terraform's Mozilla Public License (MPL) for the Business Source License (BSL) 1.1. Here, the name of the new license game is to prevent anyone from competing with Terraform. Would it surprise you to learn that not long after this, HashiCorp started shopping itself around for a buyer? Before this latest round of license changes, MongoDB and Elastic made similar shifts. Again, you might never have heard of these companies or their programs, but each is worth, at a minimum, hundreds of millions of dollars. And, while you might not know it, if your company uses cloud services behind the scenes, chances are you're using one or more of their programs.

Cloud

Amazon Bets $150 Billion on Data Centers Required for AI Boom (yahoo.com) 26

Amazon plans to spend almost $150 billion in the coming 15 years on data centers, giving the cloud-computing giant the firepower to handle an expected explosion in demand for artificial intelligence applications and other digital services. From a report: The spending spree is a show of force as the company looks to maintain its grip on the cloud services market, where it holds about twice the share of No. 2 player Microsoft. Sales growth at Amazon Web Services slowed to a record low last year as business customers cut costs and delayed modernization projects. Now spending is starting to pick up again, and Amazon is keen to secure land and electricity for its power-hungry facilities.

"We're expanding capacity quite significantly," said Kevin Miller, an AWS vice president who oversees the company's data centers. "I think that just gives us the ability to get closer to customers." Over the past two years, according to a Bloomberg tally, Amazon has committed to spending $148 billion to build and operate data centers around the world. The company plans to expand existing server farm hubs in northern Virginia and Oregon as well as push into new precincts, including Mississippi, Saudi Arabia and Malaysia.

XBox (Games)

Xbox Cloud Gaming Now Has Mouse and Keyboard Support In Select Games 30

Tom Warren reports via The Verge: Microsoft is starting to preview mouse and keyboard support for Xbox Cloud Gaming today. Xbox Insiders will be able to start playing with their mouse and keyboard in Edge, Chrome, or the Xbox app on Windows PCs, nearly two years after Microsoft announced it was preparing to add mouse and keyboard support to its Xbox Cloud Gaming (xCloud) service. Not every game will be supported during the preview, but there's a large selection, including Fortnite, Sea of Thieves, and Halo Infinite. Microsoft warns that some games will display controller UI elements briefly before adapting to mouse and keyboard input after you start interacting with the game.

If you're interested in trying games with mouse and keyboard in the browser version of Xbox Cloud Gaming, then you'll need to be in full-screen mode, according to Microsoft. This is so the game can correctly capture your pointer as input. If you want to exit out of mouse and keyboard mode and use an Xbox controller instead, there's an ALT+F9 shortcut to do so.
The full list of supported games includes: Fortnite (browser only), ARK: Survival Evolved, Sea of Thieves, Grounded, Halo Infinite, Atomic Heart, Sniper Elite 5, Deep Rock Galactic, High on Life, Zombie Army 4 Dead War, Gears Tactics, Pentiment, Doom 64, and Age of Empires 2.
Windows

Microsoft Has a New Windows and Surface Chief (theverge.com) 16

Tom Warren reports via The Verge: Microsoft is naming Pavan Davuluri as its new Windows and Surface chief today. After Panos Panay's surprise departure to Amazon last year, Microsoft split up the Windows and Surface groups under two different leaders. Davuluri took over the Surface silicon and devices work, with Mikhail Parakhin leading a new team focused on Windows and web experiences. Now both Windows and Surface will be Davuluri's responsibility, as Parakhin has "decided to explore new roles."

The Verge has obtained an internal memo from Rajesh Jha, Microsoft's head of experiences and devices, outlining the new Windows organization. Microsoft is now bringing together its Windows and devices teams once more. "This will enable us to take a holistic approach to building silicon, systems, experiences, and devices that span Windows client and cloud for this AI era," explains Jha. Pavan Davuluri is now the leader of Microsoft's Windows and Surface team, reporting directly to Rajesh Jha. Davuluri has worked at Microsoft for more than 23 years and was deeply involved in the company's work with Qualcomm and AMD to create custom Surface processors.

Mikhail Parakhin will now report to Kevin Scott during a transition phase, but his future at Microsoft looks uncertain, and it's likely those "new roles" will be outside the company. Parakhin had been working closely on Bing Chat before taking on the broader Windows engineering responsibilities and changes to Microsoft Edge. The Windows shake-up comes just days after Google DeepMind co-founder and former Inflection AI CEO Mustafa Suleyman joined Microsoft as the CEO of a new AI team. Microsoft also hired a bunch of Inflection AI employees, including co-founder Karen Simonyan, who is now the chief scientist of Microsoft AI.

AI

'Humane' Demos New Features on Its Ai Pin - Which Starts Arriving April 11 (mashable.com) 27

Indian Express calls it "the ultimate smartphone killer". (Coming soon, its laser-on-your-palm feature will display stock prices, sports scores, and flight statuses.)

Humane's Ai Pin can even translate what you say, repeating it out loud in another language (with 50 different languages supported). And it can read you summaries of what's on your favorite web sites, so "You can just surf the web with your voice," according to a new video released this week.

The video also shows it answering specific questions like "What's that song by 21 Savage with the violin intro?" (And later, while the song is playing, answering more questions like "This was sampled from another song. What song was that?") But then co-founder Imran Chaudhri — an iPhone designer and one of several former Apple employees at Humane — demonstrated a "Vision" feature that's coming soon. Holding a Sony Walkman, he asks the Pin to "Look at this and tell me when it first came out" — and the Pin obliges. ("The Sony Walkman WM-F73 was released in 1986...") In another demo it correctly supplied the designer of an Air Jordan basketball shoe.

They're also working on integrating this into a Nutrition Tracking application. (A demonstrator held a doughnut and asked the Pin to identify how much sugar was in it.) If you tell the Pin that you've eaten the doughnut, it can then calculate your intake of carbs, protein, and fats.

And in the video the Pin responded within seconds to the command "Make a spreadsheet about top consumer tech reviewers on YouTube [with] real names, subscriber counts, and URLs." It performed the research and created the spreadsheet, which appears on the demonstrator's laptop, apparently logged in to Humane's cloud-based user platform.

In the video Humane's co-founder stresses that its Ai Pin does all this without downloading applications, "which allows me to stay present in the moment and flow." But while it can also make phone calls and send text messages, Imran Chaudhri adds that "Ai Pin is a completely new form factor for compute. It's never been about replacing. It's always been about creating new ways to interact with what you need. So instead of having to sit down to use a computer, or reaching into your pocket and pulling out your phone and navigating apps, Ai Pin allows you to simply act on something the moment you think about it — letting AI do all the work for you."

Or, as they say later, "This is about technology adapting and reacting to you. Not you having to adapt to it."

There's also talk about their "AI OS" — named Cosmos — with the Pin described as "our first entry point" into that operating system, with other devices planned to support it in the future. (Mashable's reporter notes that Humane's Ai Pin is backed by OpenAI CEO Sam Altman, and writes "I was impressed with how well it worked.") The video even ends with an update for SDK developers. In the second half of 2024, "you're going to be able to connect your services to the Ai Pin using REST APIs and OAuth." Phase two will let developers run their code directly on Humane's cloud platform — while Phase three will see developers' code running on Ai Pin devices themselves, "to get access to the mic, the camera, the sensors, and the laser. We are so excited to see what you're gonna build."

Humane says its Ai Pin will start shipping at the end of March, with priority orders starting to arrive on April 11th.
Earth

A Problem for Sun-Blocking Cloud Geoengineering? Clouds Dissipate (eos.org) 57

Slashdot reader christoban writes: In what may be an issue for Sun-obscuring strategies to combat global warming, it turns out that during solar eclipses, low-level cumulus clouds rapidly disappear, with cloud cover dropping by a factor of 4, researchers have found. The news comes from the science magazine Eos (published by the nonprofit organization of atmosphere/ocean/space scientists, the American Geophysical Union). Victor J. H. Trees, a geoscientist at Delft University of Technology in the Netherlands, and his colleagues recently analyzed cloud cover data obtained during an annular eclipse in 2005, visible in parts of Europe and Africa. They mined visible and infrared imagery collected by two geostationary satellites operated by the European Organisation for the Exploitation of Meteorological Satellites. Going to space was key, Trees said. "If you really want to quantify how clouds behave and how they react to a solar eclipse, it helps to study a large area. That's why we want to look from space...." [T]hey tracked cloud evolution for several hours leading up to the eclipse, during the eclipse, and for several hours afterward.

Low-level cumulus clouds — which tend to top out at altitudes around 2 kilometers (1.2 miles) — were strongly affected by the degree of solar obscuration. Cloud cover started to decrease when about 15% of the Sun's face was covered, about 30 minutes after the start of the eclipse. The clouds started to return only about 50 minutes after maximum obscuration. And whereas typical cloud cover hovered around 40% in noneclipse conditions, less than 10% of the sky was covered with clouds during maximum obscuration, the team noted. "On a large scale, the cumulus clouds started to disappear," Trees said... The temperature of the ground matters when it comes to cumulus clouds, Trees said, because they are low enough to be significantly affected by whatever is happening on Earth's surface...

Beyond shedding light on the physics of cloud dissipation during solar eclipses, these new findings also have implications for future geoengineering efforts, Trees and his collaborators suggested. Discussions are underway to mitigate the effects of climate change by, for instance, seeding the atmosphere with aerosols or launching solar reflectors into space to prevent some of the Sun's light from reaching Earth. Such geoengineering holds promise for cooling our planet, researchers agree, but its repercussions are largely unexplored and could be widespread and irreversible.

These new results suggest that cloud cover could decrease with geoengineering efforts involving solar obscuration. And because clouds reflect sunlight, the efficacy of any effort might correspondingly decrease, Trees said. That's an effect that needs to be taken into account when considering different options, the researchers concluded.

Another article on the site warns that "Planting Trees May Not Be as Good for the Climate as Previously Believed."

"The climate benefits of trees storing carbon dioxide is partially offset by dark forests' absorption of more heat from the Sun, and compounds they release that slow the destruction of methane in the atmosphere."
