Nerval's Lobster writes: Whatever your plans for Thanksgiving, Google can offer some advice: try to avoid driving anywhere the day before. Analysts from the search-engine giant's Google Maps division crunched traffic data from 21 U.S. cities over the past two years and found that the Wednesday before Thanksgiving is by far the worst traffic day that week, with some notable exceptions. (In Honolulu, Providence, and San Francisco, the worst traffic is always on Saturday; in Boston, it's Tuesday.) Unfortunately, Wednesday is often the only available travel day for many Americans, but Google thinks they can beat the worst of the traffic if they leave before 2 P.M. or after 7 P.M. on that day. Traffic on Thanksgiving itself is also light, according to the data. When it comes to driving back home, Sunday beats Saturday from a traffic perspective. According to Google Maps' aggregated trends, Americans also seek out "ham shop," "pie shop," and "liquor store" on the day before Thanksgiving, as they rush to secure last-minute items.
mpicpp writes with news that Yahoo will soon become the default search engine in Firefox. Google's 10-year run as Firefox's default search engine is over. Yahoo wants more search traffic, and a deal with Mozilla will bring it. In a major departure for both Mozilla and Yahoo, Firefox's default search engine is switching from Google to Yahoo in the United States. "I'm thrilled to announce that we've entered into a five-year partnership with Mozilla to make Yahoo the default search experience on Firefox across mobile and desktop," Yahoo Chief Executive Marissa Mayer said in a blog post Wednesday. "This is the most significant partnership for Yahoo in five years." The change will come to Firefox users in the US in December, and later Yahoo will bring that new "clean, modern and immersive search experience" to all Yahoo search users. In another part of the deal, Yahoo will support the Do Not Track technology for Firefox users, meaning that it will respect users' preferences not to be tracked for advertising purposes. With millions of users who perform about 100 billion searches a year, Firefox is a major source of the search traffic that's Google's bread and butter. Some of those searches produce search ads, and Mozilla has been funded primarily by a portion of that revenue that Google shares. In 2012, the most recent year for which figures are available, that search money accounted for the lion's share of Mozilla's $311 million in revenue.
wabrandsma writes with this news from Ars Technica: The regulation of Google's search results has come up from time to time over the past decade, and although the idea has gained some traction in Europe (most recently with "right to be forgotten" laws), courts and regulatory bodies in the U.S. have generally agreed that Google's search results are considered free speech. That consensus was upheld last Thursday, when a San Francisco Superior Court judge ruled in favor of Google's right to order its search results as it sees fit.
Krystalo writes: In addition to the debut of the Firefox Developer Edition, Mozilla today announced new features for its main Firefox browser. The company is launching a new Forget button in Firefox to help keep your browsing history private, adding DuckDuckGo as a search option, and rolling out its directory tiles advertising experiment.
An anonymous reader writes: Amazon today quietly unveiled a new product dubbed Amazon Echo. The $200 device appears to be a voice-activated wireless speaker that can answer your questions, offer updates on what's going on in the world, and of course play music. Echo is currently available for purchase via an invite-only system. If you have Amazon Prime, however, you can get it for $100. I've put in a request for one; hopefully we'll get a hands-on look at the Echo soon. It looks useful and interesting for random searches, and for controlling devices, but one small speaker (interesting driver arrangement notwithstanding) doesn't bode well for "fill[ing] any room with immersive sound," as Amazon's promo materials claim.
Goatbert writes with word that pianist Dejan Lazic, unhappy with the opinion of Post music critic Anne Midgette, "has asked the Washington Post to remove an old review from their site in perhaps the best example yet of why it is both a terrible ruling and concept." It’s the first request The Post has received under the E.U. ruling. It’s also a truly fascinating, troubling demonstration of how the ruling could work. “To wish for such an article to be removed from the internet has absolutely nothing to do with censorship or with closing down our access to information,” Lazic explained in a follow-up e-mail to The Post. Instead, he argued, it has to do with control of one’s personal image — control of, as he puts it, “the truth.” (Here is the 2010 review to which Lazic objects.)
wabrandsma writes with this excerpt from Torrentfreak: Disney has just obtained a patent for a search engine that ranks sites based on various "authenticity" factors. One of the goals of the technology is to filter pirated material from search results while boosting the profile of copyright and trademark holders' websites. A new patent awarded to Disney Enterprises this week describes a search engine through which pirated content is hard to find. Titled "Online content ranking system based on authenticity metric values for web elements," one of the patent's main goals is to prevent pirated movies and other illicit content from ranking well in the search results. According to Disney, their patent makes it possible to "enable the filtering of undesirable search results, such as results referencing piracy websites." Disney believes that current search engines are using the wrong approach as they rely on a website's "popularity." This allows site owners to game the system in order to rank higher. "For example, a manipulated page for unauthorized sales of drugs, movies, etc. might be able to obtain a high popularity rating, but what the typical user will want to see is a more authentic page," they explain. Probably not a good place to look for a grey-market copy of Song of the South.
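The patent's central idea, blending a popularity signal with a separate "authenticity" metric, can be sketched as a toy scoring function in Python. The weighting scheme, parameter names, and numbers below are invented for illustration; the patent does not specify them.

```python
def rank_score(popularity: float, authenticity: float, alpha: float = 0.7) -> float:
    """Toy blend of a popularity signal with an authenticity metric.
    Both inputs are assumed to be normalized to [0, 1]; alpha sets how
    heavily authenticity outweighs raw popularity (an invented weight)."""
    return (1 - alpha) * popularity + alpha * authenticity

# A popular but inauthentic page (e.g. a pirate site) can end up
# ranked below a less popular page from the rights holder:
pirate = rank_score(popularity=0.9, authenticity=0.1)
official = rank_score(popularity=0.4, authenticity=0.95)
```

With enough weight on authenticity, gaming the popularity signal alone no longer moves a page to the top, which is exactly the manipulation the patent text complains about.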
An anonymous reader writes: Google Flu Trends was developed in 2009 to improve forecasts of flu levels in the U.S. by utilising Google search data. This early example showcased the potential that lies in the digital traces all of us leave behind when using online services. The rise of Google Flu Trends was halted only when the service recently and dramatically overestimated the number of flu cases, a failure that raised questions about the value of online data for predictions in general. However, a study published yesterday demonstrates that it is not only about the data but also about the adaptiveness of the algorithms used for prediction. Scientists combined historic flu levels as reported by the CDC with Google Flu Trends data using an algorithmic framework that is able to adapt to changes in human search behaviour. Their results show that Google Flu Trends data adds significant information to forecasts of current flu levels.
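The adaptive idea can be illustrated with a toy online regression in Python: predict each week's flu level as a weighted sum of the lagged CDC figure and the search signal, then nudge the weights once the week's ground truth arrives, so the model tracks drifting search behaviour. This is only a sketch on synthetic data; the study's actual framework is more sophisticated, and every number below is invented.

```python
def adaptive_forecast(cdc_lagged, gft, actual, lr=0.05):
    """Toy online linear model: each week, predict the flu level as a
    weighted sum of last week's CDC figure and the current search
    signal, then take a gradient step on the squared error so the
    weights can adapt as the relationship between signals shifts."""
    w = [0.5, 0.5]  # initial weights for [cdc_lagged, gft]
    preds = []
    for c, g, y in zip(cdc_lagged, gft, actual):
        p = w[0] * c + w[1] * g
        preds.append(p)
        err = p - y
        w[0] -= lr * err * c   # gradient step on (p - y)**2
        w[1] -= lr * err * g
    return preds, w

# Synthetic weekly data where the "true" relation is 0.9*CDC + 0.3*GFT:
cdc = [1 + (i % 5) * 0.2 for i in range(300)]
gft = [1 + ((i * 2) % 7) * 0.1 for i in range(300)]
flu = [0.9 * c + 0.3 * g for c, g in zip(cdc, gft)]
preds, w = adaptive_forecast(cdc, gft, flu)
```

Because the weights are re-estimated every step rather than fixed at training time, a sudden change in how people search (the failure mode that sank the original Google Flu Trends) degrades the forecast only until the weights catch up.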
mikejuk writes: The announcement on the Google Geo Developers blog has the catchy title No map is an island. It points out that while there are now around 2 million active sites that have Google Maps embedded, they store data independently. The new feature, called attributed save, aims to overcome this problem by creating an integrated experience between the apps you use that have map content and Google Maps, and all it requires is that users sign in. So if you use a map in a specific app, you will be able to see locations you entered in other apps. This all sounds great, and it makes sense to allow users to take all of the locations that have previously been stored in app silos and put them all together into one big map data pool. The only downside is that the pool is owned by Google, and some users might not like the idea of letting Google have access to so much personal geo information. It seems you can have convenience or you can have privacy. It might just be that many users prefer their maps to be islands.
An anonymous reader writes: Google has expanded its search engine with the capability to recognize video games. If your query references a game, a new Knowledge Graph panel on the right-hand side of Google's search results page will offer more information, including the series it belongs to, initial release date, supported platforms, developers, publishers, designers, and even review scores. Google spokesperson: "With today's update, you can ask questions about video games, and (while there will be ones we don't cover) you'll get answers for console and PC games as well as the most popular mobile apps."
oxide7 (1013325) writes "In June 2011, Julian Assange received an unusual visitor: the chairman of Google, Eric Schmidt. They outlined radically opposing perspectives: for Assange, the liberating power of the Internet is based on its freedom and statelessness. For Schmidt, emancipation is at one with U.S. foreign policy objectives and is driven by connecting non-Western countries to Western companies and markets. These differences embodied a tug-of-war over the Internet's future that has only gathered force subsequently. Assange describes his encounter with Schmidt and how he came to conclude that it was far from an innocent exchange of views."
itwbennett writes: German publishers said they are bowing to Google's market power and will allow the search engine to show news snippets in search results free of charge — at least for the time being. The decision is a step in an ongoing legal dispute between the publishers and Google in which, predictably, the publishers are trying to get compensation from the search engine for republishing parts of their content and Google isn't interested in sharing revenue. The move follows a Google decision earlier this month, due to take effect today, to stop using news snippets and thumbnails for some well-known German news sites.
mrspoonsi writes: Google has announced changes to its search engine in an attempt to curb online piracy. The company has long been criticised for enabling people to find sites to download entertainment illegally. The entertainment industry has argued that illegal sites should be "demoted" in search results. The new measures, mostly welcomed by music trade group the BPI, will instead point users towards legal alternatives such as Spotify and Google Play. Google will now list these legal services in a box at the top of the search results, as well as in a box on the right-hand side of the page. Crucially, however, these will be adverts — meaning if legal sites want to appear there, they will need to pay Google for the placement.
Martin Spamer writes with word that the BBC is to publish a continually updated list of its articles removed from Google under the controversial "right to be forgotten" notices. The BBC will begin publishing the list of removed URLs it has been notified about by Google in the "next few weeks." [Editorial policy head David] Jordan said the BBC had so far been notified of 46 links to articles that had been removed. They included a link to a blog post by Economics Editor Robert Peston. The request was believed to have been made by a person who had left a comment underneath the article. An EU spokesman later said the removal was "not a good judgement" by Google.
An anonymous reader writes: Every day my Gmail account receives 30-50 spam emails. Some of it is UCE, partially due to a couple of dingbats with similar names who apparently think my Gmail account belongs to them. The remainder looks to be spambot or Nigerian 419 email. I also run my own MX for my own domain, where I also receive a lot of spam. But with a combination of a couple of DNSBLs in my sendmail config, SpamAssassin, and procmail, almost none of it gets through to my inbox. In both cases there are occasional errors: a legit email ends up in my spam folder, or, in the case of my MX, a spam email reaches my inbox. But these are rare occurrences. I'd think with all the Oompa Loompas at the Chocolate Factory that they could do a better job rejecting the obvious spam emails. If they did, it would make checking for the occasional false positive in my spam folder a teeny bit easier. For anyone who's responsible for shunting Web-scale spam toward the fate it deserves, what factors go into the decision tree that might lead to so much spam getting through?
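The DNSBL check the submitter wires into sendmail can be sketched in a few lines of Python. By the common DNSBL convention, you reverse the IPv4 octets of the connecting host, append the blocklist zone, and do an ordinary DNS A lookup; a successful answer means the address is listed. The zone name dnsbl.example.org below is a placeholder, since the submitter doesn't name which blocklists he uses.

```python
import ipaddress
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNSBL lookup name: reverse the IPv4 octets and
    append the blocklist zone, per the common DNSBL convention."""
    octets = str(ipaddress.IPv4Address(ip)).split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str) -> bool:
    """Return True if the blocklist answers with an A record for
    the address. (A real MTA would also inspect the 127.0.0.x
    return code, which encodes the reason for the listing.)"""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False

# For 203.0.113.7 against the placeholder zone, the query name is
# "7.113.0.203.dnsbl.example.org".
print(dnsbl_query_name("203.0.113.7", "dnsbl.example.org"))
```

An MTA typically runs this check at SMTP connect time and rejects or scores the message before the body is even transmitted, which is why the combination of DNSBLs plus content scoring (SpamAssassin) plus delivery rules (procmail) stops nearly everything.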