The First Open Ranking of the World Wide Web Is Available

First time accepted submitter vigna writes "The Laboratory for Web Algorithmics of the Università degli studi di Milano, together with the Data and Web Science Group of the University of Mannheim, have put together the first entirely open ranking of more than 100 million sites of the Web. The ranking is based on classic and easily explainable centrality measures applied to a host graph, and it is entirely open — all data and all software used are publicly available. Just in case you wonder, the number one site is YouTube, the second Wikipedia, and the third Twitter." They are using the Common Crawl data (first released in November 2011). Sites are ranked using harmonic centrality, with raw Indegree centrality, Katz's index, and PageRank provided for comparison. More information about the web graph is available in a pre-print paper that will be presented at the World Wide Web Conference in April.
  • It looks like google.com is 4th, and bing.com is...

    um...

    ...ok, does anybody know where bing.com is?

  • Creative Commons? Why would that be in the top ten?
    • Maybe because the rank is controlled by links, and many pages link to CC even though people seldom follow those links?
      • I would like to compare the rank of this link graph to DNS requests for a "popularity" graph.

  • How can porn account for about 75% of all web traffic if the most common web sites are not listed on this report?
  • gmpg.org is #1 in PageRank and Katz's index, and #3 in Indegree centrality. What the hell is it? I went there, and I *still* don't know what it is.

    • by Ksevio ( 865461 )
      It's a metadata profile definition that's linked to by lots of social media sites. It's pretty much just referenced in the page header (it lets people add different attributes to HTML). It's up there for the same reason w3.org is: people link to it when setting a doctype.
    • by Trepidity ( 597 )

      There's an extension to the <link rel> tag that overloads it: when rel="profile", instead of pointing to actual related data (as the tag was intended to do), the target of the link is treated as defining a schema / data format. The URL is then essentially a globally unique key for the data format; parsers that recognize the format will see the key and know how to parse some other information on the page. gmpg.org is the host of one of the early ones, XFN [gmpg.org], which is linked in default WordPress installs [wordpress.org].

      • Should all of the <link rel>'s be excluded from the dataset used to build the giant graph?

        • by Trepidity ( 597 )

          Probably; I don't think they are really links, in the sense of something that is ever actually rendered as a hyperlink. I would probably also exclude things like loading JS resources.
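As a side note on the exclusion question above, here is a rough sketch, in Python, of what filtering such non-hyperlink <link> elements out of a crawl-derived link graph could look like. The parser used, the rel values treated as non-navigational, and the sample page are all illustrative assumptions; this is not the filtering the Common Crawl or LAW pipelines actually apply.

```python
# Sketch only: extract outgoing links from a page, skipping <link rel="profile">
# and other <link> elements that are metadata/resource references rather than
# hyperlinks a reader would ever follow. The rules here are illustrative assumptions.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    # rel values treated as non-navigational (assumed list, not exhaustive)
    NON_HYPERLINK_RELS = {"profile", "stylesheet", "icon", "preload", "dns-prefetch"}

    def __init__(self):
        super().__init__()
        self.hyperlinks = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        href = attrs.get("href")
        if not href:
            return
        rel = set((attrs.get("rel") or "").lower().split())
        if tag == "a":
            self.hyperlinks.append(href)                  # real, rendered hyperlinks
        elif tag == "link" and not rel & self.NON_HYPERLINK_RELS:
            self.hyperlinks.append(href)                  # keep e.g. rel="alternate"

page = """<html><head>
<link rel="profile" href="http://gmpg.org/xfn/11">
<link rel="stylesheet" href="style.css">
</head><body><a href="https://example.com/">a real link</a></body></html>"""

parser = LinkExtractor()
parser.feed(page)
print(parser.hyperlinks)   # ['https://example.com/']
```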

  • by ausekilis ( 1513635 ) on Wednesday February 12, 2014 @01:50PM (#46230141)

    Up top you have those web sites that have their fingers in damned near everything, because the ranking is looking at the "centrality" of each site. More and more websites are using videos, and who better than YouTube to host them? Need to provide a way to search your website? Google has already done it for you. Need to tell your 3 billion fans what you're having for lunch? Facebook and Twitter have you covered. I can't see the list from work, but I'd wager that Facebook is up there too, with their ever-present "like" buttons. What's surprising is Wikipedia: you'll only sometimes see a link to Wikipedia, even in discussions on Slashdot, and they don't go out there waving their hands saying "everybody link to me" like other sites do.

    What about other aspects that would make a website "good"? Such as ease of navigation (find what you want in 5 clicks or less)? Size/amount of useful content? Number of external sites that link to their content?
    If we included that sort of data, YouTube could potentially be far up there with Wikipedia. I would think Google and Bing would be ruled out entirely since by their very design they don't hold real data.

    • Wikipedia aren't out there telling everyone to link to them, they're just sitting quietly in the corner being awesome, and everyone links to them because they're such a good source of information.

    • by rtb61 ( 674572 )

      More likely, with sufficient money, it's simply a matter of gaming the system with thousands, even tens of thousands, of bogus web sites all linking back to the advertising-revenue-targeted web site.

      A realistic rating would have to come from accurately identified people, with specific reviews, one set of reviews per person. The rest is just bullshit programming and cake.

  • by Anonymous Coward

    How nice of them to rank the problems of the internet for us.

  • Am I missing something? Does this site require Java or Silverlight or something? The page is very stark and there's no ranking shown. What am I missing? Did it get slashdotted?

  • If you look at the way they developed this list, it is closer to how Google ranks its search results. The metrics are scored on how many other pages link to a site. For example, reddit and slashdot aren't high on the list because they link out to other sites but relatively few sites link back to them. Creative Commons is in the top ten because everyone links there. It also explains why MySpace is so darn high.
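To make the point above concrete, here is a toy sketch of ranking hosts by raw indegree. The link graph and host names below are made up purely for illustration; the real ranking runs over the Common Crawl host graph and defaults to harmonic centrality rather than a bare indegree count.

```python
# Toy illustration: hosts that everyone links *to* (licence pages, spec hosts)
# score well under link-based centrality, while hosts that mostly link *out*
# (aggregators) do not. The graph below is invented for illustration.
links = [
    ("slashdot.org", "creativecommons.org"),
    ("reddit.com", "creativecommons.org"),
    ("blog.example", "creativecommons.org"),
    ("slashdot.org", "youtube.com"),
    ("reddit.com", "youtube.com"),
    ("blog.example", "reddit.com"),
]

indegree = {}
for src, dst in links:
    indegree[dst] = indegree.get(dst, 0) + 1

for host, score in sorted(indegree.items(), key=lambda kv: -kv[1]):
    print(host, score)
# creativecommons.org 3
# youtube.com 2
# reddit.com 1
```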
  • I give it a solid 8 out of 10, based on the following:

    Deductions given for trolls and things like this [theonion.com].

  • by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Wednesday February 12, 2014 @02:21PM (#46230475)

    It is now official. The Università degli studi di Milano has confirmed: Linux is dying.

    One more crippling bombshell hit the already beleaguered Linux community when UNIMI confirmed that Linux's flagship domain, kernel.org [kernel.org], fell to a shocking #1797 in the Common Crawl rankings. You don't need to be the Amazing Kreskin to predict Linux's future. Its domain now ranks just behind Excite.com, the now-irrelevant search engine from the 1990s, which edges it out at #1796.

    The glaring gap between Linux's ranking and the rankings of those in the vibrant, enterprise-ready world is in itself embarrassing enough: Apple #8, Microsoft #17, even Oracle #248. But what seals the coffin is that Linux has fallen behind even the notoriously moribund FreeBSD operating system in these industry-leading metrics, trailing it by nearly one thousand, five hundred positions.

  • Amazon.com is not in the top 10, and MySpace is still in the top 20. Even more baffling, MySpace isn't even in the top 50 of any of the other rankings, so how did they come up with this score?
  • Google is commonly accused of giving its own sites preferential results.

    However, this suggests otherwise, with Google's PageRank placing generally lower than the "web data commons" result for the same sites, e.g. YouTube & Google.

  • They need to put a caption on their results stating when the data for the ranking was last crawled.
  • So... Are we going to toss the bottom 10% or the top?

    Doesn't this work for Google?

  • by Anonymous Coward

    Whatever they are doing here does not reflect anything too useful (from my perspective). Source: I have a number of sites in the top 10,000, and nothing here makes any sense. It doesn't correlate with any real-world metrics I can see. I.e., sites that receive 140,000 visitors a day and have millions of incoming links are showing up in the 1 million area, and sites of mine with little to no power are showing up in the top 100,000. Weird.

  • This article and the open-rankings work are great, but...

    The default ranking we show you is by harmonic centrality. If you want, you can find its definition in Wikipedia. But we can explain it easily.

    Suppose your site is example.com. Your score by harmonic centrality is, as a start, the number of sites with a link towards example.com. They are called sites at distance one. Say, there are 50 such sites: your score is now 50.

    There will also be sites with a link towards sites that have a link towards example.com, but they are not at distance one. They are called sites at distance two. Say, there are 80 such sites: they are not as important as before, so we will give them just half a point. So you get 40 more points and your score is now 90.

    We can go on: there will also be sites with a link towards sites that have a link towards sites that have a link towards example.com (!), but they are not at distance one or two. They are called sites at distance three. Say, there are 100 such sites: as you can guess, we will give them just one third of a point. So you get 33.333 more points and your score is now 123.333.

    My intuition:

    Incoming links at distance one should be allocated 1 point each. *yep*
    Incoming links at distance two should be allocated half of 1 point = 0.5 points each. *yep*
    Incoming links at distance three should be allocated half of 0.5 points = 0.25 points each. *NOPE* They actually get allocated 0.333 points each.

    This means links at distance ten still get 0.1 points each? 10 hops away and they're still showing up significantly? That measure is broken. 10 hops away should score next to nothing.
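For anyone who wants to check the arithmetic, below is a minimal sketch of harmonic centrality as described in the quoted explanation: a breadth-first search over incoming links that adds 1/d for every site first reached at distance d. The toy graph mirrors the 50/80/100 worked example; the real ranking is computed approximately over the full ~100-million-host graph, not by an exact per-host BFS like this.

```python
# Minimal sketch of harmonic centrality as described in the quote above:
# for a target host, sum 1/d over every host at finite distance d along
# *incoming* links. Toy graph and plain BFS, for illustration only.
from collections import deque

def harmonic_centrality(target, incoming):
    """incoming[x] = set of hosts with a link towards x."""
    dist = {target: 0}
    queue = deque([target])
    score = 0.0
    while queue:
        x = queue.popleft()
        for y in incoming.get(x, ()):
            if y not in dist:
                dist[y] = dist[x] + 1
                score += 1.0 / dist[y]   # 1 at distance 1, 1/2 at 2, 1/3 at 3, ...
                queue.append(y)
    return score

# 50 sites at distance 1, 80 at distance 2, 100 at distance 3,
# mirroring the worked example quoted above.
incoming = {
    "example.com": {f"d1-{i}" for i in range(50)},
    "d1-0": {f"d2-{i}" for i in range(80)},
    "d2-0": {f"d3-{i}" for i in range(100)},
}

print(round(harmonic_centrality("example.com", incoming), 3))   # 123.333
```

The poster's expectation of halving the score at every extra hop describes an exponential decay (1, 1/2, 1/4, ...), whereas harmonic centrality decays only as 1/d, which is exactly why a site ten hops away still contributes 0.1 points per site. Whether that slow decay is a bug or a feature is the design choice being argued here: it keeps the measure sensitive to the size of the whole reachable neighbourhood rather than only the immediate one.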
