Common Crawl Foundation Providing Data For Search Researchers
mikejuk writes with an excerpt from an article in I Programmer: "If you have ever thought that you could do a better job than Google but were intimidated by the hardware needed to build a web index, then the Common Crawl Foundation has a solution for you. It has indexed 5 billion web pages, placed the results on Amazon EC2/S3 and invites you to make use of it for free. All you have to do is set up your own Amazon EC2 Hadoop cluster and pay for the time you use it — accessing the data is free. The idea is to open up the whole area of web search to experiment and innovation. So if you want to challenge Google, you can no longer use the excuse that you can't afford it."
Their weblog promises that source code for everything will be released eventually. One thing I've always wondered is why no distributed crawlers or search engines have ever come about.
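As a rough illustration of what "just set up a cluster and go" means in practice, here is a minimal sketch of listing and pulling down a couple of the crawl's ARC files from S3 before wiring up a real Hadoop job. The bucket and prefix names are placeholders, not the Foundation's actual paths, and boto3 is just one way to talk to S3:

    # Minimal sketch: list and fetch a few Common Crawl ARC files from S3.
    # NOTE: the bucket and prefix below are placeholders -- check the
    # Common Crawl Foundation's docs for the real locations and access terms.
    import boto3

    BUCKET = "commoncrawl-example-bucket"   # placeholder, not the real bucket
    PREFIX = "crawl-data/"                  # placeholder prefix

    s3 = boto3.client("s3")

    # List a handful of objects under the prefix.
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=10)
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])

    # Download one ~100 MB ARC file locally for inspection.
    if resp.get("Contents"):
        first_key = resp["Contents"][0]["Key"]
        s3.download_file(BUCKET, first_key, "sample.arc.gz")

In practice you would run the actual processing inside EC2 (Elastic MapReduce or your own Hadoop cluster) so the S3 reads stay inside Amazon's network.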
It should be obvious (Score:5, Interesting)
Because being 'distributed' is not a magic wand. (Nor is 'crowdsourcing', nor 'open source', nor half a dozen other terms often used as buzzwords in defiance of their actual, technical meanings.) You still need substantial bandwidth and processing power to handle the index; being distributed just makes the problem worse, because now you also need bandwidth and processing power to coordinate the nodes.
Fix GOOG's braindead pageranking system (Score:4, Interesting)
An essential improvement is coming up with a way to identify and rank by actual information content. No, I have no idea how to do that. I'm just a biologist, struggling with plain old "I." AI is beyond me.
Wait, what? (Score:5, Interesting)
It currently consists of an index of 5 billion web pages, their page rank, their link graphs and other metadata, all hosted on Amazon EC2.
The crawl is collated using a MapReduce process, compressed into 100Mbyte ARC files which are then uploaded to S3 storage buckets for you to access. Currently there are between 40,000 and 50,000 filled buckets waiting for you to search.
Each S3 storage bucket is 5TB. [amazon.com]
5 TB × 40,000 / 5 billion ≈ 42 MB per web page
Either they made a typo, my math is wrong, or they started crawling the HD porn sites first. I really hope it's not the latter because 200 petabytes of porn will be the death of so many geeks that the year of Linux on the desktop might never come.
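For anyone who wants to check the arithmetic, here is the same back-of-envelope calculation spelled out, taking the 5 TB bucket size and 40,000 buckets at face value:

    # Back-of-envelope check of the figure above.
    bucket_size_bytes = 5 * 1024**4      # 5 TiB per bucket, taken at face value
    num_buckets = 40_000
    num_pages = 5_000_000_000

    bytes_per_page = bucket_size_bytes * num_buckets / num_pages
    print(bytes_per_page / 1024**2)      # ~42 MiB per page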
Re:Saves you on bandwidth (Score:3, Interesting)
Sure, if you're a researcher and want quick results, you can run a job against this dataset for $100-200. One job. And it had better not be anything complex, or you're paying more. In the end, if you're short on money, it would probably be better to do the crawling part yourself too. That isn't costly; it just takes time.
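To put that $100-200 figure in perspective, here is a purely illustrative cost sketch; the cluster size, runtime, and hourly rate are assumptions, not published AWS prices or benchmarks:

    # Purely illustrative cost estimate for one Hadoop job over the dataset.
    # All three inputs are assumptions, not actual AWS prices or measurements.
    instances = 50          # assumed cluster size
    hours = 8               # assumed wall-clock time for a simple pass
    rate_per_hour = 0.34    # assumed per-instance hourly rate, USD

    print(instances * hours * rate_per_hour)   # ~$136 for one run

Anything that needs multiple passes over the data, or a bigger cluster to finish in reasonable time, scales that number up accordingly.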