Open Source News

Common Crawl Foundation Providing Data For Search Researchers

Posted by Unknown Lamer
from the doesn't-archive-dot-org-do-that dept.
mikejuk writes with an excerpt from an article in I Programmer: "If you have ever thought that you could do a better job than Google but were intimidated by the hardware needed to build a web index, then the Common Crawl Foundation has a solution for you. It has indexed 5 billion web pages, placed the results on Amazon EC2/S3, and invites you to make use of them for free. All you have to do is set up your own Amazon EC2 Hadoop cluster and pay for the time you use; accessing the data itself is free. The idea is to open up the whole area of web search to experimentation and innovation. So if you want to challenge Google, you can no longer use the excuse that you can't afford it." Their weblog promises source code for everything eventually. One thing I've always wondered is why no distributed crawlers or search engines have ever taken off.
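To make the workflow concrete: processing the corpus means running a Hadoop job on your own EC2 cluster against the shared S3 data. Below is a minimal, hypothetical sketch of the kind of Hadoop Streaming script such a job might use (a word count over fetched page text). It is illustrative only and assumes nothing about Common Crawl's actual file formats or bucket layout.

```python
#!/usr/bin/env python
# Hypothetical Hadoop Streaming job sketch. In a real cluster this
# script would be run twice: once as the mapper (raw lines on stdin,
# "word<TAB>1" on stdout) and once as the reducer (sorted pairs on
# stdin, "word<TAB>count" on stdout). Here both phases are combined
# so the logic can be exercised locally.
import sys
from collections import Counter

def map_line(line):
    """Mapper phase: emit a (word, 1) pair per whitespace token."""
    return [(word.lower(), 1) for word in line.split()]

def reduce_pairs(pairs):
    """Reducer phase: sum the counts for each word."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

if __name__ == "__main__":
    pairs = []
    for line in sys.stdin:
        pairs.extend(map_line(line))
    for word, count in sorted(reduce_pairs(pairs).items()):
        print("%s\t%d" % (word, count))
```

On an actual cluster you would point the job's input at the shared S3 data and pay only for the EC2 instance hours, which is the arrangement the excerpt describes.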