
MapReduce For the Masses With Common Crawl Data

New submitter happyscientist writes "This is a nice 'Hello World' for using Hadoop MapReduce on Common Crawl data. I was interested when Common Crawl announced themselves a few weeks ago, but I was hesitant to dive in. This is a good video/example that makes it clear how easy it is to start playing with the crawl data."
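For anyone wondering what such a "Hello World" actually looks like before firing anything up: below is a minimal word-count job in the Hadoop Streaming style (plain Python reading stdin and writing tab-separated key/value pairs). This is an illustrative sketch under those assumptions, not the tutorial's own code; the file names are made up.

    # wordcount_mapper.py -- illustrative Hadoop Streaming mapper (not the tutorial's code).
    # Reads lines of crawl text on stdin and emits one "word<TAB>1" pair per token.
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print("%s\t%d" % (word.lower(), 1))

    # wordcount_reducer.py -- illustrative Hadoop Streaming reducer.
    # Hadoop sorts the mapper output by key, so identical words arrive grouped together.
    import sys

    current_word, count = None, 0
    for line in sys.stdin:
        word, _, value = line.rstrip("\n").partition("\t")
        if word != current_word:
            if current_word is not None:
                print("%s\t%d" % (current_word, count))
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print("%s\t%d" % (current_word, count))

You would run this with the streaming jar that ships with Hadoop, roughly: hadoop jar hadoop-streaming.jar -mapper wordcount_mapper.py -reducer wordcount_reducer.py -input <crawl text> -output <out dir> (the jar path varies by install).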

Comments:
  • This will be my first (and hopefully not last) headfirst dive into MapReduce.

    • by InsightIn140Bytes ( 2522112 ) on Sunday December 18, 2011 @10:16PM (#38421056)
      Then you probably want to try it with some local data first, so you don't rack up a huge bill. One Hadoop job over the whole dataset costs at least $200 or so, and that's for simple stuff.
      • Warning heeded, but I saw this on a blog post at commoncrawl.org. [commoncrawl.org]

        This bucket is marked with Amazon's Requester-Pays flag, which means all access to the bucket contents requires an HTTP request signed with your Amazon Customer Id. The bucket contents are accessible to everyone, but the Requester-Pays restriction ensures that if you access the contents of the bucket from outside the EC2 network, you are responsible for the resulting access charges. You don’t pay any access charges if you access the bucket from EC2, for example via a map-reduce job, but you still have to sign your access request. Details of the Requester-Pays API can be found here: http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?RequesterPaysBuckets.html [amazonwebservices.com]

        If I understood that right, at least getting started with the tutorial will not result in me coughing up $200. Correct me if I am mistaken. (A signed Requester-Pays fetch is sketched below.)
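        To make the signing requirement concrete, here is a minimal sketch of fetching one object from a Requester-Pays bucket. It uses the modern boto3 client rather than anything from the tutorial, and the bucket/key names are placeholders; the only thing distinguishing this from a normal S3 GET is the RequestPayer="requester" flag.

            # requester_pays_get.py -- illustrative sketch; bucket and key are placeholders.
            # Credentials are picked up from the usual AWS environment/config.
            import boto3

            s3 = boto3.client("s3")
            resp = s3.get_object(
                Bucket="some-requester-pays-bucket",
                Key="path/to/segment.arc.gz",
                RequestPayer="requester",  # signed request; from outside EC2 you pay the transfer
            )
            print(len(resp["Body"].read()), "bytes")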

    • MapReduce is an implementation of an algorithm first presented in a 1970s ACM publication. I would commend ACM membership to startups, and ownership of the patent-expired content published therein. There's a lot of untapped potential in there yet - and much dross. If we are to stand on the shoulders of giants, though, it's good to know where the giants were and what they did. Brin was a good scholar here, and Page gave something new. It was the fusion of old ideas and new that made Google. If you want to be t
  • Regarding crawling (Score:3, Interesting)

    by gajop ( 1285284 ) on Monday December 19, 2011 @05:16AM (#38422562)

    Hmm, similar topic, so I'll ask a question of a personal nature.

    I recently created a crawler to collect certain information from a website, to help me gather data sets for a small machine learning project.
    While I followed robots.txt and nofollow links (along the lines of the check sketched at the end of this comment), the site's TOU forbade it. After confirming with the admin, I was told that gathering the information is not allowed, as the site owns it (so it says in the TOU).

    The data, however, is publicly available, so you wouldn't actually have to agree to the TOU to collect it. And since it's data I wanted, I concluded I should at least grab a small sample (less than 1% of the total, around 200MB) to see whether anything useful can even be done with it.

    What are your thoughts, /.? Should I have abandoned the attempt, did I do the right thing, or should I even disregard their plea and simply take as much as I please (over a long period of time, so as not to hammer their bandwidth)?
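    For what it's worth, the robots.txt half of that is easy to get right with the standard library. A minimal sketch, with a made-up site and user-agent string (keeping in mind that robots.txt is a crawler courtesy convention and says nothing about a site's TOU):

        # robots_check.py -- illustrative sketch; URL and user-agent are made up.
        # Check robots.txt before fetching, and pause between requests.
        import time
        import urllib.robotparser

        AGENT = "my-ml-crawler"
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url("https://example.com/robots.txt")
        rp.read()  # fetch and parse the site's live robots.txt

        url = "https://example.com/some/page"
        if rp.can_fetch(AGENT, url):
            # ...fetch and process the page here...
            time.sleep(5)  # throttle so we don't hammer the site's bandwidth
        else:
            print("robots.txt disallows", url)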

