MapReduce For the Masses With Common Crawl Data
New submitter happyscientist writes "This is a nice 'Hello World' for using Hadoop MapReduce on Common Crawl data. I was interested when Common Crawl announced themselves a few weeks ago, but I was hesitant to dive in. This is a good video/example that makes it clear how easy it is to start playing with the crawl data."
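For anyone who has never written a MapReduce job, the "Hello World" in question is a close cousin of the classic word-count example. A minimal sketch of the mapper half (hypothetical class name, not the tutorial's actual code):

    // Sketch of the classic word-count mapper; a reducer then sums the 1s.
    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (word, 1) for every token in the input line.
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

The Common Crawl version swaps the line-oriented input for a record reader over the crawl's archive files, but the map/reduce shape is the same.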
Re: (Score:2)
Not sure if trolling (if not, well played), but it is.
Citation [youtube.com]
Thanks for posting this.. (Score:2)
This will be my first (and hopefully not last) headfirst dive into MapReduce.
Re:Thanks for posting this.. (Score:5, Informative)
Re: (Score:2)
Warning heeded, but I saw this on a blog post at commoncrawl.org. [commoncrawl.org]
This bucket is marked with the Amazon Requester-Pays flag, which means all access to the bucket contents requires an HTTP request that is signed with your Amazon Customer Id. The bucket contents are accessible to everyone, but the Requester-Pays restriction ensures that if you access the contents of the bucket from outside the EC2 network, you are responsible for the resulting access charges. You don't pay any access charges if you access the bucket from EC2, for example via a map-reduce job, but you still have to sign your access request. Details of the Requester-Pays API can be found here: http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?RequesterPaysBuckets.html [amazonwebservices.com]
If I understood that right, at least getting started with the tutorial will not result in me coughing up $200. Correct me if I am mistaken.
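For the curious, here is roughly what a signed Requester-Pays fetch looks like with the AWS SDK for Java. The bucket and key are placeholders, and the setRequesterPays call is my reading of the SDK, so treat this as a sketch rather than gospel:

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.GetObjectRequest;
    import com.amazonaws.services.s3.model.S3Object;

    public class RequesterPaysFetch {
        public static void main(String[] args) {
            // Signing the request with your credentials is what identifies
            // (and, outside EC2, bills) you.
            AmazonS3 s3 = new AmazonS3Client(
                    new BasicAWSCredentials("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY"));

            // Placeholder bucket/key; use the paths Common Crawl documents.
            GetObjectRequest request =
                    new GetObjectRequest("example-requester-pays-bucket", "path/to/object");
            request.setRequesterPays(true); // acknowledge the Requester-Pays terms

            S3Object object = s3.getObject(request);
            System.out.println("Content length: "
                    + object.getObjectMetadata().getContentLength());
        }
    }

Run that from inside EC2 and there is no transfer charge; run it from home and the transfer is billed to those credentials, which is the whole point of the flag.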
Re: (Score:2)
How Does One Profile a MapReduce Job? (Score:3)
I think any total newbie who tried to process all the crawl data would soon find that his first attempt would not terminate until after The Heat Death of the Universe.
Surely there must be some documentation on how to make such jobs run faster and use less memory and storage?
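The closest thing I've found is Hadoop's built-in task profiling, plus the obvious trick of pointing the job at a single crawl segment while tuning. A sketch using the old mapred JobConf API (method names from the 0.20-era docs, so double-check against your version):

    import org.apache.hadoop.mapred.JobConf;

    public class ProfiledJobSetup {
        public static JobConf configure(JobConf conf) {
            conf.setProfileEnabled(true);
            // Profile only the first three map tasks; profiling everything
            // would itself run until the heat death of the universe.
            conf.setProfileTaskRange(true, "0-2"); // true = map tasks
            conf.setProfileParams(
                "-agentlib:hprof=cpu=samples,heap=sites,depth=6,"
                + "force=n,thread=y,verbose=n,file=%s");
            return conf;
        }
    }

The hprof output lands next to the task logs and tells you where the CPU and heap are going before you commit to the full data set.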
Actually, entry-level EC2 is free for 1 year (Score:2)
Actually, entry-level EC2 is free for 1 year, and has been since Nov. 2010.
You don't need to pay for accessing the data, but you still need to pay for the processing power, storage, and RAM of your EC2 instance.
See here:
http://www.infoworld.com/d/cloud-computing/amazon-web-services-offers-ec2-access-no-charge-531 [infoworld.com]
-- Terry
Re: (Score:2)
Re: (Score:2)
So you're hoping to find child porn, ghost fetish stuff, or both?
No, Really I Am Absolutely Serious (Score:1)
The problem I've got is that searches with Google and the like turn up a lot of junk that I'm not looking for, with the file search engines like FilesTube simply ignoring the numeric years specified in my search queries.
What I want to do is find PDF files of specific issues (Month and Year combinations) of certain magazine titles. But when I try these searches, the results contain a lot of years that I had not specified, and the year I did specify often doesn't appear anywhere in the resulting pages.
There are all
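What would actually solve this is exactly the kind of targeted filtering the crawl data allows. A purely hypothetical sketch of such a mapper, where the (url, extracted text) input is an assumption about the record reader and the issue date is a stand-in:

    import java.io.IOException;
    import java.util.regex.Pattern;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class IssueFinderMapper extends Mapper<Text, Text, Text, NullWritable> {
        // Stand-in issue date, e.g. hunting for a March 1982 issue.
        private static final Pattern ISSUE =
                Pattern.compile("\\bMarch\\s+1982\\b", Pattern.CASE_INSENSITIVE);

        @Override
        protected void map(Text url, Text content, Context context)
                throws IOException, InterruptedException {
            // Keep only PDF URLs whose extracted text mentions the exact issue.
            if (url.toString().toLowerCase().endsWith(".pdf")
                    && ISSUE.matcher(content.toString()).find()) {
                context.write(url, NullWritable.get());
            }
        }
    }

Unlike a search engine, the regex here won't quietly broaden the year for you.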
Re: (Score:1)
Heh. Now That's Really Funny. Thanks for the Tip! (Score:2)
Wget is chugging away even as I speak. I'm gonna have to cough up for more storage.
Here is an SEO tip for y'all. I didn't discover it, but I stumbled across it just now:
placing the terms "index of", "parent directory", "name", "last modified", "size" and "description" on your web pages is a real good way to attract visitors.
I wasn't able to turn up any actual Apache directory listings for Penthouse Pet of the Year Corinne Alphen. They were all your typical pr0n sites that not only weren't presenting di
Regarding crawling (Score:3, Interesting)
Hmm, similar article, so I'll ask a question of a personal nature.
I've recently created a crawler to collect certain information from a website, to help me gather data sets for a small machine learning project.
While I've followed robots.txt and nofollow links, the site's TOU was against it. After checking with the admin, I was told that gathering the information is not allowed, as the site owns it (as written in the TOU).
The data, however, is publicly available, so you wouldn't actually have to agree to the TOU to collect it. Since it was data I wanted, I concluded I should at least grab a small sample (less than 1% of the total, around 200MB) to see whether anything can even be done with it.
What are your thoughts, /.? Should I have abandoned the attempt, did I do right, or should I disregard their plea and simply take as much as I please (over a long period of time, so as not to hammer their bandwidth)?
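For what it's worth, the robots.txt half of this is mechanical. A naive sketch of what such a check looks like (hand-rolled, ignoring Allow rules, wildcards, and Crawl-delay; a real crawler should use a proper parser):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.List;

    public class RobotsCheck {
        // Returns true if the "User-agent: *" rules do not disallow the path.
        public static boolean isAllowed(String host, String path) throws Exception {
            List<String> disallowed = new ArrayList<String>();
            boolean appliesToUs = false;
            URL robots = new URL("http://" + host + "/robots.txt");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(robots.openStream()));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    line = line.trim();
                    if (line.regionMatches(true, 0, "User-agent:", 0, 11)) {
                        appliesToUs = line.substring(11).trim().equals("*");
                    } else if (appliesToUs
                            && line.regionMatches(true, 0, "Disallow:", 0, 9)) {
                        String rule = line.substring(9).trim();
                        if (!rule.isEmpty()) disallowed.add(rule);
                    }
                }
            } finally {
                in.close();
            }
            for (String rule : disallowed) {
                if (path.startsWith(rule)) return false;
            }
            return true;
        }
    }

The TOU question is the hard part, and that one is legal rather than technical.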
Re: (Score:2)
Felony.
Re: (Score:2)
I have no idea (Score:1)
what this is.