
Google Redesigns Image Search, Raises Copyright and Hosting Concerns 203

An anonymous reader writes "Google has recently announced changes to its image search. The search provides larger views of the images with direct links to the full-sized source image. Although this new layout is being praised by users for its intuitiveness, it has raised concerns amongst image copyright holders and webmasters. Large images can now easily be seen and downloaded directly from the Google image search results without sending visitors to the hosting website. Webmasters have expressed concerns about a decrease in traffic and an increase in bandwidth usage since this change was rolled out. Some have set up a petition requesting Google remove the direct links to the images."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by stevenh2 ( 1853442 ) on Tuesday February 05, 2013 @07:30PM (#42803083)
    Some websites use an annoying script that redirects people when they click an image.
  • by mk1004 ( 2488060 ) on Tuesday February 05, 2013 @07:45PM (#42803225)

    IIRC, JPEG images allow header data that includes copyright info. If you don't care about use of the image, leave it blank; if you do, insert the copyright info. Google's bot can look for the copyright data and, if it finds it, link to the original HTML page. Otherwise, it can give a link for a direct download.
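    The tag the parent is recalling is the standard EXIF "Copyright" tag (0x8298). A minimal sketch of the check using the Pillow library — the bot-side policy here is just the comment's idea illustrated, not anything Google has documented:

    ```python
    from PIL import Image  # Pillow

    COPYRIGHT_TAG = 0x8298  # standard EXIF "Copyright" tag (decimal 33432)

    def copyright_notice(path: str):
        """Return the embedded EXIF copyright string, or None if the tag is absent."""
        with Image.open(path) as img:
            value = img.getexif().get(COPYRIGHT_TAG)
        # Some writers pad ASCII EXIF tags with trailing NULs or spaces.
        return value.rstrip("\x00").strip() if isinstance(value, str) else value

    def link_policy(path: str) -> str:
        """Toy policy from the comment: copyrighted -> link the page, else direct link."""
        return "link-to-html-page" if copyright_notice(path) else "direct-download"
    ```

    A crawler could run something like `link_policy()` per image and choose which kind of link to surface in the results page.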

    I think there was something on /. a while back about a system for the owner to indicate how an image could be used, e.g. commercial, non-commercial, free and so on. I couldn't find it in a quick search, but that might be another option for telling Google how to handle an image.

  • by VortexCortex ( 1117377 ) <VortexCortex AT ... trograde DOT com> on Tuesday February 05, 2013 @07:48PM (#42803247)

    So, your answer is that because Google has decided it has the right to redistribute copyrighted images at full resolution in most cases, everyone else on the web should go to Google and opt out of its caching system? Site owners are in cooperation with Google; we like Google when it doesn't do fucked up illegal things... We see thumbnails as "fair use", maybe. We don't mind much as long as the users end up on our site to see the image. Google understands advert-revenue-funded websites... it is one. So it's really hard to understand users who want free stuff saying that we have to change our business practices, and maybe not even give them free stuff (or make it harder to find), simply because a bigger free-stuff provider decides it can get away with infringing everyone's copyrights.

    Your solution is not a solution. A real solution would be to address the issues. Hell, maybe while Google is processing the images to reduce their resolution and run heuristic matching algorithms for its other-sizes and search-terms features, it could watermark them with the domain name of the site it downloaded the image from.

    Or, let's simply turn your moronic suggestion on its ear. Why don't we all just say: Hey Google, if you want the feature to work that way, you need to GET PERMISSION FROM EVERYONE BEFORE INFRINGING THEIR COPYRIGHTS. Fuck you and your opt-out "let's piss off everyone, then apologize until we get our way" Facebook-style feature roll-out model.

  • by fyngyrz ( 762201 ) on Tuesday February 05, 2013 @08:00PM (#42803343) Homepage Journal

    If you're running a website with Apache, you can configure it to look at the HTTP_REFERER header and see where the web surfer was when they made the request for the image. If they weren't on your website (or if they don't send the header, a practice to be widely discouraged), just redirect them to your home page instead of serving the image.
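    A minimal mod_rewrite sketch of that check — `example.com` stands in for your own domain; a request for an image with a missing or foreign Referer gets bounced to the home page instead:

    ```apache
    RewriteEngine On
    # Allow only requests whose Referer points at our own site.
    # An empty Referer also fails this match and gets redirected.
    RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
    # Redirect image requests to the home page rather than serving the file.
    RewriteRule \.(jpe?g|png|gif)$ http://www.example.com/ [R=302,L]
    ```

    The rule only matches image extensions, so the redirected home-page request itself is unaffected, and once the visitor is browsing your pages the Referer check passes and images load normally.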

    I would think that other web servers could do the same thing, one way or another.

    For most people, it costs money -- perhaps not a huge amount, but still, real money -- to put up a website and serve content to the world. The expectation, if not agreement, is that you'll look at the site's content on the site.

    The webmaster's position is no more hostile than that of the deep miner: There are expectations, but no promises.

    Google's search goes far beyond fair use, as far as I'm concerned.

  • I can see both sides (Score:4, Interesting)

    by Miamicanes ( 730264 ) on Tuesday February 05, 2013 @08:50PM (#42803789)

    On one hand, I think the site owners deserve the traffic. On the other hand, it seems like at least a quarter of the pages end up being dead when I click on them, or redirect to sites attempting to install malware on old versions of Firefox, or seemingly have nothing whatsoever to do with the image that's supposedly there.

    A compromise might be to allow users to open the referring page in context immediately, open the cached page (with live content) after a 2-second delay, and allow users to grab the full-sized image directly from Google's cache after a 10-second CAPTCHA-guarded delay. Then, users would have every incentive to try viewing the page in context, falling back to the cached page if the original page ends up being down/borked/whatever, and being able to grab the cached image if all else fails.

    Going a step further, Google could come up with some free digital watermarking scheme that allows a 48-bit (give or take) payload to be encoded into the image at a user-selected strength, letting the owner balance robustness, file size, and visibility... pick any two of the three.

    The upper few bits (let's say, 4) would indicate the version. Initially, it would be 0001.

    The next 40 (give or take) bits would be globally unique, and would allow somebody who knows the value to obtain meta-info about you in a sensible manner. If they're all 0, it means you're using a generic permissions watermark that doesn't identify ownership but simply restricts use.

    The lower 4 bits would specify explicit restrictions:

    * do not contextually-index
    * do not cache full-sized image
    * do not perform face recognition of any kind
    * do not index for similarity to other images

    A value of "0000" would allow search engines to index the image, unless you restricted them in some industry-standard way via metadata referenced to your unique id. For the generic value with all 0s, 0000 means "go ahead and index this".

    A value of "1111" would indicate that the image, when encoded with a 4-bit watermark, should not be indexed in any way, shape, or form, regardless of future extensions to the standard that might define additional permissions, and regardless of what any indirectly-referenced meta-info might or might not say. Let's call this the "Stop Facebook from Permissions Creep in a GPLv3-like manner" anti-permission.
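    The layout proposed above — 4 version bits, 40 id bits, 4 restriction flags — packs into 48 bits like this. All names are invented for illustration; nothing here is a real standard:

    ```python
    # Restriction flags from the comment's list (lower 4 bits of the payload).
    NO_CONTEXT_INDEX  = 0b0001  # do not contextually-index
    NO_FULLSIZE_CACHE = 0b0010  # do not cache full-sized image
    NO_FACE_RECOG     = 0b0100  # do not perform face recognition of any kind
    NO_SIMILARITY     = 0b1000  # do not index for similarity to other images
    NO_INDEX_EVER     = 0b1111  # the GPLv3-like "anti-permission": no indexing, period

    def pack(version: int, owner_id: int, flags: int) -> int:
        """Pack the three fields into one 48-bit watermark payload."""
        assert 0 <= version < 1 << 4
        assert 0 <= owner_id < 1 << 40   # all-zero id = generic, ownerless watermark
        assert 0 <= flags < 1 << 4
        return (version << 44) | (owner_id << 4) | flags

    def unpack(payload: int) -> tuple[int, int, int]:
        """Split a 48-bit payload back into (version, owner_id, flags)."""
        return payload >> 44, (payload >> 4) & ((1 << 40) - 1), payload & 0xF
    ```

    For example, `pack(1, some_id, NO_FACE_RECOG | NO_SIMILARITY)` forbids face and similarity indexing while still allowing contextual indexing and caching; how the 48 bits get embedded into the pixels is a separate (and much harder) watermarking problem.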

  • Re:does not compute (Score:4, Interesting)

    by Ford Prefect ( 8777 ) on Tuesday February 05, 2013 @11:07PM (#42804753) Homepage

    Dear "Webmaster", nobody cares about your shitty website packed full of annoying ads. Get over it already.

    If someone clicks the Google Image Search 'high-resolution' link for one of my photos from Flickr, they get a medium-resolution version [staticflickr.com] with no description, attribution or copyright information. (Example search page here [google.com].)

    If they go to the ad-free Flickr page [flickr.com], they get links to much higher resolution versions, associated images and also get informed that it's under a super-open Creative Commons Attribution licence.
