
Google Announces Image Recognition Advance

Rambo Tribble writes: Using machine learning techniques, Google claims to have produced software that can generate better natural-language descriptions of images. This has ramifications for applications such as image search and describing images to the blind. As the Google people put it, "A picture may be worth a thousand words, but sometimes it's the words that are the most useful ..."
  • ...is an offline app that compares two images and, if they scale-match, keeps the higher-resolution one and ditches the smaller one. And runs the comparison over several thousand files (or even hundreds of thousands, or millions) - time is not a factor.

    (a scaling deduplicator, if you will).

    Is there already such a beast? Anyone?

    • by Sowelu ( 713889 )

      If you had a program that could generate a scale-invariant hash of an image, plus a command-line tool that could tell you the resolution of an image (such tools exist, I just don't know the names), I'm pretty sure you could do that in a single line of bash.

      Wouldn't be surprised if there were a program that generated an image hash. Even better if it generated a value where stronger features are the higher bits and smaller features (that could be lost in scaling) are the lower bits, so you could truncate and compare.

      • findimagedupes builds a database of fingerprints (basically a scaled-down monochrome image) and can call an external program with the matching duplicates. You could read the resolution in the external script using jhead or exiftool. (A rough Python sketch of the idea follows this thread.)
    • However, because of lossy compression, you might want to keep an image that has slightly lower resolution but better overall image quality.

    • And runs the comparison over several thousand files (or even hundreds of thousands, or millions)

      Ah yes... the joys of Internet Art collecting!
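
    For anyone who wants to experiment, here is a minimal Python sketch of the scaling-deduplicator idea, assuming Pillow is installed. The 8x8 average hash is a stand-in for findimagedupes' scaled-down monochrome fingerprints; the extension list and exact-match grouping are my assumptions, not anything from the thread.

        # Hypothetical sketch of a "scaling deduplicator": fingerprint each image
        # in a way that survives rescaling, keep the highest-resolution copy per
        # fingerprint, and report the smaller duplicates.
        import sys
        from pathlib import Path
        from PIL import Image  # pip install Pillow

        def average_hash(path, size=8):
            # 64-bit scale-invariant fingerprint: shrink to 8x8, convert to
            # grayscale, then set one bit per pixel above the mean brightness.
            img = Image.open(path).convert("L").resize((size, size))
            pixels = list(img.getdata())
            mean = sum(pixels) / len(pixels)
            bits = 0
            for p in pixels:
                bits = (bits << 1) | (p >= mean)
            return bits

        def dedupe(folder):
            best = {}  # fingerprint -> (pixel area, path) of largest copy seen
            for path in sorted(Path(folder).rglob("*")):
                if path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".gif"}:
                    continue
                try:
                    with Image.open(path) as img:
                        area = img.width * img.height
                    fp = average_hash(path)
                except OSError:
                    continue  # unreadable file; skip it
                if fp not in best:
                    best[fp] = (area, path)
                elif area > best[fp][0]:
                    print("ditch (smaller duplicate):", best[fp][1])
                    best[fp] = (area, path)
                else:
                    print("ditch (smaller duplicate):", path)

        if __name__ == "__main__":
            dedupe(sys.argv[1])

    Note that exact-equality grouping is cruder than what findimagedupes does; real tools compare fingerprints with a Hamming-distance threshold so that compression artifacts don't break a match.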

  • Pretty nifty ... wondering if I describe a scene such as the one from TFA:
    "Two pizzas sitting on top of a stove top oven (with a glass of wine)"
    whether it can generate an image algorithmically ... rather than just displaying an image from the library that meets those criteria ...
  • I wonder if you could make a Google Glass assistant for the blind using this technology? Like a little earbud that describes stuff in front of you, and distances, and whatnot.

    "Describe my surroundings."
    "There is a lamp post directly ten feet in front of you. A lovely pizza parlor is off to your right (four out of five stars). There is moderate foot traffic, seven people in the immediate vicinity. There is a man walking towards you smiling. It looks like your friend Greg. There is heavy traffic to your
  • by Anonymous Coward

    http://xkcd.com/1444/

  • Especially considering a 1-megapixel image in 8-bit grayscale: that's 1 MB worth of information. Assuming 8 characters per word on average (including punctuation) and 250 words per page in some 16-bit character encoding, the image weighs the same as a book of 250 pages.
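
    The arithmetic, for anyone checking (all inputs are the comment's own assumptions):

        # Back-of-envelope check of the image-vs-book comparison above.
        image_bytes = 1_000_000          # 1 megapixel, 8-bit grayscale = 1 byte/pixel
        bytes_per_word = 8 * 2           # ~8 characters per word, 16-bit encoding
        words_per_page = 250
        print(image_bytes / (bytes_per_word * words_per_page))  # -> 250.0 pages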

  • by slew ( 2918 ) on Thursday November 20, 2014 @07:59PM (#48430897)

    Not as "advanced" in image recognition as advertised.

    Basically they took the output of a common object classifier and, instead of just picking the most likely object (which is what a typical object classifier does), left it in a form where multiple objects are detected in various parts of the scene. Then they trained a neural network to create captions (by giving it training pictures with associated captions).

    According to the paper [arxiv.org], it sometimes generates a reasonable description. Other times it reads in a picture of a street sign covered with stickers and emits a caption like "refrigerator filled with lots of food and drink".

    Actually the most interesting thing about it is the LSTM-based sentence generator used to produce the caption from the detected objects. LSTMs are notoriously hard to train, and they apparently borrow some results from language-translation techniques to form intelligible sentences. (A toy sketch of the overall structure follows this thread.)

    This is all very googly-researchy in that they want to see what the limits of purely data-driven machine learning are (without human tuning). This is not so much an advance in image recognition, however, as an advance in language generation for caption construction.

    • by AmiMoJo ( 196126 ) *

      Other search and map engines must be worried by this kind of thing, though. One of the reasons Google Maps is so good is that they do image recognition on Street View photos. Google's search engine is better than Bing's because it understands the web more like a human does.
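
    To make the pipeline slew describes concrete, here is a toy numpy sketch of the feed-image-features-into-an-LSTM captioning structure. The weights are random and the vocabulary is made up, so the output is gibberish; it only illustrates the data flow (image features seed the LSTM state, then each sampled word is fed back in), not Google's actual model.

        # Toy CNN-features -> LSTM caption generator (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        vocab = ["<start>", "<end>", "two", "pizzas", "on", "a", "stove"]
        D, H, V = 16, 32, len(vocab)   # feature dim, hidden dim, vocab size

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Random stand-ins for trained parameters.
        W = rng.normal(0, 0.1, (4 * H, D + H))   # all four LSTM gates at once
        b = np.zeros(4 * H)
        W_out = rng.normal(0, 0.1, (V, H))       # hidden state -> word logits
        embed = rng.normal(0, 0.1, (V, D))       # word -> input vector

        def lstm_step(x, h, c):
            z = W @ np.concatenate([x, h]) + b
            i, f, o, g = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # update cell state
            h = sigmoid(o) * np.tanh(c)                    # expose hidden state
            return h, c

        def caption(image_features, max_len=10):
            h, c = np.zeros(H), np.zeros(H)
            h, c = lstm_step(image_features, h, c)   # the image seeds the state
            word, out = vocab.index("<start>"), []
            for _ in range(max_len):
                h, c = lstm_step(embed[word], h, c)  # feed previous word back in
                word = int(np.argmax(W_out @ h))     # greedy decoding
                if vocab[word] == "<end>":
                    break
                out.append(vocab[word])
            return " ".join(out)

        # Stand-in for CNN features extracted from an image.
        print(caption(rng.normal(size=D)))

    In the real system the features come from a deep CNN and the LSTM is trained end-to-end on picture/caption pairs, as the paper describes.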

  • Right now the technology is well behind human cognition. It may say "two men playing tennis" where a human would notice a few things, do some research, and say "Roger Federer practicing with his trainer for the upcoming Davis Cup."

    But the cool thing is that the machine will eventually reach, then surpass, the human. The computer of tomorrow will say, "Federer practicing for the Davis Cup, but his injury will prevent a win. He also needs to start using a nitrogen fertilizer on his lawn."
