Google Announces Image Recognition Advance
Rambo Tribble writes: Using machine learning techniques, Google claims to have produced software that can generate better natural-language descriptions of images. This has ramifications for uses such as improved image search and describing images to the blind. As the Google people put it, "A picture may be worth a thousand words, but sometimes it's the words that are the most useful ..."
what would be useful (Score:1)
...is an offline app that compares two images and, if they scale-match, keeps the higher-resolution one and ditches the smaller one. And runs the comparison over several thousand files (or even hundreds of thousands, or millions) - time is not a factor.
(a scaling deduplicator, if you will).
Is there already such a beast? Anyone?
Re: (Score:2)
If you had a program that could generate a scale-invariant hash of an image, plus a command-line tool that could tell you the resolution of an image (which exists, I just don't know the name), I'm pretty sure you could do that in a single line of bash.
I wouldn't be surprised if there were a program that generates an image hash. Even better if it generates a value where stronger features are the higher bits and smaller features (that could be lost in scaling) are the lower bits, so you could truncate and compare?
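Not a bash one-liner, but here's a minimal Python sketch of the idea, assuming Pillow is installed: an average-hash-style scale-invariant fingerprint (shrink to 8x8 grayscale, threshold against the mean) plus a resolution check, keeping only the largest file in each hash group. The function names and the 8x8 hash size are just illustrative choices, not any particular existing tool.

from PIL import Image
from collections import defaultdict
import sys

def average_hash(path, size=8):
    """Scale-invariant 64-bit perceptual hash: shrink to size x size grayscale,
    set each bit to whether that pixel is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def resolution(path):
    with Image.open(path) as img:
        return img.width * img.height

def dedupe(paths):
    """Group files by perceptual hash, keep only the highest-resolution one per group."""
    groups = defaultdict(list)
    for p in paths:
        groups[average_hash(p)].append(p)
    return [max(group, key=resolution) for group in groups.values()]

if __name__ == "__main__":
    for path in dedupe(sys.argv[1:]):
        print(path)

Truncating the hash (as suggested above) would need something like a DCT-based hash where low-frequency structure lands in the high bits; a plain average hash doesn't order its bits by feature strength.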
Re: (Score:2)
Comment removed (Score:4, Informative)
Re: (Score:2)
ooh, this looks like it might be just the ticket! Thanky! :D
Re: (Score:3)
However, because of lossy compression, you might want to keep an image that is slightly lower resolution but still has better overall image quality.
Re: (Score:3)
And runs the comparison over several thousand files (or even hundreds of thousands, or millions)
Ah yes... the joys of Internet Art collecting!
But can it generate an image from words ... (Score:2)
"Two pizzas sitting on top of a stove top oven (with a glass of wine)"
if it can generate an image algorithmically
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Yes, it's called "Googling for images". (You didn't say "original".)
Anything to help us find cat videos on YouTube (Score:1)
you know, for the cats. on the internet. (Score:3)
Then you can finally tell if someone on the internet is a dog.
Re: (Score:3)
You are eaten by a Grue.
Maybe help the blind? (Score:1)
"Describe my surroundings."
"There is a lamp post directly ten feet in front of you. A lovely pizza parlor is off to your right (four out of five stars). There is moderate foot traffic, seven people in the immediate vicinity. There is a man walking towards you smiling. It looks like your friend Greg. There is heavy traffic to your
Re: Maybe help the blind? (Score:3)
You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here.
New stupid mandatory XKCD (Score:1)
http://xkcd.com/1444/
Re: (Score:2)
Also:
http://xkcd.com/1425/ [xkcd.com]
"A picture may be worth a thousand words..." (Score:2)
Especially considering a 1-megapixel image in 8-bit grayscale. That's about 1 MB worth of information. Assuming 8 characters per word on average (including punctuation and spaces) and 250 words per page in some 16-bit character encoding, the image weighs the same as a book of about 250 pages.
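A quick back-of-the-envelope check of that arithmetic (taking 1 MB as 10^6 bytes):

pixels = 1_000_000            # 1 megapixel, 8-bit grayscale -> 1 byte per pixel
image_bytes = pixels * 1      # = 1,000,000 bytes, roughly 1 MB

bytes_per_char = 2            # 16-bit character encoding
chars_per_word = 8            # average word length incl. punctuation/spaces
words_per_page = 250
page_bytes = bytes_per_char * chars_per_word * words_per_page   # 4,000 bytes per page

print(image_bytes / page_bytes)   # -> 250.0 pages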
actually automatic picture caption generator (Score:4, Interesting)
Not as "advanced" in image recognition as advertised.
Basically they took the output of a common object classifier and, instead of just picking the most likely object (which is what a typical object classifier does), left it in a form where multiple objects are detected in various parts of the scene. Then they trained a neural network to create captions (by giving it training pictures with associated captions).
According to the paper [arxiv.org], it sometimes generates a reasonable description. Other times it reads in a picture of a street sign covered with stickers and emits a caption like "refrigerator filled with lots of food and drink".
Actually the most interesting thing about it is the LSTM-based sentence generator used to produce the caption from the detected objects. LSTMs are notoriously hard to train, and apparently they borrow some results from language-translation techniques to form intelligible sentences.
This is all very googly-researchy in that they want to see what the limits of pure data-driven machine learning are (without human tuning). It is not, however, so much an advance in image recognition as an advance in language generation for caption construction.
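For what it's worth, a minimal toy sketch of that encoder/decoder split, assuming PyTorch: CNN-style image features are projected into the same space as word embeddings and fed in as the first step of an LSTM that predicts the caption one word at a time. The class and parameter names here are purely illustrative, not from the paper.

import torch
import torch.nn as nn

class CaptionGenerator(nn.Module):
    """Toy image-features -> LSTM caption decoder, loosely mirroring the
    setup described above (not Google's actual code)."""
    def __init__(self, feature_dim, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(feature_dim, embed_dim)   # project image features
        self.embed = nn.Embedding(vocab_size, embed_dim)    # word embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)        # predict the next word

    def forward(self, img_features, captions):
        # The projected image features act as the first "word" of the sequence.
        img_token = self.img_proj(img_features).unsqueeze(1)
        word_tokens = self.embed(captions)
        seq = torch.cat([img_token, word_tokens], dim=1)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)

# Toy usage with random tensors standing in for CNN features and caption tokens.
model = CaptionGenerator(feature_dim=2048, vocab_size=10000)
feats = torch.randn(4, 2048)              # pretend CNN outputs for 4 images
caps = torch.randint(0, 10000, (4, 12))   # token ids for 4 training captions
logits = model(feats, caps)
print(logits.shape)                       # torch.Size([4, 13, 10000])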
Re: (Score:2)
Other search and map engines must be worried by this kind of thing though. One of the reasons Google Maps is so good is that they do image recognition with Street View photos. Google's search engine is better than Bing's because it understands the web more like a human does.
The really cool thing about this (Score:1)
But the cool thing is that the machine will eventually reach, then surpass, the human. The computer of tomorrow will say "Federer practicing for the Davis Cup, but his injury will prevent a win. He also needs to start using a nitrogen fertilizer on his lawn."