Shutterstock shows machine learning smarts with reverse image search for stock photos
Shutterstock is flexing its AI muscles: the stock photo giant is adding new computer vision search smarts to its platform.
The company, which is headquartered in New York’s Empire State Building, went public back in 2012 and now offers bloggers and media outlets more than 70 million images, a collection so vast that tracking down a specific asset can be a challenge. Of course, the trusty old keyword search tool is effective to an extent, but what if you want to find images similar to one you already have in your possession? Or what if you want alternative images based on color scheme, mood, or shape? This is where Shutterstock’s new reverse image search comes into play.
Computer vision is essentially the branch of artificial intelligence that lets machines analyze and understand images by breaking them down and processing them pixel by pixel, rather than relying on metadata (such as keywords and descriptions, which depend not only on human effort but on human accuracy too). Shutterstock put together a computer vision team more than a year ago, and this is the first fruit of its labor.
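To make the pixel-versus-metadata distinction concrete, here’s a deliberately crude sketch of describing a photo by its pixels alone: a simple color histogram that two visually similar images will tend to share, regardless of how they were tagged. This isn’t Shutterstock’s method, just an illustration; the `color_histogram` and `similarity` helpers are made up for this example and rely on Pillow and NumPy.

```python
# Illustrative only: a crude, pixel-level image descriptor.
# Requires Pillow and NumPy.
import numpy as np
from PIL import Image

def color_histogram(path, bins=8):
    """Return a normalized RGB color histogram for the image at `path`."""
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    hist = hist.flatten()
    return hist / hist.sum()

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

# Example: compare a query photo against a candidate by color alone.
# score = similarity(color_histogram("query.jpg"), color_histogram("candidate.jpg"))
```

A histogram like this captures color scheme but nothing about shapes or subject matter, which is why production systems lean on learned features instead.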
The main search box in Shutterstock now offers an option to upload or drag and drop an image.
Choose any image from your PC…
…then Shutterstock starts analyzing the pixels to find matches.
And what you end up with is a compendium of snaps that resemble the original photo, not just in content, but also in look and feel.
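Under the hood, systems like this typically reduce every image in the catalog to a numerical feature vector ahead of time, then rank the catalog by how close each vector sits to the query’s. The sketch below shows only that matching step; the `nearest_images` function, the precomputed vectors, and the embedding model that produced them are assumptions for illustration, not Shutterstock’s actual pipeline.

```python
# A minimal sketch of the matching step, assuming every catalog image has
# already been reduced to a feature vector by some embedding model.
import numpy as np

def nearest_images(query_vec, catalog_vecs, catalog_ids, k=5):
    """Return the ids and scores of the k catalog images whose vectors are
    most similar (by cosine similarity) to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    c = catalog_vecs / np.linalg.norm(catalog_vecs, axis=1, keepdims=True)
    scores = c @ q                       # cosine similarity against every image
    top = np.argsort(scores)[::-1][:k]   # indices of the best matches
    return [(catalog_ids[i], float(scores[i])) for i in top]
```

At Shutterstock’s scale, a brute-force scan like this would be replaced by an approximate nearest-neighbor index, but the principle is the same.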
The underlying concept here is nothing new, of course. Reverse image search is already put to a variety of uses by a myriad of services, including Snaplay, ImageBrief, and TinEye, while the mighty Google also offers a useful reverse image tool.
But companies whose offerings look like fairly straightforward technical services on the surface are now moving into the machine learning realm to build better recommendation engines for humans.
Predictive typing keyboard company SwiftKey was recently snapped up by Microsoft, not because it has a popular little app for Android phones and iPhones, but because it has been building a sophisticated back end based on artificial intelligence and machine learning, including artificial neural networks (ANNs) modeled on the structure and workings of the human brain.
Similarly, Shutterstock developed its own convolutional neural network for its reverse image technology, something that’s also being used to improve its “similar image” option, which is available at the bottom of each image result.
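Shutterstock hasn’t published the details of its network, but the general recipe is well established: run each image through a convolutional network and keep the activations from a late layer as its feature vector, so that visually similar pictures end up with nearby vectors. Here’s a rough sketch of that idea using an off-the-shelf pretrained model (torchvision’s ResNet-18) as a stand-in for Shutterstock’s own, unpublished network.

```python
# Illustrative sketch: extract a visual feature vector with a pretrained CNN.
# Downloads ResNet-18 weights on first run; requires torch and torchvision.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()   # drop the classifier; keep the 512-d features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Map an image file to a feature vector describing its visual content."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0)   # shape: (512,)
```

Vectors produced this way could then be fed to a nearest-neighbor lookup like the one sketched earlier to surface visually similar results.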
For example, you can see the old keyword-based “similar image” options at the bottom of this English bulldog photo. Some of the results are bulldogs, sure, but others are simply dogs, and some of them are clearly silly. A dog with a wig is cute, maybe, but is it usable? Actually, don’t answer that…
The new visually similar images, while not necessarily exact matches for the original, are much more in keeping with it.
Though Shutterstock is better known for its stock photos, it also has millions of video clips, and the company will soon be expanding this visually similar search technology to those too.
“With a collection as vast as Shutterstock’s, the importance of being able to surface exactly what a customer needs with advanced search and discovery tools is essential to our continued success,” said Shutterstock’s founder and CEO, Jon Oringer. “Doing this in video is a breakthrough, and as the technology continues to learn and recognize what’s inside an image or a clip, it promises more possibilities. We know we’ve only scratched the surface in how we use this deep machine learning to better understand and serve our customers.”
From enterprise software and drug discovery through to predictive typing and now stock photography searches, machine learning is less of an abstract research field these days and more of a reality. It can’t be too long before a machine finally beats a human player at Go… wait a minute. Oh.