Most of us are creating ever more personal digital photos and videos. Advances in technology make it easy to store them on a home PC or on the internet, but how do we find a particular picture again? Current search engines for photographic databases rely on word searching, so one can find at most those images that were painstakingly annotated; for the rest there is only trial-and-error browsing.
Dr. Stefan Rueger and his team at Imperial College London have developed a visual search engine that uses the image content captured by the camera itself. It structures large image and video datasets automatically so that they can be searched easily by visual similarity, by timeline, by location for GPS-enabled devices and, where they exist, by annotations.
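The article does not disclose which image features or index the engine actually uses. As a minimal illustration of searching by visual similarity, the sketch below ranks images by the distance between coarse colour histograms; every name, parameter and the toy data are hypothetical stand-ins, not the team's method.

```python
# Illustrative sketch only: a colour histogram stands in for whatever
# features the real engine extracts from the camera's image content.
import math

def colour_histogram(pixels, bins=4):
    """Quantise RGB pixels into a bins^3 colour histogram (a 'feature vector')."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        # Map each 0-255 channel to a coarse bin index.
        i = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[i] += 1.0
    total = sum(hist) or 1.0
    return [v / total for v in hist]  # normalise so image size doesn't matter

def distance(h1, h2):
    """Euclidean distance between two histograms; smaller means more similar."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def rank_by_similarity(query_pixels, library):
    """Return library image names ordered from most to least similar to the query."""
    q = colour_histogram(query_pixels)
    return sorted(library, key=lambda name: distance(q, colour_histogram(library[name])))

# Toy 'images' represented as lists of RGB pixels.
red_photo  = [(250, 10, 10)] * 16
pink_photo = [(250, 10, 10)] * 12 + [(255, 255, 255)] * 4
blue_photo = [(10, 10, 250)] * 16
library = {"pink": pink_photo, "blue": blue_photo}
print(rank_by_similarity(red_photo, library))  # the mostly-red photo ranks first
```

A real system would precompute the feature vectors once at indexing time and use an approximate nearest-neighbour index rather than a linear scan, but the ranking idea is the same.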
In contrast to traditional search engines, users can drop images into a search box: these images are then used as visual search terms, just as words are used in a traditional search. Users also benefit from a novel exploration mode, termed "lateral browsing": using the automatically generated relations between images, one can effortlessly navigate by similarity of structure, colour, texture, annotation, location and time. A third feature that sets this technology apart from traditional image databases is its use of clustering algorithms to generate a visual summary of a dataset.
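The article does not name the clustering algorithm behind these summaries; as one plausible sketch, plain k-means over feature vectors can group similar images, with the image nearest each cluster centre serving as that cluster's representative thumbnail. All names and the toy features below are assumptions for illustration.

```python
# Illustrative sketch of a clustering-based summary (the real algorithm
# is unspecified): k-means groups the images, and the member closest to
# each centroid becomes the cluster's representative in the summary.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def k_means(features, k, iterations=20):
    """Cluster feature vectors; return centroids and per-item cluster indices."""
    centroids = [list(features[i]) for i in range(k)]  # seed with first k items
    assign = [0] * len(features)
    for _ in range(iterations):
        assign = [min(range(k), key=lambda c: dist(f, centroids[c])) for f in features]
        for c in range(k):
            members = [f for f, a in zip(features, assign) if a == c]
            if members:  # recompute centroid as the mean of its members
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, assign

def summarise(named_features, k):
    """Pick one representative image name per cluster as the dataset summary."""
    names = list(named_features)
    feats = [named_features[n] for n in names]
    centroids, assign = k_means(feats, k)
    summary = []
    for c in range(k):
        members = [i for i, a in enumerate(assign) if a == c]
        if members:  # representative = the member closest to the centroid
            summary.append(names[min(members, key=lambda i: dist(feats[i], centroids[c]))])
    return summary

# Toy feature vectors forming two tight groups (say, beach vs forest photos).
photos = {
    "beach1": [0.9, 0.1], "beach2": [0.8, 0.2],
    "forest1": [0.1, 0.9], "forest2": [0.2, 0.8],
}
print(summarise(photos, k=2))  # one representative per group
```

The same pairwise distances that drive the clustering could also supply the "automatically generated relations between images" that lateral browsing navigates.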