Most of us are creating ever more digital photos and videos. Advances in technology make it easy to store them on a home PC or on the internet, but how do we find a particular picture again? Current search engines for photographic databases rely on word searching, so one can find at most those images that were painstakingly annotated; for the rest there is trial-and-error browsing.
Dr. Stefan Rueger and his team at Imperial College London have developed a visual search engine that uses the image content supplied by the camera itself. It automatically structures large image and video datasets so that they can easily be searched by visual similarity, time line, location (for GPS-enabled devices) and, where they exist, annotations.
In contrast to traditional search engines, users can drop images into a search box: these images serve as visual search terms, just as words do in a traditional search. Users also benefit from a novel exploration mode, termed "lateral browsing": using the automatically generated relations between images, one can effortlessly navigate by similarity of structure, colour, texture, annotation, location and time. A third feature that sets this image search technology apart from traditional image databases is the use of clustering algorithms to generate summaries of visual datasets.
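The query-by-example idea can be illustrated in miniature. The sketch below is not the Imperial College system; it simply assumes one common approach, in which each image is reduced to a colour histogram and database images are ranked by their similarity to the query image's histogram. All function names and the toy data are hypothetical.

```python
# Illustrative sketch of query-by-example image search using colour
# histograms as the visual feature (a simplifying assumption, not the
# actual feature set of the system described above).

from collections import Counter
from math import sqrt

def colour_histogram(pixels, bins=4):
    """Quantise each RGB pixel into bins^3 buckets and return a
    normalised histogram: the image's visual signature."""
    counts = Counter(
        (r * bins // 256, g * bins // 256, b * bins // 256)
        for r, g, b in pixels
    )
    total = len(pixels)
    return {bucket: n / total for bucket, n in counts.items()}

def similarity(h1, h2):
    """Cosine similarity between two sparse histograms (1.0 = identical)."""
    dot = sum(v * h2.get(k, 0.0) for k, v in h1.items())
    n1 = sqrt(sum(v * v for v in h1.values()))
    n2 = sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def search(query_pixels, database):
    """Rank database images by visual similarity to the query image.
    `database` maps image names to lists of (r, g, b) pixels."""
    q = colour_histogram(query_pixels)
    ranked = sorted(
        database.items(),
        key=lambda item: similarity(q, colour_histogram(item[1])),
        reverse=True,
    )
    return [name for name, _ in ranked]
```

Dropping a reddish sunset photo into such a search box would rank other warm-toned images above, say, seascapes, because their histograms overlap more. A production system would combine several such features (texture, structure, time, location) rather than colour alone.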
Editors' Note: Business Lead news items are published as an exclusive service to our subscribers. We select them solely on their news value, in our independent opinion as journalists. Subscribers seeking more information on any of these items are invited to contact the researchers directly, via the hyperlinks provided. We welcome suggestions of more business opportunities for us to publish, and word of any past business lead that has resulted in a deal. By helping information flow about deals done and to be done, we aim to promote enterprise in science. You may contact us at [email protected].