Christoph Posch, co-founder and CTO of Chronocam, discusses with Science|Business’ Daniel Echikson his company’s new sensor technology.
Q. Could you explain the sensor technology your company is developing?
What we’re doing is building eyes for machines. Our research has implications for technology such as self-driving cars and drones. Our cameras are actually inspired by biological eyes, by retinas, and by how the human brain processes visual information.
Our sensors are different from traditional cameras because they acquire visual information in a much more efficient way. They take in only the important data, producing less data for computers to process and allowing them to act on visual information much faster. Our cameras discard redundant or otherwise unnecessary data that an intelligent machine would not need to make decisions anyway.
Q. What do you mean, your cameras “reduce redundant data”?
Traditional video cameras take lots of pictures very quickly and combine them to acquire and analyze motion. The faster the camera can take pictures, the better the representation of the dynamic parts of the scene. However, you also acquire the unchanged parts of the scene at the same rate. These data are redundant and useless. In our camera, each pixel gets to decide for itself how fast to sample (take pictures), instead of all of the pixels sampling at the same rate, as is the case in a traditional camera. This means we optimise each pixel's acquisition process continuously.
In a traditional camera, the frame rate, or sampling rate, is the same for every pixel. In ours, it is not; individual pixels sample as fast or as slowly as necessary. Each pixel decides for itself. Our camera is frame-less.
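The idea above can be sketched in a few lines of code. This is a minimal illustration of change-driven sampling, not Chronocam's actual circuit design: each simulated pixel reports an "event" only when its brightness moves past a threshold, so unchanged pixels produce no data at all.

```python
# Illustrative sketch (names and threshold are assumptions, not
# Chronocam's design): pixels emit events only on significant change.

def events_from_frames(frames, threshold=10):
    """frames: list of 2D lists of brightness values over time.
    Returns (t, x, y, polarity) tuples -- one per significant change."""
    events = []
    last = [row[:] for row in frames[0]]          # last value each pixel reported
    for t, frame in enumerate(frames[1:], start=1):
        for y, row in enumerate(frame):
            for x, value in enumerate(row):
                diff = value - last[y][x]
                if abs(diff) >= threshold:        # only changed pixels report
                    events.append((t, x, y, 1 if diff > 0 else -1))
                    last[y][x] = value            # pixel updates its own reference
    return events

# A 2x2 scene in which only one pixel ever changes:
frames = [[[100, 100], [100, 100]],
          [[100, 100], [100, 150]],   # bottom-right pixel brightens
          [[100, 100], [100, 150]]]   # nothing changes -> no events
print(events_from_frames(frames))     # [(1, 1, 1, 1)]
```

A conventional camera would have recorded all twelve pixel samples here; the change-driven version emits a single event.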
Q. How will your technology affect ordinary people?
Our cameras will have many applications. They could be used in any machine that needs to ‘see.’ Such machines include smart mobile devices, gesture-controlled devices, self-driving cars, robots, drones, or vision restoration devices for the blind.
Our camera is better than traditional ones because it reduces power consumption (it's more energy efficient) and the amount of data produced (our data require much less bandwidth to transmit and much less space to store). And because only the important data remain, we can process them much faster, meaning that smart machines can make quicker decisions. This is especially important for self-driving cars, which need to make potentially life-or-death decisions in milliseconds. To sum up, our cameras will make devices smarter.
Q. Besides existing applications, can you think of any new things that will be made possible by Chronocam's cameras?
We can do things no one else can do because we can combine real-time machine vision with high-speed acquisition. Our cameras will have a huge impact on robot navigation, for example helping drones move around obstacles.
To get the same sort of precision as our sensor, you would need to run a traditional camera at tens of thousands or hundreds of thousands of frames per second. And if you were to do that, you would produce so much data so quickly, you would run out of storage almost immediately, let alone do any real-time processing on these data.
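A rough back-of-envelope calculation shows the scale of the problem. The figures below are my own illustrative numbers (VGA resolution, 8-bit pixels, an assumed event rate), not Chronocam's measurements:

```python
# Data-rate comparison: a frame camera sampling every pixel vs an event
# camera reporting only the pixels that change. Numbers are illustrative.

WIDTH, HEIGHT = 640, 480            # assumed VGA resolution
BYTES_PER_PIXEL = 1                 # assumed 8-bit grayscale

def frame_camera_rate(fps):
    """Bytes per second when every pixel is sampled in every frame."""
    return WIDTH * HEIGHT * BYTES_PER_PIXEL * fps

def event_camera_rate(events_per_second, bytes_per_event=8):
    """Bytes per second when only changing pixels report an event
    (assumed 8 bytes to encode x, y, timestamp, polarity)."""
    return events_per_second * bytes_per_event

# A frame camera at 100,000 fps -- the regime needed to match event timing:
print(frame_camera_rate(100_000) / 1e9, "GB/s")   # 30.72 GB/s
# An event camera with a busy scene generating one million events/s:
print(event_camera_rate(1_000_000) / 1e6, "MB/s") # 8.0 MB/s
```

Under these assumptions the frame camera produces thousands of times more data, which is why real-time processing and storage break down long before the event-based approach does.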
Q. How is your company currently being funded?
We are a startup; we've been around since 2014. We are venture-capital funded by Bosch Venture Capital and CEA (the French energy-research agency), and we are currently in the process of securing more VC funding.
Q. Why did ATTRACT interest you?
Before this, I was a scientist at CERN. This is the perfect event for me. I was a scientist, and recently, I co-founded my own company. ATTRACT is all about bringing science and business together. In a certain sense, I represent ATTRACT.
Q. What is your plan if and after Chronocam attains funding?
Grow the company and get into the market as soon as possible. We are talking to many potential customers. (I cannot say exactly who just yet.) There is lots of interest, e.g. from automotive, industrial automation, and drone companies.
Our camera technology is already in a commercial product made by PIXIUM Vision, which produces vision restoration systems for the blind.
I want to add that the data our sensor produces are so different that we have been forced to rethink image processing completely. Instead of processing frame by frame, we now have to process a continuous stream of per-pixel data. This is similar to how the human brain processes visual information. The brain inspired us, and we are going to have to use our brains to continue to think up novel uses for our camera.
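To make the "continuous stream" idea concrete, here is a hypothetical sketch of frame-less processing. Instead of iterating over frames, the code consumes per-pixel events one by one and updates its state incrementally; the per-pixel last-event map it builds resembles what the event-vision literature calls a time surface. The function and event format are my own illustrative choices, not Chronocam's software:

```python
# Frame-less processing sketch: consume a stream of (t, x, y, polarity)
# events and keep a per-pixel map of the most recent event time.

def build_time_surface(events, width, height):
    """Returns a 2D map holding the latest event timestamp at each
    pixel, or None where no event has occurred."""
    surface = [[None] * width for _ in range(height)]
    for t, x, y, _polarity in events:
        surface[y][x] = t              # state updates event by event
    return surface

stream = [(1, 0, 0, 1), (3, 1, 1, -1), (7, 0, 0, 1)]
print(build_time_surface(stream, width=2, height=2))
# [[7, None], [None, 3]]
```

The point of the sketch is that there is never a "frame" anywhere in the pipeline: every event refines the picture the moment it arrives, much as the interview describes.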