Canon's demo, though, adds a further restriction by limiting the distance from the camera at which objects can be recorded. The server used information from the depth sensor to work out how far away each part of the image was, "ghosting" it if it was outside a volume roughly one meter square and two meters high. Stand at the front of the square and lean towards the camera, and the server makes your torso disappear as you leave the monitored zone; stand outside and lean in, and your head suddenly reappears, atop a ghostly green body. People passing by, whether in front of or behind the pink square, are not recorded at all.
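Canon has not disclosed how its server implements the masking, but the effect described above can be sketched with a per-pixel depth test: any pixel whose measured distance falls outside the monitored volume is pushed toward green. The function name, the green tint, and the depth thresholds below are illustrative assumptions, not Canon's actual pipeline.

```python
import numpy as np

def ghost_outside_zone(frame, depth, z_min=0.5, z_max=1.5):
    """Tint pixels green wherever the depth map says they lie outside
    the monitored volume [z_min, z_max] in meters.

    frame: HxWx3 uint8 color image; depth: HxW array of distances in meters.
    A simplified illustration of depth-based masking, not Canon's method.
    """
    out = frame.copy()
    outside = (depth < z_min) | (depth > z_max)
    # "Ghost" effect: keep a faint trace of the original pixel,
    # blended heavily toward green.
    green = np.array([0, 180, 0], dtype=np.float64)
    out[outside] = (0.3 * frame[outside] + 0.7 * green).astype(frame.dtype)
    return out
```

With a low-resolution depth sensor, `depth` would first be upscaled to the video frame's resolution, which is one plausible source of the blocky edges on the green ghosts.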
Canon modified this surveillance camera to also provide information about the distance to people and objects in its field of vision at the Canon Expo in Paris on Oct. 14, 2015. Credit: Peter Sayer
Staff on the stand wouldn't say how the depth detection was performed, but the sensor on top of the camera had the distinctive red-and-black form of a SwissRanger SR4000 time-of-flight sensor from Mesa Imaging. This bounces small bursts of infrared light off a target, measuring the time taken for them to return. It has a relatively low video resolution of 176 by 144 pixels, accounting for the blockiness of the green ghosts on Canon's HD video screen.
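The time-of-flight principle described above reduces to a one-line calculation: the light pulse travels to the target and back, so the distance is the speed of light times the round-trip time, halved. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds):
    """Distance implied by a time-of-flight measurement.

    The pulse covers the camera-to-target gap twice (out and back),
    so the one-way distance is half the round trip.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

A target two meters away returns the pulse after only about 13.3 nanoseconds, which is why time-of-flight sensors need very fast timing electronics.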
Time-of-flight video sensing has become something of a must-have technology for game console manufacturers in recent years. Microsoft already has it in the Kinect component of the Xbox One, while last week Sony bought Softkinetic Systems, a Belgian vendor of depth- and motion-sensing cameras.
In Canon's demo, the camera and depth sensor are separate components, and the camera output is filtered externally. "Ideally, the processing would be done in the camera," with the ghosted video stream as the only output, ensuring respect for local privacy laws, said Canon spokesman Yoshinobu Kitamaru.
Another system he worked on captures images from a network of cameras, extracting as it does so metadata about anyone in view, such as the color of their clothing, the length and color of their hair, their apparent age and the characteristics of their face. So far, nothing new: IBM has used technology from Milestone to do something similar for years. But by constructing a 3D model of a person's head from images taken with a battery of four cameras, Canon's prototype uses image analytics developed in-house on Milestone software to locate images matching that person, regardless of the angle from which they were seen.
The images required to create the model might be taken following the arrest of a suspect, or captured for the creation of future forms of identity document.
The system might make it easier to track a specific person, but the privacy flipside of this is that investigators would not need to view images of thousands of other people going about their private business.