If the robot records the position of each camera actuation and compiles that data with the video (which could be aided by existing RealD 3D techniques), you could produce a much more stable and information-rich model of what you're filming.
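As a rough illustration of what "compiling that data with the video" could mean in practice, here is a minimal Python sketch, with all names and fields hypothetical, that tags each captured frame with the pan and tilt actuator readings taken at the same moment:

# Hypothetical sketch: pair each video frame with the camera's actuator
# positions at capture time, so a later 3D reconstruction step knows
# where the lens was pointing. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class TaggedFrame:
    timestamp: float   # seconds since capture start
    pan_deg: float     # x-axis actuation (pan), in degrees
    tilt_deg: float    # y-axis actuation (tilt), in degrees
    frame: bytes       # raw or encoded image data

def tag_frame(frame: bytes, timestamp: float, read_pan, read_tilt) -> TaggedFrame:
    """Bundle a frame with the actuator readings sampled at the same moment."""
    return TaggedFrame(timestamp=timestamp,
                       pan_deg=read_pan(),
                       tilt_deg=read_tilt(),
                       frame=frame)

The read_pan and read_tilt callables stand in for whatever interface the actuators actually expose; the point is only that position and image data are stored together, per frame.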
This would be an advantage for next-generation holographic displays.
Multiple such units could capture a scene in synchronization for 3D use.
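As a rough sketch of how several units might stay in step, the hypothetical Python below assumes each unit keeps a time-sorted list of (timestamp, frame) pairs and that the units' clocks are already synchronized; it then picks, for every unit, the capture taken nearest a chosen instant:

# Hypothetical sketch: align captures from several units by timestamp so
# a scene can be reconstructed from every viewpoint at roughly the same
# instant. Assumes clocks are already synchronized (e.g. via NTP) and
# each unit's capture list is non-empty and sorted by time.
from bisect import bisect_left

def nearest_capture(captures: list[tuple[float, bytes]], t: float) -> tuple[float, bytes]:
    """Return the (timestamp, frame) pair closest to time t."""
    times = [ts for ts, _ in captures]
    i = bisect_left(times, t)
    candidates = captures[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda c: abs(c[0] - t))

def align_units(units: dict[str, list[tuple[float, bytes]]], t: float) -> dict[str, tuple[float, bytes]]:
    """For each unit, pick the capture taken nearest to time t."""
    return {name: nearest_capture(captures, t) for name, captures in units.items()}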
The green and blue regions (the x- and y-axis actuations) mark where the AI system would receive its position information from. Also note the Segway/PUMA-style design, chosen for maximum stability.
Tuesday, April 14
Autonomous Cameramen Continued
Posted by John at 4/14/2009 09:04:00 PM