[OpenNI-dev] kinect videoconferencing & Matching point clouds with multiple kinects
This is a very cool video showing how to use kinect for videoconferencing... http://www.engadget.com/2011/06/02/researchers-hack-kinect-for-glasses-free-3d-teleconferencing-vi/
What is impressive is how accurately they merge the information from multiple Kinects. They use four, create a mesh, and then apply a color-adaptation algorithm to reconstruct the scene very accurately.
I was wondering if anyone has calibrated more than one Kinect in order to match the point clouds. I have tried simple stereo calibration using only the RGB images, but I have not achieved a very good match. Has anyone succeeded in matching point clouds from multiple Kinects using OpenNI?
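For what it's worth, once stereo calibration gives you a rotation R and translation t between the two cameras, the alignment step itself is just a rigid transform of one depth point cloud into the other's frame. A minimal NumPy sketch of that step (synthetic data; the function name and the specific R, t values are just illustrative, not from any calibration):

```python
import numpy as np

def transform_cloud(points, R, t):
    """Map an Nx3 point cloud from Kinect B's frame into Kinect A's
    frame using extrinsics (R, t) obtained from stereo calibration."""
    return points @ R.T + t

# Illustrative extrinsics: 90-degree rotation about Z, 0.5 m shift in X.
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
t = np.array([0.5, 0.0, 0.0])

# Two synthetic points as seen by Kinect B (meters).
cloud_b = np.array([[1.0, 0.0, 2.0],
                    [0.0, 1.0, 2.0]])

cloud_a = transform_cloud(cloud_b, R, t)
print(cloud_a)  # points expressed in Kinect A's frame
```

In practice the RGB-only extrinsics are usually only a rough initial guess, since the depth and RGB cameras are offset inside each Kinect; a refinement step such as ICP on the overlapping regions of the clouds is commonly used to tighten the alignment.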