[OpenNI-dev] kinect videoconferencing & Matching point clouds with multiple kinects

[OpenNI-dev] kinect videoconferencing & Matching point clouds with multiple kinects

david
This is a very cool video showing how to use the Kinect for videoconferencing...
http://www.engadget.com/2011/06/02/researchers-hack-kinect-for-glasses-free-3d-teleconferencing-vi/

What is impressive is how accurately they merge the information from multiple Kinects. They use four, build a mesh, and then apply a color-adaptation algorithm to reconstruct the scene very faithfully.

I was wondering whether anyone has calibrated more than one Kinect in order to match the point clouds. I have tried simple stereo calibration using only the RGB images, but I have not achieved a very good match. Has anyone succeeded in matching point clouds from multiple Kinects using OpenNI?
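
In case it clarifies what I am after, here is a minimal (untested) sketch of the approach I have in mind with PCL: take the transform from the stereo calibration as an initial guess and let ICP refine the alignment between two simultaneously captured clouds. The file names, parameter values, and the identity initial guess are placeholders.

// Minimal sketch (untested): refine a rough extrinsic estimate between two
// Kinects with PCL's ICP. File names and parameter values are placeholders.
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

int main ()
{
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr source (new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr target (new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::io::loadPCDFile ("kinect_b.pcd", *source);   // second Kinect (hypothetical file)
  pcl::io::loadPCDFile ("kinect_a.pcd", *target);   // reference Kinect (hypothetical file)

  // Rough extrinsics from the RGB stereo calibration; identity is a placeholder.
  Eigen::Matrix4f initial_guess = Eigen::Matrix4f::Identity ();

  pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
  icp.setInputSource (source);
  icp.setInputTarget (target);
  icp.setMaxCorrespondenceDistance (0.05);  // metres; tune to the overlap between views
  icp.setMaximumIterations (50);

  pcl::PointCloud<pcl::PointXYZRGB> aligned;
  icp.align (aligned, initial_guess);       // refine the calibration guess

  if (icp.hasConverged ())
    std::cout << "Refined transform:\n" << icp.getFinalTransformation () << std::endl;
  return 0;
}

My assumption is that the stereo calibration only needs to be good enough to seed ICP, and the depth data itself then does the fine alignment; of course this requires that the two Kinects see a reasonable amount of the scene in common.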

Cheers,


Re: [OpenNI-dev] kinect videoconferencing & Matching point clouds with multiple kinects

rusu
Administrator
The problem videoconferencing usually runs into is bandwidth... Maybe they should use our point cloud compression techniques to get ahead ;) (http://www.pointclouds.org/news/compressing-point-clouds.html).
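
Roughly, the round trip looks like this (a minimal, untested sketch; the point type, compression profile, and the stringstream standing in for the network transport are just illustrative choices):

// Minimal sketch: round-trip a cloud through PCL's octree point cloud compression.
// The stringstream stands in for whatever transport (e.g. a socket) is used.
#include <sstream>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/compression/octree_pointcloud_compression.h>

int main ()
{
  pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>);
  pcl::PointXYZRGBA p;
  p.x = 1.0f; p.y = 2.0f; p.z = 3.0f;
  cloud->push_back (p);   // in practice the cloud would come from the OpenNI grabber

  // The profile selects an octree resolution / color-coding trade-off.
  pcl::io::OctreePointCloudCompression<pcl::PointXYZRGBA>
    encoder (pcl::io::MED_RES_ONLINE_COMPRESSION_WITH_COLOR);
  pcl::io::OctreePointCloudCompression<pcl::PointXYZRGBA> decoder;

  std::stringstream compressed;
  encoder.encodePointCloud (cloud, compressed);    // serialize to a byte stream

  pcl::PointCloud<pcl::PointXYZRGBA>::Ptr restored (new pcl::PointCloud<pcl::PointXYZRGBA>);
  decoder.decodePointCloud (compressed, restored); // reconstruct on the receiving side
  return 0;
}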

Cheers,
Radu.
--
Point Cloud Library (PCL) - http://pointclouds.org



Re: [OpenNI-dev] kinect videoconferencing & Matching point clouds with multiple kinects

david
That sounds very interesting :) and could be very useful for a lot of applications. I will definitely take a look at it.

thanks,
