Depth Kit – The Process

There are various stages of calibration and visualisation. Mary and I got together and carried out a whole new calibration while troubleshooting why we could not get renders, and this is the process of setting up the Depth Kit. Firstly we must calibrate the lenses, which calibrates the camera lens in relation to the Kinect lens; once this has been done for a location it does not usually have to happen again. Calibrating the lenses is done by taking nine images of the checkerboard and then four images of the quadrants.
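
Depth Kit handles this step itself, but as a rough illustration of what checkerboard images are generally used for, here is a minimal OpenCV sketch that estimates a single lens's intrinsics and distortion. The file names, board size and use of cv2.calibrateCamera are my assumptions for the sketch, not Depth Kit's actual code:

```python
# Minimal sketch of checkerboard calibration with OpenCV.
# Assumed: nine images named calib_0.jpg .. calib_8.jpg and a board
# with 9x6 inner corners; Depth Kit's real pipeline will differ.
import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners per row and column (assumed)

# Checkerboard corners in board coordinates (the board plane is z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for i in range(9):  # the nine checkerboard images
    img = cv2.imread(f"calib_{i}.jpg", cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(img, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve for the lens intrinsics and distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img.shape[::-1], None, None)
print("reprojection error:", rms)
```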

Calibrating the correspondence involves taking four images at different distances from the checkerboard, which sets the distance. The process works like this: Mary takes a small recording, a few seconds long, of the checkerboard at one distance, and I click on the box on the left to capture the same view with the Kinect as Mary can see in the camera. Next, Mary covers the infrared lens on the Kinect and I click on the middle image; when a box covered in red dots appears, that distance has been calibrated. This process is then repeated another three times at different distances from the checkerboard.
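
What this calibration ultimately produces is a way to pair a Kinect depth sample with the matching pixel in the camera's image. A toy sketch of that mapping, with made-up intrinsics and a made-up offset between the two lenses, purely to illustrate the idea:

```python
# Toy sketch: mapping a 3D point seen by the Kinect to a pixel in the
# colour camera's image. All numbers here are invented; the real values
# are what the correspondence calibration works out.
import numpy as np

K_colour = np.array([[1000.0, 0.0, 960.0],   # assumed colour intrinsics
                     [0.0, 1000.0, 540.0],
                     [0.0, 0.0, 1.0]])
R = np.eye(3)                   # rotation from Kinect space to camera space
T = np.array([0.05, 0.0, 0.0])  # assumed 5 cm offset between the lenses

def depth_point_to_colour_pixel(p_kinect):
    """Project a point in Kinect space into the colour image."""
    p_cam = R @ p_kinect + T    # move the point into camera space
    uvw = K_colour @ p_cam      # perspective projection
    return uvw[:2] / uvw[2]     # divide by depth to get pixel coords

print(depth_point_to_colour_pixel(np.array([0.2, 0.1, 1.5])))
```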

[Screenshot: the correspondence calibration step]

After the depth correspondence has been calibrated we are able to see the depth readings, shown by the dots in the image below; we want these dots to be displayed in their most solid state. What we want to see here is a selection of red, blue, yellow and green dots: the more dots, the more depth has been successfully captured.

[Screenshot: the depth reading dots]

After the depth correspondence has been worked out, clicking on the regenerate button makes the program run a random algorithm to work out the depth at each distance. What we want to do here is switch the different distances on and off and try all the different combinations in order to get the best reading from this algorithm.
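
Toggling the distances by hand like this is really just a brute-force search over every subset of distances. A toy sketch of that idea in Python; the distances, the score function and everything else here are invented stand-ins, since in practice we judge the result by eye:

```python
# Toy sketch of brute-forcing which calibration distances to enable.
# Distances and scoring are invented; we really judge the dots by eye.
from itertools import combinations

distances = [0.8, 1.2, 1.8, 2.5]  # metres from the checkerboard (assumed)

def score(combo):
    # Stand-in for "how good is the depth reading with these enabled".
    return len(combo) - abs(sum(combo) / len(combo) - 1.5)

best = max(
    (combo for r in range(1, len(distances) + 1)
     for combo in combinations(distances, r)),
    key=score)
print("best combination of distances:", best)
```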

[Slideshow: recording a take]

Once calibration of the lenses and the correspondence has been achieved, and we have a set of different depths with at least three good depth readings, we are good to record. The slideshow above shows a recording of a take; toggling the record button starts and stops a recording. We clicked this button multiple times in order to get more than one take, to see if this was possible, as we did not yet know. It was great to find out that we could do this more than once, so that we are not restricted to getting Colin's interview all in one take.

A recording creates a colour output movie, which we then have to create a smaller “offline” version of using the MPEG Streamclip movie exporter. The video is exported at a smaller resolution in Apple Motion JPEG A, keeping the same name with the naming convention _small added to the end. This allows the visualisation in the Visualise app to be smoother and more performance efficient.
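
We did this in MPEG Streamclip's exporter, but for reference the same kind of _small proxy can be made on the command line. A sketch using ffmpeg from Python; the take name, codec settings and scale are my guesses at roughly equivalent settings, not the exact export we used:

```python
# Sketch: making the smaller "_small" offline proxy with ffmpeg.
# The take name and the codec/quality/scale values are assumptions.
import subprocess
from pathlib import Path

src = Path("take_01.mov")  # hypothetical name of the colour output movie
dst = src.with_name(src.stem + "_small" + src.suffix)

subprocess.run([
    "ffmpeg", "-i", str(src),
    "-c:v", "mjpeg",        # Motion JPEG, close to the Streamclip export
    "-q:v", "5",            # quality (lower numbers are higher quality)
    "-vf", "scale=640:-2",  # smaller resolution for smoother playback
    "-an",                  # the offline proxy does not need audio
    str(dst),
], check=True)
```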

[Screenshot: MPEG Streamclip export settings]

After the capture process has been completed, the next stage is moving on to Visualise, which is able to open the take and the smaller version of the recording. We are able to play about with the camera here, which is essential in order to get a render, since renders need camera in and out frames. As it turns out, this cannot be done on the Windows machine that we were using, but it can now be done on the Mac we are using.

[Screenshot: the Visualise app]

Within Visualise there are various settings we can use in order to adapt and change the mesh: a wireframe option, a way to edit the size of the mesh, a way to show a wave and warp distortion on the mesh, and ways to sync up the mesh and the live footage so that there is a finer outline around the object.
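
As a rough picture of what a wave distortion does, here is a toy sketch that displaces a grid of vertices with a sine wave. It has nothing to do with Visualise's actual implementation; it just shows the idea of warping a mesh:

```python
# Toy sketch of a "wave" distortion: push mesh vertices along z by a
# sine of their x position. Grid size and parameters are invented.
import numpy as np

# A flat 10x10 grid of vertices standing in for the captured mesh.
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
verts = np.stack([xs, ys, np.zeros_like(xs)], axis=-1)

amplitude, frequency = 0.05, 4.0  # invented distortion parameters
verts[..., 2] += amplitude * np.sin(2 * np.pi * frequency * verts[..., 0])
```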

Once we are able to render from Visualise, we can then bring everything into Unity. A video output has to be created using ffmpeg in the terminal in order to get the sequence working in Unity. This, however, would not work on the Mac that our renders worked on, so I have to do it on my own MacBook. There seem to be a lot of technical issues that we are having to work around by using multiple different computers.
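
The terminal step is essentially turning the rendered image sequence into a single movie file. A sketch of the kind of ffmpeg call involved, run from Python; the frame naming, frame rate and Ogg Theora output are my assumptions about what Unity wanted, not the exact command we ran:

```python
# Sketch: converting a rendered PNG sequence into a movie for Unity.
# The file pattern, frame rate and codec are assumptions.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "25",       # assumed render frame rate
    "-i", "render_%04d.png",  # hypothetical frame naming
    "-c:v", "libtheora",      # .ogv output for Unity's movie playback
    "-q:v", "7",              # Theora quality scale (0-10)
    "out.ogv",
], check=True)
```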
