
Troubleshooting – DepthKit (Not Rendering)

In the process of using DepthKit we encountered a problem with the Visualise app. At this stage the content was ready to be edited and rendered, but we could not get past this point on the PC we were using in the university. The render jobs appeared to complete far too quickly, and no content relating to any of the renders we had requested ever showed up in the folder structure. No matter what we tried we could not get a render from the machine, and we needed to work out why in order to proceed.

We began troubleshooting to figure out what was wrong and why we couldn't get past the render stage; this was a complete blocker for us. We started with the most basic things that could be wrong, such as the folder structure. From the calibration stage through capture and visualise, various folders are generated automatically along the way, and some you must create yourself.

Although the folder structure is fairly basic, if it is not set up exactly right the content will not go to the correct locations and errors will occur. We went through all of the instructions, remade the folder structure from scratch, and made sure that at every stage the programs could locate the files they needed.
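Rather than checking the folder structure by eye each time, a small script could flag a bad layout straight away. Below is a minimal Python sketch of that idea; the folder names (`_calibration`, `TAKE_*`, `color`, `depth`) are illustrative assumptions, not necessarily the exact layout DepthKit expects, so they would need adjusting to match a real project.

```python
import glob
import os

# Assumed DepthKit-style layout -- these names are placeholders and
# should be changed to whatever the calibration/capture/visualise
# apps actually expect for a given project.
EXPECTED_IN_PROJECT = ["_calibration"]   # assumed calibration folder
EXPECTED_IN_TAKE = ["color", "depth"]    # assumed per-take subfolders

def check_project(project_dir):
    """Return a list of missing folders so structure errors surface early."""
    problems = []
    for name in EXPECTED_IN_PROJECT:
        if not os.path.isdir(os.path.join(project_dir, name)):
            problems.append("missing: " + os.path.join(project_dir, name))
    takes = glob.glob(os.path.join(project_dir, "TAKE_*"))
    if not takes:
        problems.append("no TAKE_* folders found in " + project_dir)
    for take in takes:
        for name in EXPECTED_IN_TAKE:
            if not os.path.isdir(os.path.join(take, name)):
                problems.append("missing: " + os.path.join(take, name))
    return problems

if __name__ == "__main__":
    for problem in check_project("MyDepthKitProject"):
        print(problem)
```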

A complete recalibration was also something we needed to try, both to get better readings from the camera and to see which factors make for a better depth recording. We needed to work out what these variables were: the lighting, the distance of the camera, and so on.

Various depth takes (calibrating correspondence) are required in order to get the best readings; the depth is calibrated from the four different distances in the Capture app. The app then uses these four images and, through a random algorithm, calculates the depth of each one, visualised as little dots on the screen: the more dots, the better the reading. Here you can see the red dots are sparse, which shows the depth reading is not very good, so we needed to do multiple takes to find a better one. We found that running the correspondence closer to the camera and in good light gave better depth.

[Slideshow: depth calibration correspondence takes]

After spending the day together trying to crack why we could not render, we worked out that an in and out frame for the camera could not be created on the PC, so we could not render what we were seeing until we moved the app onto the Mac. It took several hours to realise and solve this problem. We were then able to get a recording and some render output to bring into Maya and eventually into Unity.

A smaller preview (offline) video needs to be made for each recording we get so that it provides a smoother preview. This is done using the MPEG Streamclip movie exporter to export a much smaller-resolution video of the colour output.
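We did this step in MPEG Streamclip's GUI, but for reference the same kind of downscale can be scripted with ffmpeg (installed later in this post). The sketch below is only an approximation of that export: the file names, the 640-pixel width and the quality setting are assumptions, not the settings we actually used.

```python
import subprocess

# Export a low-resolution "offline" preview of the colour output.
# File names, width and CRF quality are placeholder assumptions;
# the real export was done with MPEG Streamclip's movie exporter.
subprocess.run([
    "ffmpeg",
    "-i", "color.mov",        # full-resolution colour recording
    "-vf", "scale=640:-2",    # scale width to 640px, keep aspect ratio
    "-c:v", "libx264",
    "-crf", "28",             # lower quality is acceptable for a preview
    "color_offline.mov",
], check=True)
```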

[Screenshot: MPEG Streamclip export]

We brought the rendered PNG images into Maya to experiment with them and see what kind of look we were getting, as shown in the images below. You can see the colour and depth information within Maya.

[Slideshow: colour and depth render output viewed in Maya]

The next stage was to create a video from the colour and depth output that the render provided for us; this was needed to get the sequence into Unity. Creating the video had to be done in Terminal. A video tutorial was provided to us showing how to get the DepthKit Unity prototype shader to play back RGBD sequences in Unity. The next post shows how I was successfully able to get the content into Unity.
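The exact command came from the tutorial, but as a rough idea of what that Terminal step looks like, here is a sketch that joins a numbered PNG sequence from the render into a single H.264 movie with ffmpeg (installed below). The file-name pattern and the 30fps frame rate are assumptions rather than the tutorial's exact values.

```python
import subprocess

# Combine the rendered PNG frames into one video for Unity.
# The frame pattern and frame rate are illustrative assumptions.
subprocess.run([
    "ffmpeg",
    "-framerate", "30",
    "-i", "render/frame_%05d.png",  # numbered PNG sequence from the render
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",          # widely compatible pixel format for playback
    "combined.mp4",
], check=True)
```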

Xcode was required for the creation of this video so that I could bring the content into Unity, but I had various problems with the App Store while trying to get Xcode onto my laptop. The image below shows the error I ran into each of the several times I tried to download Xcode.

[Screenshot: App Store error while downloading Xcode]

Homebrew was sourced for us as something that would be needed to get ffmpeg installed on my laptop so that I could create these videos and get the content into Unity. I installed ffmpeg through Homebrew and was then able to create the video output needed to bring the content into Unity.

Our first attempt at using ffmpeg to create the video output of the render for Unity failed: ffmpeg was not available on the Mac in the university, so we decided to run a few tests to get it working on my own MacBook.
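The first of those tests was simply confirming that ffmpeg was actually installed and on the PATH; a quick sketch:

```python
import shutil
import subprocess

# Check that the Homebrew install actually put ffmpeg on the PATH
# before attempting a full export.
if shutil.which("ffmpeg") is None:
    print("ffmpeg not found - try: brew install ffmpeg")
else:
    subprocess.run(["ffmpeg", "-version"], check=True)
```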

Multiple captures in the Capture app on the Mac were also tested so that we knew whether we could record multiple takes or only one, and how this would affect processing power. We discovered that we could record multiple times, that each recording becomes its own take, and that there was no cap on the number of takes we could have; to make processing and rendering easier we may record one take per question.
