Ideation, Creation & Everything In Between
Syncing the Audio Takes with the Encoded Video Visuals in Premiere –
Now that we have all 14 encoded videos, it is time to add the sound to them before they can be brought into Unity. An earlier test indicated that when an encoded video with a sound file was imported into Unity, the sound was available as a child of the parent video, which still allows us to enable and disable it as and when required.
Again we used the production timesheet for this process, which told us which audio file needed to be included in which of the encoded output video files (movie texture mp4).
In the image below you can see the output video and audio file names as well as the frames in value, which could be used as a rough estimate of where in the audio file that particular piece of dialogue started. A dialogue section had also been added from the transcript so that we could tell which piece of audio we were listening to, which aided the syncing process.
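The frames in value converts to a rough audio timestamp with simple arithmetic. The sketch below shows the idea; the 25 fps frame rate and the example frame count are assumptions, not values from the timesheet.

```csharp
// Sketch: turning a "frames in" value from the production timesheet into
// a rough seek position in the audio file. 25 fps is an assumed frame
// rate; use whatever rate the footage was actually captured at.
using System;

class FramesToTime
{
    static TimeSpan FramesIn(int frames, double fps)
    {
        return TimeSpan.FromSeconds(frames / fps);
    }

    static void Main()
    {
        // e.g. a take whose dialogue starts 1,500 frames into the audio
        TimeSpan start = FramesIn(1500, 25.0);
        Console.WriteLine($"Seek to roughly {start:mm\\:ss} in the audio file");
    }
}
```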
Each of the 14 encoded video mp4 files was imported into Premiere, as well as the audio files, which were now in working order. Originally the sound files we recorded had a buzz that was heard consistently throughout the audio.
Fiona collaborated with her housemate Chris to find out how to remove it: the buzz was profiled so that its sound profile could then be subtracted from the audio files, which in turn removed the buzz.
A Premiere sequence was created for each of the 14 scenes, where we included the relevant video and audio file from the production timesheet. I was then able to use the frames in value on the audio file, which gave us a rough guideline of where the audio we wanted would be.
We would then clip the audio file down to just the dialogue that was relevant to the scene we were working on. The difficult part was then syncing the visuals and the audio, a very precise process that required a great deal of patience.
The easiest way to go about this was to move the visual track up the timeline, away from the beginning, so that I could move the audio in either direction to match the visuals. Zooming right in on the timeline made this easier, as we could edit right down to individual frames, and we discovered that even if the audio was off by one frame you could see that things didn't look right in the visuals.
Again this process needed to be repeated 14 times, and it did take quite a while to get all of the videos perfectly matching. I found it was better to take my time on these and get them looking 100% right rather than having to bring them back into Premiere to re-sync if they did not look right in Unity. The production timesheet was so helpful in this process and I'm glad that all of us were able to make use of it.
After all of the Premiere sequences had been synced, it was time to export the videos as mp4s, matching the settings of the original videos that went into Premiere.
Importing & Unity Setup –
Finally we were at the stage of importing the videos (movie textures), now with audio, the depth properties (xml file) and the png image (poster frame) files for each of the scenes into Unity.
As the content was already in a folder structure, we then had to replace the movie texture files with the ones that now had sound. (However, having learned from past experience, we made copies of the originals without sound before we did this.)
Once this was done we imported the 14 scene folders, which already contained everything we needed per scene, from the poster frame and movie texture to the properties file, into Unity; this took some time due to the sizes of the movie texture files. After waiting for this process to complete, I referred back to the tutorial included in the previous post, which not only covered how to encode the video renders but also showed how to import the Depth Kit sequences into Unity. The sound was not mentioned; this was a process I had to figure out, and one we had worked on in earlier experiments for the project.
All of the content is now in Unity and ready to be set up, and each of the scenes will require a material to be made in order to apply the poster frame image to it. To create a material in Unity you right-click in the Project window and select a new material; you are then given the window on the right of the image below, which allows you to select which shader the material will use.
I set all 14 materials to use the DepthKit/DepthKit-Surface-Shader, a shader that James George created for the content. The next stage was to drag the poster frame (png image) of the relevant scene onto the matching material in the little box to the right.
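The same material setup can also be done in code, which may be handy when repeating it 14 times. This is only a sketch: the shader name is taken from above, but the `posterFrame` field and the component it lives on are hypothetical.

```csharp
// Sketch only: building a poster-frame material in code instead of by hand.
// "posterFrame" is a hypothetical texture reference for whichever scene
// is being set up.
using UnityEngine;

public class PosterFrameMaterialExample : MonoBehaviour
{
    public Texture2D posterFrame;   // the scene's png poster frame

    void Start()
    {
        // Look up the DepthKit surface shader and build a material from it
        Shader depthKitShader = Shader.Find("DepthKit/DepthKit-Surface-Shader");
        Material sceneMaterial = new Material(depthKitShader);

        // Equivalent to dragging the poster frame into the material's texture slot
        sceneMaterial.mainTexture = posterFrame;

        GetComponent<MeshRenderer>().material = sceneMaterial;
    }
}
```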
In the game hierarchy, 14 empty game objects were then created, named scene_001 to scene_014. Each game object requires a few components to be added so that we can begin to include our content on them. A Transform is included by default, but we then drag the Depth Kit script onto the object, as well as a Mesh Filter and a Mesh Renderer; an Audio Source is also required if we wish to plug in the sound file.
On the Depth Kit component we are required to tick the Start On Play and Loops checkboxes, as well as dragging on the movie texture file (the encoded mp4), the poster frame (the png image) and the properties file (the depth properties xml).
On the Mesh Renderer, expanding the Materials arrow reveals an element property where the material for the relevant scene can be dragged on.
Finally, if an Audio Source component is added, you can then drag the audio file, which is found as a child of the movie texture file in the Project window, into the AudioClip slot of the Audio Source.
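The per-scene component setup above could also be scripted. The sketch below is an assumption-laden outline, not the project's actual code: the Depth Kit script's class name is not shown in the post, so it is only referenced in a comment, and `movieAudio` stands in for each scene's own clip.

```csharp
// Sketch of the per-scene game object setup described above, done in code
// rather than through the Inspector.
using UnityEngine;

public class SceneObjectSetupExample : MonoBehaviour
{
    public AudioClip movieAudio;  // the clip found as a child of the movie texture

    void Start()
    {
        for (int i = 1; i <= 14; i++)
        {
            // scene_001 ... scene_014, as named in the hierarchy
            var go = new GameObject(string.Format("scene_{0:D3}", i));
            go.AddComponent<MeshFilter>();
            go.AddComponent<MeshRenderer>();

            var audio = go.AddComponent<AudioSource>();
            audio.clip = movieAudio;  // in practice, each scene gets its own clip

            // The Depth Kit script would also be added here, with its movie
            // texture, poster frame and xml properties file assigned, and
            // Start On Play and Loops enabled, as in the Inspector.
        }
    }
}
```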
This process was completed for each of the scenes so that the depth content for each scene of the interview can be viewed in Unity.
Once this process had been completed I tried to play one of the sequences in Unity, but upon pressing play the Colin that I could see in the editor disappeared from the scene and I could not see anything. Pressing play also caused Unity to lag quite a bit, so something wasn't right, as playing a sequence earlier in the process had been almost instantaneous.
This led me to try various sequences in case I had made an error in just the first one, but all of the sequences behaved in the same way. This was cause for alarm, as at this stage I couldn't work out where in the process something had been missed or carried out incorrectly. All of the sequences were troubleshot by bringing them into Unity in different ways, setting them up differently, and even trying fresh projects and scenes in case something was bugging out in Unity or the import had not been carried out correctly.
After a considerable amount of time trying different things, it dawned on me that maybe something had gone wrong when the videos were exported from Premiere. So I went back and, even though I had matched the settings of the original encoded videos that came into Unity, I selected the manually match settings checkbox, which exported the video as mpeg. However, I needed the videos to be in mp4 format, so I used Miro Video Converter to convert the video into an mp4.
One video was re-exported from Premiere, converted, and then used to replace the old movie texture file that did not work in Unity, and what would you know, it worked perfectly. I could see Colin in the scene before and after I pressed play. Unity did not lag when play was pressed either, so this confirmed that there had been something wrong with the way the videos were exported from Premiere.
To correct this, the remaining 13 videos were re-exported from Premiere with the new settings and converted in Miro, and the old movie texture files in Unity were replaced with the new ones.
In the scene I was able to place all 14 instances of Colin on different game objects and press play, and all 14 played at once without crashing Unity, which was interesting. This also created a Colin-inception type effect, but at least now we know that all of the sequences work, with sound may I add.
The content has now been packaged so that it can be easily imported into any scene without having to set up each of the scenes again. This will come in handy, as the interview sequence package can be imported into whichever Unity scene we decide to be our master previs scene. The next stage is to sync the interview sequences to match the animatic and have the old ones disable as the new ones are enabled and vice versa, as well as a fade in and out, which is in progress.
In the video below you can see all 14 sequences playing and working in a Colin-inception style example. This video was to test not only that all of the sequences work, but also, out of curiosity, whether all 14 could run at once without lagging or crashing.
Fade In/Out Script Control –
The fade in/out script is a script that Estrella and I had been working on, which fades a game object in and out on a key press. Various experiments were trialled on a Colin sequence but did not work in the end, as the object being faded requires a material that supports transparency, such as the Legacy Shaders/Transparent/Diffuse shader, and this could not be integrated into the Depth Kit shader that James created, even though we did try.
As a substitute we instead tested, and used, fading an object in front of a Colin sequence to make it appear as if the sequence were fading in and out. Estrella edited the fade in/out script so that we could control it through an Adventure Creator action, using the Object: Send Message action with a custom fade in or fade out message, alongside an Engine: Wait action to time the fades. How long the fade takes to complete can also be edited.
In the image below you can see an example actionlist that uses these actions to make the fade in/out controllable. The method name controls whether we want the fade to be in or out. The integer to send, which is required to pass an integer to the method, controls how long the method, whether it be fade in or fade out, takes to complete.
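The shape of the fade script implied above can be sketched as follows. This is not the actual script Estrella edited, just a minimal guess at the approach: `FadeIn`/`FadeOut` stand in for whatever the real method names are, each receiving the duration as the integer sent by the Object: Send Message action.

```csharp
// Sketch of the fade workaround: rather than fading the Depth Kit material
// (whose shader does not support transparency), a cube in front of the
// Colin sequence is faded. The cube's material must use a transparency-
// capable shader such as Legacy Shaders/Transparent/Diffuse.
using System.Collections;
using UnityEngine;

public class FadeCubeExample : MonoBehaviour
{
    // Called via Adventure Creator's Object: Send Message action; the
    // integer to send becomes the duration in seconds.
    public void FadeIn(int seconds)  { StartCoroutine(Fade(1f, 0f, seconds)); }
    public void FadeOut(int seconds) { StartCoroutine(Fade(0f, 1f, seconds)); }

    IEnumerator Fade(float from, float to, float duration)
    {
        Material mat = GetComponent<MeshRenderer>().material;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            Color c = mat.color;
            c.a = Mathf.Lerp(from, to, t / duration);  // fade the cube's alpha
            mat.color = c;
            yield return null;  // wait one frame
        }
    }
}
```

Note that "fading Colin in" here means fading the cube out (alpha 1 to 0) so the sequence behind it is revealed, and vice versa.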
In Unity this actionlist was tested on the cube that fades in and out in front of the Colin sequences. The actionlist was set as the OnStart actionlist, firing from the very second I hit the play button, so I did just that, and the results are shown in the video below.
You can see that the Colin sequences fade in and then fade out again. The next stage is to time these sequences with the animatic in the previs as a play-interview scene. I only want the interview section that should be playing to be enabled, with just one sequence enabled at a time, so I will enable and disable the game objects as and when required using the Object: Enable/Disable action.
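The one-sequence-at-a-time behaviour could equally be done from a small script, mirroring the Object: Enable/Disable action. A sketch, with `sequences` as a hypothetical array holding the 14 scene objects:

```csharp
// Sketch: keep only one interview sequence active at a time, equivalent to
// enabling one game object and disabling the rest.
using UnityEngine;

public class SequenceSwitcherExample : MonoBehaviour
{
    public GameObject[] sequences;  // scene_001 ... scene_014

    public void ShowOnly(int index)
    {
        for (int i = 0; i < sequences.Length; i++)
            sequences[i].SetActive(i == index);  // only one Colin runs at once
    }
}
```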
The purpose of only having one sequence running at a time is to optimise the scene and not have too much running at once. I'm interested to see how these sequences play in the Oculus, as we have not yet had a chance to test it with these particular sequences.