Session 1 Thursday August 1 2013: The crew, the equipment and 3D basics


The first 3D Production Initiative session took place at Victoria University’s Kelburn Campus. Leading the session, Paul Wolffram introduced an enthusiastic group of over 40 newly recruited stereonauts to the principles of shooting in 3D, as well as to the camera rig that will be used for the duration of the project.

In his introduction he explained that the main objective of the Initiative is to film objects from Te Papa museum’s collection. Te Papa has many fragile objects that it would like to make accessible to the public but must be careful not to put in danger. By creating stereoscopic images of these objects, the Initiative hopes to offer clear representations of the objects’ materials, textures, contours and three-dimensional construction that could not otherwise be seen. These objects will range from small kiwis to large pieces of samurai armour. Many of the qualities of the objects, such as transparency and scale, will pose questions about how to successfully film them in 3D. Although we don’t yet have the answers to these questions, the aim of the project is to find solutions through the production process.

Paul then went through some of the main considerations of stereoscopic vision that we have to take into account when filming in 3D. Many of these principles are new to us, so we welcome feedback on the way in which we are articulating or using them. We also realise that we are setting up numerous ‘best practice’ rules which will inevitably be broken as we see where our experiments take us.

Much of this information comes from stereographer Alister Chapman, so check out his videos for more information: http://www.youtube.com/watch?v=UOnqoC-dJcg. Also have a look at http://www.dashwood3d.com/blog/beginners-guide-to-shooting-stereoscopic-3d/

Stereoscopic vision:

  • The distance between the centres of our eyes (interocular distance) is around 65mm
  • It is hard to perceive stereoscopic depth in objects that are more than 35 – 40 meters away
  • Stereoscopy is just one of a variety of depth cues that we use. Other depth cues include:
    • Colour: objects that are farther away look bluer
    • Shadows and shading give objects depth
    • Scale tells us a lot about depth
    • Occlusion (the way in which objects overlap) can help to determine distance
    • Perspective: lines converging in the distance help us determine depth
    • Motion parallax (not to be confused with negative and positive parallax): the movement of closer objects in relation to objects farther away helps determine the distance between them
  • Stereoscopic depth relies on the brain fusing two slightly different images (one from each eye) and using the differences between them to determine depth
  • Our eyes use both focus (accommodation) and convergence to determine depth stereoscopically


Human 3D perception is very complex and uses all of these sophisticated means to determine depth in our world. 3D filming tries to make use of as many of these cues as possible, particularly the creation of two separate images (one for each eye). Nonetheless, 3D filming only ever creates an illusion:  we are not creating a 3D image but the illusion of a 3D image.

Problems in stereoscopic filmmaking:

The problem with stereoscopic 3D is that if it’s not done right (not shot correctly) it can quickly become unwatchable. When we shoot in 2D and the focus is bad or the white balance is off, the result is still watchable, but if the 3D of a stereoscopic image is done badly it can create physical problems in the viewing process. If the viewer’s brain is forced to try to realign incorrect images to make sense of them, this can cause fatigue, lead to headaches, even nausea, or your head could explode (Paul hasn’t yet been able to verify sources for the last claim but is certain it may happen). So far Paul’s experiments in 3D filming have shown that it is really easy to shoot bad 3D, so the next few weeks will be about trying to find out how to shoot good 3D.


How to create left and right eye images for 3D: positive and negative parallax, interaxial and disparity

  • When objects appear to exist on the screen they are said to be on the “screen plane” or at “zero parallax.” Objects that appear to be behind the screen are in “positive parallax space.” Objects that appear in front of the screen are in “negative parallax space.”
  • If we have a tree that we want to appear in positive parallax space we can separate the cameras in a parallel manner. Our eyes looking at something in the distance will be parallel, so we separate the cameras to mimic that parallel separation – 65mm. In this case it looks as if the screen plane is a window that we look through, with the tree in the distance.
  • If we want the image of the tree to appear on the “screen plane” then we have two options. We can simply use a single image on the screen (which can be created by one camera or two cameras overlapping perfectly) but this will create a flat image.  The other option is to set the cameras apart and then converge (angle) them until they create an overlap (this is also called toeing in).
  • By converging the cameras in this way we can create layers of depth (on the screen plane, in negative parallax space and in positive parallax space) which has a slightly different, and often deeper feel, than the parallel method.
  • You can change your convergence point in two different ways. Firstly, by angling the cameras in or out so that the convergence point falls nearer or further away. Secondly, by changing the interaxial of the cameras: when you pull the cameras apart the convergence point will also change.
  • If we return the tree to the screen plane by converging the cameras, we can then put an object such as a ball in front of it (in negative parallax space). The greater the interaxial distance between the cameras, the greater the difference in images (disparity) and the more the ball will seem to come forward. This is termed negative disparity.
  • If we want to place an object such as a wall behind the tree then we will see it in “positive parallax space.” Again, changing the interaxial distance between the cameras will increase or decrease the difference (disparity) in images and thus increase or decrease the perceived depth of the object. This difference in images is called positive disparity (a rough calculation sketch follows this list).
  • Although it can be fun to play with the interaxial distance, making it significantly different from the interocular distance of the human eye can create strange, unrealistic effects such as miniaturisation.
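
The relationships above can be made concrete with a small calculation. The sketch below is our own illustration, not a formula given in the session: it uses a common thin-lens approximation of screen parallax, and the lens, sensor, screen and distance figures are invented for the example.

```python
# A rough illustration (our own, not from the session) of how interaxial,
# convergence distance and subject distance relate to on-screen parallax.

def screen_parallax_mm(interaxial_mm, focal_mm, convergence_m, subject_m,
                       sensor_width_mm, screen_width_mm):
    """Approximate parallax of a subject on the finished screen.

    Negative result -> negative parallax space (in front of the screen)
    Zero            -> on the screen plane (at the convergence distance)
    Positive result -> positive parallax space (behind the screen)
    """
    # Parallax recorded on the camera sensor (thin-lens approximation).
    sensor_parallax = focal_mm * interaxial_mm * (
        1.0 / (convergence_m * 1000.0) - 1.0 / (subject_m * 1000.0))
    # Scale from the sensor width up to the width of the presentation screen.
    return sensor_parallax * (screen_width_mm / sensor_width_mm)

# Example figures: 65mm interaxial, 35mm lens, converged on a tree 4m away,
# shot on a ~24.6mm-wide Super 35 sensor and shown on a 2.4m-wide screen.
ball = screen_parallax_mm(65, 35, 4.0, 2.0, 24.6, 2400)    # ball in front of the tree
wall = screen_parallax_mm(65, 35, 4.0, 10.0, 24.6, 2400)   # wall behind the tree
print(f"ball: {ball:.0f}mm (negative), wall: {wall:.0f}mm (positive)")
```

With these made-up numbers the ball lands around 55mm in front of the screen plane and the wall around 33mm behind it, which matches the intuition in the list: the farther a subject sits from the convergence point, the larger its disparity.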

Screen types and disparity calculations:

  • One of the biggest problems with stereoscopic productions is that disparity changes with the screen size: something shot with 100mm of disparity on a small screen could have a meter of positive disparity when played back on an IMAX screen. This becomes problematic because it asks viewers to fuse images that are too different.
  • So this means that the positive disparity on the screen for which we are shooting our film should never exceed 65mm.
  • It also means that we need to calculate disparity when we shoot so that we don’t exceed what may be comfortable to watch (see the rough calculation below).
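
As a rough illustration of why the target screen matters, the snippet below scales a disparity recorded as a percentage of the frame width into millimetres on a few screen sizes and checks it against the 65mm guideline. The screen widths and the 2% figure are our own example values, not measurements from the session.

```python
# Our own rough check (example values) of how the same recorded disparity
# scales with screen size, against the 65mm positive-disparity guideline.
MAX_POSITIVE_MM = 65  # positive disparity should not exceed the interocular distance

def disparity_on_screen_mm(disparity_percent_of_frame, screen_width_m):
    """Disparity recorded as a % of frame width -> millimetres on a given screen."""
    return disparity_percent_of_frame / 100.0 * screen_width_m * 1000.0

for name, width_m in [("living-room TV", 1.1), ("cinema screen", 12.0), ("IMAX", 22.0)]:
    mm = disparity_on_screen_mm(2.0, width_m)  # a shot with 2% positive disparity
    verdict = "fine" if mm <= MAX_POSITIVE_MM else "too much - hard to fuse"
    print(f"{name}: {mm:.0f}mm of positive disparity ({verdict})")
```

The same 2% that is comfortable on a television becomes hundreds of millimetres on a large cinema screen, which is exactly why the disparity has to be calculated for the screen the film is destined for.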

Issues with 3D:

Other issues that we have to watch out for are:

  • Time to process the stereoscopic image: it is preferable to give viewers a longer time to process the images and so editing should be slower.
  • Retinal Rivalry:  this can occur when one eye sees something that the other one doesn’t and can produce an uncomfortable experience.
  • It is important to match the depth in the image with the focal length otherwise there may be problems with roundness. Generally the lenses used for 3D should be analogous to our own perspective. 35 – 50mm lenses are ideal.
  • Realistic scale: if we try to place a large jumbo jet in front of the screen plane it can confuse the eye as we know it should not be able to fit in the cinema.
  • Frame edges can be a problem in 3D because objects get cut off in strange ways when they extend beyond the edges of the frame, and this is usually more problematic with the vertical edges than the horizontal ones.
  • Changing depth rapidly from shot to shot can make it difficult for the viewers’ eyes to adjust.


Our equipment:

  • Two Canon C300 cameras: It’s essential that the same type of camera is used with identical lenses so there are no differences in the images other than their positioning. These cameras will be linked together in a special rig.
  • The beam splitter rig: we will use a camera rig with a mirror that splits the light between two cameras. The mirror set-up means the cameras do not have to sit side by side; instead the beam splitter lets us place them at virtual distances so that the centres of the lenses seem to come closer together. This gives us interaxial choices from 0 to 120mm. This is important because if we are shooting for a large screen we sometimes need very small interaxials.
  • As a rule of thumb, the closest you can bring your subject to the camera is 30 x the interaxial (remember, interaxial = distance between lens centres), so cameras 100mm apart need the subject at least 3 meters away (a quick calculation sketch follows this list). There will therefore be many occasions when we need small interaxials, particularly if we are going to have foreground elements.
  • On the other hand, there might be occasions when shooting landscapes and cityscapes where we need much wider interaxials than 65mm. The greater the interaxial, the greater the depth.
  • Genlock system: in order to synchronise the two different images produced by the cameras we will be using a genlock system, which produces timecoded metadata to help us match the images frame by frame.
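
The 1/30 rule of thumb above is simple enough to sketch as a calculation. This is our own illustration; the 60cm museum-object example is hypothetical.

```python
# A quick sketch of the 1/30 rule of thumb: the closest subject should be
# no nearer than about 30x the interaxial distance.

def min_subject_distance_m(interaxial_mm):
    """Closest comfortable subject distance for a given interaxial."""
    return 30 * interaxial_mm / 1000.0

def max_interaxial_mm(closest_subject_m):
    """Largest interaxial for a given closest subject (the rule inverted)."""
    return closest_subject_m * 1000.0 / 30

print(min_subject_distance_m(100))  # 3.0  -> 100mm of interaxial needs roughly 3 meters
print(max_interaxial_mm(0.6))       # 20.0 -> an object 60cm away needs about 20mm or less
```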


Disparity calculations and the Derobe Method:

  • Disparity limits: positive maximum 4%, average 2%; negative maximum 2%, average 1%
  • Calculation tools: there are iPad and other apps that help you calculate what the on-screen disparity should be; you enter the screen size, focal lengths, distances, etc. and they calculate what your interaxial distance should be.
  • If you know the desired percentage you can also work it out for your screen by measuring the width of the screen and marking off the 2% or 1% points (see the sketch after this list).
  • When you are working to a known percentage you can use a method called the Derobe Method to figure it out. Check out this video for the information we are using http://www.youtube.com/watch?v=ugkfBPukpZk
  • To shoot with the Derobe Method you start by bringing your interaxial to zero and aligning both cameras so that you have a 2D image. With the cameras aligned, you take the most distant object in the scene and converge the cameras until you have measured 2% disparity (or whatever your maximum will be) on the screen.
  • Then you move the cameras apart until they converge on whichever object you want to be placed on the screen plane.
  • The beauty of this method is that no matter how far apart the cameras are you will never exceed the maximum disparity.
  • You then keep the convergence the same for the rest of the scene and just change the interaxial to keep it converged on the subject in the shot.
  • Using this method the shots cut together nicely. It also produces a good sensation of depth.
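
Following the measure-and-mark idea from the list above, here is a small helper that turns the percentage limits into physical distances you can check on a measured screen. This is our own sketch: the limit percentages come from the session notes, while the 1.5m screen width is just an example value.

```python
# Our own helper for the measure-and-mark approach: turn the disparity limits
# (percent of screen width) into physical marks for a measured screen.
LIMITS_PERCENT = {
    "positive maximum": 4.0,
    "positive average": 2.0,
    "negative maximum": 2.0,
    "negative average": 1.0,
}

def disparity_marks_mm(screen_width_m):
    """Convert each percentage limit into millimetres for the measured screen."""
    return {name: pct / 100.0 * screen_width_m * 1000.0
            for name, pct in LIMITS_PERCENT.items()}

# Example: a 1.5m-wide monitoring screen.
for name, mm in disparity_marks_mm(1.5).items():
    print(f"{name}: {mm:.0f}mm")
```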

Josephine Derobe: co-founder of the Derobe method

All in all this was a lot of technical knowledge coming from Paul and we are going to have to put it into practice to see how it all works.

The next sessions will involve filming at Te Papa’s space on Tory Street. Because space is limited, only around 12 participants will be able to attend each session and so the current group of stereonauts will be divided into smaller numbers. We will keep reporting from the following sessions as to how we get on and where our stereo experiments are taking us.

Miriam Ross also spoke briefly about the potential for 3D Production Initiative participants to undertake some of their own low-budget experiments in 3D. If anyone has the chance to undertake such work, please keep the 3D Production Initiative up-to-date by emailing Miriam.Ross[at]vuw.ac.nz
