2D to 3D conversions
2D to 3D conversion is the process of converting a film from 2D (a normal film) into a 3D (or stereo) film for viewing with a stereoscopic viewing system (polarized glasses, shuttered glasses, special screen, etc.).
Conversion gained a bad name after Clash of the Titans and a few other films that were rushed through at the last minute. Just as you can’t do a 2000-shot vfx show in 6 weeks with great quality, you can’t do a quality conversion of an entire movie in 6 weeks.
Most people think of 2D to 3D conversion as a factory type of operation but it’s far from it if it’s to be done right. Much of my past year was spent working for Legend3D dealing with some interesting and very technically challenging problems. Legend had completed work on Alice (the Tea Party and Drink Me sequences) and done the conversion of the first 3 Shrek films for release on 3D TV.
Legend was starting on Transformers: Dark of the Moon and would need help with the extensive visual effects shots. Legend3D ended up converting over 77 minutes of footage for Transformers; most of the vfx shots were converted in a 4-5 month span. The converted footage was a mix of visual effects shots (from ILM and DD) and non-vfx shots, and in most cases these were intercut back and forth with original stereo footage shot on location. This raised the complexity far beyond what converting an entire film would involve, just as doing vfx in a live-action film is a different task from creating an animated film (or virtual sequence).
This was also more difficult than doing a library title or a film that was already completed. The same issues vfx companies face of ever-changing edits and creative directions were in full force. In addition the vfx shots themselves were changing, so we had to follow very closely and track all changes such that we could turn around the final converted shots within a day or two of delivery of the last vfx shots.
While at Legend I helped develop the pipelines required to do this type of work efficiently and at as high a quality level as possible. Legend had proprietary software and techniques for different stages of the process, and I wrote a number of specialty Nuke plugins and scripts to help leverage those packages with existing software. I worked with a number of artists and developers at Legend to review and analyze the shots as they were delivered, and worked closely with ILM and DD to get the proper elements that would be required. Both technical and creative issues had to be examined to get the best possible quality. Overall this presented a number of challenges unique to the world of stereo conversion, and we were able to raise the quality of conversion to a new level.
In what follows I’ll cover the basics of the conversion process. I won’t be able to go into specific detail due to the proprietary nature of the work, but hopefully this will provide an understanding of the process.
Conversion versus shooting stereo
Many people are under the mistaken belief that shooting stereo is superior to conversion in every way. The key to getting great 3D is to design, shoot and edit for 3D, even if it will be converted. If you shoot stereo but don’t consider the 3D world then you will have problems.
In its current form 3D is a stereo process, which means creating imagery for each eye. Projecting two different images to simulate the look of 3D is actually a big cheat: your eyes are always focused on the screen but objects appear at other depths. This is a fairly unnatural process.
The following are some of the basic pros and cons. As with everything there are tradeoffs to be made. Shooting stereo does capture the images in 3D but many of the parameters are locked in at the time of shooting. Shooting stereo also requires a fair bit of post work to actually get the images to match. Films that are shot in stereo will still require a certain number of conversion shots just due to problematic shots or limitations while shooting. This was the case even with Avatar.
Shooting Stereo - PROS
• Capturing stereo images at time of shooting
• Capturing nuances of complex 3D scenes. Smoke, reflections, rain, leaves, etc.
• Preview stereo images on set and on location
• Footage can be edited and reviewed in stereo context with the correct equipment
Shooting Stereo - CONS
• Requires specialized camera rigs
• Camera rigs contain 2 cameras so they are larger than non-stereo camera setups
• More time required to shoot (higher shooting costs)
• Requires shooting on digital cameras (no option of film)
• Special stereo monitor and glasses required on set to check settings.
• Requires a certain amount of adjustment, alignment and cleaning.
• Requires special rig technician to handle these adjustments.
• Restrictions on lenses that can obtain good looking stereo
• Requires locking in stereo depth at time of shooting. This cannot be adjusted in post.
• Shooting with ‘toe in’ (non-parallel) camera systems requires a convergence puller similar to a focus puller. Having the correct convergence point is critical when editing shot to shot. This convergence point can only be adjusted so far in post. (With parallel camera systems this convergence is all set in post.)
• Lens flares and shiny reflections can appear different in the two cameras and will require post work to prevent viewing problems in the final film.
• Because of the polarization through the beam splitter, water and even asphalt with a sheen will appear different in the two cameras and require post correction
• Post work required to align and fix footage. Since one camera is shooting through a beam splitter and the other is shooting the beam splitter reflection, there is typically an imbalance in terms of color, contrast and alignment.
• Visual Effects require more work and more rendering (added time and cost)
Stereo Conversion - PROS
• Ability to shoot with standard process (Composing with 3D in mind still recommended)
• Choice to shoot on film with any film camera or on digital with any digital camera
• Unlimited choice of camera lenses (Extreme telephoto still not recommended for best 3D results)
• Ability to add depth and volume even on telephoto shots
• Ability to set the 3D depth in post on a shot by shot basis
• Ability to set the convergence as desired for the edit
• Flexibility to adjust the depth and volume of each actor or object in a scene, a creative option that is not available when shooting stereo.
• Visual Effects are handled in the standard way
Stereo Conversion - CONS
• Extra time required after the edit to properly convert, or sequences need to be locked during the edit to allow conversion to take place while in post.
• Added cost of process
• Reflections, smoke, sparks and rain are more difficult to convert, but not impossible.
The 2D to 3D process
Each company handles conversion in a slightly different way, sometimes with proprietary tools and techniques, but what follows are some of the fundamentals.
The first step in the conversion is to have a stereographer review the shot and determine how it will work in 3D space: how much depth budget there will be (how much total depth in the scene) and where the convergence point will be. The convergence point ends up on the movie screen; imagery in front of the screen is said to be in negative parallax and imagery behind the screen is in positive parallax. If you have two images (left and right eye) of a circle and it’s in the exact same spot for both eyes then it will appear on the screen surface. If you offset the two images of the circle horizontally so they overlap but don’t match, the circle will float in front of the screen or behind it depending on the direction of the offset, and the amount of offset determines how far forward or back it appears.
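To make that geometry concrete, here is a minimal sketch of the parallax math in Python. The eye separation and viewing distance are illustrative defaults, not values from any production:

```python
# Minimal sketch of screen-parallax geometry (illustrative numbers only).

def perceived_depth(parallax_m, eye_sep_m=0.065, view_dist_m=10.0):
    """Distance from the viewer at which a point appears, given its on-screen
    parallax in meters (positive = behind the screen, negative = in front)."""
    if parallax_m >= eye_sep_m:
        raise ValueError("parallax >= eye separation forces the eyes to diverge")
    # Similar triangles: the two eye rays cross at e * V / (e - p).
    return eye_sep_m * view_dist_m / (eye_sep_m - parallax_m)

print(perceived_depth(0.0))    # 10.0 m -> exactly on the screen plane
print(perceived_depth(0.03))   # ~18.6 m -> behind the screen (positive parallax)
print(perceived_depth(-0.05))  # ~5.7 m -> floats in front (negative parallax)
```

The same parallax reads differently as screen size and viewing distance change, which is why the depth budget is a per-venue as well as a per-shot decision.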
Convergence decisions are based on the action and the edit as well as what is in the scene. Frequently the key actor’s eyes are used for the convergence point so that when cutting from one scene to another the audience doesn’t have to constantly refocus back and forth.
With this information the image is broken down. How much detail is required depends on where things sit in space; stereo eyesight primarily works within 20-25 feet, and as objects get further away there is less sense of 3D stereo. Breaking down the image may require roto or some other form of image area selection. If you’ve done roto or frame-by-frame extraction you know how difficult and time consuming it is, but in this case everything in the image is broken down, not just a foreground actor. In vfx, if you roto a group of people you may be primarily concerned with the areas of the group that overlap the area to be replaced. With 2D to 3D you have to isolate each person and each object on a different depth plane.
So imagine the degree of difficulty in extracting every 3D surface from a 2D image and making all the frames and transitions match perfectly and smoothly, including wild camera moves and explosions with thousands of sparks. Depending on the exact 2D to 3D process used you may have to build actual 3D models, or at least replicate that process to some extent. If there is a table in the image it will have to be split into top, front and side surfaces.
The amount of work and complexity means that if there is a change to the image (edit, reposition, speed up, slow down, etc.) it may require redoing the entire shot.
With traditional roto you can get away with things like losing a few flyaway hairs on an actress; if they’re clipped off and a new background is composited no one will know. With conversion, however, any hairs that have not been extracted may end up at the wrong 3D depth. Imagine a character in the foreground with long flowing hair, but with that hair stuck to the mountains in the background. Each company leverages whatever techniques and software it can to help with this process, and compositors can also help by leveraging various extraction methods.
3D settings
Once the image is actually broken up it is necessary to create the depth. A conversion stereographer may use proprietary software, or a heavily modified Nuke or other application, to help with this step. This can be in a true 3D world or it can be simulated 3D with gap offsets. The stereographer usually works with a stereo monitor and adjusts and animates the various shapes, objects and planes to move in 3D space. This needs to match the surrounding shots and the 3D relationships need to be correct; in the case of Transformers the volume and depth of the objects had to match the original stereo photography. The show stereographer also works with the director to provide both creative and technical feedback regarding the depth range and convergence planes. Cory Turner was the show stereographer on Transformers.
Viewing stereo footage and making adjustments requires developing an eye for stereo. Not everyone can do it, and different people have different aptitudes for judging stereo. With most vfx work it’s usually obvious when there are matte edges or a color balance problem between elements, and seeing these types of problems is consistent across a number of individuals. Viewing stereo footage is much more subjective. Even stereo experts commonly disagree on the latest 3D film: some may say it’s the best stereo they’ve ever seen and others will say it’s the worst. Technical issues and creative choices by the stereographer and director will affect the final results.
But even at this stage, if all you’ve done is cut out the objects you’ll end up with flat images set at different depths. What you need to do is add volume and real 3D shape that matches the image. This is one of the areas where 2D to 3D conversion provides an advantage over shooting stereo: telephoto images can be given more volume than would be possible shooting stereo, and different objects can be given more or less 3D emphasis. An actor needs to have their head rounded, their eyes sunken in and their nose coming out, and this relationship needs to hold even as the actor turns his head and the camera moves. There’s a real art to the process and much of it cannot be automated. The skill of the artists, combined with the tools at their disposal, has a lot to do with the final results.
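To give a feel for what "adding volume" means in practice, here is a toy sketch (not Legend's actual method, which is proprietary) that bulges a matted object's depth toward camera so it reads as rounded rather than as a flat card:

```python
import numpy as np

def add_roundness(depth, matte, amount):
    """Toy version of dimensionalizing one object: add a hemisphere-like bump
    (toward camera) to the depth map inside the object's matte.
    depth, matte: 2D float arrays; amount: extra closeness at the center."""
    ys, xs = np.nonzero(matte > 0.5)
    if len(xs) == 0:
        return depth
    cy, cx = ys.mean(), xs.mean()                # matte centroid
    ry = max(ys.max() - cy, cy - ys.min(), 1.0)  # vertical half-extent
    rx = max(xs.max() - cx, cx - xs.min(), 1.0)  # horizontal half-extent
    yy, xx = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
    r2 = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2
    bump = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))  # 1 at center, 0 at edges
    return depth - amount * bump * matte         # smaller depth = closer here
```

A real head needs far more than one bump (eye sockets in, nose out, ears tracked as the head turns), but the principle is the same: sculpting a per-pixel depth shape inside each extracted object.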
The end result of this process is 2 images, a left and a right eye. If this is a full 3D process there are essentially two 3D CG cameras set up with an interaxial offset (lens separation just as if shot with a stereo camera system). Some companies use the original image as one eye and generate only the other eye. Legend tends to treat the original footage as the center image and actually generates both eyes. This tends to make sure both eyes match, and the image itself isn’t offset from the original framing.
Because the original image was 2D, when sub-images are offset they create gaps where there is no image. The simple way of thinking about this is to start with 2 identical images and cut out an actor in each. Offset one cutout of the actor 10 pixels to the right and the other 10 pixels to the left. Now there’s no image along the edge the cutout was moved away from. This gap needs to be filled in so you don’t see a black edge in the finished shots. If the gap is really small you can cheat it, but these artifacts can be quite noticeable on the big screen. To do it right in most cases requires treating the edge of each object as a full-on rig removal. Creating or shooting a clean plate can greatly help with this process.
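Here is a simplified sketch of that offset-and-gap problem, assuming a per-pixel disparity map already exists (real pipelines are far more sophisticated about occlusion and filtering):

```python
import numpy as np

def reproject_eye(image, disparity, direction):
    """Shift each pixel horizontally by half its disparity to synthesize one
    eye from the 'center' image (direction = -1 or +1; sign conventions vary
    by pipeline). Returns the new eye plus a mask of exposed gaps."""
    h, w = disparity.shape
    eye = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        order = np.argsort(disparity[y])  # nearest written last, wins overlaps
        tx = np.clip((xs + direction * disparity[y] / 2).astype(int), 0, w - 1)
        eye[y, tx[order]] = image[y, order]
        filled[y, tx[order]] = True
    return eye, ~filled  # True where paint / clean-plate fill is needed

# left_eye,  gaps_l = reproject_eye(img, disp, -1)
# right_eye, gaps_r = reproject_eye(img, disp, +1)
```

The gap masks are exactly the areas described above: regions the original camera never photographed, which have to be reconstructed like a rig removal.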
Keep in mind all of these steps, adjustments and cleanup have to be done on every single shot in the film if the film was originally shot 2D.
Reflections, lens flares, smoke and other problems
And as if that wasn’t tough enough, imagine the main actor wearing glasses. The glasses have to be set at the correct depth relative to the head. Any thumbprints or dirt on the lenses have to be on that same plane, yet the actor’s eyes have to be pushed in. More than likely there is a reflection on the glasses of the room, which may be 20 feet away, and the actor may have a highlight on his eye reflecting something 5 feet away. Each of these different images has to be extracted and removed so you end up with a clean eye with no reflections, the glass reflections, the eye highlight and the thumbprint as separate pieces. Since this is live action, the compositor has to go through a process along the lines of rig removal, splitting the image into each depth layer. Each of these layers then has to be placed and positioned in 3D and composited back together. If this isn’t done it will look like the actor’s eyes and the reflections are simply painted on the surface of the glasses. Needless to say, this looks very wrong in 3D.
The same issues arise if the shot has a glass window, a mirror, chrome or other reflective surfaces. The problem of seeing partially transparent layers at different depths also comes into play with smoke or fog in the scene. If you have partially transparent smoke in front of a building you don’t want the building to be pulled forward to 10 feet from camera, nor do you want the smoke to be pushed all the way back to the face of the building. The exact same problem happens with lens flares. Each of these has to be removed as a separate element and then added back in at the right depth, usually near the screen plane for lens flares.
Transformers of course was full of fast action with glass windows, smoke, explosions, sparks, and lens flares.
Conversion of VFX shots
VFX shots are usually tough enough without having to deal with two slightly different views for the left and right eye. Working in stereo takes more time and adds an additional level of complexity. When the footage is shot in 2D the vfx can be finished as 2D shots and approved by the director, which tends to be quicker and easier. But it’s usually best to take advantage of the vfx process when doing the conversion, to produce a better conversion more quickly.
In vfx shots it’s not unusual to add smoke and lens flares, among other things. Obviously once these are baked into the shot the conversion artist will have to re-separate them. It makes more sense, for both speed and quality, to break down the composites in such a way that key layers are kept separate and can be used to set the depth in the conversion stage. Some of the other vfx elements are also useful, such as clean plates and potentially some of the roto or mattes.
For shots with CG elements it’s better if the CG elements are rendered separately so there’s no need to extract them and fill in the gaps, as there would be if only a finished composite were provided. Taking this further, the vfx company can render depth maps so the conversion company can leverage them and provide consistency to the creature (or in this case, the Transformers).
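As a sketch of why those depth renders are so useful: under a parallel-camera model, a Z-depth pass converts directly to screen disparity. The function below is a generic version of that relationship; the parameter names and the parallel-rig assumption are illustrative, not any particular vendor's pipeline:

```python
import numpy as np

def depth_to_disparity(z, interaxial, focal_px, z_converge):
    """Convert a rendered Z-depth pass to per-pixel disparity in pixels for a
    parallel stereo pair converged (by image shift) at z_converge.
    Points at z_converge get zero disparity (the screen plane); nearer points
    go negative (in front of the screen), farther points positive (behind)."""
    z = np.maximum(np.asarray(z, dtype=float), 1e-6)  # guard divide-by-zero
    return interaxial * focal_px * (1.0 / z_converge - 1.0 / z)
```

With the depth pass in hand, the CG robot's disparity can be computed rather than hand-sculpted, which is what keeps converted shots consistent with the native stereo ones.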
The majority of 2D vfx shots in Transformers were done this way. The sheer complexity of the Transformers, with their number of moving and changing surfaces, really took advantage of this process and allowed the shots to be consistent with the stereo vfx shots. In a few shots the vfx company rendered 2 views of the transformer and Legend created the 3D world around the rendered stereo image. In a few other cases Legend converted the background to stereo and the vfx company worked from that.
Tips for VFX companies
If the film project you’ll be starting is to be converted, make sure to work with production and the conversion company. The conversion company should provide a list of requested elements and the ideal formats. This isn’t simply a matter of turning over a Nuke script: chances are your company has proprietary plugins and scripts, and it’s likely that the composites are very complex while the conversion company only needs the layers that make sense in depth. You may not typically render depth maps, and potentially other elements, independently. The cost and time to do this re-purposing of composites and elements will have to be accounted for, either in the vfx bid or as a separate bid. If it’s a vfx-heavy show it’s likely there will be a small team of people prepping images for the conversion process.
Ideally the production company hires a show stereographer who oversees the process from shot design through to completion.
As with the original photography it’s important for everyone to consider the work as if it were in true 3D space. No more cheating where animated characters are placed or using one element that is supposed to represent multiple depths. These problems will show themselves once converted to real 3D space.
Compositors need to consider building the composite based on 3D space. When holdouts are built in, or there are elements with an odd layering order, it becomes difficult to re-build the composites to work in a dimensional conversion.
Work out a naming process for these special conversion elements to make it easy for the conversion company to rebuild the composites. Example: RL045.layer_1.over.00128.exr
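A trivial parser along these lines shows how unambiguous naming lets the conversion house rebuild composites by script rather than by email. The pattern below is hypothetical, built to match only the example name above:

```python
import re

# Hypothetical pattern for names like "RL045.layer_1.over.00128.exr":
# shot ID, depth layer, comp operation, frame number, extension.
ELEMENT_RE = re.compile(
    r"^(?P<shot>\w+)\.(?P<layer>layer_\d+)\.(?P<op>\w+)\.(?P<frame>\d+)\.exr$"
)

def parse_element(filename):
    m = ELEMENT_RE.match(filename)
    if not m:
        raise ValueError(f"unexpected element name: {filename}")
    fields = m.groupdict()
    fields["frame"] = int(fields["frame"])
    return fields

print(parse_element("RL045.layer_1.over.00128.exr"))
# {'shot': 'RL045', 'layer': 'layer_1', 'op': 'over', 'frame': 128}
```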
It’s also important to provide any special lookup tables or color-specific nodes.
Summary
Conversion can create successful stereo images that match images shot in stereo, as long as experienced artists are allowed sufficient time and control to do this complex, vfx-heavy process correctly. Just as with vfx companies, the conversion companies selected will determine how successful the final results are. Neither visual effects nor conversion is truly a commodity that can be randomly assigned to other companies with the same result.
Legend
Legend3D also did at least some of the conversion work on the following features while Transformers 3 was being done: Green Hornet, Pirates of the Caribbean: On Stranger Tides, Green Lantern, Priest, Smurfs3D. They were able to leverage many of the improvements created for Transformers 3.
[Update: Forgot to post a link to a VFX Guide Podcast that touches on the conversion. I believe it comes up at about the 45 minute mark.]
[Update: When responding to a comment below I was reminded that there is a video of Michael Bay and James Cameron discussing shooting stereo. This was done after the film had been shot and within a few weeks of release. Video of stereo conversation of Bay and Cameron]
[Update: I got a few tweets from @philwittmer that I thought I'd address here since there still seems to be some confusion.
Phil:
Sorry but that all seems biased towards 2d conversion. With no mention of roto time. granted its an interesting read I never thought of it like that but.... a couple of major points here need to be made
1. U get limitless depth if shooting 3d as opposed to designated planes
2. Moving foc point in post just means bad directing
lastly and most importantly. The conversion will only be as good as the budget and time. Which unfortunately is usually bad
Me:
Yes, any conversion takes time. Roto takes time, extractions take time, setting depth takes time.
That's why I stated you can't do it in 6 weeks and that was listed as one of the CONs of doing conversion. It is a time consuming process.
If the film has a large number of vfx shots then this conversion can be done (especially on the non-vfx shots) while the vfx are being worked on. If the shots are locked and can be converted as you proceed it still takes time, but that time overlaps the post time already being used by the vfx work. So it doesn't necessarily mean a large delay after the vfx, depending on the show.
Keep in mind all stereo-shot footage needs to be processed as well. Every shot will have to go through a process to be adjusted for color, alignment and other corrections so the left and right images match correctly. This is not an overnight process, so you need to budget time to do shot-by-shot corrections of the entire film, like a DI but potentially much more intense. If any shots are truly broken they may need 2D to 3D conversion or serious vfx work to repair them, so expect even more time. This will be less than doing a conversion of the whole film but it's time that should not be discounted.
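As a crude illustration of that left/right balancing step (the real pass is shot-by-shot and far more careful), here is the kind of statistical match a first pass might start from:

```python
import numpy as np

def match_eyes(src, ref):
    """Roughly balance one eye to the other by matching per-channel mean and
    standard deviation. A stand-in for the real color/alignment pass, which
    also handles geometry, keystone and local polarization differences."""
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[-1]):
        s, r = src[..., c], ref[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-6) * r.std() + r.mean()
    return out
```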
Unlimited 3D - When shooting 3D you have to determine the amount of depth you want, and that is not unlimited. You can't have something 1 foot from the viewer and things pushed to infinity without compressing that space. If you have too much separation in the background you'll get divergence, where people's eyes go walleye (each looking outward). Not pleasant. Too much negative parallax will cause eye strain as well.
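The divergence limit is easy to put numbers on: background parallax can't exceed the distance between your eyes, so the pixel budget depends entirely on screen size. The figures below are illustrative only:

```python
def max_background_disparity_px(screen_width_m, image_width_px, eye_sep_m=0.065):
    """Largest behind-screen disparity, in pixels, before the background
    forces the eyes to diverge (parallax wider than the eyes themselves)."""
    pixel_size_m = screen_width_m / image_width_px
    return eye_sep_m / pixel_size_m

print(max_background_disparity_px(12.0, 2048))  # ~11 px on a 12 m theater screen
print(max_background_disparity_px(1.2, 2048))   # ~111 px on a 1.2 m TV
```

Roughly 11 pixels of positive parallax is all a 2K image gets on a large theater screen, which is why "unlimited depth" isn't on offer either way.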
Now you can get nice in-camera stereo of complex things in the mid distance such as tall grass, leaves on a tree, smoke, etc. That's why that's a PRO for shooting stereo: capturing nuances of complex 3D scenes. Smoke, reflections, rain, leaves, etc.
Designated planes - Just to be clear, this isn't like a multiplane camera where you have flat cards at different distances and there are 10 designated planes. It's not like the process that seemed to produce some View-Master displays (i.e. all images flat to camera at different depths). Any flat surface can be at any angle; it has to be to actually work in a 3D movie. That road has to be angled away from camera, that building on the side going away from camera, etc. More importantly, you're creating a dimensional space, at least with the better processes. You have to create the sense of roundness and volume as if you built it as a real 3D model. A shot of a head is a good example (a complex sphere with nose, eye sockets, ears, jaw, etc.). You cannot get away with having a person on a flat card and expect that to hold up for the audience. Whether the company uses real 3D models, depth maps, complex meshes, etc., you can be sure it's not made up of just designated planes. And yes, that makes it very hard, especially for the items I listed as easier to shoot. Numerous complex shapes that are changing in 3D space are very difficult.
During a stereo shoot you have the issue of setting the IA (interaxial) to set the overall depth, and the issue of setting the convergence. Either of these may be manually changed during the shot depending on the end result. Is it bad for a director to find out later that a different edit works better? What if, in planning the shots, the director originally thought the shots would cut together in a specific way and shot with that in mind, but once the cuts are changed the convergence has to be adjusted in post so there won't be as much eye strain? Wide shot to closeup, the timing of the convergence wasn't right, etc. What if the initial IA was high but the fast action caused more strain? The director and stereographer have to really do some planning and testing before shooting, and even then there may be an edited version that doesn't work and needs to be adjusted. Some things, like IA, can't currently be adjusted cleanly in post, so some 3D decisions made on set are what will be.
>The conversion will only be as good as the budget and time.
This goes for every element of a film. You can't do Avatar in 10 days for $10. You can't record sound for a feature film with your iPhone. If the studio doesn't allow the right amount of time and money to do the conversion correctly then they will have a movie they spent a lot of money on looking terrible in 3D, and that will turn more people away from 3D. The studios are starting to understand the correlation between good 3D and a happy audience willing to pay for 3D movies. And many of these 3D films (shot or converted) are large tent-pole movies, so there is already a large budget and the studios don't want to gamble by going super cheap.
In the end there are pros and cons to both methods, and one of the main points of this article is that both can produce good results. Which one to use, and what balance of the two, depends on the specific project. If production and the director can commit to shooting 3D with a real stereographer and take the necessary care and attention, then shooting stereo may be their best choice. Just don't rule out 2D to 3D conversion as one of the options to consider simply because it's been done poorly on some films.
Update: The open-source 2D/3D conversion tool Gimpel3D, for those interested.
Having done simple 3D conversions with few elements, I can understand the huge complexity a shot in the TF3 movie might entail. This rundown is scary, and also good prep for people working in the business to make it more cost effective, less of a pain and of course within a time frame that works.
Scott, how exactly do you extract smoke from a scene onto a separate layer without bringing pixels from the background in?
My guess is a newly rendered particle element, and maybe the luck of getting it as separate alpha, normals and depth map files to work on top. Pixel tracking scripts?
In my view it is literally impossible to create a perfect 3D composite if all these semi-transparent elements are in the scene. I would guess that if you manage to fool the director into thinking it was done with a stereo rig then you have achieved the impossible and then some.
Congratulations!
You did a fantastic job as conversions go... it's incredible what you achieved with the limitations you faced...
However, one CON that you failed to mention for the conversion method is the quality... Having seen Avatar in IMAX and Transformers, there is a big difference, and currently the conversion process is nowhere near as good as the true thing...
My thoughts as an avid IMAX fan are that 3D will need the kind of attention that the rest of production gets if it is to survive (especially with higher ticket fares); bolt-on 3D (barring some amazing advances) will reduce the audiences of the future...
As I point out there were conversion shots even within Avatar. Did you spot them all?
77 minutes of Transformers was converted. The rest was shot in stereo ('the true thing') or were virtual shots. Could you pick out which was which while it was running, knowing that they were all intercut? Realistically I doubt you could. The director and stereographer would lose track at times. We would intercut them and loop them to make sure they were the same.
There's also a big difference between the design of the stereo, the technical issues of shooting stereo and the technical aspects of conversion. You may like more depth in scenes. You may like more negative parallax. The person next to you may want the opposite. This happens even with the so-called 'experts'. Don't confuse the 3D design choices with the execution.
Scott, great article... Have you seen or heard a recent interview with Michael Bay and/or Amir Mokri (DP) about the outcome of their shot pre-selection criteria? It would be great to know what they would do differently in the future as far as which shots get 3D capture vs conversion. Thx...
I haven't heard any new comments from either of them. Michael Bay and James Cameron did sit down for a partial presentation before the film was released and talked about a few things.
Links: Michael Bay and James Cameron talk about stereo
Michael Bay likes film for its richness and likes telephoto lenses for shooting closeups of actors/actresses. This is the norm for 2D (and still photography) since it tends to provide less depth of field (the bg will likely be soft) and the compression a telephoto offers reduces the size of the nose and some other facial features. Notice that when you shoot with a very wide angle lens you get an exaggerated, comical look ('Hey Vern' shots).
Now if you try to shoot telephoto using a stereo rig you will end up with something that looks like cardboard cutouts placed very close together, from that same optical compression, with no chance to fix it in post. That's why these were shot as conversion shots: so they could be given volume. I still don't suggest shooting extreme telephoto, but if you wish to shoot telephoto it's better handled as conversion.
If you plan to shoot stereo today you have to use digital cameras. Some directors and DPs prefer film, so they can look to conversion.
The other issue, as noted in the podcast, is that you have multiple cameras when shooting large films, especially action sequences. Trying to get 5-6 stereo rig camera setups with stereo operators would have been a problem.
Stereo rigs are large. Getting POV shots from inside a radiation suit had to be done as conversion. Same with the camera placements for some of the other shots.
Thanks, Scott. This is a fantastic article that I am going to recommend to all of my students.
Excellent article, Scott. Great job. I would add that Legend's work on Conan the Barbarian was excellent and we clearly benefitted from their experience on Transformers.
ReplyDeleteHello! You have amazing understanding of this stereo 3d conversion thingamajiger, I've never been able to understand how the vfx companies put together their shots, good job!!!!
Great article. Thank you. Looking at the pros and cons it's hard to be convinced that shooting native stereo is going to survive... especially as the conversion process becomes faster and better and (possibly?) cheaper.
ReplyDeleteAs with anything there are always tradeoffs. New rigs are being developed all the time. Some research cameras can capture true depth information with live action so it may end up being a merge of special camera system (but better than current stereo) combined with a post process to put that information to use.
There are also some prosumer cameras for those who wish to dabble in 3D, but these tend to be very limited.
Fantastic article, Scott!!!
Are companies in far-away places such as India capable of doing the type of stereo conversion (on library/catalogue titles) that you describe? If not, will conversion of those older titles alienate an already skeptical audience?
thanks
Jimmy
Legend, like some of the other conversion facilities, uses both U.S. labor and labor in places like India. They had a few hundred people working near San Diego when they had a number of projects going. Whoever does it still needs people with a developed eye and a full understanding of 3D. Quality 3D will encourage audiences; bad 3D will turn them off.
ReplyDeleteScott,
I know it's a little late to be adding comments, but the article is spot-on. I was the stereo supervisor for TF3 at Digital Domain, and worked closely with Scott and his team at Legend on about 100 VFX shots that were converted to stereo.
Scott is right that these one-camera film shots had to cut back-to-back with two-camera digital shots. It is a credit to a huge number of people that this just works. The biggest giveaway that a shot was converted is that the alignment is perfect on the converted shots. If you go through the film frame by frame you can see some edging on some of the converted shots, but at speed it is completely invisible.
Scott didn't point out, too, that converted shots give the director and the stereographer much more control over the stereo. You can have every element carry a "roundness" and depth that is appropriate, as long as they don't overlap in depth. With things shot in stereo, the roundness is correct at only one depth: things closer will be stretched in depth and things further will be compressed (cookie-cutter style).
I went into TF3 skeptical that the conversion would work so well -- I left thinking that shooting stereo makes sense only in cases where there is smoke, explosions, transparency, or very complex objects (trees, for example). For the huge majority of shots, they'll look just as good or better if they are converted, and they'll be far easier to shoot.
Thad Beier
Thanks for the comments Thad.
Hi Scott,
This is by far the best article I've read yet on the whole 3D process!
I have two questions, relating to the consumer end:
Cinemas/DVDs/Blu-rays usually have a 2D option of a film that was 'shot' in 2D, but planned for 3D conversion. Is that 2D version effectively the 'original' one or is it still likely to differ in some way from a 'regular' 2D-only film?
Secondly, can you describe what is done to make a 2D print of a film originally shot in stereo 3D, and how, again, it differs from a 'traditional' 2D print, if at all?
Many thanks & keep up the good work!
Brent
I'm no expert on dvd and blurry releases. For a film that was shot 2d originally, the 2d version is likely just the original, which is what would have been shown in non-3d theaters on release.
For 3d, once again this is what would have been in theaters. Most of the time they would likely use just one eye: easy, cheap and fast. In some cases they may select the other eye if there was a problem. Studios don't do a conversion to 2d, since all current 3d, even 3d shot in stereo, is just 2 strips of 2d to start with (or one strip of 2d in the case of a 2d film).
Aha, that clears it up perfectly - thanks v much!
Although I try to maintain a grasp on current & emerging technical trends, I'm primarily interested in being informed well enough to make the best decisions regarding seeing individual films in 2D or 3D & for 'blurry'* purchases for my home cinema.
*Like it! Was it intentional?
I've been experimenting with 2D-to-3D conversion recently -- the conversions being done for single photos rather than for video sequences. The primary reason for this is NOT to jump on the "everything's going 3D" bandwagon. In fact, I've been toying with the problem on and off for almost a decade before hitting on solutions recently. The main reason is that you need 3D models of objects in photos if you want to (a) change the lighting, (b) add or remove objects from the image or (c) move within the scene provided by the photo.
In my trial project, one of the video segments will be a panning perspective on Mount Rushmore as a looming shadow creeps over it Independence-Day style, leaving behind in its wake "Mount Rushmary". In a second shot we'll see the shadow creeping across a panoramic view of all of Chicago seen from the south. There are about 500-1000 buildings involved in the foreground, and the view goes all the way up to Wisconsin on the horizon in the background.
To put this in perspective: video is easier to convert than a single photo because the motion of the camera can be used to do "inverse stereoscopy", using multiple frames to triangulate 3D -- something I've experimented with on a trial basis in the past. The matching algorithm for lining up objects frame-to-frame is basically the same as that already used for motion estimation by video compressors. Once you have the objects matched across frames, and the base motion of the camera set, you can triangulate their locations.
(If the camera is stationary, however, you're out of luck and you're down to single-photo conversion described below.)
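The triangulation step itself reduces to the classic stereo relation Z = f * B / d when the camera move is purely sideways; here's a toy version of that idea (the focal length, baseline and disparity are purely illustrative numbers):

```python
import numpy as np

def depth_from_track(f_px, baseline_m, disparity_px):
    """Classic triangulation: for a sideways camera move of baseline_m between
    two frames, a feature shifting disparity_px pixels sits at Z = f * B / d.
    (Toy version -- real camera moves need a full solve, not pure translation.)"""
    d = np.asarray(disparity_px, dtype=float)
    return f_px * baseline_m / np.maximum(d, 1e-6)

print(depth_from_track(f_px=1500.0, baseline_m=0.1, disparity_px=25.0))  # 6.0 m
```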
In contrast, single-photo conversion offers no such clues. Apart from trying to solve the "inverse reflectivity" problem (i.e. the inverse of the problem of converting depth to lighting/reflectivity), or trying to find a way to measure focal depth, the only good solution I've found so far is to use a proximity/color based algorithm for depth estimation and depth interpolation, in conjunction with a small number of depth markers placed on the picture.
Decent results for the 3rd coordinate can be obtained with about 30 minutes of up-front work for a single photo. With experience, this can probably be brought down substantially. The important part is the color matching, which is critical for objects with highly jagged boundaries, like trees.
For rendering, I've basically ignored the occlusion problem altogether. What I'm working on right now will simply leave occluded areas blank in the first pass and may (in a second pass) use interpolation on the first-pass frame to fill in the blanks. Alternatively, I may add in extra imagery by hand and then re-run the rendering with the extra information. I'm hesitant toward adopting that approach since it gets away from single-photo extraction more towards multi-photo and video sequence extraction.
Though I haven't thought about trying it, it may be possible to adapt the color-matching process to handle semi-transparencies, like smoke, or to undo the effects of anti-aliasing on boundaries.
I'll send a YouTube link to the demo, once I get it complete. It comes out to a 1 minute commercial-length video segment. I'm already 12 days into the project from conception and the notes found on this blog and elsewhere are proving helpful. Thanks for setting up the web page.
Thanks a lot for this article Scott. I've been at Legend since right about the start of the Shrek projects and am still there at the time of writing. I think your pros and cons are spot on. There are some things that will always present a challenge to convert, but many of the things that were viewed as impossible 2 years, even a year ago, have become things the depth artists and compositors don't even blink an eye at now.
Currently we're working on our most ambitious project from a complexity/difficulty-of-conversion standpoint, similar to TF3 in that it is a blend of native and converted footage. It will put our smoke/rain/dust/foliage/transparency-type skills to the test and allow us to show the improvements made to the process. We are always striving to improve. I would be pleased if it passed your test.
Directors still seem squeamish about letting the public know that any conversion work was done on their project, no matter how high the quality. Many thanks again for shedding a little light on the invisible workforce of 3D conversion, and for dispelling some of the fantastical assumptions about the native stereo capture process. Both have their definite pros and cons, and it behooves directors to know and consider them when deciding how to add depth to their project.