
Digital Film Making & CG Trends

In-depth conversation with King Kong DoP Andrew Lesnie, Star Wars Digital Supervisor Grady Cofer, Arri's Elfi Bernt and Autodesk's Philippe Soeiro & Pankaj Kedia

(From left to right: Autodesk M&E's Pankaj Kedia, Arri's Elfi Bernt, Autodesk M&E's Philippe Soeiro, ILM's Grady Cofer and King Kong DoP Andrew Lesnie)

The cutting edge of technology, the threshold of art… the Renaissance ideal?

Filmmaking has always been a field where artists and technicians would give their lives to produce even one perfect frame… the passion for storytelling finds in technology a magic wand that helps execute the vision better…

The magic wand of technology, though, requires great magicians (technicians) who know how to use their tools, and thanks to the rapid advancements technology makes every day, there is a lot to keep abreast of…

Hyper-realistic CG. CG lighting. Motion capture. High definition. Digital intermediate. Tapeless cameras… the buzzwords of today and tomorrow…

At the recent Digital Film Tour conducted by Autodesk Media & Entertainment, Animation Xpress.com Editor Anand Gurnani got the once-in-a-blue-moon opportunity to have an in-depth conversation on these technology topics with stalwarts such as LOTR trilogy and King Kong DoP Andrew Lesnie, ILM Digital Supervisor Grady Cofer, Arri's Elfi Bernt and Autodesk's Philippe Soeiro and Pankaj Kedia, all at the same time.

Animation Xpress.com: It is interesting to observe Digital Intermediate and the way cinematographers approach it. While there are cinematographers who are all for DI and realize its potential, there is a breed which resists it. As someone who directed the photography for the LOTR movies, which were in fact responsible for much of the evolution of DI, please enlighten us on how DI helps enhance the cinematographer's vision.

Andrew Lesnie: All the tools that are used to make a movie are just there to serve us. Finally, it is all about telling the story. DI is just a few years old, while films have been made for over a hundred years. It is only that earlier films would end up going through the chemical process, and now we have the opportunity of taking them through the digital process.

We are still on the same medium but the point of going through the digital process is just to exploit more tools for the benefit of telling the story.

Lustre, the grading system from Autodesk, is something I was indirectly involved in conceptualizing. I actually mean the system that was designed during the making of The Lord of the Rings, which later evolved into Lustre. When we designed the system in 1999, the concept of DI was very new. Very few films had been done with DI.

Peter Doyle was the expert who designed the system. We had a lot of discussions on the interface; the idea was to simulate the process of how the DoP worked on normal photochemical film while making it as intuitive as possible.

The tool was created initially to bring to life the vision that Peter Jackson and the LOTR creative team had for the trilogy. The weather in NZ is very unpredictable and changes at a staggering rate. We have something like six seasons within a single day, right from sunny to snowy.

While shooting you have an obligation to maintain some sort of continuity, and you have hundreds of crew and cast on the set. Besides, once he starts, Peter (Jackson) likes to keep the camera rolling, and halting the shoot just to get the weather and things like that right gets difficult, especially in NZ.

So we definitely needed some technological assistance on the trilogy to help maintain continuity and grade the colors. Creatively, the grading system is great for designing a special look, like we did for the elves, or a special look for the prologue. Once everything comes together in the DI we can design the look in one go. It helps when you have an idea of what you want, but the system also allows you to experiment in look development.

I know some people are resistant to the technology, but I have now used it on two major projects (the LOTR trilogy and King Kong)… and I think that once people have used it they will realize that it increases the artistic potential of their work. Digital Intermediate extends the DoP's creative contribution to post production.

This is a new technology and it is bringing about a drastic change in the industry, what with everything shifting to the digital environment. Whenever there's a new technology as high-impact as DI, people are going to have a lot of questions, which is unavoidable. They will question the benefits: is the technology mature enough, does it deliver what it claims? I think we are still in the process of demonstrating the potential.

There's another aspect too: it is natural for certain directors of photography to worry that this type of technology is going to deny them ownership of the imagery, because all of a sudden you have all these tools that allow you to complete the image. And of course the industry needs to be mature enough to understand that it is actually the director of photography who should be seated next to the colorist to complement his vision… to actually make it better and make it complete.

I think we are still in that transitional phase where people have to understand whose responsibility it is to do what in this process.

Animation Xpress.com: So basically what you are saying is that DI doesn't limit but actually enhances…

Andrew Lesnie: Oh yes.

Two things are very important. First, the production or the producers should involve the DoP in the post-production process. There is a danger that people will suddenly decide that the DoP is no longer relevant. Most colorists that I work with are very clear that the presence of the DoP in post production facilitates the smooth production of the DI.

The other thing is that, like everyone in the business, cinematographers too have a responsibility to keep re-educating themselves about the different tools that are used to tell stories. Some people say: if you don't know DI, how can you know what it does?

DoPs who have done commercials and the like and have gone through a post-production house would have used various grading systems. In the TVC space there have been lots of productions and lots of different types of filmmaking in the last 20-30 years that have gone into telecine. And a lot of the tools used in telecine have been moved into the Lustre system.

The fact is that most of the tools that are part of Lustre and DI have existed in one form or another in various packages, but they haven't been used in an integrated manner as they are in DI.

When I was starting out there was no Lustre and there was no DI, but there was an obligation on my part to learn something about the telecine process. So I took it upon myself to stay back at night when the colorists were putting stuff down, and I learnt the terminology, things like gamma, and how things work. I think everyone has this obligation to learn how things work.

To have good ideas is great, but to actually implement them is what's critical, and that's why you have people like visual effects supervisors: people whose job it is to translate the idea and make it some sort of reality.

Animation Xpress.com: Does Autodesk have any way in which cinematographers and other concerned people can get familiar with the new technology?

Pankaj Kedia: This Digital Film Tour is certainly part of that effort. It definitely contributes to building awareness that the technology is available. I think that it is an ongoing process. We at Autodesk Media and Entertainment are willing to contribute to local events organized by technical bodies and committees and to help build that awareness. There have been constant efforts and initiatives towards this in the past, and they will continue to increase in the near future. It's an ongoing process, but nothing is going to change the fact that at the end of the day it's the experience that actually counts.

I think it is important to see and actually experience the Autodesk Lustre tool; you'll see the incredible amount of control it gives the Director of Photography and the colorist to manipulate and enhance an image.

Grady Cofer: It doesn’t have to be things which are really large corrections; it is also about the really subtle corrections which you can do to an image to kind of redirect the viewer’s attention.

Sometimes an important thing for the DoP is to guide the audience to what’s important in the frame, where to look and what is the subject of the image. So you might darken an area, or lighten up someone’s eye. You also have the ability to precisely calibrate tone within the shot.
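To make that idea concrete, here is a minimal sketch of such a "secondary" correction, written as plain NumPy rather than in any grading system: a soft elliptical mask gains up one region of the frame while leaving the rest untouched. The mask position, size and gain value are purely illustrative.

```python
# A minimal sketch of a masked secondary grade: lift the exposure inside a
# soft elliptical mask to draw the viewer's eye, leaving the rest of the
# frame untouched. The image is a float RGB array in [0, 1].
import numpy as np

def soft_ellipse_mask(h, w, cy, cx, ry, rx, softness=0.5):
    """Float mask: 1.0 inside the ellipse, falling smoothly to 0.0 outside."""
    y, x = np.mgrid[0:h, 0:w]
    d = np.sqrt(((y - cy) / ry) ** 2 + ((x - cx) / rx) ** 2)
    return np.clip((1.0 + softness - d) / softness, 0.0, 1.0)

def graded(img, mask, gain):
    """Blend a gained copy of the image back in through the mask."""
    return img * (1.0 - mask[..., None]) + (img * gain) * mask[..., None]

# Hypothetical usage: brighten a small region around an actor's eyes.
frame = np.random.rand(1080, 1920, 3).astype(np.float32)
mask = soft_ellipse_mask(1080, 1920, cy=400, cx=960, ry=60, rx=120)
out = graded(frame, mask, gain=1.3)
```

In a real grading suite the window would be tracked and feathered interactively; the point here is only that a small, windowed gain is enough to redirect the eye.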

Animation Xpress.com: Please explain the importance of scanning and its relation to DI?

Elfi Bernt: Well, it's the beginning of the process, so if you don't get good scans you can't do any good with the material. Film is an extraordinary medium; it is capable of holding enormous amounts of information, and therefore if you are going towards DI technology and your digital version of the original film is not correct, that's not going to do any good to your project. Scanning and recording (going back to film) are absolutely essential steps. I guess that's why it is also essential for us to present something that gives an overall perspective of what the chain might be… and it is essential that companies like Arri explain the challenges of creating good digital versions.

We started as a camera manufacturer. What has been true for cameras has been true for lighting, and lately it has been true for digital systems… so the question is: what should the ideal film scanner and recorder be?

We do a lot of work to capture the image on the negative; now, additionally, we have to transfer it faithfully into the digital world. The eternal challenge is to capture as faithfully as possible all the resolution, data and exposure that is on the negative and bring that into digital, where creative artists work together with the DoPs.

I am elaborating on the Arri scanner and recorder in my presentation here in India, and I have got a very good response from the Indian DoPs. They believe that this is exactly the system they need for the post-production process.

Philippe Soeiro: Elfi actually mentioned something really important: scanning should be a non-creative, deterministic process which guarantees the acquisition of the complete dynamic range of the negative. This process should not only be lossless but consistent over time. In other words, if you shoot a sequence on camera negative and plan to put it through a scanner, you should be able to scan once, twice, or any number of times at any interval, and still get the same results: the same geometry, the same colors, in a completely accurate digital representation of what is on the negative. This ties in very nicely with the Lustre system because Lustre “understands” and works with film densities. It was the first grading system which allowed users to keep the language of traditional film and work in units such as printer lights or f-stops. This ensures that parity can be maintained between the analog world of film and the digital world.

So having elements in the system, which are accurate digital transcriptions of the original film densities, greatly impacts the consistency of the pipeline.
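For readers who want a feel for the film-density arithmetic being described, here is a back-of-the-envelope sketch. The constants are commonly quoted rules of thumb for Cineon-style log encoding (0.002 printing density per 10-bit code value, roughly 0.025 density per printer-light point, and a camera-negative gamma near 0.6), not vendor specifications.

```python
# Rule-of-thumb film density arithmetic, in the units Lustre "speaks".
DENSITY_PER_CODE_VALUE = 0.002     # Cineon-style 10-bit log encoding
DENSITY_PER_PRINTER_POINT = 0.025  # commonly quoted approximation

def printer_points_to_code_values(points: float) -> float:
    """How far a printer-light trim shifts a Cineon code value."""
    return points * DENSITY_PER_PRINTER_POINT / DENSITY_PER_CODE_VALUE

def stops_to_code_values(stops: float, negative_gamma: float = 0.6) -> float:
    """A camera stop is 0.30 in log10 exposure; the negative's gamma
    (~0.6 for camera stock) converts that exposure change to density."""
    return stops * 0.30 * negative_gamma / DENSITY_PER_CODE_VALUE

print(printer_points_to_code_values(1))  # ~12.5 code values per point
print(stops_to_code_values(1))           # ~90 code values per stop
```

These are the conversions that let a digital grade be expressed in the DoP's traditional vocabulary of printer lights and stops.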

Animation Xpress.com: Talking about VFX, and you being from ILM, one of the world's most renowned studios for VFX, what are the new trends and challenges?

Grady Cofer: Some new trends that I get excited about are in the area of motion capture, either facial motion capture or full body. Traditionally, of course, you have to capture on a motion capture stage, which is a very well defined small area and a very controlled environment. In some cases it can be a difficult environment for a director to work in. So some of the new technology that you will see is a way to get performance capture on location, in a set environment, with multiple actors. I think it is very exciting, and it will give directors flexibility when they direct performance capture for applying to CG characters.

Animation Xpress.com: In terms of photorealism, could you talk about the essentials for a strong pipeline?

Grady Cofer: Some of the Autodesk tools are integral to our pipeline at ILM. We use Maya for a lot of our modeling tasks, and for simulations.

There is a shot that I will present at the DFT presentation that shows a Star Wars ship called the Federation Cruiser, which has just got back from battle and is freefalling, about to crash into the planet Coruscant. Once it crashes you have smoke billowing from the back of the ship, and all of the particles used to generate that smoke are computer generated in Maya. That's a very important part of our pipeline.
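As a flavour of that kind of setup, here is a minimal particle-smoke sketch using Maya's Python commands (maya.cmds). Object names and parameter values are hypothetical, and this is of course far simpler than ILM's actual pipeline; it must be run inside a Maya session.

```python
# A minimal particle-smoke setup in Maya, purely illustrative.
import maya.cmds as cmds

# An emitter placed where the smoke should originate (e.g. the hull breach).
emitter = cmds.emitter(type='omni', rate=500, speed=2.0, name='smokeEmitter')[0]

# A particle object to receive the emission.
particles, particle_shape = cmds.particle(name='smokeParticles')
cmds.connectDynamic(particles, emitters=emitter)

# Render the points as clouds so they can be shaded as a volume later.
cmds.setAttr(particle_shape + '.particleRenderType', 8)  # 8 = cloud (s/w)

# A uniform field gently pushing the smoke upwards as it billows.
field = cmds.uniform(magnitude=3.0, directionX=0, directionY=1, directionZ=0)[0]
cmds.connectDynamic(particles, fields=field)
```

In production the motion would come from fluid or in-house simulation tools rather than a single field, but the emitter/particles/field wiring is the basic Maya pattern.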

Again, in War of the Worlds there were a lot of modeling and complex rigging tasks. It was crucial to rig the tripods so that the animators would have a lot of control. For these tasks we rely on the internal tools that we have at ILM, in conjunction with Maya. We have a tool called Zeno which is very powerful, and the kind of pairing it gives along with Maya and our proprietary tools is critical to the CG department.

Photorealistic digital environments are a significant aspect of the VFX work in Star Wars, and that's why we have an incredible team of digimatte artists. They create these worlds that George comes up with, and many of them use Autodesk 3ds Max to create those environments.

I work in a department called Sabre which is the compositing group that uses Autodesk Inferno and Flame. Inferno has this incredible set of tools that we use not just to composite images but also to do a lot of design work.

Animation Xpress.com: Are you aware of the developments taking place in Indian CG and FX?

Grady Cofer: I know that there are a lot of facilities in India that are doing digital effects. Some of them are doing digital effects for films being developed in India. Others are working on shows coming out of Hollywood. I met a lot of fine arts students when we did the Autodesk DFT presentation in Chennai. They are extremely excited about VFX and they are learning all these tools.

So I think the trend is that we are going to see more of the creative VFX work come into India. Already there is a lot of outsourcing in terms of rig removals and rotoscoping. But after having interacted with some of the students and seeing the kind of enthusiasm they have, I think that a lot of creative work is going to come in too.

Animation Xpress.com: Lucasfilm has opened in Singapore…

Grady Cofer: Lucasfilm Animation has been established there, a digital animation studio for producing animated content for films, television and games. But we here at ILM have all moved to a new facility in San Francisco called the Letterman Digital Arts Center. This is the first time in the company’s 30-year history that Lucasfilm, ILM and LucasArts have come together under the same roof. The collaborative opportunities should benefit all the companies as we forge ahead in the digital world.

The Letterman Digital Arts Center is a state-of-the-art digital studio. It's located on a 23-acre campus in the Presidio Park, overlooking the San Francisco bay. It consists of four buildings, connected by 600 miles of fiber-optic cable and housing a massive data center of roughly 5,000 processors. For digital dailies, the artists meet either in the 298-seat theater or in one of the two 65-seat screening rooms. It's truly an incredible place to go to work every morning.

Animation Xpress.com: What's the USP of Toxik? Is it integrated with Lustre?

Philippe Soeiro: Not for now; it is a possibility eventually. Toxik is currently a shot-based compositing environment using a flow-graph representation of its processing pipeline. What it offers is a collaborative environment where people connect to a common database and work on the same project while keeping track of what others are doing on other shots, or even on the same composition.

You can imagine situations where someone is working on a specific aspect of a shot, let's say a background matte painting, while somebody else is working on a foreground element requiring a blue-screen matte extraction. All these tasks can happen in parallel in the Toxik environment while feeding into a common final composition.

A lead artist can be warned when a new element is made available to him. It is then up to him to decide whether or not he should use it. This is what is called a collaborative workflow.

Animation Xpress.com: Is it in real time?

Philippe Soeiro: Well, it's not entirely real time: we are relying on an explicit publishing mechanism where people make a conscious decision to submit their work. Others will then choose whether or not to use it. Making this a real-time, self-propagating updating process would leave the artist with no choice but to accept upstream modifications.
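The distinction between explicit publishing and automatic propagation can be sketched in a few lines of Python. This is a toy model of the workflow, not Toxik's API: upstream artists publish versions into a shared store, and the lead is only notified, accepting updates deliberately.

```python
# A toy model of an "explicit publish" workflow, purely illustrative.
from dataclasses import dataclass, field

@dataclass
class SharedStore:
    published: dict = field(default_factory=dict)  # element name -> version

    def publish(self, element: str, version: int):
        """A conscious, explicit submission by the upstream artist."""
        self.published[element] = version

@dataclass
class Composition:
    in_use: dict = field(default_factory=dict)

    def pending_updates(self, store: SharedStore) -> dict:
        """Warn the lead: which elements have newer published versions?"""
        return {e: v for e, v in store.published.items()
                if self.in_use.get(e, 0) < v}

    def accept(self, store: SharedStore, element: str):
        """The lead opts in; nothing propagates automatically."""
        self.in_use[element] = store.published[element]

store = SharedStore()
comp = Composition(in_use={'bg_matte': 1})
store.publish('bg_matte', 2)         # background painter publishes v2
print(comp.pending_updates(store))   # {'bg_matte': 2} -- lead is warned
comp.accept(store, 'bg_matte')       # explicit choice to take the update
```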

The other interesting thing about Toxik is that it is Autodesk's first compositing solution offering a complete HDR (high dynamic range) floating point pipeline. This opens up new opportunities, especially for people who work heavily with CG. It is a natural choice for CG to work on a floating point scale. Toxik is capable of handling that sort of data and maintaining its integrity all through the pipeline. Of course you could still choose to work at 8 bits if you wanted to, but you would be missing out on one of the key benefits that Toxik brings into the equation.

We see a trend where more and more people are embracing higher bit depths in their work. In the early days, when I first started working with film images, it was practically very difficult to work with anything other than 8-bit lin or log data. With increasing computer performance, this is now shifting towards higher dynamic range pipelines, giving the end user a lot more headroom to work with, so that if any significant printing up (or down) of your shots is required, you can still get a plausible image. This can subsequently help you adjust to the artistic intentions coming from the DoP or the director after compositing.

Floating point pipelines make it possible to create something that is whiter than white or blacker than black if you want to. The range is available to go beyond the usual boundaries in a non-destructive way (without losing colour precision). You can then comply with the director's or DoP's demands, such as 'Can you make the sky a lot clearer than that?' You have the actual dynamic range to extract that from the file in a very non-destructive way. Toxik was designed to handle these types of images. The interesting thing is that even in TV post production and commercials, we are noticing that a lot of companies doing CG are more and more inclined to migrate towards floating point renderers. That's definitely a trend that we see happening in the industry.
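A few lines of NumPy illustrate why that headroom matters. Print three highlight samples, one beyond nominal white, down two stops: the floating point path keeps them distinct, while the 8-bit path has already clamped the over-range value on storage, so the "clearer sky" can never be recovered.

```python
# Floating point headroom vs. 8-bit clipping, a minimal demonstration.
import numpy as np

# Three highlight samples; 1.6 is "whiter than white" (beyond nominal 1.0).
hl = np.array([0.9, 1.2, 1.6], dtype=np.float32)

# Float pipeline: print down two stops (divide by 4); values stay distinct.
print(hl / 4.0)                            # [0.225 0.3   0.4  ]

# 8-bit pipeline: over-range samples clamp to 255 on storage, so both 1.2
# and 1.6 come back as the same flat white after the same print-down.
q = np.clip(np.round(hl * 255), 0, 255).astype(np.uint8)
print(q.astype(np.float32) / 255 / 4.0)    # [~0.225 0.25  0.25 ]
```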

Animation Xpress.com: Can you comment a bit on HD?

Grady Cofer: Star Wars: Episode I was shot on film, then scanned in its entirety, digitally graded and filmed out. Episodes II and III were shot on HD, but on different cameras. Episode II was shot on the first generation of 24p HD cameras, which had traditional broadcast YUV output, or 4:2:2. Episode III, however, used a newer technology with RGB output, plus the difference between 8-bit and 10-bit. So the result was a much richer image.

Philippe Soeiro: YUV is the video color space. Everything that is video is handled in that color space, which is designed to separate the color information (chroma) from the lightness information (luma); Y stands for luma, while U and V carry the chroma.

This separation was introduced because the human eye is more discriminating and acute with luma than with chroma. It historically allowed engineers to spend less signal bandwidth on colour by dividing the spatial resolution of the chroma components by two, without impacting the overall perception of the image by the human eye.
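The luma/chroma split and the halved chroma resolution can be shown in a few lines. The sketch below uses BT.709 luma weights and simply discards every second chroma sample per line, which is the 4:2:2 arrangement mentioned above; a real encoder would filter the chroma before decimating.

```python
# RGB -> YUV with 4:2:2 chroma subsampling, a compact illustration.
import numpy as np

def rgb_to_yuv_422(rgb):
    """rgb: float array in [0, 1], shape (height, width, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # luma: full resolution
    u = (b - y) / 1.8556                       # Cb (blue-difference chroma)
    v = (r - y) / 1.5748                       # Cr (red-difference chroma)
    # 4:2:2 -- keep every luma sample, but only every second chroma sample
    # per line (half the horizontal chroma resolution, as described above).
    return y, u[:, ::2], v[:, ::2]

img = np.random.rand(4, 8, 3).astype(np.float32)
y, u, v = rgb_to_yuv_422(img)
print(y.shape, u.shape, v.shape)   # (4, 8) (4, 4) (4, 4)
```

A 4:4:4 RGB pipeline, as used on Episode III, simply skips that decimation step and keeps full-resolution colour.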

Animation Xpress.com: That’s how JPEG was formed…

Philippe Soeiro: No, it's not, but it stems from the same need to control your bandwidth and file sizes while keeping the result perceptually unnoticeable. But you've potentially lost a fair amount of data in the process. If you think about it, YUV was the first analogue compression method used by broadcasters. In its YUV form, HD is a compromise from which the industry is already shifting away by introducing 4:4:4 RGB capturing (as used in Episode III). This isn't really HD anymore; it's an RGB signal which has the same spatial resolution as HD and eventually travels through video connections. This shows that the future of capturing does not lie in HD strictly speaking, but certainly lies in forthcoming developments of digital acquisition.

Grady Cofer: In terms of compositing, green- and blue-screen extraction on Episode III was easier to accomplish; there was much more range. And HD cameras are still progressing: there are a lot of new HD cameras coming out that will give you even greater dynamic range.

Andrew Lesnie: Well, I haven’t shot a motion picture in HD as yet. The cameras are definitely getting better.

Arri is coming out with the D20 digital camera, which I am looking to get my hands on. It hasn't come to Australia as yet. Arri has been listening to DoPs a little more in some ways, because DoPs have expressed the need for a filmic element in an HD camera. So the D20 actually accommodates that.

The latest Superman has been shot using the Panavision Genesis HD camera; it is going to be out soon, and we will all get to witness what the latest generation of HD technology can do. As far as the future goes, I don't know whether it's going to be HD technology, but it will be digital of some sort.

I have no desire to replace film with a technology that's inferior.

I will be happy to move from film into a fully digital environment only when it offers better quality. Having said that, the DI process, using Lustre, will still exist. Images will still need to be graded whether they end up on film or on some form of digital exhibition; in fact that process will remain essential as long as cinematographers are required to point audiences to their subjects.

The fact is that there is not one single standard HD technology at this point. HD means a lot of things and nothing really, depending on the format you are on. Today you have HDV cameras from Sony which can be labeled as HD yet carry a highly compressed signal. People sometimes want to use those to shoot films, but they can't be compared to the technology that was used to shoot Superman or Star Wars Episode III.

It's not one frame rate, it's not one color space; it's a very confusing thing. Of course, in the long term these things will certainly become clear to people, but at this point in time there is a lot of confusion around what HD is, what it can actually deliver, and how and when one should use it. It's also important, I guess, to be aware of it in very generic terms.

Grady Cofer: One great thing about HD, though, is that if you realize that an element is missing, like if the shot has some crashing ship and you need some smoke, you can take the camera, go out on one of the sound stages, just shoot it, get it online in a couple of hours and start working on it. The wait time is so short; it's very fast in that kind of work environment and great to work with.

Animation Xpress.com: Could you elaborate on some of the challenges that you had while working on Star Wars?

Grady Cofer: One of the great challenges that the Star Wars movies present is the sheer amount of work. I mean there were over 2000 shots in Episode III. About 400 of those shots went through our Infernos or Flames, and much of the work was more complex than what we had accomplished on the previous films.

For Episode III we bought Autodesk's distributed rendering solution called Burn; what it does is that instead of rendering locally, you render on a farm. For example, during the Mustafar sequence, there is a massive, complex establishing shot on the lava planet.

There are hundreds of elements in this shot, practical and CG. When you composite a lot of those elements (fire and lava plates, digimattes, 3D environments), the actual processing can take some time. Traditionally on the Inferno system you couldn't work while you rendered, but with distributed rendering you hit render and it goes out to Burn, which can have tons of CPUs out on a render farm, and it renders in the background. That frees you up to start working on your next shot. It is a fundamental change in the way the Sabre department works.
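That workflow change can be mimicked with generic Python; the snippet below is not Burn's interface, just an illustration of handing renders off to a pool of workers so the interactive session stays free. The shot names and timing are made up.

```python
# Background rendering frees the artist: a generic illustration of the idea.
from concurrent.futures import ProcessPoolExecutor
import time

def render_shot(shot: str) -> str:
    """Stand-in for a heavy composite render running on a farm node."""
    time.sleep(2)                     # pretend this takes a long time
    return f"{shot}: rendered"

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=8) as farm:
        # "Hit render and it goes out" -- submit() returns immediately.
        pending = [farm.submit(render_shot, s)
                   for s in ("mustafar_est_010", "mustafar_est_020")]

        # The artist is free to start on the next shot while the farm works.
        print("farm is busy; opening the next shot...")

        for job in pending:           # collect results as they finish
            print(job.result())
```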

War of the Worlds would have been nearly impossible without it, because we had just 12 weeks of post production to accomplish the work, and the scope of the VFX work was immense. To turn those shots around in that amount of time we relied heavily on our Infernos because of their very high speed. And it was Burn's distributed rendering that made it happen.

Animation Xpress.com: As a DoP, how is it to shoot something like King Kong, which had such a mix of live action and CG?

Andrew Lesnie: On King Kong, I spent a large amount of my pre-production time with the previsualization artists and the conceptual artists, because that is where a lot of the concepts of how the film was going to be produced were frozen.

When doing live action combined with CG, everyone tends to work from a pre-viz sequence or a conceptual artist's work, so if you have a different idea on the set it becomes very hard to implement. It has almost become like a manifesto.

There were so many scenes we really had to work hard on to get the shadows and lighting right. When we shot scenes which had Naomi interacting with King Kong, we had to do a lot of pre-shoot R&D on where Naomi would run into Kong's shadow, where she would come into the sunlight, and things like that.

Then, for example, in the scenes where the dinosaurs are running amok trampling the sailors, we obviously did not know exactly where the dinosaurs' feet would land, and that made getting the sailors' shadows right extremely difficult. That meant we would have to do CG lighting.

The thing with conceptual art is that you need to be able to communicate how you want the sequence to look in the first place. So if the concept art communicates that the digital characters are in a big valley, the sunlight entering it is hitting the far side and the characters are in the shadow, then on set you are only shooting the foreground; the background is going to be done as a miniature. Your characters are probably not lit in the most interesting way because they are in the shade, but you constantly have to imagine what it is going to be like in the finished product.

So sometimes you will not be able to make much of your rushes and dailies, but in context it will be fine.

And then the decisions… You will make some decisions knowing that the early morning sunlight is raking across that valley and some of that light will bounce into the foreground. Now that affects color temperatures; it affects everything. So that's why I'll say that the DoP being involved in the creation of the conceptual art actually became quite an issue, because sometimes I could see conceptual art being created, and I could see opportunities that weren't being utilized. That's why I got involved, and Peter Jackson was happy for me to be involved.

Animation Xpress.com: Do the CG lighting artists collaborate with the DoPs? Tell us something about CG lighting and how it works for DoPs.

Andrew Lesnie: It’s about having a natural interest in light. I have been involved in some digital lighting in Lord of the Rings. The beauty of digital lighting is that you can park a lamp directly in shot but you can’t see it. I have spent all my life on film sets thinking of how to hide the physical lights.

But I have to say that CG lighting has impacted my work on commercials recently. I just did a commercial set in Hong Kong. It was a set, night time, young men looking out of the window at the cityscape. So I said, OK, what I'd like would be a very soft night ambience coming through the window. Traditionally, to light that, there would be a blue or green screen outside the window so they could put in a matte of the city, and the screen needs to be lit as carefully as possible. Then you would try to hide the lights if you wanted them to come in through the window. With the VFX supervisors on the set, they asked me to put the units where I wanted and they would deal with it, which meant it would increase the number of rotos they would have to do. But I had the assurance that they'd pull it off. They'd maybe use a garbage matte or a luminance matte and succeed 70 to 80%, then clean up the rest frame by frame. But ultimately what it does is make the light that's entering the apartment feel more correct.
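For a sense of what a simple luminance matte of that kind amounts to, here is a minimal NumPy sketch: threshold the luma with a soft shoulder to isolate the bright lamp so it can be painted out. The thresholds are illustrative, and as Lesnie says, a key like this gets you most of the way, with the remainder cleaned up frame by frame.

```python
# A minimal luminance matte: isolate bright practicals by thresholding luma.
import numpy as np

def luminance_matte(rgb, lo=0.8, hi=0.95):
    """1.0 where the frame is brighter than `hi`, 0.0 below `lo`,
    with a linear ramp in between (the 'softness' of the key)."""
    y = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

frame = np.random.rand(1080, 1920, 3).astype(np.float32)
matte = luminance_matte(frame)   # gets ~70-80% of the job, per the text;
                                 # the rest is frame-by-frame cleanup.
```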

(…to be continued..)

connect@animationxpress.com
