
NORTON: “Push Your Cynicism off a Cliff; This Is Going to Revolutionise Filmmaking”


Fresh from shooting a groundbreaking spot for McDonald’s, the Great Guns director tells LBB’s Adam Bennett why a new technique known as neural radiance fields is changing the game for filmmakers


What’s the biggest challenge facing modern filmmakers? Competition aside, the astonishing proliferation of content - and the demand for it - is leaving production crews to contend with ever-tightening timelines. As a result, the ability to turn projects around on a dime, without cutting creative corners, is an increasingly valuable one in 2023.

Technology, however, might be riding to the rescue. We’ve heard a lot recently about how virtual production tech is providing imaginative solutions to tough challenges, and turning ‘fix it in post’ into ‘fix it in pre’. Now, however, neural radiance fields - or NeRF - is beginning to offer us a fascinating glimpse into what might well be the future of film craft. 

For a case in point, check out a recently-released spot for McDonald’s directed by NORTON (aka José M. Norton). In the ad, a camera sweeps around a table where a family sits static but - somehow - full of life. Thanks to NeRF, we’re able to observe the scene from angles that would otherwise have been impossible.

Happily, NORTON himself is on hand to demystify the magic of this new technology. Whilst picking through his filmmaking past and reflecting on his influences and passions, LBB’s Adam Bennett took the opportunity to ask the Great Guns director about NeRF, and precisely what it means for the future of production…

Above: This recent spot for McDonald’s, starring digital creator and artist Karen X. Cheng, is believed to be one of the world’s first to utilise NeRF. 

LBB> NORTON, your latest work for McDonald’s is awesome - and one of the first ads ever to use Neural Radiance Fields (NeRF). Let’s start with the basics: What is this technology, and how did it help you make this ad?  

NORTON> Thank you! The NeRF approach was already built into the creative. We had to use it, but I felt it was perfect for representing a past memory frozen in time! So in one weekend I very quickly immersed myself in the tech and did a ton of tests for the treatment.

The NeRF technology is sort of the next generation of Lidar scans, but instead of thinking of the scans as 3D shapes with flat images projected onto them, they’re really more like a simulation of light behaving across 3D space. The neural part comes in when you train the AI on how light behaves in a certain scene, and the AI fills in the gaps that the capturing camera didn’t point at (the ceiling or the floor, for example). That’s how you end up with a complete 3D scene that you can shoot from any angle.
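To make the ‘simulation of light across 3D space’ idea concrete, here’s a minimal sketch - a toy illustration, not NORTON’s or Luma AI’s actual pipeline - of the volume-rendering step at the heart of NeRF. Colour and density samples taken along a camera ray are composited into one pixel, each weighted by how much light survives to reach that sample:

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite samples along one camera ray (the core NeRF rendering step).
    densities: (N,) volume density at each sample
    colors:    (N, 3) RGB at each sample
    deltas:    (N,) distance between consecutive samples
    """
    # Opacity contributed by each sample segment
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: fraction of light still unblocked when reaching each sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * trans
    # Weighted blend of sample colours gives the final pixel colour
    return (weights[:, None] * colors).sum(axis=0)
```

A fully opaque first sample returns its own colour; a transparent sample passes the ray straight through to whatever lies behind it.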

We captured the actors in a real physical set with a Steadicam. The camera went around them at different heights and distances so that we could feed the AI with as much light information as possible. They sat as still as possible, which is crucial for generating the cleanest image. Then we fed these takes into a NeRF app (we used Luma AI, who were an incredible ally and are, in my opinion, the best at creating NeRFs right now). Once we had the NeRF generated, we went to our virtual production space and I mocked up a few camera move ideas in Unreal Engine using a shoulder-mounted virtual camera. Then we shot our live action shot with Karen.

Above: A mock take of the ad. Video credit: @karenxcheng

In post, I created a 3D camera move that was then cleaned up and used to render the final NeRFs (there were three in total: exterior restaurant, interior restaurant 1, and interior restaurant 2). The NeRFs were stitched together to create one seamless shot and then we did some more work in VFX to clean up a few things and make them cinematic.

Above: A camera move preview of the ad taken from inside Unreal Engine.

LBB> I understand that AI also had a big part to play in bringing the spot together. Can you tell us more about the role of AI in the creative process for this ad? 

NORTON> I don’t want to reveal too much about the secret sauce, but I can say that obviously AI was involved in the creation of the NeRFs themselves. Usually you capture a NeRF using stills or video and then train the AI using those images. But like all AI and machine learning, it’s sort of a black box. You have no idea what the AI is doing and the results can sometimes be unexpected. 

So we went a step further by doing ‘supervised training’, which is picking and choosing the best frames. We removed any frames with eye blinks (it’s really hard for actors to stay still for minutes at a time) or problematic lens flares. We also cheated the AI into turning our extras into blurry clouds. I wanted the extras to be blurry, like in long-exposure photography, to make it feel like our hero family was frozen in time. So while our actors were as still as physically possible during our takes, I had the extras move as much as possible. The AI doesn’t know what to do with that, so it turned them into clouds spread across the space that their movements occupied.
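The frame-culling idea can be sketched as a simple sharpness gate - a hypothetical stand-in for the manual picking described above, not the production tooling. A Laplacian-variance score is low for blurry or motion-smeared frames and high for crisp ones, so frames below a threshold are dropped before training:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian over the frame interior.
    Low values suggest a blurry or smeared frame; high values, a crisp one."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]    # pixel above / below
           + gray[1:-1, :-2] + gray[1:-1, 2:])   # pixel left / right
    return float(lap.var())

def select_frames(frames, threshold):
    """Keep only frames sharp enough to feed the NeRF trainer."""
    return [f for f in frames if laplacian_variance(f) >= threshold]
```

A flat, featureless frame scores zero and is culled; a high-contrast frame passes.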

After the NeRFs were generated, we used AI to tweak the renders themselves, which really helped perfect the image. We also used AI to generate a depth map of the scene (a black-and-white matte that translates how close objects are to the camera). Even though Luma AI generates its own depth maps already, I found that letting a different AI generate our own from the rendered NeRF gave us more flexibility when adding things like depth of field. Our VFX artist Nathan Matsuda also used AI to add some dreamy, cloudy elements to the scene.
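As a rough illustration of how a depth map buys that depth-of-field flexibility (a toy grayscale sketch, not the spot’s actual VFX pipeline): each pixel is blended between the sharp render and a blurred copy, weighted by how far its depth sits from the chosen focal plane.

```python
import numpy as np

def box_blur(img, k=3):
    """Cheap k-by-k box blur, used here as the 'out of focus' layer."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def apply_dof(img, depth, focus, max_blur_dist):
    """Per-pixel blend of sharp and blurred images: pixels near the focal
    depth stay sharp, pixels far from it go fully soft."""
    blurred = box_blur(img)
    w = np.clip(np.abs(depth - focus) / max_blur_dist, 0.0, 1.0)
    return (1.0 - w) * img + w * blurred
```

Pixels lying exactly on the focal plane pass through untouched, which is why a per-pixel depth map gives finer control than blurring the whole frame.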

Above: The ad was edited inside the Luma AI app.

LBB> Now that you have that experience of working with NeRF, how useful do you think it’s going to be for filmmakers in a broad sense? Do you have any other ideas percolating which you’re excited to try out?  

NORTON> I think this is going to revolutionise filmmaking. Earlier I said NeRFs represent light behaviour over space - that’s because, for now, they don’t work over time. We can currently only capture frozen moments, but that’s coming. We will be able to capture NeRFs as moving scenes. And then Tarantino will be screaming that a movie shot using NeRFs is not a real movie.

I joke, but in essence you would shoot once and explore coverage later, which is MIND BLOWING. Shooting would be much more about the performances and geography of a scene, as opposed to moving a busload of people every time you want to change the camera angle. It will be similar to what Cameron is doing with the Avatar movies, except his actors are wearing tight suits with ping pong balls and he has to have thousands of people build the CG actors and the world afterwards. With NeRFs, you get the whole set built in a few minutes, in realistic CG with real-world lighting. This is the beginning of a filmmaking revolution and I couldn’t be more excited to have been one of the first ones shooting with it.

In the immediate future, I’d love to play around with a camera array. Shoot with a ton of cameras to get some incredible frozen moment stuff. For instance, an actor diving into a pool and water splashing around them would make for a great way to showcase the tech. 

Above: A Twitter thread from Karen X. Cheng, breaking down more aspects of the innovative shoot. 

LBB> What was the biggest challenge you encountered with the McDonald’s ad, and how did you overcome it?

NORTON> The biggest challenge was explaining what the technology was! I literally lost days on Zoom calls explaining what we were doing and why. I think some people still don’t get it, even after working on this. It can be very confusing, because they saw a Steadicam moving around a table with people sitting still, and so they thought that’s what we were shooting. But we weren’t. That was just us scanning the scene.

I think traditional production struggled a little bit to adapt to the tech - we had to invent a workflow from scratch because no one had done this before. There were a lot of sleepless nights for sure! Luckily I found some key people that really got it, and helped me to achieve my vision. Everyone learned a ton by the time we were done. 

LBB> Are you a filmmaker who generally enjoys working with the latest technology? If so, are there any advances in filmmaking tech which are especially exciting to you right now?

NORTON> I do. But I try to be mindful of the pitfalls of using technology for technology’s sake (think of the Star Wars prequels!). 

Going back to reenacting scenes in my house as a kid, I’ve been really excited about what Unreal Engine has done for filmmakers like me. It’s gotten to the point where you can literally create an X-Wing chase scene in your own home. I’ve been learning it since the pandemic. 

Above: Footage from Unreal Engine is seen in the foreground of this BTS video from the McDonald's Lunar New Year spot. Video credit: @karenxcheng.

Don’t get me wrong, I love production - but our biggest obstacle is always time. Being able to spend time inside virtual sets, like we did on this spot using NeRFs, is a blessing. Even if only to explore and better prepare for a live action shoot. 

LBB> Let's go right back to the beginning - what kind of a kid were you growing up, and when did you first realise that you wanted to be a filmmaker?

NORTON> Growing up I was definitely a shy, nerdy kid. I’ve always loved movies and I would often reenact my favourite scenes at home. I’d pretend my hand was an X-Wing starfighter and position my ‘camera’ (a single open eye) right behind it and fly around as if in a chase scene. Instinctively I’d notice things like perspective, shallow depth of field, and cinematic bokeh in the background.

My brother worked in advertising and I would visit film sets whenever I could. I remember loving that magical moment when the 35mm film camera would start rolling and the shutter made the crappy, soap opera VTR image suddenly turn into a MOVIE. 

I first realised that filmmaking could be a ‘job’ when I watched the behind-the-scenes of Who Framed Roger Rabbit at about eight years old. But the only actual record I have of my desire to be a filmmaker is an autograph from actress Maria de Medeiros - who played Fabienne in Pulp Fiction - that read: “Good luck with your directing career”. She shot a French movie at my parents’ house in Lisbon. I must’ve been twelve or thirteen.

LBB> As well as directing, you’re also a colourist and editor. Do those additional skills influence your mindset when you’re sitting in the director’s chair, and by the same token, do you think that being a director has an impact on your approach to editing?

NORTON> Editing has always influenced my mindset as a director. I cut in my head all the time. And I tend to previs, even if just on my phone with friends or family! 

On personal projects, I don’t usually do a lot of coverage, because I already know I’m only going to use this bit and this bit from a certain angle. In advertising it’s different: my job is to give the agency as many options as possible, so I need to shoot with that in mind. As for editing spots for other directors, I now look at dailies and go, “how would I have directed this?”. I’d be on that person for this moment, and this tight for that moment, and I start building it that way. I think it always brings a fresh point of view to the project.

But being a colourist doesn’t influence much of my directing, honestly. I hire cinematographers that I trust, and I let them do their thing. But one thing that has definitely impacted my directing is screenwriting, which is a fairly recent development. It’s changed how I think of and experience movies forever. Gaining a deeper understanding of story has enriched and added layers to the work I do. 

LBB> Is there a particular type of project you’d be interested in working on in the near future? If so, what would it be? 

NORTON> I’m always looking for a fresh way to create images, whether it’s through AI or other new techniques, the twisting of stereotypes, or bending story structure. Visually magical pieces have always really excited me, like some of Glazer’s work and the recent Burberry ads or Lacoste spots. But at the end of the day, it all leads to storytelling and my longform ambitions.

LBB> Finally, imagine you were able to travel back in time and give yourself one piece of advice at the start of your career. What would it be and why? 

NORTON> “It’s going to take time”. 

Even though it seems like success happens overnight, most successful people I know have been doing this for like ten freaking years or more before hitting it big. 

Some of these people have done absolutely terrible work that we’ll never see, and had to reset and find themselves again. It’s important to understand that perseverance, razor-sharp focus, and a hefty dose of luck all play a massive role.

Oh, and push cynicism off a cliff. That shit is toxic.



Genres: Visual VFX

Categories: Restaurant, Food

Great Guns London, Mon, 23 Jan 2023 09:13:00 GMT