Jonathan Coomes is a sound designer and sound mixer in Los Angeles and an active member of Motion Picture Sound Editors and the Motion Picture Editors Guild. He has worked on numerous films, episodic series, and video games in a variety of roles. Epic Lab had the pleasure of talking with him.

Jonathan Coomes, in terms of sound design, what is the most challenging film genre for you?

Drama. Absolutely. Horror can be as big as you want it, and action movies often have cool moments for me to play against, while comedies are usually over the top. Drama requires a lot of restraint and subtlety. It should be grounded in reality until it isn’t. Sci-fi might be a close second because we have to come up with sounds that don’t exist in today’s world.

In your opinion, what is the secret of good sound design?

“Sound design = Reverb + Reverse + EQ”. It’s an anonymous quote that always comes to mind whenever someone asks about sound design, and by sound design I’m referring specifically to the subjective drones, whooshes, stings, hits, risers, etc. It’s a formula that does often work. I frequently use reversed reverbs as sound design elements. Of course, the sounds you start processing have to be the right ones for the moment. Every project is unique, so there is not one set of sounds I use on everything. The secret to good sound design is very much in line with film composing: It should influence how you feel in the scene without taking you out of the moment. Sometimes it has to be very subtle, felt not heard, and sometimes it has to be huge.

Likewise, what makes good film music for you?

Good film music must do two things: First, it tells us how we should feel. This is an oversimplification, but the emotions elicited by the music influence how we perceive a scene as it unfolds. Second, the music shouldn’t distract from the action on the screen, and it should allow the dialogue to be heard as intended. For instance, if the music is too busy during a dialogue-heavy scene, it’s going to be mixed low or cut entirely. Ultimately, we’re trying to tell a story, so all of the elements have to work together to tell it to the best of our abilities.

How do sound design and film music affect each other, and how do you balance them in the mix?

They are completely entangled. They exist in the same frequency spectrum and add to RMS and peak levels in similar ways. As a sound designer and mixer, I like to avoid adding drones and tones that would conflict with music or be out of key. With hits and stings I’m conscious of key as well, but I’m more concerned with timing the hits to accentuate the score. I do like to beef up big music stings with my own low frequency elements, explosions, drums, etc.

How would you describe the characteristics of really bad sound design?

Bad sound design is generally anything that detracts from the story or is unsuccessfully executed. Also, using the same sound over and over again, especially in quick succession, becomes very noticeable. If it’s not working in the mix, we’ll probably just mute it.

Let’s take a look at some specific problems and situations often encountered in sound mixing and sound design. For example, the sound recorded on a film set can be compromised by background noise. How do you deal with that as a sound mixer?

Noise reduction on dialogue is a very delicate balance between lowering the noise floor and not sucking the life out of the dialogue. Some noise reduction tools will reduce the high-frequency content of the dialogue, while others will cause digital artifacts if pushed too far. I generally try to use a light touch with a variety of noise reduction tools. I have a lot of experience cleaning audio, removing artifacts, and reducing noise, so I can remove a lot of undesired sounds. However, if the noise floor is too high or there are other unwanted sounds in the production track, it may not be possible to reach the quality we all want, so we might need to do ADR or look for other options. Ideally you would capture it as cleanly as possible so I don’t have to over-process the production track.

From a sound design standpoint, how do you treat a scene that emerges as a flashback or a memory of a character?

This is something I see a lot of, and the sound design choices are really determined by the editor. The type of flashback determines how I treat it. If it’s a lot of quick cuts and flashes, then I hit them hard with booms and flash sound effects. If it’s a slow dissolve into a flashback then I use something much more subtle. If it’s dreamy or scary, I’ll probably add some tones and drones throughout. Often I will let music take over in these instances, but every project is different.

Let’s talk about backgrounds for a moment. In sound design I often struggle to find good city backgrounds, for example. Are backgrounds and their volume ever a subject of controversy in the editing rooms you work in?

Backgrounds are hugely important, though they are often low in the mix. Backgrounds help with room tone to smooth dialogue edits, and they can add activity that places you in the environment. I typically create city backgrounds from multiple layers of traffic at varying distances, plus birds, pedestrians, or whatever else the scene calls for. I wouldn’t say backgrounds create much controversy, but if they’re sticking out or distracting, they are probably too loud.

Are there situations where you recommend a director or editor not to use film music at all for a specific scene? If yes, what kind of scenes would that be?

There are certain instances where a lack of music is effective. Silence is also a creative sound choice. Sometimes after a big score moment or action sequence it’s more effective to remove the score to give the audience a chance to breathe, or to draw contrast with the previous or next scene. Usually though, that has already been taken into account, so I don’t often need to advocate for pulling music out.

There exists a large number of sound effects libraries nowadays. Is it still necessary to record Foley effects at all?

While it is true that there are many excellent libraries for Foley, and I do use these frequently, there are still sounds that are better captured on a Foley stage, recorded in sync to picture. For instance, I find that I can’t get the nuance on footsteps from libraries that I can on a Foley stage with a trained Foley artist. Also, sometimes it’s so much faster to have a Foley artist record exactly what you need than to try to create it out of library effects. With that said, libraries certainly reduce the time and expense of the Foley stage. Foley recording is one of my favorite parts of the whole post-sound process, so I don’t want to see that go away completely.

Looking at dialogue in big Hollywood productions, people outside the film industry often wonder what the estimated percentage of on-set recorded dialogue versus dubbed dialogue (using Automated Dialogue Replacement, in short, ADR) is. What is your experience in that regard?

The amount of ADR used is dependent on the client and the needs of the project. ADR adds significant expense, paying for the stage and the actor’s time, so many projects seek to minimize the amount of ADR we do. Sometimes, however, we add lines to rewrite scenes, or the production recording is unusable and has to be rerecorded. As a percentage of ADR to total dialogue it can range from 0–100%, but I would guess the average is somewhere between 5 and 10% on smaller projects and closer to 50% on big budget blockbusters.

The quality of instrument sample libraries (e.g. orchestral instruments by Spitfire Audio) has become stunningly good in recent years. Is the era of live film music recording slowly coming to an end?

Most scores delivered to me are programmed by the composers. Live recording of a score generally adds significant expense. So while some instruments may be played and recorded live, much of the score is often created from samples from these increasingly high-quality libraries.

For those interested in the technical aspect of sound mixing and design, what tools do you personally use?

The industry standard in Hollywood for postproduction sound is Pro Tools, so you have to at least be able to deliver a Pro Tools session for the dub stage. For dialogue cleanup, I use iZotope RX in editorial and Cedar DNS in my mix session. I also use Soothe 2 by oeksound on my dialogue bus. I use Soundminer for my sound effects database. I almost always have ReVibe 2 reverbs in all my mix sessions, but there are other reverbs that I use, like Indoor by Audio Ease. These I use on almost every project, but there are several other tools in my toolbox that I use on an as-needed basis.

The film industry is changing rapidly. What is the biggest challenge as a sound designer nowadays?

The biggest challenge now that technology has gotten so advanced is doing more work in less time with lower budgets. Now that most people can afford a camera and picture editing software, the financial threshold to complete a film is certainly lower than it was 15 years ago. Plus it’s important to stay up to date on the tools that are being used in our industry to keep streamlining workflows.

Has 7.1 cinema transformed your workflow in any way?

I’ve basically been finishing most projects in 5.1 for my whole career. The approach to 7.1 is very similar. Immersive sound like Dolby Atmos and DTS:X is a game changer. The principles, however, are very similar: When editing, it’s good to provide discrete sounds that can be used in surrounds and/or overheads. Ambiences are effective in the surrounds to really give you the sense of being in the environment. Larger action sequences may use the surrounds as things fly off and on screen. Any format that uses an LFE channel benefits from added low-frequency elements. But just because you have the channels, it doesn’t mean you have to use them all the time.

Generally speaking, how much creative freedom do you feel you have as a sound designer in the post production process?

With sound design, your creative freedom is limited only by your imagination and ability. However, there are expectations that vary based on genre and the individual projects. Audiences have been conditioned to expect certain types of sounds with different genres. Plus, if it’s onscreen, there should be a sound for it. I don’t think of these constraints so much as limitations but more as parameters that shape the direction we go with sound design. Once again, we’re telling a story, and the sound design needs to work with dialogue and music.

Looking back, what are the most important lessons learned / recommendations you would pass on to aspiring sound designers and sound mixers seeking a professional career in the film industry?

I never stop learning. I learn something new on each project, whether it’s cutting a sound I’ve never created before, using new plugins, or trying new track layouts. The best way to learn something is to do it. However, the most important things in this workflow are organization and track layout. You can cut the coolest sounds, but if the mixer can’t decipher what’s going on, they may not be able to spend the time necessary to carry out your intentions. All the audio files need to make it to the stage with your delivery session. Keep in mind there is not one right way to do this, but there are certainly wrong ways.

What would be your dream project in the future?

My dream project is probably a big sci-fi production with the time and budget to do it right. But when I get to work again with any of the many filmmakers I’ve enjoyed working with over the years, that can be a dream project in its own way, whether or not the time and budget are there.

Thank you very much for the interview.

Visit Jonathan’s IMDb profile here.