Community Media Discussion – Audio Editing Tips

This week in our Community Media Discussion, we’ll be chatting about audio editing using Audacity. What should we look out for when editing a feature or a discussion? Are there any tips and tricks that help keep the audio sounding natural and engaging? Where does editing start? What are the features of an audio editing programme like Audacity? And what can we learn to hear as we try out different forms of recording?

Join us for our regular discussion by signing up on Patreon for as little as £2.50 per month.

Last week I ran a session on podcasting: how to plan and prepare, and how to make simple recordings. We needed more time to look at audio editing and the principles of audio mixing that make a podcast sound good and flow well, so I thought it would be a good idea to focus this week’s Zoom session on what we can do with audio, and how to get a reasonable edit from the recordings we have collected. [I’ve added some YouTube videos at the bottom of the page that you might find useful].

Zoom Recorder

Editing doesn’t start once the audio recordings have been saved and loaded into an audio editor like Audacity or Audition. The editing process starts as soon as the decision is made to capture an event and speak to people about their experiences. The decisions taken before recording have a massive influence on the process of editing and mixing the content into a coherent narrative flow. The best advice is to edit in one’s head first, and to plan to keep the process as simple as possible.

There is nothing to be gained by being lax in anticipating what forms of audio can be captured, but this comes with experience and a more developed sense of how audio can be used in different ways to represent and shape a topic. Most of my podcasts are topic discussions, with two or more people discussing something around a table. This is the simplest form of recording I’ve developed, and I try to avoid making edits where I don’t need to. I tend to just ‘duck’ the audio in between the sections where different people are speaking, to remove extraneous background sounds.

I find it incredibly difficult to cut material out of a conversation to meet a desired programme length. With a podcast, the length can be anything from a few minutes to a few hours. With a radio programme, there is typically a fixed length, so content has to be made to fit the intended duration, and that means making choices of what to cut out.

Because I want to avoid editing the audio, I usually ask for no swearing, and I expect my contributors not to say anything that might cause offence or necessitate a right-to-reply on matters of public controversy if the material is broadcast. I tend to stick to the Ofcom Broadcast Code guidance, as this is a rigorous framework for any media content and has the benefit of ensuring that a podcast can also be broadcast as a radio programme.

The most difficult part is deciding what the format of the podcast will be, and how it can be shaped into something engaging for the listener. I’ve adopted a couple of principles that help me achieve something quickly. I don’t like taking ages on a podcast; choosing the segments to include and making them flow together takes a lot of time to do properly, which is why radio and audio documentary producers have to work full-time, so they can craft a story and give balance to the voices that are being incorporated.

If I can, I don’t include myself in the programme or podcast. I prefer to let the story unfold as the people I’ve been speaking with tell it to me. My job is to capture people’s thoughts and experiences, but to do this in a way that minimises my presence. It’s never about me, as far as I’m concerned, but it is more difficult to make the narrative flow without putting in excessive amounts of signposting. Who am I speaking with? Where are we? What’s the topic? And so on.

I always ask people to introduce themselves in as open a way as possible. Some people just give a name check, which is frustrating, as I prefer that people say hello and then introduce themselves. This makes it easier to introduce them in their own words. I also ask them to repeat the question, in their own words, wherever they can, so that any question I ask can be removed, and they are leading themselves and the listener into the topic rather than me doing it.

It’s essential to try and get as much background sound as possible between the questions and answers, as this can be useful for editing later. If the space between the speakers is too short, the edit between different people can sound disruptive and doesn’t flow. The aim with our audio mix is to smoothly take the listener with us on a sound journey. Occasionally, we can use disruptive sounds, but smooth transitions are more effective.

Walter Murch, the famous sound designer and editor, says there are always three sounds happening: the sound that is present, such as dialogue and speech on the topic; the sound that is receding, such as background actuality; and the sound that is incoming, such as a change of location. So having enough material to use for scene setting and transitions is essential.

It’s always worth recording a range of background sounds and actuality; you never know when they might be needed to make an edit between two different speakers work well. Think of sound as punctuation. Spot sounds, as they are called, can be used to introduce a sense of place, and to signify the movement of people from one place or time to another. For example, a recording I did the other week was in a conference hall. I was in the same place, but people were coming and going, so the background sounds changed.

I had to think about capturing individual voices in a busy environment. I’d brought with me a hypercardioid reporter’s microphone, so it was relatively easy to isolate the speaker’s voice and minimise the general din of people talking over their lunch. I set the input level quite low, say three or four, or around thirty percent, and got as close to the individuals as I could. If I had been in a quiet space, I could have set the input level at seven or eight, about seventy-five percent, and still captured the voice well while giving a good sense of the ambience of the space.

Audacity

The standard settings I use for recording are what’s often called ‘DVD Quality’, which is 48kHz and 16 bit. A standard CD is more than adequate at 44.1kHz, but a lot of audio that is available for stock material now comes at DVD quality because it is used with video editing. The extra data that this uses is marginal given the size of data cards and the speed of data transfers and backups nowadays.

I record in WAV format, with no compression or EQ applied to the recording on the device; I always do that later. I aim to record a neutral and clean audio file that can be changed afterwards. The standard settings on most audio recording devices are now excellent, and even a comparatively small data card, say 16GB, can give many hours of recording. A CD-quality file records at about ten megabytes per minute, and a CD has seventy-four minutes of data available at around seven hundred megabytes.
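As a rough check on those figures, here is a short Python sketch of the arithmetic. The stereo, 16-bit and 16GB-card assumptions are mine, matching the settings described above.

```python
# Rough arithmetic for uncompressed (PCM/WAV) recording sizes.
# Assumes stereo, 16-bit audio -- adjust if your recorder differs.

def wav_bytes_per_minute(sample_rate_hz, bit_depth=16, channels=2):
    """Bytes of uncompressed PCM audio per minute of recording."""
    return sample_rate_hz * (bit_depth // 8) * channels * 60

cd = wav_bytes_per_minute(44_100)    # 'CD quality'
dvd = wav_bytes_per_minute(48_000)   # 'DVD quality'

print(f"CD quality:  {cd / 1e6:.1f} MB per minute")   # ~10.6 MB/min
print(f"DVD quality: {dvd / 1e6:.1f} MB per minute")  # ~11.5 MB/min

card_gb = 16  # a comparatively small data card
hours = (card_gb * 1e9) / dvd / 60
print(f"A {card_gb}GB card holds roughly {hours:.0f} hours at DVD quality")
```

That works out at around a day of continuous recording on one small card, which is why the extra data used by DVD quality over CD quality is marginal in practice.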

MP3 and other compressed formats can record for considerably longer, though there is a quality loss from using compressed files. The rule is to record in the best quality that you can, and only compress the file at the end, once you are ready to upload it to the internet for streaming or sharing as a download. Many more compressed audio formats are now available in most audio editing applications, but wherever the content will be distributed, the rule is the same: stay uncompressed through all edits and mixes until the very end, when the file is ready to be uploaded.
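As a minimal sketch of that final compression step, the example below hands a finished WAV mix to ffmpeg (assumed to be installed) to make an MP3 for upload; the file names and the 128 kbps bitrate are illustrative only, and Audacity or Audition can do the same export from their own menus.

```python
# Compress the finished mix only at the very end, once it is ready
# for upload. Assumes ffmpeg is on the PATH; names are hypothetical.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "final-mix.wav",     # the uncompressed master
        "-codec:a", "libmp3lame",  # MP3 encoder
        "-b:a", "128k",            # bitrate for streaming/download
        "final-mix.mp3",
    ],
    check=True,
)
```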

Once I’ve got the audio saved on my Zoom recorder, I can transfer it across to my PC or laptop. I always set up a folder for the project, and within that folder I add a sub-folder called ‘Source-Files’, which is where I load the original files I’ve collected. I then copy these files into the main folder, so I can work on them and get them ready for editing.
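As a minimal sketch, this is roughly what that folder routine looks like if scripted in Python; the project name and the recorder’s mount point are hypothetical, and the folder names are simply the ones described above.

```python
# Set up a project folder with a 'Source-Files' sub-folder for the
# untouched originals, then make working copies in the main folder.
from pathlib import Path
import shutil

project = Path("community-podcast")       # hypothetical project name
source = project / "Source-Files"         # untouched originals live here
source.mkdir(parents=True, exist_ok=True)

recorder_card = Path("/media/zoom-card")  # hypothetical mount point
for wav in recorder_card.glob("*.WAV"):
    shutil.copy2(wav, source / wav.name)   # archive copy, never edited
    shutil.copy2(wav, project / wav.name)  # working copy for editing
```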

If I’ve recorded the content cleanly to begin with, and this is never guaranteed, I usually apply a three-step process, starting by normalising the file to 97% of its maximum peak value. Rather than compressing the file using dynamic processing, I’ve got into the habit of running a light normalisation. Both are effective ways of managing the overall volume level of a file, but dynamic compression brings up the background sounds as well.
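As I read it, normalising to 97% means scaling the audio so the loudest sample sits at 97% of full scale. Here is a minimal sketch of that step using numpy and the soundfile library (the file name is hypothetical); in practice the same job is done with the Normalize effect in Audacity or Audition.

```python
# Peak normalisation: scale the whole file so the loudest sample
# sits at 97% of full scale, without changing the dynamics.
import numpy as np
import soundfile as sf

audio, rate = sf.read("interview-01.wav")  # hypothetical working file

peak = np.max(np.abs(audio))
if peak > 0:
    audio = audio * (0.97 / peak)          # 97% of the maximum peak value

sf.write("interview-01-normalised.wav", audio, rate)
```

Unlike dynamic compression, this multiplies every sample by the same factor, which is why it doesn’t bring the background sounds up with it.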

Normalisation

I then normalise the file again and run an EQ filter over it: a high-pass filter set to kill the microphone rumble, which is the low-end noise that comes from traffic, for example. Once I’ve done this I normalise once more, though that may be a superstition I’ve developed. I do this as a standard process for all files now, even before I’ve looked at them, and I run it as a batch process when I’m editing with Adobe Audition.
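Below is a minimal sketch of the rumble-killing high-pass step using scipy; the 80 Hz cutoff and the file names are my assumptions, and in Audacity or Audition the built-in high-pass and EQ effects do the equivalent job.

```python
# High-pass filter to remove low-end rumble (traffic, handling, etc.)
# while leaving the voice band untouched.
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, rate = sf.read("interview-01-normalised.wav")  # hypothetical file

# 4th-order Butterworth high-pass at 80 Hz; sosfiltfilt applies it
# forwards and backwards so there is no phase shift.
sos = butter(4, 80, btype="highpass", fs=rate, output="sos")
filtered = sosfiltfilt(sos, audio, axis=0)

sf.write("interview-01-cleaned.wav", filtered, rate)
```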

Once I’ve saved the working files, I can then start to make edits to them. The first stage is to clean them up, and remove any extraneous sounds, such as handling sounds or electronic noise when plugging in the microphones. Occasionally, a cable might be faulty and create clicking sounds, so cutting them out helps. I try to leave as much ambient sound in the file as I can because this can always be helpful to manage an edit.

To start with, before we get to multitrack editing, the focus is on editing a single file. If recorded correctly, this might be as simple as topping and tailing the audio so that it stands ready to be linked with other elements of the audio. Most audio editors are now WYSIWYG, so it’s easy to do edits via cut and paste. Don’t forget, there is an undo function. I always treat Audacity and Audition as a destructive form of editing (hence the backup), so any changes should be carefully applied before the file is periodically saved.

Dynamics Processing

Non-destructive editing is much more common in both music and video processing tools, but it is still worth planning periodic saves of the session and the files to maintain their integrity. The only way to start to learn how to edit audio is to practise with material that you have recorded in different places and with different characteristics. Start with a single file, and edit this as a simple and continuous recording. Later you can shift to multitrack editing, where you blend different elements, music and sound effects.

Some people like to spend hours editing their audio, and end up removing all the ‘ums’ and ‘ahs’ from what is said. I find that too many of these removals make the recording sound artificial, and I prefer to keep the flow of the conversation together. The listener wants to follow the conversation, not listen to the fantastic editing and mixing of the producer.

There’s clearly a lot to be noted about the process of audio editing, and this is just a quick run-through off the top of my head. I always encourage people who are new to audio editing to start simple. I’ve never wanted to reach the level of competence where I’m managing hundreds of files all at once, so my level is intermediate. I’m also lazy and never want to spend more time than is necessary to allow the story or discussion to speak for itself. Taking audio to the next level is something worth considering, but most people will settle on a style and a degree of complexity that suits them. Hopefully, they have fun in the process.

We’ll come back to this in the new year.
