Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.


  • Normalizing audio average volume, not peaks, in FCP

    Posted by Dane Frederiksen on May 28, 2008 at 8:41 pm

    I’m a producer trying to play online/offline editor and sound designer so forgive my ignorance…

1) I’m trying to determine best practices for compressing and normalizing audio for TV levels. (My understanding is that -12dBFS is a standard average target level for broadcast, with a +/- 6dB dynamic range, peaking at -6dBFS.) Does that ‘sound’ right?

2) Does anyone know if there is a way in FCP to normalize audio by auto-analyzing the average volume, not the peaks? There is an option in Vegas to select peaks or “average RMS level (loudness)”.

3) What is the preferred method for processing an entire sequence? Should I nest the sequence and then normalize/compress, or just select all and then process with compression and normalization?

4) Is there an easier/better way to prepare/process audio for broadcast?

Admittedly basic questions, but I can’t find a clear answer in the FCP manual or on the boards.

    Dane Frederiksen
    Lead Producer
    Future Studios

PS: Thank you, C.Cow contributors, don’t you just make the world go round!
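For anyone else puzzling over question 2, the difference between the two kinds of normalization the original post asks about can be sketched in a few lines. This is a hypothetical Python illustration of the general technique, not anything FCP exposes; it assumes float samples in the -1.0 to 1.0 range, and the function names are my own:

```python
import math

def peak_normalize(samples, target_dbfs=-6.0):
    """Scale so the single loudest sample lands at target_dbfs
    (this is what a peak-based Normalize does)."""
    peak = max(abs(s) for s in samples)
    gain = 10 ** (target_dbfs / 20) / peak
    return [s * gain for s in samples]

def rms_normalize(samples, target_dbfs=-20.0):
    """Scale so the average (RMS) level lands at target_dbfs
    (the 'average RMS level (loudness)' idea from Vegas)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    gain = 10 ** (target_dbfs / 20) / rms
    return [s * gain for s in samples]
```

Note the practical difference: RMS normalization can push peaks over full scale on dynamic material, which is why it is normally paired with a limiter.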

  • 7 Replies
  • Chris Borjis

    May 28, 2008 at 8:48 pm

    1 = you are correct

    2 = not that I am aware of

3 = select all, normalize to -6, but go to -3 if you have
to bump up overall levels. I usually do that, then test
in random peak areas while watching a VU meter and a digital meter.
Get it up to level, but never let it hit zero. So many
make that mistake, thinking broadcast levels and music are the same.

4 = absolutely, but it comes with a price. Get an audio engineer with Pro Tools (or another audio app they know inside and out) and let them do their job. It sounds much better that way, though I understand audio is usually the first to get its budget clipped.

  • Dane Frederiksen

    May 28, 2008 at 9:13 pm

OK, thanks Chris! So would you tend to compress then normalize, vice versa, or does that not matter?

  • Chris Borjis

    May 28, 2008 at 11:26 pm

I never compress, truth be told, but if you have to, try both ways and see which one works best.

  • Bill Moede

    May 28, 2008 at 11:33 pm

I get good results by carefully adjusting all audio levels to the overall target average level, then applying the FCP compressor/limiter with a ratio of 1.2 - 1.5 to take care of peaks.

  • Michael Gissing

    May 29, 2008 at 7:44 am

There is a lot of misinformation in these answers. Broadcast levels do vary, but there are set international levels for SMPTE and EBU broadcasters, which limit peak levels to -10dBFS (digital full scale) in NTSC land and -9dBFS in PAL land.

Reference at -12 is not the common broadcast standard: -20dBFS is the SMPTE standard and -18dBFS is the EBU standard. The bottom line is to ask your broadcaster what their specs are.

Pro audio post people all use high-quality compressors and limiters to achieve these levels, and get high intelligibility of dialog by EQing as well. FCP, Vegas and all the other video edit tools do not have this quality of dynamics or EQ. Normalising is not going to come near to making a proper broadcast-quality soundtrack, and nothing automated can come close to human perception.

Sound post is not, and never will be, something that a machine can do. As this is the most misunderstood aspect of video production, a keen amateur is unlikely to come close to a sound post pro. You might make a better fist of graphics or grading if you are trying to cut budgets.

    The price of sound post compared to the value it adds to the finished product makes it the cheapest aspect of professional program making. My advice is to cut corners in every other aspect of production before under funding sound post. Just do it once with a professional and you will see how much more value you get doing it that way.
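The arithmetic behind the levels quoted above is worth spelling out: headroom is just the gap between the reference tone and the permitted peak. A small sketch using the SMPTE and EBU figures from this post (function names are my own):

```python
def dbfs_to_amplitude(dbfs):
    """Linear amplitude relative to digital full scale (0 dBFS = 1.0)."""
    return 10 ** (dbfs / 20)

def headroom_db(reference_dbfs, peak_limit_dbfs):
    """dB of headroom between the reference level and the permitted peak."""
    return peak_limit_dbfs - reference_dbfs

# SMPTE: -20 dBFS reference, -10 dBFS peak limit -> 10 dB of headroom
print(headroom_db(-20, -10))
# EBU: -18 dBFS reference, -9 dBFS peak limit -> 9 dB of headroom
print(headroom_db(-18, -9))
```

Compare that with the -12/-6 numbers from the original post, which leave only 6 dB of headroom, and it is clear why -12 reference material reads hot against these specs.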

  • Chris Borjis

    May 29, 2008 at 4:36 pm

    [Michael Gissing] “Reference at -12 is not the common broadcast standard. -20dbfs is SMPTE standard and -18dbfs is EBU standard.”

Oops, my bad, you’re absolutely right.

I never do anything at -12 reference, and I’m still dumbfounded as to why FCP defaults to that.

  • Edward Corter

    February 19, 2014 at 2:09 pm

I am a broadcast electronics engineer: I write Verilog for FPGAs and assembly language for microprocessors.

I disagree with the statement that “no machine can normalize audio.”

I believe that an audio normalizer can be implemented in an FPGA or a microprocessor.

You can write a state machine or code to make the same listening decisions that a human does.

So, give me functional specifications.

I would want to write a module that would learn “normal” when a button is pressed,

sampling the level over a period of time, of course, and learning two things:

normal peak max dB
normal averaged audio level dB

Yes, averaged.

Then monitor for drops or increases in these levels.

Tuning of my design would be required:
the variable of how many times you want to re-test this
change in audio level.

But I think I could do this design.
I have a Xilinx license.
How about you buy the development board, and we get it done before NAB?
2014: not gonna happen.

Almost 2015: piece of cake.
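For what it’s worth, the learn-then-monitor behavior described in this reply is easy to prototype in software before committing anything to an FPGA. A hypothetical Python sketch; the class name, window size, and tolerance are my own assumptions, not part of the poster’s design:

```python
import math
from collections import deque

class LearningNormalizer:
    """Sketch of the 'learn normal, then monitor' idea: capture the
    normal peak and averaged (RMS) levels over a window when a button
    is pressed, then report drifts beyond a tolerance."""

    def __init__(self, window_size=48000, tolerance_db=3.0):
        self.window = deque(maxlen=window_size)  # rolling sample buffer
        self.tolerance_db = tolerance_db
        self.normal_peak_db = None
        self.normal_avg_db = None

    def feed(self, sample):
        self.window.append(sample)

    def _measure(self):
        peak = max(abs(s) for s in self.window)
        rms = math.sqrt(sum(s * s for s in self.window) / len(self.window))
        to_db = lambda x: 20 * math.log10(max(x, 1e-12))
        return to_db(peak), to_db(rms)

    def learn(self):
        """The 'button press': remember current peak and averaged levels."""
        self.normal_peak_db, self.normal_avg_db = self._measure()

    def check(self):
        """How far the averaged level has drifted from normal, in dB;
        returns 0.0 while the drift stays inside the tolerance."""
        _, avg_db = self._measure()
        drift = avg_db - self.normal_avg_db
        return drift if abs(drift) > self.tolerance_db else 0.0
```

The same structure maps onto an FPGA fairly directly: the deque becomes a circular RAM buffer, the RMS sum becomes an accumulator, and `learn`/`check` become states in the state machine.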
