Norm Kaiser
Forum Replies Created
-
And you resized it on the timeline to what percentage?
-
That’s EXACTLY what I’m looking for! How, specifically, did you do it? The dotted lines are a transparent .PNG?
-
Thanks for your reply!
I largely understand the difference between fields and frames; what confused me is the 29.97i designation.
There seems to be much confusion over 29.97i on the Internet, judging from my Google adventure two days ago.
As it was explained on the site I found (and I don't know if this is correct), 29.97i is just another term for 59.94i.
There is still one thing that confuses me, though. With 59.94i, you have 59.94 fields per second, yes?
So to render 59.94i to 30p, the rendering agent combines two fields together to make one frame, correct?
But are the two different fields that become one frame two different moments in time? In other words, suppose field #1 is recorded exactly at 12 midnight. Is field #2, then, recorded at 12 midnight plus 1/59.94 of a second later? Or is the camera “seeing” only 29.97 frames per second and then splitting each one into two fields for storage convenience?
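Just to make the question concrete, here's a quick Python sketch of the two interpretations I'm asking about (purely illustrative; it doesn't answer which one is actually true):

```python
# My own illustration of the two interpretations; the numbers are just
# the nominal NTSC field and frame rates.

FIELD_RATE = 60000 / 1001   # ~59.94 fields per second
FRAME_RATE = 30000 / 1001   # ~29.97 frames per second

# Interpretation A: each field is its own moment in time, so field #2 is
# captured one field interval after field #1.
field_interval_ms = 1000 / FIELD_RATE
print(f"A: field #2 comes {field_interval_ms:.2f} ms after field #1")

# Interpretation B: the camera "sees" 29.97 whole frames per second and just
# splits each frame into two fields, so both fields share one capture moment.
frame_interval_ms = 1000 / FRAME_RATE
print(f"B: both fields represent the same instant; the next pair of fields "
      f"comes {frame_interval_ms:.2f} ms later")
```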
Does this make any sense?
Thanks!
Norm -
Beautiful! Thank you so much!
-
Ah, and another question.
Is placing the bars, tone, and countdown at 58:30:00 on the timecode even necessary if I’m rendering to an MP and delivering via FTP?
-
OK, I think I’m following you.
So what do I do on the timeline in Vegas? Place everything at the 1 hour mark on the timeline and then just render the selected region???
-
Thanks so much for your reply!
So this is why I am so confused. It seems like there's so much guidance saying that you're supposed to have an hour of nothing. See here:
For example, in a video program the first frame of action (FFOA) starts at one hour (typically timecode of 01:00:00:00 in the US, and 10:00:00:00 in the UK), preceding that, 1 frame (or the 2-pop) of tone would be placed at timecode 00:59:58:00 or exactly 2 seconds before first picture.
-
OK, awesome, as usual.
I am understanding you completely now (I think…ha ha).
So the volume control will bring the overall signal down to roughly where it needs to be, the compressor will smooth it out, and the Wave Hammer will serve as a shield, of sorts, to block ANY random sudden loud noise from breaking the threshold.
The volume control is what I was not comprehending. Your explanation that decreasing the volume decreases both the desired signal AND the noise lit the light bulb for me. In a well-recorded piece, the desired sound source is “in line” with the noise, so decreasing the overall volume of everything keeps everything “in line.” I was confusing that with increasing the volume on a poorly recorded piece where the desired audio is very low.
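If I'm picturing it right, the arithmetic looks something like this (just my own rough sketch with made-up numbers):

```python
# Rough illustration: cutting the track volume lowers the desired signal and
# the noise floor by the same amount, so the gap between them (the
# signal-to-noise ratio) doesn't change.

signal_db = -6.0    # hypothetical level of the desired audio, in dBFS
noise_db = -60.0    # hypothetical noise floor, in dBFS
cut_db = -9.0       # volume reduction applied to the whole track

snr_before = signal_db - noise_db                        # 54 dB
snr_after = (signal_db + cut_db) - (noise_db + cut_db)   # still 54 dB

print(f"SNR before cut: {snr_before} dB")
print(f"SNR after cut:  {snr_after} dB")
```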
I’m following you now. Makes perfect sense.
I do have a separate sound card. It’s certainly not a Lamborghini, but it’s not terrible. Can you tell me how I would configure a sound card for broadcast audio levels instead of PC levels the way you did?
And I mix with headphones, not speakers. Speakers would make my wife angry. Ha ha.
Thanks again for all your help. You are invaluable. How the heck did you learn all this? I want to go to the same school. And if you’re a beer drinker, I owe you about a brewery by now.
-
Well, I still haven’t had much success.
Here’s what I did:
– Placed a sample piece of material on the timeline.
– On that sample’s audio track, I added the Wave Hammer plugin.
– On the Volume Maximizer, I set the Threshold to -8.1dB and the Output level to -10dB.
– I then added the Track Compressor.
– On the Track Compressor I have the Threshold set to -24dB and the Amount ratio set to 3:1.
I then generate the loudness log, but my Integrated LUFS is still at -15.29.
I’m still obviously missing something. At this point, the only way I can get it down to -24 LUFS is by decreasing the volume on the track bit by bit until I get there.
But decreasing the volume on the track seems so counterintuitive to me, because when I render the piece and place that rendered piece back on the timeline, the waveform is so tiny!
And that’s where I’m lost. Is that “tiny waveform signal” going to be “right” when broadcast? It seems to me that signal would have to be amplified just to be heard. I get that my PC is likely optimized for 0dB, but doesn't reducing the volume in Vegas smash the signal such that when it's broadcast both the desired audio AND the noise floor get amplified?
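For what it's worth, here's how I worked out how big a cut that would be (a rough sketch; I'm assuming a static volume change moves the integrated LUFS reading by the same number of dB, which seems to match what I'm seeing):

```python
# Rough sketch of the gain cut needed to move my measured loudness to the
# CALM target. Assumes a static volume change shifts integrated LUFS 1:1.

measured_lufs = -15.29   # what the loudness log reported
target_lufs = -24.0      # CALM Act integrated loudness target

gain_change_db = target_lufs - measured_lufs   # about -8.71 dB
print(f"Volume change needed: {gain_change_db:.2f} dB")
```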
-
OK, bam, I’m following you *mostly* now. Thank goodness you are here. I realize now that what I was doing with the Wave Hammer is dumb. And you were politely telling me so.
I get it. So to meet CALM my overall average must be -24 LUFS but I can occasionally peak at -10dBFS, so long as my average is still -24. I gotcha. And what I'm doing now with the Wave Hammer is “smashing” the peaks that I should be allowing and effectively crushing sound quality.
OK, so I want to back up and do it the right way. The iZotope tool is very tempting, but if I can save $350, I’d obviously prefer to do that.
So here’s what you suggested in your earlier post:
If you want to do it yourself the old-school way, I would start by placing Wave Hammer Surround on the master audio bus and set it to hard limit at whatever the peak value is (i.e., -10dBFS). Start with the “[Sys] Master for 16-bit” preset and adjust the Output Level on the Volume Maximizer to -10dB or whatever their peaks need to stay under. Then use a SMPTE 1KHz test tone @ -20dB to adjust the Threshold until the peak meters read -20 dB again to compensate for the limiting so that your volume is still accurate. This is the same test tone that you would use with “bars & tone” and the beeps for your slate count down if they want those.
Let me digest this:
Start with the “[Sys] Master for 16-bit” preset and adjust the Output Level on the Volume Maximizer to -10dB
Got it. I know how to do this.
Then use a SMPTE 1KHz test tone @ -20dB…
OK, I need help. Where do I get the test tone and where do I put it? On the timeline?
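While I wait for an answer, here's a rough Python sketch of how I could generate the tone myself and drop the resulting WAV on the timeline (just my guess at what's wanted: a plain 1 kHz sine whose peaks sit at -20 dBFS; the file name and length are placeholders):

```python
# Rough sketch (my own workaround, not an official method): write a 1 kHz
# sine wave with its peaks at -20 dBFS to a 16-bit, 48 kHz mono WAV file.
import math
import struct
import wave

SAMPLE_RATE = 48000          # Hz
DURATION = 10                # seconds of tone
FREQ = 1000                  # 1 kHz test tone
LEVEL_DBFS = -20.0           # target peak level
amplitude = 10 ** (LEVEL_DBFS / 20) * 32767   # peak sample value for -20 dBFS

with wave.open("test_tone_1khz_-20dBFS.wav", "w") as wav:
    wav.setnchannels(1)      # mono
    wav.setsampwidth(2)      # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    frames = bytearray()
    for n in range(SAMPLE_RATE * DURATION):
        sample = int(amplitude * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
        frames += struct.pack("<h", sample)
    wav.writeframes(bytes(frames))
```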