Forum Replies Created

Page 11 of 15
  • Audio is not what is taxing your system. A lack of resources, meaning drive space, RAM, or processor power, probably is.

    I import .WAV audio all of the time into my projects and, as long as I have plenty of space on my boot drive as well as my media drive, I don’t get crashes.

    Are you compressing video at all?

    What if there were no hypothetical questions?

  • Remember, with respect to playback speed without dropped frames, if you are looking at HD video that is not compressed, you may need an array to get it to play back well.

    One 1T drive will certainly be slower than a 1T array that is striped RAID 0. Such an array would consist of two .5T drives striped for speed. Arrays like that are more expensive than just one drive but the data rate that they can sustain is much higher than the data rate from one drive.

    The issue with your H.264 files is certainly a case of Premiere Pro trying to decode at the same time it is playing back, and an array doesn’t help in that case because the issue is processing power. But where you have video that is completely uncompressed, you may find that 7200 RPM drives cannot keep up with the data rate demanded by HD video, even though a single drive can usually handle full-resolution CCIR 601 NTSC.
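    As a rough sketch of why an array matters, the sustained data rate of uncompressed video can be estimated from frame size, bit depth and frame rate. The figures below are my own illustrative approximations (8-bit 4:2:2, nominal 30 fps), not exact broadcast numbers:

    ```python
    # Rough data-rate estimates for uncompressed video (illustrative figures).
    def data_rate_mb_per_sec(width, height, bytes_per_pixel, fps):
        """Sustained throughput needed to play uncompressed frames in real time."""
        return width * height * bytes_per_pixel * fps / 1_000_000

    # CCIR 601 NTSC: 720x486 at ~30 fps, 8-bit 4:2:2 (2 bytes/pixel)
    sd = data_rate_mb_per_sec(720, 486, 2, 30)      # about 21 MB/s
    # 1080-line HD: 1920x1080 at ~30 fps, 8-bit 4:2:2 (2 bytes/pixel)
    hd = data_rate_mb_per_sec(1920, 1080, 2, 30)    # about 124 MB/s

    print(f"SD: {sd:.0f} MB/s, HD: {hd:.0f} MB/s")
    ```

    A single 7200 RPM drive of that era sustains well under the HD figure, which is why a RAID 0 stripe helps for uncompressed HD playback.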

    What if there were no hypothetical questions?

  • Mark Hollis

    August 10, 2009 at 4:42 pm in reply to: Mixing sound in Première CS4

    The way I mix is the way I have always mixed since I made the transition from linear to non-linear editing.

    I take the track(s) that I need to hear and set them, during the edit, to a proper level. I add EQ, volume and other effects in the edit as I need them, playing the material back and watching my levels accordingly.

    To the extent that I have a show that is made up of sections, I will edit each section on its own timeline, making a new sequence within my project for each section. I listen to each section and use the audio tool to monitor things, but the only levels I ever set on any track are overall levels, not dynamic ones.

    I do dynamic changes on the timeline, using keyframes, expanding the audio track so that I can see the waveform and see the keyframes and make changes to the sound on the timeline as I am playing it back and listening to it while I edit.

    Then, when it comes time to assemble the whole show, I simply copy and paste my sequences (or export premixed AVIs) and place them in proper order in the show, building an open and close on a “Show Build” sequence timeline.

    Premiere handles things very well that way.

    I used to mix live for stage as well, with a mixer. And I would use tape to ID each mic or sound source and, when we did our sound check, I would trim each input level where it ought to go within the mix, so that when I opened the pot I’d take it to the correct level instantly. If stuff needed to be low, I’d set a piece of tape on my board at the stopping point and I’d have the fader right where it was supposed to be pretty quickly.

    I never had a problem mixing live that way. You just have to set yourself up to not fail.

    So I don’t do live mixes in Premiere. I do them all preset. That way I always have good mixes that sound fine and I have very few occasions where I need to change sound levels before air.

    What if there were no hypothetical questions?

  • OK (and I’m probably hung up on the whole subtitle thing), but the timecode you generate will be a video timecode and not a film timecode. So videotaping the result of a projection of 24 fps film (unless you are in Europe or PAL areas of Asia, where film is played back at 25 fps) will cause an error in timecode, but not in time. And that will cause problems for anyone trying to do subtitles for the film, because your timecode will not match the 24 fps frame rate of the film.

    So if you use NTSC or an American form of television (or PAL-M in Brazil), any timecode you create for captioning will be the wrong frame rate.
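    To make the mismatch concrete, here is a small sketch (my own illustration, not a captioning tool) of how 3:2 pulldown maps 24 fps film frames onto NTSC video frames. Frame indices stop lining up one-to-one almost immediately:

    ```python
    # 3:2 pulldown: every 4 film frames (A, B, C, D) become 5 video frames,
    # via the field pattern AA BBB CC DDD. Nominal mapping only; real NTSC
    # also slows playback by 0.1% (23.976 fps), which this sketch ignores.
    def film_to_video_frame(film_frame):
        """Video frame in which a given 24 fps film frame first appears."""
        cycle, pos = divmod(film_frame, 4)
        field_start = (0, 2, 5, 7)[pos]   # field offset of frames A, B, C, D
        return cycle * 5 + field_start // 2

    # One second of film (24 frames) spans 30 video frames:
    print(film_to_video_frame(24))   # 30
    ```

    So a subtitle cue written against the video timecode lands on a different frame count than the same moment in the 24 fps print.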

    What if there were no hypothetical questions?

  • Mark Hollis

    August 7, 2009 at 4:31 pm in reply to: Can’t capture SD footage from XLH1

    I agree with Nathan. I have done lots of capture over SDI cables where I know that audio and video are arriving down the pipeline, and if the material is in the wrong format (i.e., HD if your editor is set up for SD, or SD if it is set up for HD), the editor will tell you that there is nothing flowing down the pipeline.

    The XLH1 may well be confused and be putting out an HD signal (as its default) over firewire while playing back an “upconvert” on its own monitor. I’ll bet there is an arcane reference to this in the camera’s manual that doesn’t make things completely clear but covers it. Cycle power on the camera with it in VCR mode, play the tape, and see if that fixes the problem. Otherwise, set it for SD playback in its menus (if that is a menu choice) and cycle power again.

    What if there were no hypothetical questions?

  • Mark Hollis

    August 7, 2009 at 12:55 pm in reply to: do i need to render before exporting to tape in cs4?

    Give it a try if you like, but know that you risk needing to render anyway and redoing your export.

    Two things can happen with unrendered material:

    • Dropped frames. This is the most noticeable issue when you have unrendered material that cannot play back fast enough.
    • Audio loses sync with video. This is sometimes a little harder to notice when you are doing a playout but certainly noticeable on the back end.

    Knowing your risks here, everything may work out just fine, and it may work out 90% of the time. But you have to weigh that against the time it takes to redo anything that fails.

    What if there were no hypothetical questions?

  • Mark Hollis

    August 6, 2009 at 10:02 pm in reply to: Why do we even need a video card?

    I would agree with Brian. What threw me off was your comment about going “generic.”

    But I should caution you: The more displays you run on your computer, the more you tax the resources of that computer and that includes a monitor that is just mirroring another.

    As everyone transitions to 64-bit (including Adobe applications) this may be less of a problem, save in computers that don’t have a lot of RAM.

    What if there were no hypothetical questions?

  • Mark Hollis

    August 6, 2009 at 1:21 pm in reply to: Why do we even need a video card?

    By “video card” do you mean an I/O subsystem, like the AJA or the Blackmagic card that will control a VCR?

    There are obvious reasons for those — anyone who is ingesting video from tape will need such a card.

    But Zvi specifically talks about disabling the driver for his computer’s video card, designed to drive his computer monitors. And he’s noticing that a “generic” display works better and produces better results.

    Obviously you need a video card to drive your computer monitors. But the more expensive video cards are designed around creating and shading polygons in 3D space. As such, they’re pretty powerful but you don’t usually use this ability for video.

    Where these cards will help you and increase your ability to get your job done is in the area of effects that may take advantage of their power. If you use the simple 3D effect in Premiere (and I use a really old version) it will take advantage of your graphics card to render that material. Additionally, there are other 3D effects you can apply to video that will benefit from a high-end graphics card. But normal video playback does not benefit from these cards.

    The problem Zvi is having here is with the driver for that adapter. And it is, apparently, so poorly-written that it is actually slowing down performance.

    In a case like that, your first move ought to be to go out and get a driver update, if there is one. You should also complain (loudly) to the card manufacturer and/or computer manufacturer if they installed that card in a stock system. Zvi’s description of how he solved his problem is an outstanding way of complaining. A graphics card (or GPU) ought not function better using a “generic” driver than it does with the driver designed for it.

    But we also ought to consider what “generic” means.

    Does “generic” mean a generic OpenGL graphics adapter? Because if it does, you are getting a serious benefit from the card’s hardware for any 3D work you are doing: the card itself is helping you even as the driver software is hindering you.

    Nvidia uses a proprietary model called CUDA for controlling its cards, and ATI uses DirectX (developed by Microsoft). But OpenGL tends to work with just about everyone’s graphics cards, and if “generic” means OpenGL, you’re harnessing technology that lots of Linux propellerheads have created and refined over a number of years.

    I would like Zvi to try a 3D render using the proprietary driver for his GPU and then try the same render using “generic.” If “generic” is faster, I’ll bet it’s using OpenGL calls, and that points to some serious stupidity on the part of the programmers supporting his graphics card.

    What if there were no hypothetical questions?

  • Mark Hollis

    August 5, 2009 at 4:49 pm in reply to: Question on quality ….

    She’s very nice. Does she date videographers? (not serious)

    Your animation codec looks a little more “contrasty” and if you looked at it in a scope, you might see the blacks at zero IRE and the whites kissing 110.

    Of course, for video, that may be more range than you particularly want, so the “none” “codec” would be right.

    She’s “washed out” in the DV-NTSC codec because DV means 4:1:1 (YUV) compression. DV is “good” compression because the color is compressed and, because we have more rods than cones in our eyes, we don’t perceive the compression very well. This is why S-VHS looks so much better than VHS: despite the fact that it is “color under,” the black-and-white portion of the signal is at a really high resolution.

    But the signal is still compressed. And, so if you compress that further, you are double-compressing video. And that’s bad.

    Use “none.” Then use a high-quality compression program like Sorenson to compress “gently” to get the desired bit rate.
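    The 4:1:1 idea above can be sketched in a few lines. This is my own illustration of the sampling step only (actual DV also applies DCT compression on top of it); the function name is made up for the example:

    ```python
    # 4:1:1 chroma subsampling: for every 4 luma (Y) samples, keep only 1 of
    # each chroma sample (U, V). The eye notices the lost color detail far
    # less than it would notice lost luma detail.
    def subsample_411(y, u, v):
        return y, u[::4], v[::4]

    line = 720                       # active samples per CCIR 601 line
    y2, u2, v2 = subsample_411([0] * line, [0] * line, [0] * line)

    full = 3 * line                  # 4:4:4 sample count for the same line
    sub = len(y2) + len(u2) + len(v2)
    print(full, sub)                 # the chroma step alone halves the data
    ```

    That is before any further compression, which is why re-compressing DV (already subsampled) degrades it further.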

    What if there were no hypothetical questions?

  • Mark Hollis

    August 5, 2009 at 2:44 pm in reply to: Question on quality ….

    …when they output from premiere they do it in fully uncompressed qt files (using either the ‘none’ or ‘animation’ codec, leaving the quality at the highest setting).

    Remember, the video they are working with may be compressed, even though their output may be uncompressed.

    And the “animation” codec is not so much a codec as it is a color space. I believe Premiere Pro works in standard video color space unless you specify otherwise. Animation gives you the range from 0-255 in RGB space, while video gives you 16-235, I believe. Then there are gamma curves that are typically applied to make video and/or graphics line up to desired specifications, such as Rec. 709, sRGB and linear gamma.

    “None” applies no curves or color-space limitations that are not already inherent in the material as captured. It will also create a pretty large file and, frequently, you won’t be able to play the file back on a standard computer unless it is on a disk array.
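    The 0-255 versus 16-235 distinction above amounts to a simple linear remapping. Here is a sketch of one direction of it (my own illustration; the function name is made up):

    ```python
    # Map a full-range (0-255) level into nominal video range (16-235),
    # the "animation" vs. video color-space difference described above.
    def full_to_video(level):
        return round(16 + level * (235 - 16) / 255)

    print(full_to_video(0), full_to_video(128), full_to_video(255))  # 16 126 235
    ```

    Going the other way (expanding 16-235 back to 0-255) is what makes video-range footage look “contrasty” when it is misinterpreted as full-range, and vice versa for the “washed out” look.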

    What if there were no hypothetical questions?

