Forum Replies Created

Page 22 of 25
  • Ht Davis

    March 26, 2015 at 8:22 pm in reply to: Weird problem. Has anyone seen this before?

    Was the footage originally shot in progressive mode?
    It seems the fields are indeed a little messed up, and that’s related to Adobe’s interpretation of the MTS. Most MTS files will be read as 30i or 30p, even if they were shot at 60 fps in progressive mode. You have to transcode the MTS into an intermediate file with a readable header, which fixes the issue. I use AME to transcode the files into ProRes or AVC-Intra. If your client doesn’t know how it was shot, you could use AME to output multiple files in tandem, including proxy versions (I’d set up each frame rate as a separate job with two outputs, so you can check each one as soon as it finishes).

    It looks like at least half the frames were dropped and half the fields are missing. What’s your preview mode? What type of screen? What refresh rate is your monitor/screen driver set to (50 Hz? 60 Hz? etc.)? If you play 25i video on a 60 Hz screen, you can sometimes see artifacts during playback; the same goes for 50i, and for 60i on a 50 Hz screen. The frames are refreshing at a different rate than your screen, so interlacing will be a problem.

    One way to combat the problem is to set the field mode for editing to Progressive, and have AME decompress your video to progressive frames using the Frame Blending setting. Usually you’ll have to match your client’s output needs for your sequence, so set the frame rate the same but the field mode to progressive. If you can get their field mode settings, you can export with those later, turning off frame blending so it drops the unused fields.

    But seriously, it looks like your sequence is at a different rate than your video. MTS doesn’t have an Adobe-readable header (so Premiere guesses). Try an AVC-Intra file, and then auto-create a sequence from that clip.
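To see why mismatched rates cause visible artifacts, here is a minimal numeric sketch. The function name and the whole-number rates are my own illustration, not anything Premiere or your driver exposes:

```python
from math import gcd

def refreshes_per_frame(fps, hz):
    """Repeating pattern of screen refreshes shown for each video frame.

    An uneven pattern (e.g. [3, 2] for 24 fps on a 60 Hz screen) means
    some frames sit on screen longer than others -- that reads as
    judder, and with interlaced fields it shows up as combing.
    """
    def iceil(a, b):              # integer ceiling division
        return -(-a // b)

    period = fps // gcd(fps, hz)  # frames before the pattern repeats
    return [iceil((i + 1) * hz, fps) - iceil(i * hz, fps)
            for i in range(period)]
```

For example, 25 fps material on a 60 Hz screen gives the uneven pattern [3, 2, 3, 2, 2], while 30 fps on 60 Hz gives an even [2], which is why matched rates play back smoothly.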

  • When you created your project, where did you put the preview files? That’s the reason for the drop in HDD space, along with caching. Set both of those to a location where the previews can fit, away from your other drives. I always have three eSATA-or-faster RAIDs running: 1) work files (ProRes video), 2) a backup of 1, and 3) cache and preview files. Alongside that, I use an internal drive (with its changes backed up) to build a disk image (dynamically growing to a max size) for the main project files, and occasionally have it sit on RAID 3, where I have space cut out for imaging to a backup. This lets me work safely and efficiently. By containing projects in disk images that allocate as they grow, I can store each one as a single file for RAR splits, and spread those files across archival discs (note the difference between an HDD “disk” and optical media “disc” here) for multiple long-term backups.
    I use a MacBook Pro with a 2.16 GHz Intel Core 2 Duo, 4 GB RAM, and 256 MB graphics, running CS6, for my main edits. I also make proxy video files for use away from the SATA drives and store them in the project’s disk image. While I’m editing I use proxies and render out my effects as previews that are I-frame-only MPEG.

    I agree that you should be wary of ramping the speed so much, just not daunted; remember, you are altering the total number of frames each clip contributes to your finished product, so there will be several steps to your processing. Don’t limit your workflow to one machine, one drive, etc. Try AE, and maybe a 2–4 TB RAID. Get the right interface too (eSATA is good, but USB 3 or Thunderbolt is better). Make a backup of whatever you store on the RAID. If you have multiple machines, try render-farming the whole thing through AE (three machines or more; otherwise it will take a lot longer). Failing that, try rendering out previews of each clip on its own: set the work area to that clip only, render the effects in the work area, then repeat for the others. It will take several renders, but each rendered preview will take less time, and the calculation will be clip-local instead of sequence-local (the values can change drastically when calculating the motion and reframing for the previews). It will also produce better-looking previews, and ultimately better-looking output, since the estimations are cached as separate instructions for exports.
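The frame-count point above can be made concrete. A hypothetical helper (the name is mine) for estimating how many frames a constant speed change leaves you with:

```python
def retimed_frame_count(src_frames, speed_percent):
    """Frames a clip contributes to the sequence after a constant retime.

    Speeding up discards source frames; slowing down forces the NLE to
    invent frames by repeating or blending them, which is where preview
    quality is won or lost.
    """
    return round(src_frames * 100 / abs(speed_percent))
```

A 400-frame clip at 400% speed contributes only 100 frames, while at 50% the NLE must produce 800, half of them synthesized.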

  • Answer 1:
    Multicam edits work like this: there is an edit mark where you want to change cameras, but that edit mark is just an edit mark. There are two workflows here, manual and full auto. Full auto lets you use the multicam recorder (basically a script that plays the sequence, waiting for you to click to change cameras); manual is done by enclosing an area with edit marks, right-clicking that clip, and going to Multi-Camera > Camera #, where Camera # corresponds to the video track from the multi-camera sequence you nested, and Audio # corresponds to the audio in your current sequence. Because the switches are handled as plain edits, even when they were recorded by the auto script, you can easily move an edit mark, remove it, or even place marks that don’t switch cameras but let you add effects to an area. So of course you can go back to the multi-camera container, make edits, and come back and change the edits in your main output sequence.
    Answer 2:
    See answer 1. You can make any edit mark you want. You have to tell it when you want to change cameras between marks, but it’s a right click op that’s so easy my 5 yr old nephew’s done it when he visits.
    Answer 3:
    Not really. If you know the key presses on your system that move focus to the next panel, you might be able to jump to the monitor you need, but there’s no effectively easier way natively. If you can come up with a key press that would work with Adobe’s JavaScript (ExtendScript) libraries, you might make a request of a friend who’s handy with JavaScript or Adobe scripting. Personally, I’d really like to have one as well.

  • It’s called HDCP. It’s an encryption/alteration of the signal that gets undone at the other end. HDMI is a two-way communication, but it typically allows devices to show each other only what they support. Non-HDCP-compliant hardware will usually be incompatible with HDCP sources, though you can bypass HDCP with component. They are not, however, the same exact signal. HDCP encompasses audio as well, and without a secondary audio output for full audio formats, you’ll be stuck with two-channel stereo (linear PCM) instead of surround sound. HDMI carries the surround encoding in the signal, and can be passed into a receiver (all of which are HDCP compliant). As the version of this protection changes, source devices are being built with multiple specifications for it. It’s not any one company that’s to blame for this; it’s the entire entertainment industry. They want you to pay every time you watch, in order to make up the losses from pirating. However, you can strip the two signals apart. Since most receivers have an optical connector, you can run surround sound over optical connections, and then use component to send out video.

    While they are right about most HDMI capture equipment, pro equipment will let you strip the HDCP. Most equipment of this type cannot legally be used for anything but connecting a camera’s HDMI output, through the strip/conversion stage, to a converter/output that allows live view or live capture of the camera video. It is illegal to use it otherwise. For HDMI viewing, get a set-top receiver or similar device with an HDMI input, and connect it to what you want over component. For audio, you should probably use an input hub with an optical connector, and software that accepts surround sound input over whatever plug that equipment uses (FireWire or USB).

  • Ht Davis

    March 24, 2015 at 7:32 am in reply to: 5.1 Surround wth Stereo Mix Export “to hot”

    5.1 surround with a Stereo mix?
    Are you taking stereo and mixing a 5.1 or the reverse?

    Never mind, it makes little difference. Just try to remember: name one as the SOURCE and one as the OUTPUT or DESTINATION.

    The output from Premiere and the output from FCP are different when played back. Premiere, when playing back previews, uses a CAF file that has been processed, and when playing a mix, it is all additive. When you want to play out to 5.1 surround from a stereo mix, pass the channels to a submix for each major field, and play back only those fields to see how loud they are before you mix them together. If they are still too hot, you may be getting an overlay gain, where two fields are laying over each other in such a way that they produce a volume gain. If this effect is only audible (you hear it, but aren’t measuring it in any way), then the hardware you’re using for output is the culprit. If it is a measured, on-screen difference, as in the actual decibel count on screen, remember that each application uses its own audio engine. They are different pieces of software, and will probably play the files back differently (with audio, this usually pertains to volume, as some will read amplitude information as an average, and others will play back the amplitude on the fly; Premiere uses audio preview files, or .caf files, and uses the contained amplitudes, not adjusted headers; FCP uses adjusted headers).
    The way to deal with the problem is to use a file type that contains a solid header (WAV is uncompressed PCM audio; MOV or another digital format will have amplitude info that forces equivalency). If it still has a problem, check for any volume keyframes in the clip.
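The "overlay gain" described above is just additive summing. A rough sketch (my own illustration, assuming the worst case of in-phase signals) of why a summed mix measures hot:

```python
import math

def summed_level_db(*peak_amplitudes):
    """Worst-case level, in dB relative to full scale, of channels
    summed in phase.

    Two full-scale (1.0) channels summed additively peak about +6 dB
    over full scale -- a mix that reads "too hot" even though each
    submix on its own sits exactly at 0 dBFS.
    """
    return 20 * math.log10(sum(peak_amplitudes))
```

This is why soloing each submix can look fine while the combined playback clips: the levels add before the meter sees them.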

  • Ht Davis

    March 24, 2015 at 7:08 am in reply to: Premiere Crashing on DSLR footage

    Where is the footage?

    If the footage is on the camera, store it on the computer in a disk image. If it’s a folder copy, you’ve got a bad copy; if it’s on an SD card in a reader, you’ve got a bad reader or an error on the SD card.

    Make another copy of the stream folder. Try having AME (Adobe Media Encoder) transcode the MTS files, or use Premiere to line them up together and transcode them. If that fails, you’ve got an error in the video files on your card and your footage is blown.
    If it works, you should be able to use the intermediate files in Premiere and retain quality, while keeping the workflow fast.
    Editing straight off the camera card with a DSLR isn’t always the best idea. That footage is heavily compressed. Decompress the footage to a full file and a proxy version. Start with the full version in Premiere to build the sequence, then relink to the proxy and interpret the proxy for the sequence size. A little prep work will save you hours of decompress-edit-recompress at your output step.

  • Ht Davis

    March 24, 2015 at 6:58 am in reply to: anyone know what this clip means?

    Did you render anything before? If so, you could be looking at this situation:

    You relinked to an incorrect file or an overwritten clip (one that was given a new name in the filesystem, not from the project) while another file has taken the place of the original; or it has been renamed in the project manager, and the link in the sequence points to another clip with the same name (which could be a nested sequence or another clip that was accidentally given the footage’s name).

    In any case, if the sequence has been rendered before, you are seeing the previews from that render. Render the work area to make it match the correct clip. Also, you’ll want to “interpret” the footage.

    A -100 speed is usually a “play it backwards” or “play it in reverse” command. If that’s what’s happening, you are not matching because a frame of the backwards-playing clip does not line up with the same frame of the forward-playing clip (imagine it this way: the last frame is now frame 1, and frame 1 is now the last frame). Try this: set the timecode display to frames, find the first frame of the clip in the sequence, play to the section you want to see, and keep it in the preview panel. Get the frame number as it pertains to the clip only (do a little math), subtract that from the end of the clip in the project panel, show that frame in the source monitor, and compare the preview panel and the source panel. If they are the same, there is no problem: somebody just wanted the clip to play backwards, and the clip frame count doesn’t match your sequence. Just interpret the footage, and remember, a negative speed means reverse play.
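The "little math" above is just mirroring the frame index. A sketch (the helper name is mine):

```python
def reversed_source_frame(sequence_frame, clip_frames):
    """Source frame shown at a given position of a clip playing at -100%.

    Reverse play mirrors the clip: position 0 shows the last source
    frame, and the last position shows source frame 0. Compare this
    frame in the source monitor against the one in the preview panel.
    """
    return (clip_frames - 1) - sequence_frame
```

For a 300-frame clip, position 0 in the sequence shows source frame 299, and position 299 shows source frame 0; if those match what you see, the clip is simply reversed.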

  • You can use AME to do the same; just turn on “Use Frame Blending”.

    With worse OIS motion correction, you may want to interpret the footage in AE, or even use Twixtor.
    I’ve used a Gaussian blur for some, and for others I’ve added a transition at that point so it passes through without disrupting the flow of the scene. If you only have one or two frames dropped from OIS or whatever reason, use AME and add frame blending to conform to a set rate without losing sync.

  • Ht Davis

    March 20, 2015 at 9:31 pm in reply to: making subclip from sequence

    Hey.
    I finally got this to work in a marginal capacity. Using the subclip keyboard shortcut didn’t work, but this did.

    1. Edit-mark the clip area you want as a sub, and “nest” it or copy/paste it to a new sequence.
    2. Dupe the sequence, use in/out marks to trim the dupe, and nest it wherever you want it.

    Is it listed as a subclip? No, but it’s the function you need. Just place the dupes in a subclip bin.

  • Ht Davis

    March 20, 2015 at 9:22 pm in reply to: Interpreting Footage, Movie Delayed

    Hey guys… Mind if I cut in on this dance?

    VFR is a relic of the old days of film, specifically slideshow-style films run on a manual projector (the human factor made the film run at a variable frame rate).
    Today, you find it popping up in HD video shot with optical image stabilization, as OIS can actually drop frames in order to retain stability. It is also found in some effects where the speed is increased or decreased improperly (after the main render), and in some cartoons/anime where the effect is deliberately applied to action or motion scenes to amplify the “feeling”.
    In broadcast television, it hasn’t been an issue for the broadcaster, because most cameras are on stabilizers and tripods, and image stabilization has been moving inside interchangeable lenses (thanks to lessons learned from the M4 sniper rifle, with its floating barrel and scope). However, OIS is still employed by most current cameras, and frame rates have often been variable, even with professional cameras. This hasn’t been an issue with most NLEs, since most play the video based on its actual time length in milliseconds (and allow editing in that mode), then blend frames when needed to conform the output (or, in some cases, just output a variable frame rate as necessary). With Adobe, the emphasis is on the output matching a more professional standard, and the sequence is what is played: the video is “conformed”, or played frame by frame, and its length is in frames, not milliseconds. This allows effects to be applied directly to frames as if they were simple JPEGs; this is the “old way” of working with video, frame by frame.
    For Mr. Broadcast TV: VFR and VBR are different. Variable bit rate is not a problem; it is a standard of data transfer. It means that compressed video (a lossy codec, which loses some quality in order to keep the data small) is encoded by dropping data from the frames (keeping changes from the last frame and some from the next, with JPEG-style compression) in order to fit within a certain transfer-rate range. This allows the encoder to adjust the compression (the data dropped) according to the scene, maintaining similar quality while drastically decreasing the amount of data needed to store the file and the time to pass the file to its destination (as with YouTube: conform to 10 Mbps or less and it will play great; go higher and many people won’t be able to play it effectively). When the file has areas that are less active than others, more data can be dropped, and the frames can easily be rebuilt on the fly. Try not to confuse the two measures. One is about data transfer; the other is actual pictures. They are related, but not the same; think dogs and wolves.
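The VBR idea above, in numbers: a sketch (with made-up segment rates) showing how quiet scenes pull the average down while busy scenes keep their quality:

```python
def average_bitrate_mbps(segment_rates_mbps, segment_seconds):
    """Average bitrate of a VBR encode from per-segment rates.

    The encoder spends few bits on static shots and many on action,
    so the whole file averages well under its peak rate.
    """
    total_bits = sum(r * 1_000_000 * s
                     for r, s in zip(segment_rates_mbps, segment_seconds))
    return total_bits / sum(segment_seconds) / 1_000_000
```

A 30-second static shot at 4 Mbps plus a 30-second action shot at 12 Mbps averages 8 Mbps, comfortably under a 10 Mbps delivery target even though the peak momentarily exceeds it.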
    If you’re having problems with the frame rate, the only way to really fix it is to interpret the footage (which will adjust the actual running time in milliseconds), or to reframe, replacing missing frames with blended ones. If you only have a frame or two to replace, interpreting the footage isn’t going to hurt much, but it could still mess with your audio. So: first, process the audio into a file all its own, so you have the original just in case. Then adjust the speed of your audio as necessary in the sequence. If that doesn’t work well, you can use something like Twixtor or Twixtor Pro to re-interpret the whole video back to its normal time as an effect, and leave the audio alone (it will guess the dropped frames for you). The other way is to interpret with After Effects inside a comp and output from there, or simply place the clip in AME and use Frame Blending to decompress to an intermediate file (like ProRes LT or a similar AVC format), as this will guess the frames for you.
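Interpreting footage changes only the playback clock, which is exactly why the audio drifts. A sketch of the arithmetic (the function and names are mine):

```python
def reinterpret(frame_count, interpreted_fps, shot_fps):
    """Effect of interpreting footage at a different frame rate.

    The frames themselves are untouched; only the clock changes.
    Returns the new running time in seconds and the speed factor the
    separately processed audio must be stretched by to stay in sync.
    """
    new_duration_s = frame_count / interpreted_fps
    audio_speed_factor = interpreted_fps / shot_fps
    return new_duration_s, audio_speed_factor
```

For example, 1500 frames shot at 25 fps (a 60-second clip) interpreted at 24 fps now runs 62.5 seconds, so the audio has to be slowed to 96% speed to match.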
    If your footage is really bad, which can happen with some prosumer cameras being handheld while you get bumped around in a crowd, you can try an effect. I usually just blur that part of the video in a comp in AE, and let Twixtor blend it, which results in some heavy motion blur (as in extreme blur) but leaves the scene still discernible, and it reads more like an effect. If it happens at the start of a transition or at a cut, you can use transitions to make it even less destructive visually.
    I’ve had this type of problem in the more horrid sense, i.e. the family member (hobbyist or complete novice) who tries to shoot the wedding and comes to me to edit and process the video. I’ve used AME in most cases, with frame blending, and I just cut out the most horrid areas at the beginning and end of any clips, or blur them, run them through Twixtor, then add a transition to blend into the next clip (edit mark). When it falls right in the middle of something important, I use edit marks around it and add a transition between them to crossfade/blur the view and keep it discernible, without letting it hurt the composition of the video. This is usually acceptable to those clients, and some even find it a pleasurable addition. There’s heavy work in it, and many effect layers, but with the right plugins, or even the right combination of blur and transition, you can get around some VFR artifacts.
    I hope this is helpful.

