Forum Replies Created

Page 6 of 25
  • Ht Davis

    May 11, 2015 at 8:08 pm in reply to: DCP audio workflow from After Effects problem…

    Audio is done in samples. When you export the audio for use in a DCP, however, there is usually metadata attached that conforms the samples to a given frame rate when you attach it in AE or Audition (an option tick box). If the audio is getting out of sync with the picture, I would suspect you have another problem, but with your video, not the audio.

    Cameras can have stabilizers or special algorithms that drop frames when motion or camera shake goes beyond a certain limit. When this happens, you get what is called VARIABLE FRAME RATE video. DO NOT CONFUSE THIS WITH VBR (VARIABLE BIT RATE). Think of celluloid film… …remember the old classroom projector? Good projectionists were very stable at around 24fps, but weren't perfect. When automated projection came out, it was much better, but still not perfect. When better video technology pushed to 29-30fps, it was much smoother, and while some stuck with 24 frames, the technology allowed them to run it much more smoothly, giving standard NTSC television its memorable motion character. 25 frames was used for PAL formats.

    With the digital systems we have today, cameras can detect and reduce camera shake to make the video "feel" more natural and smooth, but that makes editing more difficult in professional applications. By dropping frames in certain areas, they throw off the sync. Most apps will play such a file back just fine on its own, but some will only play the available frames, and play them at a single set rate. I think OpenDCP is one of those. By jumping over dropped frames without changing the playback rate, they drift out of sync with the audio.
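    To put a number on the drift: the desync grows by exactly the screen time the dropped frames should have occupied. A minimal sketch (the frame rate and drop count here are hypothetical):

```python
def av_drift_seconds(dropped_frames, nominal_fps):
    """Estimate how far ahead of the audio the video ends up when a
    player skips dropped frames but keeps the nominal playback rate:
    the picture finishes early by the duration the missing frames
    should have occupied."""
    return dropped_frames / nominal_fps

# Hypothetical example: a 59.94fps clip in which the camera dropped
# 120 frames ends about 2 seconds ahead of its audio track.
print(round(av_drift_seconds(120, 59.94), 2))  # 2.0
```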

    The fix:
    You have to use an encoder that allows you to force a CONSTANT FRAME RATE. The bit rate can stay variable; bit rate is data transmission and has nothing to do with the number of frames per second. The FRAME RATE is what should be FORCED TO CONSTANT. Handbrake, AME, and Compressor can all encode to a format that forces a constant frame rate. In Adobe, look for a box that says "Frame Blending". Export from AE with frame blending on, and it will force a constant rate and fix the frames for you by BLENDING the surrounding frames to create new frames where the missing ones are.
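    Frame blending itself is simple in principle: a missing frame is synthesized by averaging the frames around it. A toy sketch, with frames reduced to single brightness values (the real operation averages every pixel):

```python
def blend_missing(frames):
    """Fill gaps (None) by averaging the nearest surviving neighbors,
    a simplified stand-in for what frame blending does when an encoder
    conforms variable-frame-rate footage to a constant rate."""
    out = list(frames)
    for i, f in enumerate(out):
        if f is None:
            # nearest real frame before and after the gap
            prev = next(out[j] for j in range(i - 1, -1, -1) if out[j] is not None)
            nxt = next(out[j] for j in range(i + 1, len(out)) if out[j] is not None)
            out[i] = (prev + nxt) / 2
    return out

print(blend_missing([10, 20, None, 40, 50]))  # the gap becomes 30.0
```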

  • As of CS6, you can use all manner of markers, but they will be converted to cue markers on the layer. There are scripts that will copy, paste, export, etc., but you cannot elevate them to comp markers. I've heard that they are attempting to build the accessor for the comp marker in CC, but I've really never needed that. So long as I can import other assets as layers, copy a layer to any comp I want, and copy the layer markers, I can essentially do all I need for audio. Getting disc markers to copy… …that's a tough one. I've never tried it. Supposedly the workflow is:
    copy and paste from one to the other in AE, then send the new comp to Premiere, then create a new sequence from the comp. But… …this may as well just plant the comp as a clip and use the markers as clip markers…
    It is possible to drag a sequence from one to the other, and I've not yet tried this. If the sequence already has the markers dropped on it, will they carry over when I drag only the sequence back to Premiere? If so, then the markers would be there; I might just have to change their type. It would be good to test.

  • Ht Davis

    May 10, 2015 at 3:37 am in reply to: Adobe Premiere Pro CS6 – Multicam

    My situation is more limited. I'm on a 2008-model MBP: Core 2 Duo, 4GB RAM, and 256MB NVIDIA… …I get 4 cameras going not so badly with low-res proxies. How?

    Let me explain my setup:
    I archive everything. I use FireWire and eSATA RAID with at least 3 drives per RAID (some set up with a simple differential RAID spare for rebuilding if necessary, others only as large RAID storage). This lets me work at 600MB\sec from each unit.
    On Macs at least, you'll need a ton of internal space for any standard program caching (this is treated like swap data), and it needs to be internal, or at least on a drive with permissions that match your root drive, so no odd formatting of an external for use with this (I've tried externals; they just don't work with the media cache on Macs). This has to do with secure memory access: the files are accessed quickly and shunted to memory with a quick security pass, and if they don't have the right setup, they will go into an endless loop or take forever to verify. SO… …I put two 1TB drives in my machine…
    All of my external files (from the cameras) are on external RAIDs with 2-4TB of space (I've got several, and I move them between machines), formatted FAT or ExFAT (I prefer ExFAT). They are used for log and transport as well, but the proxies are kept on a separate drive with my project files. I make a sparse disk image for my working project files and proxy media. Typically, though, you can just make a couple of 100GB disk images for up to 8 cams with 2-10 hours of footage in a low-res proxy. Use what you like; I mix it a bit. H.264 and ProRes work well for proxies.
    I keep my audio and video previews on their own drive; that way I can recreate them as needed. This too is a RAID, but a FireWire RAID (slower than eSATA). I do this so I have some idea of the playback quality at that resolution, and hints at the motion quality I'll get with compression.

    Note–keep your proxies at DVD resolution or higher and you'll have better-quality previews.
    Note–the more cameras you have, the more Mbps you will use. Spread the files across several drives, however, and you'll be pulling less from each drive. I like RAID drives for this; they let me get closer to the full speed of the interface (FireWire\eSATA). USB 3 would be great, but I don't have it. More drives in a RAID means faster data writes and reads, combining the speeds. Since HDDs usually have a max speed of 300Mbps either way, even though SATA 1 is 1.5Gbps, you're limiting your speed with single- or double-drive storage bays.
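    The striping math behind that note can be sketched like this (the per-drive and bus figures are the rough ones quoted above, not benchmarks):

```python
def effective_throughput(drives, per_drive, interface_limit):
    """Striped RAID reads scale with the number of drives until the
    bus saturates; usable speed is whichever ceiling is hit first."""
    return min(drives * per_drive, interface_limit)

# One drive leaves most of a 1.5Gbps SATA 1 link idle; three striped
# drives get much closer to the interface's limit.
print(effective_throughput(1, 300, 1500))  # 300
print(effective_throughput(3, 300, 1500))  # 900
```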

    1. When you import media, start with your smallest-resolution main file (as in the full-format file), and set your output resolution and such from that. However, set your PREVIEW resolution to your proxy resolution. This will render previews at the proxy file's resolution for use in playback.
    2. Import your proxy files and load them into your sequence. Now do an initial render. This will set up your preview files for use in playback.
    3. If you have a lot of clips or effects in one area of the video, render out previews for that work area (you can set which area to render). I always have audio render out with video for better performance and a better idea of sync.

    My initial log and transport usually takes a day or two to process with several cameras and only 1 higher-end machine (I rent a unit for log and transport to my eSATA). I also output proxies, depending on how much video (time), the frame rate, and how many cameras. With more cameras, I use H.264 for proxies first. Then, for a rundown with truer color and better resolution, I use ProRes: I simply relink the media, then delete the previews and re-render. My playback can be at full resolution in DVD quality. Sometimes you can take a lesson from a less powerful machine and workflow; it's all in how you allocate resources and put them to use. Start with a sequence that matches your main files, then put a proxy into it, and make sure your previews are at a resolution that matches your proxy. This will allow them to keep the quality of image and motion. Then play them back at that resolution; don't blow it up huge. You can always use an external monitor and set it to a low-resolution view. I use a 50-inch TV at a low resolution hooked up over DVI\VGA. That's basic DVD quality, and I play back on it to see how it looks. Granted, the image will be sharper and clearer in higher res, but most often it's pretty decent.

    Archival:
    Remember how I said I archive everything? DAY 3: initial archive. I set up most of my project and proxies, then unmount the disk images and burn to BDXL. Because I can archive to H.264 without losing much quality on re-upping to full ProRes, I usually keep the original video in that format (MP4 H.264) and store it with the project files archive. This takes up about 15GB per hour of video at a high bit rate (5.0 and higher gen), or about 8 to 15GB per 2hrs of video at Blu-ray AVCHD 30p settings. I also keep any photos\images, or anything else I need initially, in that archive. Once I have most of my mainstream together, I burn an initial copy. After this point, I only use it if I have a truly fatal error (fire destroys the whole shebang, or something like that). Every few days, I update that archive by simply burning the image to another BDXL. When I'm done with the project, everything is archived. I use a RAR split to break disk images up, especially those with my original card data, and burn at DVD size. The whole project, right down to the previews, is archived and ready to be reconstituted, so long as the ExFAT format can be read. The one step I left out… …I always make an ISO and restore each sparse image to it before the RAR and burn. It lets me make certain the image format is still compatible. The final archive takes about 2 days, but it has been a boon. Some clients have come back time and again to have different projects done, then come back to have highlights done that bring together elements from several projects. This workflow allows me the room to do that. I've only recently upgraded to 1TB drives in the RAID boxes. This gives me a grand total of… …13TB, minus the 2 internals for software. Plenty of space for a few hours of video. Since I only work with a few hours at a time (in general), it's fast enough for me. When I have a larger project, I'll make one change: a large Xsan or similar storage server farm. Then I just have to copy only the disk images I need at any one time, and go.
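    The bit-rate-to-disc arithmetic behind those archive sizes is worth writing down. The 33Mbps figure below is an assumption, chosen because it lands near the ~15GB/hour quoted above:

```python
def gigabytes_per_hour(mbps):
    """Storage consumed by a stream at a given bit rate:
    Mbps * 3600 seconds / 8 bits-per-byte / 1000 MB-per-GB."""
    return mbps * 3600 / 8 / 1000

# A high-bit-rate H.264 archive at ~33Mbps comes to roughly 15GB per
# hour, which is why an hour of video fits comfortably on a BDXL layer.
print(gigabytes_per_hour(33))  # 14.85
```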

  • Ht Davis

    May 5, 2015 at 2:29 am in reply to: Premiere CC Saving Permission Error

    Thanks for the props, James. And yes, if your drive is in use by a monitoring app or driver, it doesn't matter if it's FAT or FLAT! It will still show up as write-protected in many instances. Disk Drill, Drive Genius, TechTool, even Disk Utility can have that effect on a disk: the program tells the OS that it needs exclusive rights to the interface, and the OS removes write capability temporarily. Unfortunately, it doesn't always return that permission in a timely fashion. Sometimes a restart is necessary, or just unplug the drive and plug it back in, repair if needed, relink any files, and save.

  • Ht Davis

    May 5, 2015 at 2:24 am in reply to: 105% speed video jumps frames on output

    Tough one. Whenever you speed up a video, you are effectively playing those frames faster; in essence, more fps. If you are getting a jumpy result, it's because you may be dropping frames to fit. That's why you're getting some jump.
    A really slow way to lower the blur or keep from getting artifacts is to cut it up into clips of a few seconds, send them to AE, process each clip, and then output it. Of course, you could try the same thing with frame blending on in Premiere, which would limit the frames that are blended. Then you need to set them all to 105% speed and render the preview. Might work, might not.
    Why it might work:
    When blending frames, Premiere uses several frames before and after each dropped frame. If a longer clip is used, the frames can be dropped together, which eliminates some of the data used to blend a decent reframe. If you use small enough clips, a single reframe will occur per clip, and it will have only a few frames to blend with, which will compile out to a sharper edge, but it is unclear whether it will actually fit in between its neighbors or not.
    Why it might not:
    With less to blend with, the program may not be able to make a clear enough blend.
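    You can see where the jumps land by mapping output frames back to source frames at 105% speed. A minimal sketch (the frame count is arbitrary):

```python
def retime_map(num_output_frames, speed=1.05):
    """Map each output frame of a sped-up clip to its nearest source
    frame.  At 105%, roughly one source frame in every twenty-one has
    no output slot, and that skipped frame is the visible jump."""
    # int(x + 0.5) rounds half up, avoiding Python's banker's rounding
    return [int(n * speed + 0.5) for n in range(num_output_frames)]

used = set(retime_map(40))
skipped = sorted(set(range(max(used) + 1)) - used)
print(skipped)  # source frames with no output slot
```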

  • Ht Davis

    May 5, 2015 at 1:20 am in reply to: exporting for a movie theater showing

    You can burn widescreen DVD format; just specify that in a sequence, then drop your main sequence in to nest it.
    Create a 720×480 sequence and, when you create it, set the pixel aspect ratio to widescreen; you might find a preset that works this out. Technically 720×480 is slightly wide, but the widescreen version of this displays at 853×480 and reads in players as widescreen-aspect 720×480. Once you create the new sequence, nest your main edit, then right-click on the clip in the new sequence and select Scale to Frame Size. This will get you down to DVD in a hurry. Now export the sequence to something; I usually do a full-format export or at least ProRes LT. Then I send the sequence to Encore and handle the transcodes on my own. You can do it in Adobe Media Encoder: take your full-format output and have it encode to MPEG2-DVD, max-bit-rate VBR 1-pass, and drop it in a folder, using Dolby as your audio. This should create 2 files. In Encore, if you sent straight from Premiere, you'll have assets to transcode. Once your MPEG2-DVD is encoded, the two files should work as the transcode, so just right-click the asset (the Premiere sequence), go to Locate Transcode, select the DVD files, and pick the first file. It should ask you for the second as well. Now you can set that as First Play and burn the disc or disc image in Encore. At 9Mb\sec max for video, your video is the highest DVD quality, and widescreen.
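    Those aspect numbers come straight from anamorphic DVD geometry: the disc always stores 720×480, and the widescreen flag tells the player to stretch it to 16:9 on display. A quick check:

```python
def widescreen_display_width(height, aspect=16 / 9):
    """Displayed width of an anamorphic DVD frame: the stored 720x480
    raster is stretched to the flagged aspect ratio at playback."""
    return round(height * aspect)

# A 480-line picture at 16:9 displays ~853 pixels wide, which is why
# players report "widescreen 720x480" but show roughly 853x480.
print(widescreen_display_width(480))  # 853
```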

  • Ht Davis

    May 4, 2015 at 11:22 pm in reply to: Beach ball and question marks on media

    Where are you storing Media cache?
    Where are previews stored?
    How is the RAID formatted (filesystem used, RAID–How many drives and how is it configured)?

    If you cannot open the project, there are 2 things I can see as possible root causes. First, your drives may have errors or damaged blocks where the file resides; repair the drive using a disk check or a repair in Mac OS. Second, if the file's settings show that the media cache and caching are on a volume that isn't your main drive, or isn't formatted HFS+ with a GUID partition table, you may be running into memory-caching errors, as the files need to have the same permissions structures and values as your user account in order to move into memory.

    Try the drive repair first, and if that doesn’t work, try creating a new file and set your caching to your documents folder, then import one sequence at a time and save the new file as you go. Then relink the media. When you are finished, you should be able to work.

    If you are on Mac OS, as I suspect, repair permissions on your main drive and on the Thunderbolt drive.

  • If you are moving all the clips, just rename the bin instead. If it's only one or two, you may not have enough screen space, but if that's the case, you can fix it with more screens. It sounds more like an organizational problem. Ideally, you should have each main section binned, with bins inside that to hold sequences, which in turn have a bin inside them for the clips used in that sequence; when using a clip in several sequences, you can just copy and paste. If you have an abundance of clips, sort into bins by date or by scene\section. By opening bins with fewer files in them, you shorten the visual length of the list and save screen space. If you aren't cataloguing and sorting your footage properly, you are going to feel a little awkward when working. The more you work in prep, the less awkwardness you have to deal with later, and the faster you tend to work.

  • Ht Davis

    April 30, 2015 at 12:07 am in reply to: how to import MTS in premiere cs6?

    Hey everybody. New to the thread.

    Listen up. There are many ways to deal with these files. It depends on your system and your workflow. Mine is a MBP with 4GB RAM, 256MB graphics, and a Core 2 Duo. Also, I use Canon Vixia cameras on low-budget or non-profit projects. They metatag a 30p frame rate alongside the 60p, and my Mac only picks up the 30p. If I import them, I only read 29.97fps, and the frames between are skipped. However, I found a workaround.

    Before, I just let the drop happen and dealt with it. If I tried to encode with AME or another program, it would guess new frames, and the quality was low. But I compared an old project using this new workflow against the old one. The old way dropped the actual frames and guessed new ones when coding 59.94. The new way didn't; I could tell by going frame by frame.

    Final Cut Pro X actually has a workaround, and it does well too. By importing the AVCHD and placing it on a timeline that's at 59.94fps, it checks the file and will check for extra frames in the feed. It has an "optimized" file (with guessed frames), but on output it will use the input file, and you'll get all the frames. However, I've also found that, while it's sharper, it's often better to use the lower rate and have more natural motion.

    Another way to deal with them is to just import into Adobe (yes Darren, you are doing it right), create a sequence from one of them, and then place the others. Unfortunately, with my cameras, the timecode is not placed in the right binary slot in the header. The timecode is attached to each MTS (they have a copy of the start and end code of the whole video only). Part of the problem with this method is that you will not be able to properly process timecode information with some cameras. So first, I use Final Cut or iMovie and recode to ProRes; then, if I want a smaller file, I code that in AME (drop it down to a proxy file with frame blending on, just in case) and fix any broken frames, then line it all up. Using FCPX or iMovie, you can get each recorded clip from the AVCHD info, with proper timecode, and you can output to the right frame rate, into one file if you recorded continuously, or several clips if you recorded that way. It makes it easy to edit.
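    If you want to confirm which rate a container actually carries before picking a transcode path, tools like ffprobe report the frame rate as a rational string such as "60000/1001". A small parser for that form (that a rational-rate string is available is the only assumption here):

```python
from fractions import Fraction

def parse_rate(rate_str):
    """Convert a rational frame-rate string (the form ffprobe and
    similar tools emit, e.g. '60000/1001') to frames per second."""
    return float(Fraction(rate_str))

print(round(parse_rate("60000/1001"), 2))  # 59.94
print(round(parse_rate("30000/1001"), 2))  # 29.97
```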

    For people using video from clients, I always tell mine: "Please allow an extra week for transcoding for up to 6 hours of video," and they aren't surprised by the extra time used to make the video. I transcode their video, my video, and anything other than an MP4 or other common computing format (i.e., a file that's easily moved as one file and readily played\edited on a computer). This is because I need to be able to grade the files for color, and mix multicam, audio, etc. I also archive everything in a singular format, so reincarnating a project is easier, faster, and even more profitable.

    Media Drive? WTF?
    I use RAID, FireWire and eSATA. It allows faster transfer speeds. I also use disk images to store sets of files; I set their maximum sizes to match standard disc sizes, then burn each in turn. Media files all go out as soon as possible, usually to a Blu-ray BDXL, but they all fit when compressed at high bit rates (larger files, better quality). They don't play back fast and need re-encoding, but they encode faster, retain quality, and can be used over and over. Projects get backed up with Acronis software imaging (incremental), which archives changes to 2 drives (1 main backup, and a secondary using a hub to share the port; slower, but fast enough to run overnight while rendering out cache or previews). I don't back up cache or previews; they can be rebuilt in an hour or so on a slow machine. The rest is irreplaceable. My system is backed up the same way, the entire hard drive, incremental, twice a month. Not a lot of changes to back up, and it is fault tolerant. This requires about 8TB of storage for 4 cameras using Final Cut output to 422 Proxy, 12-15TB for full 422, and up to 20TB for 422 HQ or 4444. Proxy is fast enough on a Core 2 Duo; an i5 or higher can handle 422; a quad-core or 8-core is necessary for 422 HQ, and add a GPU for 4444. You'll also need more RAM.
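    Those terabyte figures follow from the ProRes data rates. A rough calculator, using Apple's approximate published 1080p29.97 target rates (the camera count and hours below are hypothetical):

```python
# Approximate 1080p29.97 ProRes target bit rates, in Mbps.
PRORES_MBPS = {"proxy": 45, "422": 147, "hq": 220, "4444": 330}

def shoot_storage_gb(flavor, cameras, hours):
    """Rough storage for a multicam shoot transcoded to one ProRes
    flavor: Mbps * total seconds / 8 bits-per-byte / 1000 MB-per-GB."""
    return PRORES_MBPS[flavor] * cameras * hours * 3600 / 8 / 1000

# Hypothetical 4-camera, 10-hour shoot: proxy stays under a terabyte,
# while full 422 needs several terabytes.
print(shoot_storage_gb("proxy", 4, 10))  # 810.0
print(shoot_storage_gb("422", 4, 10))    # 2646.0
```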

  • Ht Davis

    April 28, 2015 at 6:15 pm in reply to: Scaling Masks from 720X480 to 1280X720

    It sounds to me like you are doing something with the position, because they all come out a little off. This is like placing a transparency or pasting into Photoshop: you have to do it one at a time on the keyframes. However, there are instances where you could move all the positioning at once. Scripting, for instance. I'm not much for scripting myself (truth be told, I've done it once, and that was following a how-to).

    I can offer you only my algorithm for what you want, not the code or script, because I stink at that part.

    Assumptions:
    the keyframes are all within the same entity\layer, separate from other effects
    the keyframes all need to move both horizontally and vertically by the same amount along each path, but not necessarily by the same amount in both directions
    the movement can be scripted using the position values associated with the keyframes' position modifiers, and the keyframes themselves can be scripted into an array for use (that second part is less necessary)

    There are a number of possibilities here.
    First, if you can script the keyframes themselves into an array or a For loop\post-test loop, you can process each one in the script.
    Algorithm:
    set defaults to zero
    wait for input {
        get horizontal move
        get vertical move
        end button
        change button }
    if change button {
        do [
            add to horizontal position
            add to vertical position ]
        while there are more keyframes
        return to wait for input }
    else end script
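    The loop above translates almost line for line into code. A generic sketch with keyframes modeled as (x, y) tuples; the actual After Effects version would be an ExtendScript loop over a property's keyframe values, so this only illustrates the algorithm:

```python
def shift_keyframes(keyframes, dx, dy):
    """Apply one horizontal/vertical offset to every keyframe position,
    the 'add to horizontal/vertical position' loop described above."""
    return [(x + dx, y + dy) for (x, y) in keyframes]

# Nudge three mask-position keyframes by the same amount.
print(shift_keyframes([(100, 50), (120, 50), (140, 60)], dx=64, dy=-36))
```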

    If you can't figure out how to script the test or array the keyframes, you'll need to apply the script to every keyframe separately. But that doesn't mean you can't adjust one of them until it's right, then apply the result to every other keyframe.
    If you set the movement in the script and apply that script to 1 keyframe, keeping track of your adding\subtracting, you can get the values to apply to all the others. Or you can note the current values of one keyframe, move it into place, and then use that to get the values to apply to the others. After that, you simply put those into the script, set the script to a hotkey, and click each keyframe, pressing the hotkey as you go, which applies the movement.

    Algorithm:
    Set x
    Set y

    with current keyframe{
    move x horizontal
    move y vertical
    }

    For a script to set the x\y you could have it ask the user for the value, and store it in an environment variable. This would be a defaults container you could reset with another script. Once set, the algorithm changes to use the x\y from the environment variable\object, and you can reset that value as you wish, with another script that prompts the user for it. Resetting the defaults for this variable allows you to set this to zero or null, and that way you don’t accidentally use old values on a new layer\comp. This last part is, however, a lot of work for little payout.

