Forum Replies Created

Page 4 of 25
  • Ht Davis

    July 19, 2016 at 4:10 am in reply to: premierPro burned-in captions

    Is the video track turned on? A small eye-shaped icon next to the track number indicates it is visible. If the clip is nested, you may need to check the original sequence and make sure all your data is relinked. If you are not seeing a “Media Not Available” message, I would first check your track activation, as that is the most likely culprit. Then click on the video and check the Effect Controls under Opacity. Finally, unlink your video and relink it. If nothing works, try playing the video outside of Premiere. If it has issues there, the data is corrupted. Hope you have a backup.

  • Remember, you can easily export your audio to separate mono tracks. Then use the left channel’s file on both sides. To convert to stereo with distance modulation, you can do it with some busses and the right delays. Most of our directional sense comes from the timing of reflections, but there is also timing within the frequency distribution. Think about a stage during a concert: drums are at the back to give the front audience some of the feel of their sound (low frequencies have a long wavelength and need that distance before we actually hear them rather than feel them).

    To create the sense of room distance, or widening, the first step is a bus that catches the sound in mono, with a slight low-end rise and an extreme high-end fall, to bring in the lows. Delay this by close to 38 ms (no more than 50) until your playback feels just a little too punchy, almost vibrating. Now apply a high-pass to the normal left and right tracks (removing the punchiness). Your mix will feel a little less powerful at first; just boost your lows slightly (3 dB max) and swap phase to cut out the common frequencies (the distance between lows and highs will grow, enhancing the dynamics a bit).

    Now add another mono bus and feed a track into it. Apply a high-pass and delay it by 30-50 ms. Do this up to three times, adding 15, 10, and 5 ms to the delay and dropping the level by 6, 10, and 15 dB respectively. This is an analogue-style reverb that actually helps enhance recognition of the frequency spread. You can then get a better sense of any noise level and cut that out too. By adding delays, you’re really just faking a bounce of the highs. If you want some low reverb, add a delay at about 100 ms on another bus and drop it by 5-12 dB (6 is usually a good bet for a short room with few people in it; drop by more for longer rooms).

    Pan all mono tracks to center. Now apply a delay of about 40-45 ms to the right-side track. This will offset the left from the right. Then pan the early-lows delay bus slightly to one side or the other, which offsets it even more, allowing a more dynamic feel to the sound. If there is a visual, you won’t be able to tell the difference between real sound and this mix. If there is no visual, it will be weighted slightly right, but amped slightly left if you pointed the reverb that way. Route all of the tracks listed above to a stereo bus, and amp each side until they are within 3 dB, which is an almost mono distribution (a central balance).
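A quick sanity check on the numbers: the tap ladder above (a 30-50 ms base, then three more taps offset by +15, +10, and +5 ms at -6, -10, and -15 dB) works out as below. The 40 ms base is my assumption within the stated window; the script just tabulates the arithmetic.

```shell
# Tabulate the hypothetical delay/level ladder: assumed 40 ms base tap,
# then three taps offset by +15, +10, +5 ms at -6, -10, -15 dB.
delay=40
printf 'tap 0: %s ms at 0 dB\n' "$delay"
n=1
for pair in '15 -6' '10 -10' '5 -15'; do
  set -- $pair              # $1 = ms offset, $2 = dB drop
  delay=$((delay + $1))
  printf 'tap %s: %s ms at %s dB\n' "$n" "$delay" "$2"
  n=$((n + 1))
done
```

With a 40 ms base this lands the last tap at 70 ms, still inside the short pre-delay range the post is aiming for.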

    Also, in light of recent problems found with Audition, consider the following:
    Make a batch file in Windows that will:
    1. take a dropped file as input (your session file will do)
    2. get the parent folder (the whole structure is important): this is the folder where your session is, along with the folder containing the audio files that you record.
    3. periodically and non-destructively (as in, not deleting files that are no longer in the first folder) mirror the contents of the parent folder to another location on any drive (every 2-5 minutes works). Make sure you set this folder, or simply use the parent folder of the original file and add a Backup folder inside it,
    4. then just continually make backups using a forever loop (a condition that is always true, like while 1==1). You can close it when you close Audition by simply closing the Command Prompt window. You won’t lose everything even if you crash; after all, Audition cannot delete what it knows nothing about.
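The four steps above can be sketched as a shell function so the logic is visible; the function name, arguments, and defaults are mine, not part of the post. A real Windows .bat would do the same with `robocopy src dst /E` (no purge flag, so nothing is ever deleted) inside a `:loop` / `goto` pair.

```shell
# Sketch, assuming rsync (falls back to plain cp if it is missing).
backup_loop() {
  session=$1           # step 1: the dropped session file
  dest=$2              # step 3: mirror target, anywhere on any drive
  interval=${3:-120}   # seconds between passes (2 minutes by default)
  passes=${4:-0}       # 0 = loop forever, as in the batch file
  src=$(dirname "$session")   # step 2: the parent folder
  mkdir -p "$dest"
  i=0
  while [ "$passes" -eq 0 ] || [ "$i" -lt "$passes" ]; do
    # step 3: non-destructive mirror -- copy new/changed files, delete nothing
    rsync -au "$src/" "$dest/" 2>/dev/null || cp -R "$src/." "$dest/"
    i=$((i + 1))
    if [ "$passes" -eq 0 ] || [ "$i" -lt "$passes" ]; then
      sleep "$interval"   # step 4: wait, then mirror again
    fi
  done
}
# usage: backup_loop /path/to/project.sesx /Volumes/backup 120 &
```

Killing the shell (or closing the window) stops the loop, exactly as described for the Command Prompt version.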

    On Mac, open Automator:

    Start by making an application; call it BackupStart. Create two path variables: SessionFile and ParentFolder. These will be what you use to grab your files. Create more path variables with names like BackupFolder#, where # is the number of that backup folder. Now add a Set Variable Value action (in the Actions library) for SessionFile as your first action (this will catch the path of the file you drop on the app). Next, add a Run Shell Script action in Bash (under Utilities or System) and set it to pass input as arguments. Clear everything in the shell and type:

    dirname "$1"

    exactly as shown (don’t replace dirname; it’s a command that grabs the parent folder of the file you just dropped in). Add another Set Variable action for ParentFolder. Now use an Ask for Finder Items action, open its options, and check “Ignore this action’s input”. Add a Set Variable for BackupFolder1. Repeat for each BackupFolder variable you have (ask for folder, set variable).

    Now add a Get Variable action for ParentFolder, and again check “Ignore input” in its options. Then add another Get Variable for BackupFolder1, but DON’T ignore input (you want the two to pass into one another and continue on). Repeat this for each BackupFolder variable, leaving “Ignore input” unchecked to group them all together. Now add a Run Workflow action and turn off “Wait for workflow to finish”. Save this file, leave it open, and go to File > Duplicate. Rename the duplicate BackupLoop1, then use File > Convert To… to make it a Workflow. Delete the SessionFile variable from this document, along with all of the actions except the very last (Run Workflow). The other variables are still necessary, and everything we add should go above the Run Workflow action. Use a Get Variable on ParentFolder and, as before, do not check ignore input; you need this to run straight through from the first document. Add the Get Variable actions for your backups.

    Add a Run Shell Script action in Bash, with input passed as arguments. Clear the script box and type:

    rsync -vau "$1/" "$2/"
    rsync -vau "$2/" "$3/"

    The first line copies your parent folder’s contents; the second copies the first backup to the second. You can continue this until you have handled every backup in the script. Apply a Pause action for a few seconds. Now add a Loop action, set it to run 50 or more times (this applies a wait until finish), and set it to use the same input. This will continually back up all your data as you record, and when you hit stop you should get a copy of your audio almost immediately after, made by your system, turning the raw file data into a finished file set. Now add the Get Variable set again for all your variables, ignoring the input of the first one but keeping it for the others, and point them into the last action, Run Workflow. Again, duplicate the document and call it BackupLoop2. Change the Run Workflow in this file so it points to where you saved BackupLoop1. Change the Run Workflow in BackupLoop1 to point to BackupLoop2, and the one in BackupStart to point to BackupLoop1, then place BackupStart in your Dock. When you get ready to record, drop the session file onto your Dock icon, pick your backup folder(s), and let it go. Hit record, and when you hit stop, wait a few seconds for it to finish the raw file tags. Now check your backup folder. You should have a perfect WAV capture there.
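Stripped of the Automator plumbing, the backup chain is just the two rsync lines cascaded. Here is a hedged standalone sketch; the function name and the mkdir step are mine:

```shell
# Cascade sketch: parent folder -> backup 1 -> backup 2, exactly as in the
# two rsync lines above. -v verbose, -a keep attributes, -u copy only newer.
cascade_backup() {
  parent=$1; b1=$2; b2=$3
  mkdir -p "$b1" "$b2"
  rsync -vau "$parent/" "$b1/"   # first line: parent contents -> backup 1
  rsync -vau "$b1/" "$b2/"       # second line: backup 1 -> backup 2
}
# usage: cascade_backup ~/Sessions/Song1 /Volumes/B1/Song1 /Volumes/B2/Song1
```

Because `-u` copies only newer files and there is no delete flag, each pass is cheap and non-destructive, which is what makes looping it every few seconds safe.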

    If Audition crashes, you can drag the files in your backup folder back to the original location and continue.

  • Ht Davis

    April 5, 2016 at 8:56 pm in reply to: Auto Ken Burns

    The position marker prevents drifting the center point or anchor inadvertently. However, if it is readable from the comp in a script, you could set a start and end slider for it. The values could have an adjustable range that centers at normal and can drift in either direction on either axis by a value no bigger than the comp’s farthest edge dimensions. They would let you zoom a photo to any anchor point on or off screen. Give it beginning and end keys, and voilà. I’m no scripter, but your script has possibilities.

  • Ht Davis

    April 4, 2016 at 2:41 am in reply to: Adobe Premiere Pro CS6 – Multicam

    Not a question. It’s an example of how even a lesser computer can be used to its fullest if you plan your setup to use more of your bus’s full speed. RAM and graphics processing will be the bottlenecks, along with cooling, but so far it’s worked for me pretty well.
    I’ve taken to rendering output on other machines for longer sources and final output, where render times had become unfeasible. But the other machines see roughly the same access speeds; the only real difference is processing speed, which cuts render time to about half or a quarter. Comparing 300 frames of a 30p video across the board, my MacBook processed a frame every couple of seconds, while an iMac i5 with 8 GB RAM and a 2 GB video card (2x the RAM, 4x the graphics RAM, two steps up on the processor, one year newer) got about half a second per frame. Granted, that’s with several low-end transitions of about 1 s and one or two effects. Oddly enough, the laptop handled the audio samples faster. Actually moving the files, there was a benchmark difference of about 1 Mbps on average with files up to 25 MB; for 2-4 GB files the difference was about 10 Mbit, and above 4 GB about 12-15, averaging around 13 Mbit. Relatively small differences that are more akin to differences in pathways and RAM speeds. That’s all after a cold start, by the way; it took two weeks to benchmark it all. The differences are negligible except for processing time.

    With RAID drives, you can get close to full speed out of your connections, but for each one you maximize, you’ll have to carry the overhead as well. Most of it is heat. Get past that and you’ll be functional (not saying it’s acceptable by all standards, but it works for me for short projects or non-profit work).

    One last note:
    I’ve also added file backup versioning… …Time Machine or Windows 7 backup routines… …It will actually keep older versions of files as you work. Set a timer for about 10 minutes and save your file when it goes off. I used a little tool to do that… …I run it with a Unix tool or batch file, sending the keycode (Command + S) to save every 10 minutes, and run it in the background until I shut the window. In any app I’m in, I get file versions at most 10 minutes old to go back to if a virus or other disaster crashes my system. I get back to work and finish up. I keep the backups all external, and image them as well. Yes, it’s almost paranoid, but I like being able to recover everything in a few hours, back to where I was when everything blew up. I don’t lose much time or work that way; I just lose headaches, which I never miss.
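The timed-save trick can be sketched for macOS with osascript; the actual tool the post used isn’t named, so everything here (the function name, the keystroke call) is a hypothetical stand-in, and the terminal needs Accessibility permission to send keystrokes.

```shell
# Hypothetical sketch: send Cmd+S to the frontmost app at a fixed interval.
# Run it in the background; closing the window (or killing the job) stops it.
autosave_loop() {
  interval=${1:-600}   # default: every 10 minutes
  while true; do
    sleep "$interval"
    # macOS only: System Events types Cmd+S into whatever app is frontmost
    osascript -e 'tell application "System Events" to keystroke "s" using command down'
  done
}
# usage: autosave_loop 600 &
```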

  • Ht Davis

    September 27, 2015 at 4:29 am in reply to: P2 Progressive Footage looks interlaced in Premiere Pro CC

    Are you sure this is progressive footage? If it looks interlaced, place it in an interlaced sequence of your own setup (no drag-and-drop to create one; set it up yourself). If it plays back normally, you have interlaced footage.

    Also, some codecs and formats use an interlaced main stream, with a compressed stack file of the secondary field footage. This makes it appear to be interlaced when playing back if the system cannot read the stack properly.

    The bottom line here is that you need to ingest the files into something readable, with the right metadata tags for formatting and playback. Absent any evidence other than “it looks interlaced”, you either have:
    Interlaced footage
    Progressive footage with a secondary field stack that causes interlacing
    Damaged data

    Try:
    Converting with the Camera included software
    Converting with HandBrake, setting it to output interlaced footage with twice as many frames
    Placing the footage in a new sequence set to Lower Field First interlacing, then going to the program monitor, right-clicking, and selecting from the Fields options there to get a better look at it.

    The first will probably get you a usable file output.
    The second will let you try to spread the fields out properly and see if they are interlaced.
    The third will allow you to try to run a “progressive?” video as interlaced footage and see if maybe something went wrong somewhere. I know that most P2 cameras only shoot 480 or 1080 interlaced. They are television-broadcast-style cameras, which cut down on data transfer by using interlaced footage instead of progressive. It lets them send the same “size” image, but run the fields twice as fast as frames, and use the refresh rates on TV sets (with field blending/antialiasing and deinterlacing) to make the whole thing seamless.
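Before any conversion, you can also ask the file what it claims to be. Assuming ffprobe (shipped with ffmpeg) is available, the stream’s declared field order is one command away; the function name here is mine:

```shell
# Print the declared field order of the first video stream:
# "progressive", or "tt"/"bb" (interlaced, top/bottom field first), etc.
field_order() {
  ffprobe -v error -select_streams v:0 \
    -show_entries stream=field_order \
    -of default=noprint_wrappers=1:nokey=1 "$1"
}
# usage: field_order clip.mxf
```

If the tag disagrees with what you see on playback, that points at the “secondary field stack” / bad-metadata case rather than genuinely interlaced footage.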

  • Ht Davis

    September 27, 2015 at 3:31 am in reply to: Color Correction

    This is typical screen gamma manipulation when moving between windows; it helps visually indicate where the user is working. Unfortunately, it really screws up any screen recordings. You could try a deflicker plug-in on the track. You could watch until you see the change, stop, mark the frame where it changes, cut at each change, and then do a match color in SpeedGrade or After Effects. The last method…

    I like the color match method in SpeedGrade; it’s fun when you know what you’re doing. You can do the same in After Effects. There are tutorials for both. Check them out.

    Here’s how I’d work it:
    Duplicate your video file. You will have File 1 and File 2.
    Place both in your chosen color corrector, match a good shot of File 1 to a bad shot of File 2, apply the change to all of File 2, and render it out. This gives you File 3.

    Place File 1 in your sequence on track 1. Place File 3 on track 2 at 0% opacity. You can do this with the clips like this: as you play through, where you see a change, stop and go back until you are at the frame of (or just before) the change, and place a marker. Continue this through the whole thing. Now razor your File 3 into clips that match the markers, and clip your original file as well, dragging your File 3 clips into place on track 1 where necessary. Select the first good clip and copy. Select all and Paste Attributes only. Now all clips are at 100% opacity and you can roll back and forth to adjust a little.
    You can use keyframes, but they can be buggy and more difficult to adjust:
    Start with the clips on tracks 1 and 2 as above, set the opacity on track 2 with a keyframe at frame 0 at 0%, and turn off playback of track 2 (the eyeball symbol should be off).
    When you see the change, stop and step frame by frame back until it changes back, then select the track 2 clip, place a keyframe at opacity 0, move one frame forward, and place a keyframe at opacity 100. Play until it changes again and repeat, but reverse the values on track 2. Repeat this until you finish the entire clip, then turn the eyeball symbol for track 2 back on. Render a preview and check it, then export to your output file.

    I recommend the first method. You can zoom in to each razor mark and roll the edit by a frame or two very easily, and it won’t hurt the clips. It’s faster to work with, but you’ll need to color correct a file copy, which can be time-consuming. Try those tutorials; start with the SpeedGrade shot match or match frame. With CC you can go right to it if you already have File 1 and File 2: just use the LOOKS file in Premiere on track 2 with File 2 instead of rendering out File 3. This cuts out a step of processing, but remember, the operation still has to be performed on each clip where it is used, so you’ll want to check each one and make sure it has the effect applied after you Paste Attributes.

  • Ht Davis

    September 27, 2015 at 3:22 am in reply to: Importing Content from Mac to Win Machine

    What version of Windows? Some can read exFAT alright and some can’t.

    Also, if you’re importing ProRes to Windows editors, you’re SOL. ProRes is an Apple codec. If you used the latest ProRes, just convert it on a Mac to a compatible format. Old ProRes can load on Windows through ffmpeg, but not well, and you’ll have to do some hackery to get it into Adobe Premiere.

    H.264 is low bit-rate; even Blu-ray is not more than 20-30 Mbps at the most extreme. Hell, DVD, even in MPEG-2, is only 8.5 Mbps for video.
    ProRes is upwards of 50 Mbps, so of course it’s “choppy”. It’s barely compressed.
    My advice to you is to convert the file to something Windows-compatible. AVI, WMV, MP4, MPG, and H.264 are the types you want to look at. If you want less compression, try AVC-Intra at 100 for quality. Then create a sequence from just that file, render its previews (set to I-frame-only MPEG), and drop that sequence where you need the video. It allows fast preview playback, and you can treat that sequence as if it were a subclip, easily making dupes with different in/out points for different subclips.
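If ffmpeg is on hand, one hedged way to make that Windows-friendly copy looks like this; the codec and quality choices are my assumptions, not the only valid ones:

```shell
# Transcode to an H.264 MP4 that Windows editors can read. crf 18 is
# visually near-lossless; yuv420p keeps the pixel format widely compatible;
# the audio is re-encoded to AAC alongside it.
to_windows_mp4() {
  ffmpeg -i "$1" -c:v libx264 -crf 18 -pix_fmt yuv420p -c:a aac "$2"
}
# usage: to_windows_mp4 clip_prores.mov clip.mp4
```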

    A “general error” is usually a video format error. If your system throws up a box (a Windows system error at the top of it) with a -1 somewhere in it, it’s a drive error or failure. Remember that, while you can read from an SSD at 512 Mbit/s, your video processor can only handle about 40 Mbit/s max, and less if there’s audio.

    If you can get your ProRes loaded (you probably never will), you can speed up playback by rendering previews of it in its own sequence (outside of your main one), which you can then place anywhere you need. Keep the previews to I-frame MPEG for speed, and they should look pretty good coming from ProRes.

  • Ht Davis

    September 27, 2015 at 3:05 am in reply to: Mixing interlaced footage in progressive timeline

    I’ve run across situations where you have several shooters with vastly different equipment and very different footage. While Premiere does allow me to control a lot, I cannot tell them all that they need to buy new equipment.

    While testing is fun, it’s a huge time-suck. 50i and 25p have the same general frame rate but different field dominance: 50i has a rolling value, while 25p has a simultaneous one. Some say deinterlacing “drops half your resolution”. Technically (by the numbers) it does. But worse, interlacing drops half the photographic information. How do we get it back?

    I’ve seen some AVISYNTH tutorials that show how you can get great results without losing your frame rate. Since 50i is 25 frames per second, just as 25p is also 50 fields per second, only rolled differently, you can surmise that a progressive rate the two agree on is 25 frames per second. Using this as your new frame rate, you have some options.

    First, you can interpret the 50i as 25p. Unfortunately, this gets a little funky with some cameras and it pixelates badly. The alternative is to “de-interlace”, which “drops half your resolution” (correction: it removes intermittent areas of the photograph and blurs things a bit). Both of these really stink. Some plug-ins actually allow you to pass video in and run it through to get progressive footage. Only a few are any good, and I’d stick close to Red Giant, if you can fork over a few hundred $$$.

    Another option is to pass it through programs like AVISYNTH that RECURSIVELY blend the fields in a few different mashups, compare them to the original fields/frames, and “find” a *close* rendition of the frame in progressive fashion. They also let you remove excess frames so the video matches your desired frame rate. This is your most preferable option; it adds an extra step, but it works.

    The last option is to prep your video by placing each clip in its own sequence and exporting both to the same frame rate. In Premiere or AME you can also render with maximum quality and bit depth, turn on frame blending, and export to 25p. This tends to maintain your resolution, because frame blending on a FIELD-ROLLED video actually does FIELD blending to create an intermediate frame before comparing fields and building the new frames. It’s not as recursive as AVISYNTH, but it works alright.

    Yes, you can test your options and waste hours doing it. Only the last two options here are worth using if you don’t have a lot of time to learn the niceties of the plug-ins. The last one is best if “good enough” fits what you need, and the AVISYNTH route is alright if you have a front-end for it. Personally I like using a product based on it:
    JES Movie Tools v1.0

    for the Mac
  • TrueType fonts are usually more compatible. OpenType fonts can have issues on occasion, depending on the encoding program used. Conversion usually fixes any issues and conforms a font to the standard space.

    Also, make certain you installed the fonts correctly; you may want to check your registry for them. Google it. If you can’t find the fonts there, try reinstalling them.

  • Encore tends to assume a rate of 29.97 fps. Change that in your encoding settings for the project, then try again; it should fit the new frame rate. Standard encodes will output at 29.97 (30i: 30 full frames with 2 fields interlacing at 60 Hz). If you output 23.xx, you need to tell Encore so that the audio is encoded at that frame rate as well (audio is expressed on its own in samples, but when synced with video it conforms to the same value as the video; if you play 48 kHz audio at a frame rate of 29.97 against video at 23.xx, the two will not sync).

    To get audio and video in sync, set your project encode settings in Encore. Then the audio will be encoded at the same frame rate as the video and it will sync.

    Also, I’ve had projects fail to properly associate transcodes when the audio in the Premiere timeline was longer than the video, and the transcoded video lacked the last few frames where the audio stretched. Placing an empty video clip in that space in Premiere fixed the missing frames and output the correct transcode duration. I prefer to transcode my own in Compressor, so I also add the audio into the Encore timeline as generic AC3. It certainly saves time when I can use 4 machines to encode several transcodes and then compare them.
