Forum Replies Created
January 21, 2022 at 6:38 pm
You should check the default FourCC decoding. I believe both Windows and Premiere read the same FourCC tag and use it to decide which video decoder to load; sometimes it's missing, and sometimes it's the wrong one. I believe Premiere uses VFW-compatible codecs. The FourCC codec tag is read as metadata and then compared against a list in the Windows registry to load a specific dynamic-link library. For example: the FourCC code for Lagarith is VIDC.LAGS, or just “LAGS” if you read the video with a FourCC editor. That registry entry points to lagarith.dll.
All these values are stored in a registry key.
So what you should do is download a FourCC editor to view the FourCC tag of your video, then compare it against the registry entries on both the broken and the working computer and see if they are identical.
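If you'd rather not install a FourCC editor, the tag can usually be pulled out with a few lines of code. This is a quick-and-dirty sketch, not a full RIFF parser: in a typical AVI, the video stream header ('strh' chunk) begins with the stream type fourcc 'vids', immediately followed by the four-byte handler code, so scanning for b"vids" and reading the next four bytes generally recovers the handler FourCC.

```python
def read_avi_fourcc(path):
    """Return the video handler FourCC from an AVI file, or None.

    Quick-and-dirty: scans for the 'vids' stream type inside the
    'strh' chunk; the 4-byte handler code follows immediately.
    Not a full RIFF parser.
    """
    with open(path, "rb") as f:
        data = f.read(64 * 1024)  # the headers live near the start of the file
    pos = data.find(b"vids")
    if pos == -1:
        return None
    return data[pos + 4:pos + 8].decode("ascii", errors="replace")
```

A Lagarith AVI, for example, should return "LAGS", which you can then look for as a VIDC.LAGS entry in the registry.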
January 20, 2022 at 1:11 am
Walter Soyka, January 14, 2022 at 2:35 pm:
“I tend to use UnMult from Red Giant, but you can also use native tools.
Apply the Channel Combiner effect. Set its “From” property to “Max RGB” and its “To” property to “Alpha.” This will take the brightest of the three color channels and use its value for the alpha channel.
Then apply the Remove Color Matting effect (keep the default background color of black). This will eliminate black fringing in partially transparent areas.”
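Numerically, those two native steps come down to the per-pixel arithmetic below. This is a hedged Python sketch of the idea, not the actual effect code; the function name is mine.

```python
def unmult(rgb):
    """Derive alpha as max(R, G, B) (Channel Combiner: Max RGB -> Alpha),
    then divide the color by that alpha (what removing black matting does),
    returning (straight_rgb, alpha)."""
    alpha = max(rgb)
    if alpha == 0.0:
        return (0.0, 0.0, 0.0), 0.0  # pure black becomes fully transparent
    straight = tuple(c / alpha for c in rgb)
    return straight, alpha
```

For example, a half-bright orange pixel (0.5, 0.25, 0.0) shot over black becomes full-strength (1.0, 0.5, 0.0) at 50% alpha, so it composites over other backgrounds without dark fringing.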
January 13, 2022 at 1:10 pm
“In Audition, go to Edit > Preferences > Multitrack and change the Default Panning Mode to Left/Right Cut (Logarithmic) and export the audio again from Audition.”
“In Premiere, if you work with several mono tracks and pan them left/right, the pan law will kick in; there is no easy way to disable it. A workaround is to use the Audio Track Mixer in Premiere Pro and raise each fader to compensate until you get the output levels you need, usually 3 dB.”
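The 3 dB figure comes from the constant-power pan law: at full pan each channel ends up roughly 3 dB down, and raising the fader by the same amount cancels it out. A small sketch of the arithmetic (plain dB-to-linear conversion in Python):

```python
def db_to_linear(db):
    """Convert a decibel gain to a linear amplitude factor."""
    return 10 ** (db / 20)

# A -3 dB pan-law cut scales amplitude to about 0.708;
# a +3 dB fader boost (about 1.413) restores unity gain.
cut = db_to_linear(-3.0)
boost = db_to_linear(+3.0)
```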
October 13, 2021 at 1:00 pm
1. Hue/Saturation effect with Saturation at zero.
2. Shift Channels effect to use the luma as the alpha (luma matte).
3. Levels effect on the alpha to push contrast.
4. Arithmetic effect for the slice.
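Per pixel, the recipe above boils down to something like the sketch below (a hedged Python illustration, not effect code; the lo/hi thresholds are hypothetical values you would dial in with the Levels effect):

```python
def luma_matte(rgb, lo=0.2, hi=0.8):
    """Build an alpha value from luma: compute Rec.709 luma,
    push contrast between lo and hi (the Levels step), then
    clamp to 0..1 (the slice step)."""
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    a = (luma - lo) / (hi - lo)       # contrast push
    return min(1.0, max(0.0, a))      # clamp / slice
```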
September 24, 2021 at 10:33 pm
Right-click and replace it with any video or image; it doesn’t really matter which, since the actual control is in the matte opacity anyway.
September 24, 2021 at 1:21 pm
1. On top of what Mark said about the vectorscope line, you may need to use secondaries.
2. Or try Hue vs Sat, then Hue vs Lum on top (it’s a delicate balance).
You’ll need to have both the HSL and YUV scopes open, since you’re looking at chroma directly.
3. Another option you can try is chroma compression using luma mattes; here’s an example:
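The attached example isn't reproduced here, but the idea of chroma compression with a luma matte can be sketched per pixel. This is a hedged Python illustration with a simplified weighting of my own, not a grading-tool implementation:

```python
def compress_chroma(rgb, strength=0.5):
    """Desaturate a pixel toward its own luma (gray), weighted by a
    luma matte, so brighter pixels lose more chroma than darker ones."""
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec.709 luma
    k = strength * luma                           # luma matte as the weight
    return tuple(c + (luma - c) * k for c in (r, g, b))
```

Neutral gray passes through unchanged, while a bright saturated pixel is pulled toward gray; that is the compression the scopes would show as the chroma trace tightening.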
September 13, 2021 at 9:38 pm
1. Record both mics with pink noise at the angle they were originally recorded, then use an audio program to visually compare the frequency responses.
You may have to match the reverb or remove it.
The cheaper mic will probably have some self-noise, so you can remove it or layer in room tone.
2. iZotope RX’s EQ Match.
September 3, 2021 at 1:48 pm
There is really not much that’s better. Topaz’s enhancement was as good as, or better than, Algolith. They both used motion estimation.
Some other possibilities:
a. Teranex hardware
b. Alchemist OD software
c. The QTGMC plugin for AviSynth (free, but a complicated workflow)
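For option (c), a minimal AviSynth script looks something like this (a sketch; QTGMC and its dependencies must be installed, the filename is a placeholder, and the preset is just one reasonable choice):

```
AVISource("input.avi")     # load the interlaced source
ConvertToYV12()            # QTGMC expects a planar YUV format
QTGMC(Preset="Slower")     # motion-compensated deinterlacing
```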
September 3, 2021 at 1:47 pm
1. If you record both mics at the same angle (as originally shot) with pink noise, then do a frequency analysis in Audition/RX, you can match them with parametric equalization across all frequencies.
2. iZotope’s match EQ plugin.
3. By ear, with EQ.
Note: you may have to add/remove reverb from one or the other mic to more closely match them, and also add room tone to blend them.
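The pink-noise comparison in step 1 can be sketched numerically: average each mic's magnitude spectrum per band, and the correction you would recreate with parametric EQ is just the per-band difference in dB. A hedged Python sketch (the band values in the usage note are illustrative):

```python
import math

def match_eq_db(reference_mag, target_mag):
    """Per-band correction in dB that makes the target mic match the
    reference. Both inputs are lists of linear magnitudes per frequency
    band, e.g. averaged from pink-noise recordings of each mic."""
    return [20 * math.log10(r / t) for r, t in zip(reference_mag, target_mag)]
```

If the cheap mic is half the amplitude at one band (0.5 vs 1.0), the correction comes out to about +6.02 dB there, and 0 dB where the mics already agree.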
August 20, 2021 at 12:53 pm
Google Drive is not like a traditional storage device. Besides the bad latency you would get when scrubbing randomly (unless render-cached), you would run into a maximum bandwidth limit. Not in the traditional sense of bandwidth, but if you read more than x TB per month, they can freeze your account. For something small it might be a fun experiment, but I wouldn’t use it professionally as online I/O storage.