Forum Replies Created

  • For anyone else who is having the same problem, I finally found the answer I was looking for on the Adobe support forum.

    https://community.adobe.com/t5/premiere-pro/faq-changes-to-red-raw-default-color-space-in-premiere-pro/m-p/11920653

    It seems there is no easy way right now to bulk reset all your imported clips to their original colour space and gamma curve – something to do with Premiere needing to use the REDWideGamutRGB space to retain all colour data (see the Adobe post for more info).

    The best workaround seems to be to use one of RED’s IPP2 Output Preset LUTs (https://www.red.com/download/ipp2-output-presets) in Lumetri to convert the footage from Log3G10 to Rec 709, save this as a preset and then apply it to all your imported R3Ds as a source effect – https://helpx.adobe.com/premiere-pro/using/master-clip-effects.html

    This isn’t perfect, as the colour space and contrast curve will differ from what was shot in camera, but as long as your footage was well exposed and balanced it will at least return your clips to a look that is suitable for final output without too much tweaking.
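    For anyone curious what the LUT is actually doing under the hood, here is a rough sketch of the Log3G10 curve it decodes. The constants are the ones RED publishes for Log3G10 (v2) – treat them as assumptions and check RED’s white paper if your pipeline depends on exact values; a real IPP2 LUT also handles the gamut conversion and tone mapping, which this sketch does not.

    ```python
    import math

    # Log3G10 (v2) constants as published by RED; verify against RED's
    # IPP2 white paper before relying on them in a real pipeline.
    A, B, C, G = 0.224282, 155.975327, 0.01, 15.1927

    def linear_to_log3g10(x: float) -> float:
        """Encode scene-linear light with the Log3G10 curve."""
        x = x + C
        if x < 0.0:
            return x * G  # linear extension below zero
        return A * math.log10(x * B + 1.0)

    def log3g10_to_linear(y: float) -> float:
        """Decode a Log3G10-encoded value back to scene-linear light."""
        if y < 0.0:
            return y / G - C
        return (10.0 ** (y / A) - 1.0) / B - C

    # Middle grey (0.18 linear) encodes to roughly one third, which is
    # why log footage looks flat until a conversion LUT is applied.
    encoded = linear_to_log3g10(0.18)
    decoded = log3g10_to_linear(encoded)
    ```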

  • James Wallace

    September 3, 2017 at 3:41 am in reply to: filming computer screen issue

    You are correct in assuming that this is a moiré issue rather than a 50/60Hz banding issue, which can be solved as suggested above.

    Moiré, however, is caused by line skipping on the sensor. The only way to completely eradicate moiré in this situation is to use a camera that uses every single pixel to capture video. This basically means using a dedicated digital cinema/video camera which natively captures every pixel, or a DSLR or Micro Four Thirds camera in crop mode (available on the GH4/GH5, or on Canon DSLRs using Magic Lantern; not sure about others).

    The simpler way to solve the issue is to shoot the computer screen ever so slightly soft so the moiré isn’t visible, then apply some sharpening in post to recover the edges. Not perfect, but a reasonable compromise.
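    The soften-then-sharpen idea above can be sketched numerically. This is a minimal 1D illustration, not any NLE’s actual sharpen filter: the blur stands in for shooting slightly soft, and the unsharp mask (original plus a scaled difference from a blurred copy) recovers edge contrast in post.

    ```python
    def box_blur(signal, radius=1):
        """Simple box blur; stands in for shooting ever so slightly soft."""
        n = len(signal)
        out = []
        for i in range(n):
            window = signal[max(0, i - radius):min(n, i + radius + 1)]
            out.append(sum(window) / len(window))
        return out

    def unsharp_mask(signal, amount=1.0, radius=1):
        """Recover edge contrast: original + amount * (original - blurred)."""
        blurred = box_blur(signal, radius)
        return [s + amount * (s - b) for s, b in zip(signal, blurred)]

    # A hard edge, softened as if shot out of focus, then sharpened in post.
    edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
    soft = box_blur(edge)
    sharp = unsharp_mask(soft, amount=1.5)
    ```

    The sharpened result overshoots slightly on either side of the edge, which is exactly the added edge contrast the eye reads as sharpness.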

  • James Wallace

    September 3, 2017 at 3:21 am in reply to: Tall Grass and Compositing. Is it possible?

    Hi

    Without seeing your footage it’s hard to say precisely, as every challenging composite needs a unique approach. However, a few things come to mind – forgive me if you’ve already attempted these.

    Have you tried duplicating your footage layer and pushing the contrast or saturation on the duplicate to the point where you can achieve a reasonable luma or chroma key, then applying Refine Matte (or similar) to smooth the inevitably hard edges, and using the resulting composite as an alpha matte on the original footage? If this doesn’t work, you might get better results by separating out the RGB channels and using the same technique on the single colour channel that best separates the grass from your subjects.

    Alternatively, can you cheat it? Is there a part of the image with better luma or chroma separation from the grass, such as against a bright sky, that you can key? With a feathered mask, some colour correction and a little distortion, you could use a duplicate of this area over the soldiers’ legs to hide any issues with your composite.

    Also, where tools such as Keylight fail because of similar colour or luminosity values, going old school and using multiple instances of Linear Color Key with extremely low threshold and softness settings – so you are removing very narrow bands of colour with each instance – can give more accurate results. As with the first technique, this will leave you with blocky edges that will need fixing with some kind of matte/edge refine.
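    The stacked narrow-band approach reduces to something like the sketch below. The function names are illustrative, not Linear Color Key’s internals: each pass knocks out pixels within a tiny colour distance of one key colour, and the resulting mattes multiply together, just as stacked effect instances do.

    ```python
    def linear_colour_key(pixels, key, threshold=0.02):
        """One narrow-band key: 0.0 (transparent) where a pixel is within
        `threshold` Euclidean RGB distance of the key colour, else 1.0."""
        def dist(p, q):
            return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        return [0.0 if dist(p, key) <= threshold else 1.0 for p in pixels]

    def stack_keys(pixels, keys, threshold=0.02):
        """Multiply mattes from several narrow keys, like stacking
        multiple instances of the effect with very low threshold."""
        matte = [1.0] * len(pixels)
        for key in keys:
            passes = linear_colour_key(pixels, key, threshold)
            matte = [m * k for m, k in zip(matte, passes)]
        return matte

    # Two near-identical grass greens removed band by band; the warmer
    # subject pixel survives every pass.
    pixels = [(0.20, 0.45, 0.15), (0.21, 0.46, 0.16), (0.60, 0.50, 0.40)]
    keys = [(0.20, 0.45, 0.15), (0.21, 0.46, 0.16)]
    matte = stack_keys(pixels, keys)
    ```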

    These are all techniques that have worked for me in the past when I have been struggling to separate subjects and roto is not really an option.

    Using multiple layers with individual chroma and luma keys to isolate different areas, then overlaying each one with blend modes such as ‘darken’ or ‘soft light’, is a technique I’ve had success with specifically on shots with long grass.
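    For reference, the per-channel maths behind those two blend modes looks roughly like this. ‘Darken’ is simply the minimum of the two values; the ‘soft light’ formula below is the common W3C compositing definition – Adobe apps use a near-identical curve, but implementations do vary slightly between packages.

    ```python
    import math

    def darken(base, blend):
        """'Darken' keeps the darker of the two values per channel."""
        return min(base, blend)

    def soft_light(base, blend):
        """'Soft light' per the W3C compositing spec: a gentle darken
        for blend < 0.5, a gentle lighten for blend > 0.5."""
        if blend <= 0.5:
            return base - (1 - 2 * blend) * base * (1 - base)
        if base <= 0.25:
            d = ((16 * base - 12) * base + 4) * base
        else:
            d = math.sqrt(base)
        return base + (2 * blend - 1) * (d - base)
    ```

    A 50% grey blend layer leaves the base untouched under soft light, which is why it is so forgiving for layering partial keys on top of one another.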

    Hope there’s something there that might offer some insight.

  • James Wallace

    August 7, 2017 at 1:47 pm in reply to: Premiere pro and MXF video files

    The MXFs are straight out of a Canon C300, which I think are AVC. Here are Premiere’s property details.

    I tried checking activity monitor during playback on the problematic machine and this is what I found:

    Premiere’s RAM usage stays at a consistent 9.06GB whilst playing back the MXF timeline. The software settings should allow Premiere to use up to 18GB of the machine’s 24GB if necessary, so I’m nowhere near maxing out the RAM.

    CPU usage jumps to around 80% for the first few seconds whilst playback is smooth, then settles to around 23–25% when the footage is choppy and unwatchable, so again I’m not maxing out the CPU.

    The same machine under the same conditions plays back a RED RAW footage timeline smoothly, with a consistent 40–50% CPU usage and around 4.5GB RAM usage.

    Thoughts?

  • James Wallace

    August 7, 2017 at 11:42 am in reply to: Premiere pro and MXF video files

    Apologies for resurrecting an old post, but I’ve been having a similar issue playing back MXFs on one of two similarly specced Mac Pros using the latest version of Premiere Pro 2017.

    Mac Pro 1:
    Mid 2010
    2 x quad-core 2.4GHz Xeon
    22GB RAM
    GTX 680 2GB

    Mac Pro 2:
    Early 2009
    2 x quad-core 2.26GHz Xeon
    24GB RAM
    GTX 660 2GB

    Mac Pro 1 plays MXF files perfectly smoothly on the timeline. The same files in the same project on Mac Pro 2 play for around 3 seconds before becoming choppy, completely unwatchable and certainly not editable.

    Although Mac Pro 1 is newer with a slightly faster processor, it has less RAM. The only difference I can see that might matter is that machine 1 is using a GTX 680 while machine 2 is only using a GTX 660. Could this really make the difference between perfect playback and none at all? Or could something else be going on here?

    Both machines are sourcing the footage from a very fast networked server, so disc speed should not be an issue, and both are capable of playing back timelines with multiple layers of RED RAW footage, so I’m puzzled by the massive difference in MXF performance.

    Any ideas folks?

    Cheers

  • James Wallace

    January 27, 2017 at 10:02 pm in reply to: 360 VR Compositing problem. (AE & Mettle Skybox)

    Hi Mike,

    Thanks for your considered response.

    I’m working in 360 comps, using 360 stills as the backdrop onto which I am animating various vector graphics, text, etc., and outputting to 360 VR video. The assets I am struggling with need to stretch smoothly from horizon to horizon, so simply angling them is not a viable solution, and working on individual cube faces leaves me with the same problem as painting on the original equirectangular images did, i.e. the perspective just does not look quite right when composited and viewed in a headset.

    I did take a look at Canvas 360, but I didn’t see any obvious way it might solve this particular issue, and my company is already well down a development pipeline with Mettle and would be reluctant to reverse course now.

    After another long morning of headbutting a brick wall I finally gave up. I think this genuinely may be a case of the plugins not yet having caught up with the needs of users of an emerging technology, so I decided I needed to think outside the After Effects box and move to software more suited to a full 3D environment – in my case Blender, but I think it would work in most 3D suites.

    My workaround was this: using my original vector graphics as a guide, I recreated the necessary map elements in Blender with Bézier curves and used them to create meshes. Using the original 360 image for both reference and world lighting, I then shrinkwrapped these meshes to the top half of a sphere. By manipulating the scale of this sphere in the Z (Y) axis I was able to roughly match the horizon fall-off in the original image and get my map to match the landscape with reasonable accuracy and good perspective. I then rendered each individual map element as an equirectangular alpha, which I can lay on top of my 360 image in AE and animate, tint, distort and generally faff with to my heart’s content without worrying about rasterising, aliasing or anything else.
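    The projection at the heart of that workaround can be sketched as follows. This is the standard direction-to-equirectangular mapping, not Blender’s renderer; the `z_scale` parameter is my shorthand for the sphere-squashing trick, which pushes geometry toward the horizon in the final frame.

    ```python
    import math

    def direction_to_equirect(x, y, z, z_scale=1.0):
        """Map a 3D direction to equirectangular (u, v) in [0, 1].
        z_scale < 1 flattens the vertical axis before projecting,
        pulling geometry down toward the horizon line (v = 0.5)."""
        z *= z_scale
        length = math.sqrt(x * x + y * y + z * z)
        x, y, z = x / length, y / length, z / length
        lon = math.atan2(y, x)   # -pi..pi around the horizon
        lat = math.asin(z)       # -pi/2..pi/2 up from the horizon
        u = lon / (2 * math.pi) + 0.5
        v = 0.5 - lat / math.pi  # v = 0.5 is the horizon row
        return u, v

    # A point straight ahead lands dead centre of the equirectangular frame.
    centre = direction_to_equirect(1.0, 0.0, 0.0)

    # The same elevated point, with and without the squash: squashing the
    # sphere moves it closer to the horizon row in the output.
    squashed = direction_to_equirect(1.0, 0.0, 1.0, z_scale=0.5)
    unsquashed = direction_to_equirect(1.0, 0.0, 1.0, z_scale=1.0)
    ```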

    This is definitely a workaround and far from an ideal workflow (take note, plugin writers), but considering I’m working on a product that is due to go to market within a month, I’m glad to have found any solution at all – hence why I reiterate it here for any future poor souls who run into similar issues.

    I’m still very open to any suggestions that can keep my workflow entirely within AE so please continue to comment.
