Forum Replies Created

  • Phil In dk

    May 26, 2008 at 9:00 am in reply to: Shared storage suggestions

If you’re going to use a gigabit NAS, you can also use Apace.
If you want something you can plug in, turn on and be ready to go, then Apace is great. We have used it for DV, DVCPRO HD, ProRes and even 2K RED proxies, and we’ve had no problems at all with it for the last 3 years, running up to 6 systems on a V1000 1.5TB. They are also really well priced.

  • Phil In dk

    January 25, 2008 at 11:08 am in reply to: YUV > RGB > YUV codec question

    ‘So even though AE may render back to a native Y’CrCb, it may not hold some of the original values that the raw video file had in the beginning if the color-space values are beyond what RGB can process.’

As I understand it, the values in a full-range RGB colorspace run from 0 to 255 for R, G and B. In a legal Rec. 709 YUV signal the values run from 16 to 235 for the Y, Cb and Cr channels; everything above and below is ‘super’ (superwhite/superblack), but can still be mapped to RGB values. RGB and YUV just describe the channels present in the colorspace, and both handle the same range of values at the same bit depth.
The problem is how you ‘stretch’ the blacks from 16 down to zero and the whites from 235 up to 255 – that stretch is the shift weevie is seeing in RGB, I believe. Most software has parameters for mapping the values correctly; the question is whether it was done right or wrong in the first place.
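To make the stretch concrete, here is a minimal sketch of the video-range to full-range mapping for one 8-bit luma sample (the function names are mine, just for illustration). Note how the naive stretch clips anything outside 16–235, which is exactly where the super values get lost:

```python
def video_to_full(y):
    """Stretch 8-bit video-range luma (16-235) onto full range (0-255)."""
    scaled = (y - 16) * 255.0 / (235 - 16)
    # clipping here is what throws away superblack/superwhite
    return max(0, min(255, round(scaled)))

def full_to_video(v):
    """Inverse mapping: full range (0-255) back into video range (16-235)."""
    return round(v * (235 - 16) / 255.0 + 16)

print(video_to_full(16))   # black maps to 0
print(video_to_full(235))  # white maps to 255
print(video_to_full(240))  # superwhite: clipped to 255, info gone
```

If the software applies this mapping on the way in but not on the way out (or vice versa), you get exactly the kind of level shift described above.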

    ‘Once you hit 255, there’s no ceiling above that, even if you render in 32-bit float. RGB is RGB. No more, sometimes less.’

You can have 10-bit log RGB files, which spread their 1024 code values logarithmically – covering far more dynamic range than 8-bit or 10-bit linear formats, which spend the same code values evenly.

If you go from 8-bit YUV to RGB in AE, what you want to do is raise the project bit depth in AE to 16-bit or 32-bit float and then render out to a 10-bit YUV codec. As far as I understand, this is the best way to keep the super info, and in 32-bit float you can pull the values wherever you want.

  • Phil In dk

    November 14, 2007 at 8:40 am in reply to: Quick Question

    Hi Arnie,

I don’t know how much more specific I can be!

I’m capturing HDCAM SR with a Kona 3 card, so I’ve got component RGB, 10-bit and uncompressed. I’m not talking about other colorspaces, just RGB 4:4:4. I’m not worried about storage or system performance; at the moment I’ve got 4 x 4Gbit FC ports striped across 32 disks, and I’ll buy the necessary host hardware according to whichever platform and NLE supports this workflow.

So – what editing software do I choose? If I bring this capture into FCP, are there any stages that process in 8 bits? I want all 10 bits in and out, and I don’t want some process I don’t know about stealing my bit depth. Mainly I need to know if I can rescale, crop, position and output. Plugins or native editors that work at 10 bits would also be a plus.

Not all systems work the same way: some rely on GPU processing at 8 or 10 bits, some on dedicated hardware, some on software-only processing. What will FCP do with the specific format I’m describing and a Kona 3 card? It’s a simple question – either I can or can’t keep all my data in RGB 4:4:4. There are probably some things I can do and some I can’t – I want to know which, without losing quality.

  • Phil In dk

    November 13, 2007 at 1:40 pm in reply to: Quick Question

    Hi Guys,
Thanks for the responses. I am doing HDCAM SR. I just needed to ask whether the software and the editing tools only work in 8 bits – I can see they do. So there’s no easy way around this for SDI RGB 4:4:4. Arrays etc. I already have.

I suppose it was a badly phrased question. More specifically: if I do a capture/print without touching the editing tools, can I expect FCP to keep everything at 10 bits?

Are there any editors or plugins that work at 10 bits?
How does FCP handle processing such as rescaling, cropping and positioning – does the Kona do that, or is it a software thing within FCP?

So – what’s in 8 bits and what’s in 10 bits?

    Thanks.

  • Phil In dk

    November 8, 2005 at 1:57 pm in reply to: SAN X16 iSCSI performance

I forgot to add that the iSCSI is running from the Microsoft initiator on a 1 Gbit port.

  • Phil In dk

    November 8, 2005 at 1:42 pm in reply to: SAN X16 iSCSI performance

    Hi,

The problem was with simultaneous capture on two clients, which turned out to be due to the FireWire and gigabit NIC sharing bandwidth on the same PCI bus segment of one machine (66/100/133 MHz).

The ONLY way I could get this to work on an HP xw8200 was to install an Adaptec 1394 card on the 133 MHz bus.
All attempts at capturing alone on the 66/100 MHz or onboard ports, or via a breakout box on USB, failed. So it was not just a question of separating the cards onto their own segments.

I can now capture without dropped frames with the NIC on the 100 MHz bus and FireWire on the 133 MHz bus.

The X16 does about 30 MB/s random writes at 512 KB and 1 MB block sizes; below that it does about 20 MB/s.

    Benchmark was with ATTO.

    Thanks to Loring and Nate at SNS for the help.

  • Phil In dk

    October 3, 2005 at 6:55 am in reply to: SATA and Render Time

With a RAID you need to do it in hardware.
Software RAIDs won’t really help your problem much, as they need a lot of CPU cycles to manage the array
(cycles you need for your render).

Hardware RAIDs are best because they handle the administration themselves. Look at the different controllers out there and you’ll see they come in various qualities and prices – you kind of get what you pay for. A good controller will give you the disk performance you need without eating all your CPU power.

How big ‘uncompressed’ is also depends on the 4:2:2 acquisition codec. Check

    https://www.lurkertech.com/lg/big/howbig.html

In theory any SATA 150 drive should be able to spit out at least 3 uncompressed layers, but you’ll never get that performance in practice because of lots of other factors.
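Rough back-of-envelope arithmetic for the claim above, assuming 8-bit uncompressed 4:2:2 at NTSC SD resolution (720x486 at 29.97 fps – an assumption for illustration; check the lurkertech link for other formats):

```python
# 8-bit 4:2:2 stores 2 bytes per pixel on average
# (Y for every pixel, Cb and Cr shared between pixel pairs)
width, height = 720, 486          # NTSC SD frame, assumed for this example
fps = 30000 / 1001                # ~29.97 fps
bytes_per_frame = width * height * 2
mb_per_sec = bytes_per_frame * fps / 1_000_000

print(f"one uncompressed layer: ~{mb_per_sec:.1f} MB/s")
print(f"three layers: ~{3 * mb_per_sec:.1f} MB/s")
```

That comes to roughly 21 MB/s per layer, so three layers need around 63 MB/s sustained – well under the SATA 150 interface’s 150 MB/s burst rating on paper, but right at the edge of what a single drive of that era could actually sustain, which is why the real-world factors bite.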

  • Phil In dk

    September 30, 2005 at 12:29 pm in reply to: Music Video Filming effects

    Field dub it. Get it to look like 24P.

  • Phil In dk

September 30, 2005 at 12:10 pm in reply to: Horizontal Shift after render

I don’t believe the AE render engine does this, so it must be the BM codec.
Have you tried rendering to another codec and viewing it on VGA?

I remember a similar problem a long time ago with FAST hardware, which we solved by changing values in the hardware’s settings files. If you’re only rendering to this codec and monitoring through the DeckLink, it might be possible to dig into a settings file for the DeckLink, or AE’s settings for it.

    On PC these are *.ini files – look for a #render#shift parameter.

    Just a thought.
