Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

Activity › Forums › Blackmagic Design › Decklink 10-bit workflow and MPEG2 question

  • Mactrix

    March 7, 2006 at 9:05 am

    Again, I am not talking about film. That is a totally different
    story.

    And the article refers to recording to tape. Tape is limited
    to 8-bit with DVCPRO HD. We captured live and tapeless
    from the CCD straight over the 10-bit HD-SDI output.

    And no, it's not a codec issue. The Mac has handled true 10-bit
    for years. I can produce banding artefacts in seconds
    with CG, but not with captured video. This has nothing
    to do with internal rendering inside applications …

    And there are 12-bit projectors out there for 40,000 USD.
    We've tested some in a digital cinema … a DVI connection
    at 8-bit showed no difference. It is marketing in most
    cases. Are you running your OS at 10-bit? Does your graphics
    card support 10-bit? Are you using the EIZO to proof it?
    Can your eyes resolve the 1024 steps? Are you using a
    professional CRT, and is it class 2 or class 1?

    It sounds like too much theory and brochure claims. It makes
    no sense to discuss this in a forum without running tests
    on your equipment together …
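The banding claim above can be sketched numerically: a slow synthetic gradient only spans a handful of code values at 8-bit, so clean CG ramps show visible steps, while sensor noise in captured video tends to dither those steps away. A minimal sketch with hypothetical ramp values, not taken from the thread:

```python
# Hypothetical sketch: count how many distinct code values a slow CG
# gradient occupies at 8-bit versus 10-bit. Few codes => visible bands.
def quantize(value, bits):
    """Quantize a 0.0-1.0 signal value to an integer code at a bit depth."""
    levels = (1 << bits) - 1
    return round(value * levels)

# A gentle ramp across 1920 pixels spanning only 10% of the signal
# range, e.g. a dark sky gradient -- a classic banding case.
width = 1920
ramp = [0.40 + 0.10 * x / (width - 1) for x in range(width)]

codes_8 = {quantize(v, 8) for v in ramp}    # distinct 8-bit steps
codes_10 = {quantize(v, 10) for v in ramp}  # distinct 10-bit steps

print(len(codes_8), len(codes_10))  # roughly 4x more steps at 10-bit
```

The 8-bit ramp lands on only a few dozen codes, so each band is tens of pixels wide; at 10-bit there are about four times as many steps, which is exactly the difference the 2 extra bits buy.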

  • Bj Ahlen

    March 7, 2006 at 4:37 pm

    You’re repeating my statements, so perhaps it’s time to take a pause.

    I know well that FCP has been able to ingest 10-bit video since FCP 4, although FCP plug-ins even today can only return 8-bit RGB video. That also loses bits, as the original footage is converted from YUV to RGB and back to YUV. With the narrow conversion used [8-bit to 8-bit, as opposed to 8-bit to 10-bit] you would expect to lose 2 bits, and this could be another reason you're not seeing a difference.

    But I wasn’t referring to FCP’s ability or lack of it. I was referring to the incompatibilities between Apple’s own codecs and the codecs of the companies that make the SDI I/O boards. The forums are full of postings from those who had problems, and I thought this could be the source of your problems (or rather lack thereof, because you are satisfied with what you have).

    Perhaps it isn't common knowledge that the OS doesn't get much involved in inputting or outputting video for monitoring on a 10-bit card; it's just data shuffling. I don't use my graphics card for monitoring; I use a BMD purpose-built card that is not OS-limited to any bit depth.

    You say you’re not talking about film, that film is another story. So are you saying that if a digital video file was recorded from a film transfer, it can have 10-bits or more of genuine latitude, but if the video file was captured from a broadcast video camera it is effectively limited to 8-bit, i.e. the output is just padded to 10-bit, 12-bit etc.?

  • Mactrix

    March 7, 2006 at 5:05 pm

    Yes, you're right that most filters in FCP were limited to 8-bit.
    In version 5 the color correction filters were enhanced to 32-bit.
    Anyway, I did all my tests in After Effects and Shake; FCP was
    just for I/O … 🙂

    On mactrix.org you can get an idea of the HD video. The whole
    post-processing was done in AE and we tried to use a difference
    key for the masking jobs … (better turn off the music, the German
    rap is horrible).

    We also shot in cinema gamma mode to get more range for
    color correction, and that's how I tested the difference
    between 8-bit and 10-bit inside a 16-bit project so intensively.
    It was one of many tests …

    And yes, a film transfer should be in more than 8-bit, but because
    I tested DigiBeta and found out that luma is 8-bit and chroma
    something between 8 and 10, I would prefer a file transfer such
    as DPX … also because of the logarithmic color space. Instead of
    DigiBeta I would try D5 …
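The kind of test described above, checking whether captured 10-bit samples really use their low bits or are just 8-bit codes padded out, can be sketched by OR-ing all samples together and counting the trailing bits that never vary. A minimal sketch with made-up sample values (real material would need noise-aware statistics, not just a zero check):

```python
# Hypothetical sketch: estimate effective bit depth of 10-bit samples.
# If the low bits are zero in every sample, they carry no information
# and the source is effectively a lower bit depth padded up.
def effective_bits(samples, depth=10):
    """Return depth minus the number of trailing bits that are zero
    across all samples (i.e. bits that never carry information)."""
    ored = 0
    for s in samples:
        ored |= s
    trailing = 0
    while trailing < depth and not (ored >> trailing) & 1:
        trailing += 1
    return depth - trailing

padded = [c << 2 for c in (10, 57, 200, 255)]  # 8-bit codes padded to 10
true10 = [41, 230, 801, 1023]                  # codes using the low bits

print(effective_bits(padded))  # 8
print(effective_bits(true10))  # 10
```

This only detects hard zero-padding; a deck that rounds or adds noise in the low bits would need a histogram of the two LSBs instead of a simple OR.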

