Decklink 10 bit workflow and mpeg2 question
-
Pablo2099
March 1, 2006 at 2:35 am
I don't think Procoder does 25 Mbps – could be wrong, but I'm sure 15 is the max.
I switched to Adstream years ago as I liked their online booking interface. I think the rates are probably similar to Dubsat's.
Pablo
-
Richard Scobie
March 1, 2006 at 6:27 pm
TMPGEnc won't go above 15 Mb/s in 422@ML either.
Regards,
Richard
-
Mactrix
March 6, 2006 at 6:23 pm
If your footage is in 8-Bit you won't gain anything. There are hardly any real 10-Bit formats out there. Even DigiBeta is 8-Bit in luma and a little more than 8-Bit in chroma, but far from true 10-Bit.

What you mean is internal rendering in more than 8-Bit. That is a different question. Capturing 8-Bit footage in 10-Bit only rescales the existing pixel values. No extra information is added …
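To make that last point concrete, here is a minimal Python sketch (my own illustration, using NumPy; nothing in it comes from the thread): padding an 8-bit source into a 10-bit container still leaves only 256 distinct code values, so nothing new appears in the extra bits.

```python
import numpy as np

# An 8-bit luma ramp: 256 distinct code values (0..255).
src_8bit = np.arange(256, dtype=np.uint16)

# "Capturing" it as 10-bit: the usual conversion is a left shift by two bits
# (i.e. multiply by 4), sometimes with the top bits replicated into the low bits.
dst_10bit = src_8bit << 2

# The 10-bit container offers 1024 codes, but the padded signal still holds
# only 256 distinct values - every fourth code is simply skipped.
print(len(np.unique(dst_10bit)))  # 256, not 1024
print(dst_10bit[:4])              # [0 4 8 12]
```

-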
Bj Ahlen
March 6, 2006 at 7:14 pm
There used to be some DigiBeta equipment that had the last 2 bits set to zero, but that was a very long time ago.
DigiBeta is a pure 10-bit format, and part of the price justification for the cameras is the money that was spent on making sure the user gets really good 10-bit out of, say, a 12-bit or 14-bit A/D converter and an even deeper DSP.
10-bit means two extra f/stops of freedom to tweak the footage in post, even if the eventual output is 8-bit.
It also means avoiding a lot of banding-related artifacts, and not only in post processing. If you have to get banding, it is better to get it in the output format than to risk input banding messing up just about anything you do in post (if the subject is susceptible to it, which of course is not always the case).
I have shot only 10-bit since mid-2004, and I really appreciate the difference.
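The arithmetic behind that "two extra stops" figure, as a quick Python sketch (my own illustration; it assumes a simple linear encoding, which real camera transfer curves are not):

```python
import math

levels_8bit = 2 ** 8     # 256 code values
levels_10bit = 2 ** 10   # 1024 code values

ratio = levels_10bit / levels_8bit  # 4.0 - four times as many levels
# For a linear encoding, each doubling of available levels at the same
# step size corresponds to one extra stop of range, so 4x is two stops.
extra_stops = math.log2(ratio)      # 2.0
print(ratio, extra_stops)
```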
-
Mactrix
March 6, 2006 at 8:13 pm
When you shoot only in 10-Bit, how can you notice any difference from 8-Bit? 😉
I worked with real 10-Bit HD footage (straight from the camera signal feed, hard-disk recorded, no tape influence) and there was no way to produce banding artefacts in 10- or 8-Bit. Sometimes I ask myself if people really know what banding means and what it looks like. But it sounds great for the client … 🙂
-
Bj Ahlen
March 6, 2006 at 8:36 pm
Which camera are you using, which connection, which A/D converter?
If you are using a consumer camera or most of the semi-pro cameras, you won’t see any difference between 10-bit and 8-bit, because the camera signal itself doesn’t have a 10-bit range.
From https://www.cineon.com/conv_10to8bit.php: With 10 bits per color over a 2.048 density range, the resulting quantization step size is 0.002 D per code value. This is below the threshold for contour visibility, which insures that no contour artifacts (also known as mach banding) will be visible in images.
From https://www.necdisplay.com/gammacomp/readme.html: The 10-bit resolution provides 4 times as many levels as 8-bit, so color banding problems that occur when using 8-bit tables can be reduced.
From https://www.pcstats.com/articleview.cfm?articleid=1109&page=3: [Matrox Parhelia's] 10-bit colour, as you can see in the graphic, gives a much smoother appearance due to the increased range of available shades.
This also matches my own experience. But only with a broadcast camera that has real 10-bit output.
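As a small Python sketch of the arithmetic behind the Cineon quote above (my own illustration; the 2.048 density range comes from the quoted page, and the 8-bit comparison is my own extension):

```python
# Quantization step per code value over the Cineon 2.048 density range.
density_range = 2.048

step_10bit = density_range / 2 ** 10  # 0.002 D per code value, matching the quote
step_8bit = density_range / 2 ** 8    # 0.008 D per code value - four times coarser

print(step_10bit, step_8bit)
```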
-
Mactrix
March 6, 2006 at 9:34 pm
Panasonic Varicam, 14-Bit quantization, 10-Bit HD-SDI out, 720p60.
The links you've posted are exactly what I mean. It's marketing bullshit from the manufacturers, except for the first link. But the first link has nothing to do with video. It's film, it's logarithmic color space – a totally different story, and yes, working with film demands more than 8-Bit.

The graphic from the last link says nothing. It's the typical trivial illustration meant to show how much more range 10-Bit offers. Yes, it's true there are four times more steps … but that is only theory. If you don't have it on tape it won't help, and I made enough tests with DigiBeta. You don't have it on tape.

And I made tests with hard-disk recording. It's very difficult to make true 10-Bit visible even in HD (video noise is one reason – aka dithering). It matters only in computer-generated graphics and gradients that don't exist in "nature" …
-
Bj Ahlen
March 6, 2006 at 10:44 pm
[mactrix] "And I made enough tests with DigiBeta. You don't have it on tape. And I made tests with hard-disk recording. It's very difficult to make true 10-Bit visible even in HD (video noise is one reason – aka dithering). It matters only in computer-generated graphics and gradients that don't exist in 'nature' …"

Perhaps you are evaluating the differences between 8-bit and 10-bit on a 6-bit LCD monitor with dithering? Those are quite common, although they are disappearing fast.
I feed a 10-bit uncompressed 4:2:2 SDI signal directly from the camera (Sony DXC-D50WS + CA-D50, same camera head as the high-end DigiBeta) to a BMD Extreme card and use BMD’s 10-bit codec (yes, I tried their 8-bit also). The video noise in my camera is about as low as it gets right now, and it’s not an issue for banding in my case.
I agree that computer generated graphics represent the worst case, but it’s not the only case.
To find unbroken gradients in nature, just look at the sky near the horizon, the cheek or forehead of a sidelit face, or any number of other smooth, very light natural textures under a sidelight.
Of course if you’re going for a gritty look (which is totally valid), then none of this matters.
-
Mactrix
March 6, 2006 at 11:40 pm
I don't trust computer monitors for several reasons:
1) Computer gamma is different from video gamma.
2) The operating system and graphics card are limited to 8-Bit.
3) Almost no CRT or LCD display can even show the whole 8-Bit range, depending on the monitor profile, so there is always some interpolation of color values.

Because we are talking about video, we should look at a video monitor … Class 1, of course. Do you want to say there are differences between 8- and 10-Bit there? If one day there are displays that can show more than 8-Bit, we would profit from that, but for now I would already be happy if they offered true 8-Bit …

And the horizon example is again one you find in every manufacturer's description … I never had a situation with real-life shots where it mattered. Video noise, as low as it might be, is one reason why you never get the kind of gradients you see in computer graphics. Apply a dither or jitter to a CG gradient and you will see the same result. It's not a technical point! 10-Bit is four times more than 8-Bit, but it's our eye … so easy to outwit.
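A minimal Python sketch of that dither/jitter point (my own illustration with NumPy; none of it comes from the thread): quantizing a smooth, low-contrast gradient straight to 8-bit collapses it to a handful of code values with hard band edges, while adding a little noise first scatters the transitions from pixel to pixel, much as camera noise does.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth horizontal gradient spanning only a narrow brightness range -
# roughly the worst case for visible banding.
gradient = np.linspace(0.40, 0.45, 1920)

# Straight 8-bit quantization: the narrow range collapses to a dozen or so
# distinct code values, i.e. clean, visible band edges.
plain_8bit = np.round(gradient * 255).astype(np.uint8)

# The same quantization with a little noise ("dither") added first:
# the same few codes are used, but the transitions are scattered per pixel,
# which is what camera noise does to real footage.
noise = rng.normal(0.0, 0.5 / 255, size=gradient.shape)
dithered_8bit = np.round((gradient + noise) * 255).astype(np.uint8)

print(len(np.unique(plain_8bit)))     # only a dozen or so distinct codes
print(len(np.unique(dithered_8bit)))  # similar count, but no clean step edges
```

-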
Bj Ahlen
March 7, 2006 at 1:49 am
[mactrix] "Panasonic Varicam, 14-Bit quantization, 10-Bit HD-SDI out, 720p60"
I’m still baffled that you can’t see the difference between 10-bit and 8-bit on a Class 1 monitor.
1. Could it be that you don’t set the SDI switch on the Varicam to EE, but to EE/PB? That would pre-filter the signal (on playback) to 8-bit, even though you are feeding it out the SDI connector. It has fooled many pros, see for example https://www.hdforindies.com/2004/06/sata-raid-hd-capture-tests-frc-dvcpro
2. The only other possibility I can think of is that you are having some problem with codecs on the Mac, which has tripped up many Mac users as well, since Apple wants control over the HD codecs and likes to do things its own way.
[mactrix] "… It's marketing bullshit from the manufacturers, except for the first link. But the first link has nothing to do with video. It's film, it's logarithmic color space – a totally different story, and yes, working with film demands more than 8-Bit."
Cineon is a video format, not a celluloid format, used primarily for DI, which simply means transfer from film to video. And working with 10-bit also demands more than 8-bit. No difference compared to film, other than that less expensive video cameras have a couple of stops less range than film (and 8-bit of course has two stops less than 10-bit).

You say there are not even any true 8-bit monitors? Take a look at https://www.ids-healthcare.com/hospital_management/global/eizo_nanao/color_lcd_monitors/102_0/g_supplier_4.html for some Eizo Nanao certified 10-bit monitors with a billion colors. There are plenty more.
Of course it's all "marketing bullshit".

I use a Sony professional CRT monitor fed via a Decklink Extreme card for monitoring, but I'm not the end user of my content.
I shoot 10-bit for the extra latitude it gives me in post, and I am grateful to have it. I shot on film for 30 years; now I'm grateful not to have all the worries of shooting film, even though it can look quite wonderful, in 35 mm at least. I do my post work in 10-bit or 16-bit, depending on whether I use Combustion or After Effects, and my final output is of course 8-bit, whether for DVD or broadcast. Even with an 8-bit eventual output, shooting and posting in 10-bit or better helps me get a nicer look.
Your mileage may vary, so let’s just agree that you don’t find a difference between 10-bit and 8-bit, and I do.