Creative Communities of the World Forums

The peer to peer support community for media production professionals.

Forum: Apple Final Cut Pro Legacy — thread: Difference between 8 bit and 10 bit uncompressed

  • Shane Ross

    September 25, 2005 at 9:54 am

    8-bit…21mb per sec.

    10-bit…28mb per sec.

    Less compression with 10-bit, but for the most part 8-bit uncompressed is very acceptable to broadcasters.
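Shane's figures can be sanity-checked with a little arithmetic. A minimal sketch, assuming standard-definition NTSC uncompressed 4:2:2 (720×486 at 29.97 fps); the function name is mine:

```python
# Back-of-the-envelope data rates for uncompressed NTSC 4:2:2 video.
WIDTH, HEIGHT = 720, 486      # standard uncompressed NTSC frame
FPS = 30000 / 1001            # 29.97 frames per second

def data_rate_mb_per_s(bits_per_sample: int) -> float:
    samples_per_pixel = 2     # 4:2:2 = one luma plus, on average, one chroma sample per pixel
    bytes_per_frame = WIDTH * HEIGHT * samples_per_pixel * bits_per_sample / 8
    return bytes_per_frame * FPS / 1_000_000

print(f"8-bit 4:2:2:  {data_rate_mb_per_s(8):.1f} MB/s")   # ~21 MB/s
print(f"10-bit 4:2:2: {data_rate_mb_per_s(10):.1f} MB/s")  # ~26 MB/s raw
```

The 8-bit figure lands right on 21 MB/s; the raw 10-bit figure comes out nearer 26 MB/s, with real-world 10-bit storage packing and row padding pushing it up toward the 28 MB/s cited above.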

  • Dan Riley

    September 25, 2005 at 4:36 pm

    Acceptable to broadcasters?
    What does that mean TODAY?
    There was a time when, if someone said “broadcast quality,”
    we all knew what that meant. Not anymore.
    They are not the arbiters of what is good quality
    by any means. Just look at the stuff local stations
    and the networks are putting on the air.
    Now it’s all up to what you, your customer, and/or your
    client thinks is good. It’s all a quality/price judgment today.

    A good buddy of mine works on sports trucks here
    in the Seattle area, doing loads of national games
    with the HD trucks. It breaks his heart to see how
    beautiful the pictures are when they come out of those
    Ikegami and Sony HD cameras, only to be dragged
    through the mud of “broadcast” compression and transmission
    once they’re on the air. The end result is that the home viewer is
    seeing only an imitation of what was really produced,
    and this is causing people to say, yeah, it looks good,
    but not $2000 better. Broadcasters have lost the
    quality argument. The problem is they don’t know it.
    Have you seen the DVDs of television programs like
    Friends, etc. ? There is simply no comparison between
    what those pictures look like and anything you will see
    coming out of your local station once the station
    loads the programs into servers that compress the hell
    out of everything. It’s all so sad because there was a time
    when broadcast management did care what stuff looked like.

    Just try the different codecs and compression levels for yourself
    and see what is acceptable to you and your client/customer
    and what you can afford in cameras, decks and drive space.
    Don’t let anyone tell you what it “has” to look like.

    Dan

  • Alan Lacey

    September 25, 2005 at 6:25 pm

    Shane,

    Did you mean MB/s?

    Alan

  • Drizzt_g

    September 25, 2005 at 7:49 pm

    Dan, the problem with HD on TV is that not only the network but also the local stations have to be able to transmit the HD signal, and right now you need to get that signal on the special HD channels. We’re getting closer and closer to a full HD program schedule across the board.

    As video editors we know it’s coming, we know the difference, we can see it, but try convincing the average consumer to buy an HD-ready TV; the cost difference is too much, especially with all the deals out there on SD big-screen TVs.

  • Dan Riley

    September 25, 2005 at 8:27 pm

    I guess I got off on a rant above. Too much coffee maybe.
    Please forgive the off subject response.

    Not speaking for Shane, but the uncompressed data rate
    is approximately 24 MB per second. I can’t find the list that says
    8-bit is 21 MB/s and 10-bit is 28 MB/s, but that sounds about right.
    If I remember correctly, One Inch videotape was in the
    area of 8-bit, although analog, and that was a very good
    picture which was the standard for years. I believe D-2
    digital was also 8-bit. The thing that’s good about 10-bit
    processing is the extra range of greys and colors per sample.
    This is good for chroma keys and other graphics and effects.
    The difference however, between 4:2:2 processing
    (which is used with 8 and 10 bit uncompressed cards) and 4:1:1 is large.
    DV is 4:1:1. There you have half the chroma
    information to work with and it really shows when
    doing colored text and graphics.

    Dan
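Dan's "half the chroma information" point can be made concrete by counting chroma samples per block of pixels. A sketch using the usual J:a:b subsampling notation (the function name is mine):

```python
# Chroma resolution retained by common subsampling schemes, as a fraction of
# full 4:4:4 chroma. In J:a:b notation, over a block J pixels wide and 2 rows
# tall, 'a' is the chroma sample count in the first row and 'b' in the second.

def chroma_fraction(a: int, b: int, j: int = 4) -> float:
    return (a + b) / (2 * j)

schemes = {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:1:1": (1, 1), "4:2:0": (2, 0)}
for name, (a, b) in schemes.items():
    print(f"{name}: {chroma_fraction(a, b):.0%} of full chroma resolution")
```

So 4:2:2 keeps half the chroma of 4:4:4, and 4:1:1 (NTSC DV) keeps half of that again, which is exactly the gap Dan describes for colored text and graphics.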

  • Dan Riley

    September 25, 2005 at 8:39 pm

    Drizzt_G

    My understanding is that the 45 Mb/s HD signal transmitted via satellite
    from NBC, for example, to the local stations is then compressed
    to about 19 Mb/s to fit in the 6 MHz bandwidth the station has for its digital
    channel. And now I’m hearing they are compressing it even more
    so they can have an HD channel and two or three other SD 480p
    channels. Broadcast HD is fine if nobody is moving, but when there are
    pans and movement, it falls apart pretty badly. On local
    cable it’s even worse: I believe they compress the HD signal
    down to 14 Mb/s or even further. ESPN HD on a cable station doesn’t
    look anything like what ESPN HD is sending down the line
    on the HD satellite feed. I know it has the potential to get better
    as the compression technologies improve, but it seems like
    the instant that happens, instead of passing a better-looking
    picture on to the customer (viewer, to them), they opt for more
    channels. It certainly is up to them how to run their business,
    and it actually provides a large opportunity for people who
    would like to give customers the best viewing experience,
    which is why DVDs have already been snapped up and HD DVDs
    soon will be.

    Dan
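The compression ratios behind those bitrates can be roughed out against an uncompressed HD baseline. A sketch, assuming 1920×1080 at 29.97 fps, 8-bit 4:2:2 for the uncompressed figure; the 14 Mb/s cable number is Dan's estimate, not a measured value:

```python
# Approximate compression ratios implied by common HD distribution bitrates.
# Uncompressed baseline: 1920x1080, 29.97 fps, 8-bit 4:2:2 (2 samples/pixel).
UNCOMPRESSED_HD_MBPS = 1920 * 1080 * 2 * 8 * (30000 / 1001) / 1_000_000  # ~994 Mb/s

feeds = [
    ("network contribution feed", 45.0),
    ("ATSC over-the-air channel", 19.39),
    ("cable HD carriage (Dan's estimate)", 14.0),
]
for label, mbps in feeds:
    print(f"{label}: {mbps} Mb/s, roughly {UNCOMPRESSED_HD_MBPS / mbps:.0f}:1 compression")
```

Even the cleanest link in that chain is running around 20:1 or more, which is why fast pans expose the encoder so readily.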

  • Ed Dooley

    September 25, 2005 at 10:47 pm

    [Danrnw] “The difference however, between 4:2:2 processing
    (which is used with 8 and 10 bit uncompressed cards) and 4:1:1 is large.
    DV is 4:1:1. There you have half the chroma
    information to work with and it really shows when
    doing colored text and graphics.”

    NTSC DV *is* 4:1:1, but PAL DV is 4:2:0.

    Ed

  • Tom Matthies

    September 25, 2005 at 11:44 pm

    Loss of quality at the transmitting end isn’t anything new these days. Any of you old enough to have been in the broadcasting industry before satellite know this. If you could have seen the difference in quality between a network signal at its originating studio and the same signal after it had traveled hundreds of miles through coax and microwave relay centers to the local affiliates, you would have been shocked! It was truly amazing how good a program looked at the studio compared to what it looked like once received at home. As “bad” as you think today’s digital compression and transmission look, the degradation of the signal was FAR worse in the pre-satellite, analog days. Satellite transmission cleaned up the picture a lot, especially in the audio department, but the transmission chain was still just a long string of analog compromises running from the network to the viewer.

    Digital content delivery, though, still has a long way to go and will depend largely on how much compression is applied (read: how greedy the broadcaster is). Broadcast quality is an exact science and a subjective one at the same time. With all of today’s delivery formats, I fear it will only get worse before it gets better. I now have people sending me (with alarming regularity) DVD source material for use in productions. Yikes! To some, the phrase “it’s digital” seems to justify any piece of crap material that can be utilized in a production. All-digital production and editing is facing quality problems like never before. It’s up to us to at least attempt to educate our clients and do our best to keep things looking as good as possible for as long as possible.

    So, despite all of the problems digital transmission faces today, it’s still a step in the right direction.
    Now, if I could only say the same thing about the content of today’s programming…
    Tom

  • Marco Solorio

    September 26, 2005 at 1:32 am

    [fafounet] “What is the difference between 8 bit and 10 bit uncompressed ?”

    This thread has taken a bit of a spin, so I’ll get back to the source of your question.

    In most general scenarios, 8-bit uncompressed will be passable for what you’re doing, especially if your source is 8-bit with little to no graphics. However, if you do a lot of motion graphics and rendering, then 10-bit may be your solution. The biggest reason is that all 8-bit codecs but one will cause banding or contouring in gradients. (The one 8-bit codec that does not incur banding is the old but well-known Aurora 8-bit Igniter codec, which used a logic-dither algorithm rather than adding random noise, which would otherwise effectively reduce the image quality to 7-bit as opposed to 8-bit.)

    With 10-bit codecs however, the larger bit-depth yields more “steps” between colors so that banding does not occur. A logic-dither algorithm (mentioned above) is not needed with 10-bit, which is good because as good as a logic-dither algorithm is to the eye, it does create a slightly less accurate replication from the original source. But I digress.
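The banding argument can be demonstrated numerically: quantize a smooth ramp to each bit depth and count how many distinct code values survive. A minimal sketch (the function name is mine):

```python
# Quantize a 0.0-1.0 gradient across a 720-pixel-wide ramp to a given bit depth.
def quantize_ramp(width: int, bits: int) -> list:
    levels = (1 << bits) - 1
    return [round(x / (width - 1) * levels) for x in range(width)]

for bits in (8, 10):
    distinct = len(set(quantize_ramp(720, bits)))
    # At 8 bits, 256 levels are stretched across 720 pixels, so each level
    # spans about 3 pixels: a visible band. At 10 bits every pixel gets its
    # own code value, so no bands form.
    print(f"{bits}-bit ramp across 720 px: {distinct} distinct levels")
```

This is exactly why subtle gradients are the first place the extra two bits show up, and why cuts-only work rarely exposes the difference.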

    If you have the drive speed for 10-bit and the space to accommodate all the media files, then go for it. I use 10-bit about 90% of the time, I’d say. If you’re mostly doing cuts only, then it’s somewhat moot.

    If you want a visual comparison of how various 8-bit and 10-bit codecs stand up, check out my codec resource site, which, although in need of updating and undergoing new testing and content, is still relevant to the questions you’re asking…

    https://codecs.onerivermedia.com/

    Good luck!

    Marco Solorio  |  OneRiver Media

  • Bruce I weir

    September 29, 2005 at 1:16 pm

    Just had a really good example of the difference this week. I had a commercial with a simple blue background and lighter blue circles, with a Gaussian blur on them, moving around. When I ran them through my Aurora card (8-bit), there was just a little bit of ringing around the edges of the Gaussian blur. Very little, but when you looked closely it was there. When we went online with a 10-bit card: gone, absolutely none. The noticeable difference between 8- and 10-bit on a day-to-day basis, for most of us, is the quality of gentle gradations (without staircasing) from one end of the grade spectrum to the other.
