Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

Activity › Forums › AJA Video Systems › Component or SDI, which is better quality?

  • Mitch Ives

    May 4, 2005 at 5:57 am

    Actually, I’ve been doing this since the very first day the Io shipped.

    FWIW, having a deck that has SDI out is far superior to using any converter box that starts with a firewire input. Decks like the 1500, when using SDI, actually bypass the firewire degradation, and since the signal comes right off the TBC it is being “pseudo” upsampled to 4:2:2. That’s one of the reasons it looks so clean…

    Mitch Ives
    Insight Productions Corp.
    mitch@insightproductions.com
    http://www.insightproductions.com

  • Michael

    May 4, 2005 at 4:50 pm

    Quote:
    >Decks like the 1500, when using SDI, actually bypass the firewire degradation, and since it comes right off the TBC it is being “pseudo” upsampled to 4:2:2. That’s one of the reasons it looks so clean…

    Firewire doesn’t cause any degradation in and of itself. There’s no encoding or decoding going on during the send over firewire; it’s just a data stream off of the tape, much like a hard drive. The degradation comes from the codec on the receiving end. SDI recording looks clean because you’re using the uncompressed codec. Apple’s DV codec, though fine on a consumer level, is dirty under technical analysis. As I wrote, I’ve seen tests that actually verify this. An outboard converter is just doing what the 1500 is doing inside the machine. As long as the converter is of good quality, there shouldn’t be any perceptible difference. If you know otherwise, please explain!

    -mjd

  • Mitch Ives

    May 4, 2005 at 5:04 pm

    [Michael De Lazzer] “Firewire doesn’t cause any degradation in and of itself. There’s no encoding or decoding going on during the send over firewire. It’s just a data stream off of the tape, much like a hard drive. The degradation comes from the codec on the receiving end. “

    Using firewire MEANS using the codec, which means degradation. I think we can avoid the semantic arguments here. Using SDI bypasses the codec, comes directly off of the TBC, and results in an upsampled 4:2:2 signal. This is why you can key from it.

    [Michael De Lazzer] “SDI recording looks clean because you’re using the uncompressed codec. Apple’s DV codec, though fine on a consumer level, is dirty under technical analysis. As I wrote, I’ve seen tests that actually verify this. An outboard converter is just doing what the 1500 is doing inside the machine. As long as the converter is of good quality, there shouldn’t be any perceptible difference. If you know otherwise, please explain!”

    FWIW, I did this before ProMax. I gave the idea to Charles. Yes, the 1500 IS doing more (as would other decks, more than likely), and Charles knew this as well. It was his agreement that made me approach the Sony engineers to find out exactly what the deck was doing (which I already explained).

    I understand you’ve seen some tests, but I did the original tests in detail. I did blind results testing with people and got consistent results. In addition, I’m doing this day in and day out, and have been since before the G5s were even out (dual 1.25). With all due respect, this isn’t a theoretical discussion with me…

    Mitch Ives
    Insight Productions Corp.
    mitch@insightproductions.com
    http://www.insightproductions.com

  • Michael Lazar

    May 4, 2005 at 5:18 pm


    [Mitch Ives] “Using firewire MEANS using the codec, which means degradation. I think we can avoid the semantic arguments here. Using SDI bypasses the codec, comes directly off of the TBC, and results in an upsampled 4:2:2 signal. This is why you can key from it.”

    DV/DVCAM is a 4:1:1 format. The use of SDI *may* provide a superior mechanism for capturing to disk, but I don’t see how it can add back missing bits of chrominance data through upsampling. Any banding on tape would, I suppose, simply be “transcribed” into the 4:2:2 sampling.

    Michael Lazar
    okeanos/visual immersions
    http://www.okeanos.com
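Lazar’s point, that upsampling adds samples rather than restoring chrominance information, can be sketched numerically. This is an illustrative toy example, not from the thread; the pixel values are invented, and real decks use more elaborate interpolation than the midpoint blend shown here.

```python
# One line of hypothetical full-resolution (4:4:4) chroma values.
full = [10, 80, 20, 60, 50, 30, 70, 40]

# 4:1:1 keeps one chroma sample per four pixels.
sub_411 = full[::4]                  # [10, 50]

# What true 4:2:2 sampling of the same source would have kept.
true_422 = full[::2]                 # [10, 20, 50, 70]

# "Upsampling" 4:1:1 to 4:2:2 interpolates midpoints between the
# surviving samples; it doubles the sample count but invents values.
up_422 = []
for value, nxt in zip(sub_411, sub_411[1:] + sub_411[-1:]):
    up_422 += [value, (value + nxt) / 2]

print(up_422)    # [10, 30.0, 50, 50.0] -- not the discarded [10, 20, 50, 70]
```

The interpolated line is a legitimate 4:2:2 signal (which is what downstream keyers expect), but its new samples are estimates, not the chrominance the camera originally threw away.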

  • Mitch Ives

    May 4, 2005 at 6:02 pm

    [Michael Lazar] “DV/DVCAM is 4:1:1 format. The use of SDI *may* provide a superior mechanism for capturing to disk but I don’t see how it can add back missing bits of chrominance data through upsampling. Any banding on tape would I suppose simply be “transcribed” into the 4:2:2 sampling. “

    Michael, you continue to offer theoretical speculations, while the rest of us are working from practical experience. First we observed the reality, THEN we set off in search of the explanation. The difference in edge smoothness is visibly noticeable. It took me several levels of Sony engineers to get the explanation that it comes right off the TBC, which is why it has to be upsampled to 4:2:2. Is it the same as shooting in 4:2:2 to start with? No, but it’s a hell of a far cry from 4:1:1. It keys magnificently, and no, it isn’t a codec improvement that’s accomplishing this.

    I think it’s time to stop the speculation and have you actually do the experiment. Remember, the NASA engineers speculated that a bumblebee didn’t have sufficient wingspan to support flight, even though everyone can see that it flies. While the technical explanation is that they “swim on the viscosity of the air,” the point is that they fly.

    Mitch Ives
    Insight Productions Corp.
    mitch@insightproductions.com
    http://www.insightproductions.com

  • Michael Lazar

    May 4, 2005 at 6:17 pm

    I feel so ashamed. Everyone else knows this to be the truth except me. ;-(

    Michael Lazar
    okeanos/visual immersions
    http://www.okeanos.com

  • Mitch Ives

    May 4, 2005 at 6:28 pm

    But now you do…

    Next week you’ll tell me something I didn’t know… and on and on it goes…

    Mitch Ives
    Insight Productions Corp.
    mitch@insightproductions.com
    http://www.insightproductions.com

  • Tom Matthies

    May 6, 2005 at 4:38 am

    DV25 is DV25 is DV25, no matter where it originates. A straight Firewire transfer is just that: a transfer of data. If it comes over an SDI connection, it is still the same quality. It’s not adding pixels, or anti-aliasing them, simply transferring them. While some decks “upsample” to 4:2:2, there is really no additional data added. Take a picture of a pile of bricks stacked three high in a pyramid shape and you will easily see the “aliasing” at the edges of the bricks. Take a picture at a higher resolution and those edges are still there. The higher sampling rate is not going to fill in the edges and make the stack look smoother. The same principle applies to video.
    While there may be some differences in codecs, the main reason an uncompressed timeline looks better is that everything else overlaying it looks better. If you add a graphic to a DV timeline, it will look a little soft at the edges. If you add the same graphic over an uncompressed timeline, it will look better, all things being equal. The underlying footage will look the same, more or less.
    While I won’t argue that there might be slight differences in the codecs involved, there really isn’t that much of a difference between transferring via Firewire or SDI. You will see a major difference, though, between SDI and component analog captures. The former is again just a transfer of data, while the latter involves converting the three analog signals into a digital format for processing.
    There are filters that can “smooth” a 4:1:1 clip giving the appearance of 4:2:2 sampling and hence the appearance of sharper video. Nattress has a pretty nifty filter that’s always evolving and getting better. I know just enough to be dangerous on the subject. If you really want to get into a technical argument (discussion?), drop him an email on the subject. He explains it beautifully.
    Just my 2¢.
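As a rough sketch of what a chroma-smoothing filter of this kind might do (this is an illustrative moving average; the actual Nattress algorithm is not described in the thread and is assumed here to be considerably more sophisticated):

```python
# A blocky 4:1:1-style chroma line: each sample held for four pixels.
blocky = [10.0] * 4 + [50.0] * 4 + [20.0] * 4

# A simple 3-tap moving average (weights 1/4, 1/2, 1/4): it softens
# the hard steps between chroma blocks but restores no real detail.
padded = blocky[:1] + blocky + blocky[-1:]
smoothed = [
    0.25 * padded[i] + 0.5 * padded[i + 1] + 0.25 * padded[i + 2]
    for i in range(len(blocky))
]

print(smoothed[3], smoothed[4])   # 20.0 40.0 -- the 10 -> 50 step is eased
```

The filtered edges look smoother, which is the “appearance of 4:2:2 sampling” described above; the underlying chroma resolution is unchanged.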

  • Mitch Ives

    May 6, 2005 at 2:31 pm

    [tom matthies] “If it comes over an SDI connection, it is still the same quality. It’s not adding pixels, or anti-aliasing them, simply transferring them. While some decks “upsample” to 4:2:2, there is really no additional data added. Take a picture of a pile of bricks stacked three high in a pyramid shape and you will easily see the “aliasing” at the edges of the bricks. Take a picture at a higher resolution and those edges are still there. The higher sampling rate is not going to fill in the edges and make the stack look smoother. The same principle applies to video.”

    Have you actually done this? I’m betting not. There is a difference. More importantly, I’ve had twenty different people who’ve looked at the difference in edge smoothing etc. and they all did what I did… shook their heads and said “I don’t understand why… in theory there shouldn’t be an improvement, but there clearly IS an improvement”.

    [tom matthies] “While there may be some differences in codecs, the main reason an uncompressed timeline looks better is that everything else overlaying it looks better. If you add a graphic to a DV timeline, it will look a little soft at the edges. If you add the same graphic over an uncompressed timeline, it will look better, all things being equal. The underlying footage will look the same, more or less.”

    That’s true enough… except for the part about the underlying footage.

    [tom matthies] “While I won’t argue that there might be slight differences in the codecs involved, there really isn’t that much of a difference between transferring via Firewire or SDI. You will see a major difference, though, between SDI and component analog captures. The former is again just a transfer of data, while the latter involves converting the three analog signals into a digital format for processing.”

    Again I ask… have you actually done this?

    [tom matthies] “There are filters that can “smooth” a 4:1:1 clip giving the appearance of 4:2:2 sampling and hence the appearance of sharper video. Nattress has a pretty nifty filter that’s always evolving and getting better. I know just enough to be dangerous on the subject. If you really want to get into a technical argument (discussion?), drop him an email on the subject. He explains it beautifully. “

    He and I have discussed this. So have Marco and I. Everybody argues with this UNTIL they actually do it. Then it gets quiet.

    In the future, let’s have everyone state whether or not they have actually done this in their posts. Let’s leave the theorizing to another forum…

    Mitch Ives
    Insight Productions Corp.
    mitch@insightproductions.com
    http://www.insightproductions.com

  • Tom Matthies

    May 6, 2005 at 5:13 pm

    As a matter of fact, yes, I have done all of the above. We work with Digibeta most of the time, but have a need to capture DV25 material as well. I have two ways to do it: through a Panasonic AJ-750 with an installed SDI card, or through a small Firewire deck. The Panny will “upconvert” to 4:2:2 color space. We tried each method, via the Panasonic deck and via the Firewire deck. No discernible difference in either case; both were virtually the same. In fact, the Firewire transfer looked a little better for some reason. One less cycle of conversion to/from another format?
    There is only so much information available in a DV25 signal. It’s compressed 5:1 at the source and recorded that way. Playing back in either case uses the exact same information to begin with and does not “add” any additional information to the signal. It simply increases the sampling/data rate used to describe the information that already exists.
    A good analogy would be to take an MP3 audio file, a highly compressed format, drag it into iTunes or QuickTime Pro, and then export it as an .aif file. The result is a file much larger than the original MP3. Has quality been gained or “synthesized” in the process? Nope; the “higher quality” .aif will sound virtually identical to the original MP3. Nothing was gained. You are simply using more data to describe the original MP3 file. It simply is not going to sound better just because of the bigger file size.
    The same applies to converting a DV25 data stream into a higher-quality data stream. You are just using more “bits” to describe the original data. No new information is being synthesized in the process. Now, upconverting is another story, where new information is “made” by interpolating points between existing data points. It isn’t really adding extra “real” quality, but it does appear that way to the eye, and it results in both a larger file and usually a larger screen size or resolution.
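The MP3-to-AIFF analogy can be sketched with a toy quantizer (illustrative Python; the signal values are invented): re-encoding lossy data into a higher-precision container changes the storage, not the loss.

```python
# "Record": quantize a signal coarsely, simulating lossy compression.
source = [0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0]
lossy = [round(v * 4) / 4 for v in source]          # only 5 levels survive

# "Transcode" into a higher-precision representation (more decimal
# places here, like MP3 -> AIFF): the container grows, the content
# does not change.
transcoded = [float(f"{v:.10f}") for v in lossy]

error_before = max(abs(s - l) for s, l in zip(source, lossy))
error_after = max(abs(s - t) for s, t in zip(source, transcoded))
print(error_before, error_after)                     # 0.125 0.125
```

The maximum error against the original signal is identical before and after the “transcode,” which is the point of the analogy: the larger file describes the already-degraded data, not the source.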

    Anyhow, this is one of those topics that has been discussed endlessly in various forums here. All things being equal, there isn’t going to be much difference in either method. Now, I won’t argue that all codecs are the same; there can be noticeable differences there. The Apple DV codec is much improved over its earlier versions, but there’s always room for improvement. If there’s any difference, that’s where it might occur.
    BTW, Apple writes all the drivers for the AJA Io. They all come out of the same place.
    All I’ll add is if it works for you and fits your workflow, do it. This is simply one of those topics that everyone will never be able to agree on.
    Tom

