Creative Communities of the World Forums

The peer to peer support community for media production professionals.

Advice on docu workflow with SD tape footage, FCPX or not?

  • Robin S. Kurz

    November 14, 2015 at 12:14 pm

    [Walter Soyka] “No two independent codec implementations are likely to produce exactly the same bitstream output”

    I was referring to one and the same source of course. I’m aware that there are different DV implementations, yes.

    – RK

    ____________________________________________________
    German? There is a comprehensive FCP X training here for you!

  • Walter Soyka

    November 14, 2015 at 12:43 pm

    [Robin S. Kurz] “I was referring to one and the same source of course. I’m aware that there are different DV implementations, yes.”

    Then I’m afraid I don’t understand what you meant by “DV is DV is DV.”

    [Robin S. Kurz] “But granted, I don’t know what type of post-processing magic they’re (obviously) applying to the one or other port i.e. what the capturing hardware it’s going through could be doing. But then we’re not talking about the original, unadulterated DV stream anymore, no. In which case it’s kinda like saying that my TIFs oddly look better after I run them through Photoshop. :D”

    My point is that the bitstream is not the image. It’s irrelevant to talk about the “original, unadulterated DV stream,” because that’s not a thing that you can see, or a thing a computer can apply any image processing to. It needs to be decoded first. The fact that there are different DV implementations means that there are different ways to derive an image from the same bitstream. It’s not necessarily “post-processing magic” — though there certainly can be some of that — it’s also in-process variation.

    It’s kind of like raw video. You can’t look directly at it, because sensor data is not the same thing as image data. The sensor data needs to be interpreted into an image. This is not some kind of optional post-process outside the standard; it’s a necessary component of the raw decoding itself and there will be variation across developers. (A lossless TIFF, on the other hand, describes image data directly. Apples and oranges.)

    To keep this somewhat relevant, this discussion applies to all kinds of lossy codecs, including both H.264 and ProRes, and both encoding and decoding. Different codec implementations will yield different data and visual results, with varying degrees of subtlety.

    Walter Soyka
    Designer & Mad Scientist at Keen Live [link]
    Motion Graphics, Widescreen Events, Presentation Design, and Consulting
    @keenlive   |   RenderBreak [blog]   |   Profile [LinkedIn]

  • Robin S. Kurz

    November 14, 2015 at 12:45 pm

    Then I guess I’ll have to take your word for it and stand corrected. 🙂

    – RK


  • Bill Davis

    November 14, 2015 at 10:47 pm

    But Walter,

    If Adam Wilt was being accurate – the central fact of the original issue was that ANY digital encoding of a signal sourced from low-res VHS or low-bitrate (DV 25) tape has BY DEFINITION truncated the high-frequency data. So no matter HOW you digitize or process it after this stage, you’re NOT going to get back what was tossed out in the original capture – nor can you accurately bring back bits lost along the way.

    And I just question whether, no matter WHAT encoding scheme you employ, it has much potential to improve the viewable results.

    I can see that if you have a high-res source the various methods of encoding can make a difference – but not downstream from the source.

    I don’t care how high Rez a scan you make of an old newspaper photo – you aren’t going to fix the fact that it was half-toned. Period. ; )

    Plus, if the new transcoding has ample resolution to accurately capture the already dumbed-down file – isn’t THAT the only thing that matters? As Robin notes, if it’s just 1’s and 0’s – and with a bitstream thus limited – does it really functionally matter how much “extra precision” is being deployed downstream?
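Bill's irreversibility point can be shown with a toy quantizer (all numbers hypothetical): once a coarse first-generation encode has merged neighbouring sample values, a later re-encode at finer precision cannot tell them apart again.

```python
# Toy sketch: coarse quantization at capture is irreversible.
def quantize(samples, step):
    # snap each sample to the nearest multiple of `step`
    return [round(s / step) * step for s in samples]

original   = [100, 103, 101, 140, 98, 150]  # hypothetical source samples
captured   = quantize(original, 10)         # coarse first-generation capture
remastered = quantize(captured, 1)          # later pass at much finer precision

assert remastered == captured   # the merged detail is gone for good
assert remastered != original
```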

    Know someone who teaches video editing in elementary school, high school or college? Tell them to check out http://www.StartEditingNow.com – video editing curriculum complete with licensed practice content.

  • Andrew Kimery

    November 15, 2015 at 1:36 am

    [Bill Davis] “So no matter HOW you digitize or process it after this stage, you’re NOT going to get back what was tossed out in the original capture – nor can you accurately bring back bits lost along the way. “

    Right, no one is saying that you can. What people ARE saying is that the assumption that DV is DV is DV is false.

    Mitch gave a first-hand example of this (which was confirmed by the people at Sony). Adam also mentioned how different DV codecs will interpret/display the 1’s and 0’s stored on a DV tape differently. If the same DV tape was played back in the same DV deck and captured over the same FW cable, you could get one result in FCP Legacy, a different result in Avid, and a different result using Matrox’s DV codec. And if you capture over SDI instead of FW, that’s another set of different results. To Walter’s point, those 1’s and 0’s have to be turned into an image in order to be viewable, and there are variables with regards to how the image is decompressed, manipulated and then displayed.

    I know the article I posted by Graeme Nattress is long, but you can just skim through the side-by-side image comparisons to see the difference in the chroma. Again, capturing via SDI didn’t add in any additional info, but it presented the available image info in a more aesthetically pleasing fashion by smoothing the chroma a bit. If A looks better than B, then most people would assume that A is higher quality than B even if the same amount of visual information is being presented.
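A miniature version of the chroma-smoothing comparison (hypothetical sample values; real DV reconstruction involves proper filtering): 4:1:1 stores one chroma sample per four pixels, and both reconstructions below use exactly the same stored data, yet one looks blockier than the other.

```python
# Toy sketch: upsampling 4:1:1 chroma by replication vs. interpolation.
stored_chroma = [40, 80]  # hypothetical samples: one per 4-pixel group

def nearest(samples, factor=4):
    # blocky: repeat each stored sample across its 4-pixel group
    return [s for s in samples for _ in range(factor)]

def linear(samples, factor=4):
    # smoother: interpolate toward the next stored sample
    out = []
    for i, s in enumerate(samples):
        nxt = samples[min(i + 1, len(samples) - 1)]
        for k in range(factor):
            out.append(round(s + (nxt - s) * k / factor))
    return out

print(nearest(stored_chroma))  # [40, 40, 40, 40, 80, 80, 80, 80]
print(linear(stored_chroma))   # [40, 50, 60, 70, 80, 80, 80, 80]
```

No new information is added either way; the smoothed path simply presents the same stored data more pleasingly.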

    Adam also highlighted the perceived image-quality superiority of BetaSP over DV. It wasn’t because BetaSP could retain more image information than DV, but because the artifacting in BetaSP was more aesthetically pleasing than DV’s. This made the BetaSP look higher quality even if it wasn’t resolving more image detail than DV. My point being that it’s not just about retaining the available image quality but also about maintaining the perceived image quality by keeping digital artifacting to a minimum.

    Also, your exchange with Adam only touched on one aspect of compression, which centered around the amount of detail in the image. Again, I’d like to know what the bitrate threshold is for H.264 not to fall apart when a bunch of flashbulbs go off. VHS won’t fall apart (it will just clip), but when an LGOP codec becomes bit-starved, the image turns to macroblock mush. Maybe H.264 has improved so much over MPEG-2 that it’s not really anything to be concerned about anymore. I don’t know, as I haven’t done any tests with it.

    And nobody has really talked about scaling, even though that’s a big part of retaining as much image quality as possible when going from SD to HD. If someone takes a DV tape and makes an SD H.264 file, how well would that scale up to HD vs. an SD ProRes file scaling up to HD? The best case would of course be to do the SD->HD scaling during the initial transcode process, but if space is an issue then that may not be an option. I dunno, just brainstorming.

    -Andrew

  • Bill Davis

    November 15, 2015 at 1:40 am

    [Andrew Kimery] “I dunno, just brainstorming.”

    Which is EXACTLY why these discussions are so valuable.
    The benefit of many perspectives.

    ; )


  • Michael Gissing

    November 15, 2015 at 9:45 pm

    [Bill Davis] “Plus, if the new transcoding has ample resolution to accurately capture the already dumbed down file – is’t THAT the only thing that matters? As Robin notes, if it’s just 1’s and 0’s – and with a bitstream thus limited – does it really functionally matter how much “extra precision” is being deployed downstream?”

    Short answer is yes. Long answer is that a lot of the tech problems created by an encoding technology can be addressed. When such processing is employed, the result needs to be re-encoded at a higher precision of bit depth and color sampling in order to avoid reintroducing errors like banding. Chroma smoothing also improves DV, and post smoothing and sharpening can be applied.
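Michael's banding argument can be sketched numerically (a hypothetical gamma round trip, not any particular grading tool): run an 8-bit ramp through a strong adjustment and back, once in an 8-bit intermediate and once in a 10-bit one, and count how many distinct levels survive.

```python
# Toy sketch (hypothetical gamma round trip, not any real NLE pipeline):
# push an 8-bit ramp through a strong gamma adjustment and back, once in
# an 8-bit working space and once in a 10-bit working space. The
# low-precision path merges more input levels, which shows up as banding.
GAMMA = 2.2

def roundtrip_grade(levels, working_bits):
    top = (1 << working_bits) - 1
    out = []
    for v in levels:
        x = round((v / 255) ** GAMMA * top)        # darken in working space
        y = round((x / top) ** (1 / GAMMA) * 255)  # undo it, deliver 8-bit
        out.append(y)
    return out

ramp = list(range(256))
eight_bit = roundtrip_grade(ramp, 8)    # 8-bit intermediate
ten_bit = roundtrip_grade(ramp, 10)     # 10-bit intermediate

# the 10-bit intermediate preserves more distinct output levels
assert len(set(ten_bit)) > len(set(eight_bit))
```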

    So if that processing is happening via an SDI output from a DV deck then that needs to be captured at a higher resolution. Any image manipulation (and that is what we do in post) should be done with the higher bit depth and color sampling of a better codec. Regardless of what people think of digital signals and theory, post people are not deluded when they observe the benefits of capturing and processing low res digital codecs using better codecs.

    For years people wondered how I made their HDV look so much better, and the secret was using a Canon camera with an SDI output and capturing the files as 10-bit 4:2:2.

  • Jim Wiseman

    November 16, 2015 at 8:45 am

    Agree. I use the SDI out of my Panasonic AJ-D650 to encode to ProRes 422 and, depending on what the other material is encoded in, even ProRes 422 HQ if there is not that much material. I can see a difference from the FireWire output of the Panasonic mini-DV VTR. Of course, all of the DVCPro is SDI out.

    Jim Wiseman
    Sony PMW-EX1, Pana AJ-D810 DVCPro, DVX-100, Nikon D7000, Final Cut Pro X 10.2.2, Final Cut Studio 2 & 3, Media 100 Suite 2.1.6, Premiere Pro CS 5 5.5 and 6.0, AJA ioHD, AJA Kona LHi, Blackmagic Ultrastudio 4K, Blackmagic Teranex, Avid MC: Mid 2015 MacBook Pro Retina 15″: 2013 Mac Pro Hexacore, 1TB SSD, 64GB RAM, 2-D500: Helios 2 w 2-960GB SSDs: 2012 Hexacore MacPro 3.33 Ghz, 24Gb RAM, GTX-680, 960GB SSD: Macbook Pro Retina 2015, i7, 500GB, M370X 2GB: Macbook Pro 17″ 2011 2.2 Ghz Quadcore i7 16GB RAM 250GB SSD, Multiple OWC Thunderbay 4 TB2 and eSATA QX2 RAID 5 HD systems

  • Mauricio Lleras

    November 16, 2015 at 3:20 pm

    Hello again to all, and my apologies if I was absent. As I said, I live in Paris, and as you may guess, things here these past few days have been difficult, so I haven’t really been available.

    Great input from everybody on this topic, thanks a lot! I’m especially interested in the perceived gains Mitch, Michael, Jim and others have noticed in capturing low-res SD through SDI to a 10-bit 4:2:2 codec such as ProRes 422. I don’t know if I’ll be able to use this method (we do have a Sony deck that outputs SDI, but the card the guy has has no SDI in, and I don’t know if he would invest in a different one, knowing that budget is an issue). Still, I would be interested in hearing your thoughts on using ProRes LT as opposed to ProRes 422, thinking of course about saving some space. The question being: having in mind what Bill and Adam said, what would be the gain in going ProRes 422 over LT in this case? If LT is already 10-bit 4:2:2, captured also through SDI, what would the extra bits/space bring, if anything? And keep in mind that ProRes LT’s bitrate is almost identical to DV’s in SD formats…

    I know we’re probably talking about some very minute details here, but I would be interested in your answers all the same! Could anybody perhaps do a quick test to provide feedback? Anyway, interesting stuff, and I’m very glad to have started this discussion here. Thanks again.

  • Michael Gissing

    November 16, 2015 at 10:14 pm

    I suspect you will not see any difference between the LT and 422 flavours of ProRes. The big gains are in the bit depth and colour space, but it should be simple to do a test on some difficult footage with fast movement and see. Because both flavours of ProRes are I-frame only, there should be no difference given the lack of image information in the original codec.

    That said, your final grade render can be anything, and at that point you may as well make your final in 422 to be as lossless as possible in that final render.

