I have enjoyed this discussion immensely and DO have some questions that appear at the end of this tome. Feel free to jump to the end.
Despite having been in pro video for decades, I am always humbled when the discussion turns to IRE settings, color spaces, chroma headroom, channel interaction, phase delay, color gamuts, ad nauseam.
Frankly, I am always a bit surprised when my work meets my expectations despite feeling like there is so much I don’t know. As they say, the more you know, the more you know what you don’t know! So little time to learn so much! Instead, I rely on my eyes and on the relationship between what I see and what various scopes, traces, and histograms show me.
Much of my recent work has been archival in nature. After decades of various format transfers and archiving, I always try to maximize one thing – to capture every last quantum of information from the original source onto the newer format. Although this often contradicts the edicts of “convert with the viewing device in mind”, it guarantees that the maximum amount of information has been preserved. As such, you can be confident you have all the data there is/was and can perform later transformations for a specific viewing device.
For example, for my digital photo art, I use the widest color space which results in a washed out image BUT preserves the maximum range of the original scene. Only when producing a final piece for showing or sale do I adjust the image for the target medium.
In terms of this discussion, I looked at the RGB histograms. One IRE setting produced a very nice shouldering of the dark levels at the expense of blown highlights. The other IRE setting does the opposite: it accommodates the highlights better at the expense of clipping a good deal of shadow tones.
This suggests that the tonal range of the source may be greater than the transfer chain can handle. Assuming the Canopus has exposure and contrast adjustments, I’d choose the IRE level that preserves the shadow details, then attenuate the brightness so that the histogram shows good shouldering at both ends of the tonal space.
This will likely produce an image that will have less impact and lower contrast BUT you will have archived the maximum amount of information.
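To make that “shouldering at both ends” check a bit more objective than eyeballing a histogram, here is a minimal sketch of the idea in Python with NumPy. The `clipping_report` helper and the synthetic test frame are my own illustrations, not part of any capture tool; the assumption is simply 8-bit RGB frames where values pinned at 0 are crushed shadows and values pinned at 255 are blown highlights:

```python
import numpy as np

def clipping_report(frame, low=0, high=255):
    """For each channel of an H x W x 3 uint8 RGB frame, return the
    fraction of pixels crushed to the floor (<= low) and blown to the
    ceiling (>= high) of the tonal range."""
    n = frame.shape[0] * frame.shape[1]
    report = {}
    for i, name in enumerate(("R", "G", "B")):
        ch = frame[:, :, i]
        crushed = np.count_nonzero(ch <= low) / n
        blown = np.count_nonzero(ch >= high) / n
        report[name] = (crushed, blown)
    return report

# Synthetic example: a horizontal gradient whose shadow end is
# deliberately pushed below 0, so clipping crushes it to black.
row = np.linspace(-32, 255, 720)
frame = np.tile(row, (480, 1))
frame = np.clip(frame, 0, 255).astype(np.uint8)
frame = np.stack([frame] * 3, axis=-1)

for name, (crushed, blown) in clipping_report(frame).items():
    print(f"{name}: {crushed:.1%} crushed, {blown:.1%} blown")
```

Comparing the two IRE settings with a report like this, the better archival choice is the one where neither fraction is large; a few tenths of a percent at each end is the kind of gentle shouldering described above.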
Once you have a digitized master, you can then tweak copies of that for whatever the viewing device is. Another thing to keep in mind is that modern HD/UHD/4K displays have huge tonal ranges compared to TV screens of 10 years ago, so that which looks washed out now will be expanded to a wider range on such displays. The term “TV screen” is a thing of the past. They are now pretty much all wide-range, deep-black, bright-white displays equal to or better than many home PC monitors.
I am currently transferring some Digital-8 tapes as well as Hi-8 tapes played back on a Digital-8 camcorder (Sony TRV-510) using its iLink aka IEEE-1394 aka FireWire output. Similar to what others have noted, I see no discernible difference between a direct FW connection captured with Premiere Pro versus adding a Canopus ADVC-300 inline with its default settings.
Looking back at my own words, it seems so moot when the quality of a Hi-8, Digital-8 or (gasp) VHS tape is so very poor compared to today’s HD/UHD/4K displays and my HD camcorder’s output. I still shudder over the public having chosen VHS over Beta. But memories are precious and irreplaceable – no matter the format or quality.
Despite my diatribe, I also have questions 🙂
Q: Is there any major need to have the Canopus inline when transferring a Digital-8 tape, since the result appears virtually identical in all respects with or without it? As others have suggested, it seems as if the Canopus is simply passing the FW signal through.
Q: I do suspect my Canopus might be useful when transferring analog Hi-8 tapes played on the Digital-8 camcorder. Since the source is analog, I can see where the Canopus might help deal with phase jitter, unstable chroma signals, etc. Does that sound right?
Thanks for the knowledge and for listening.
TK