James Dubendorf
Forum Replies Created
-
John,
Ah! Thank you! The appearance of the image in my preview window after the conversion is not how it will “actually” look when displayed on youtube, vimeo, or even on dvd. This simple misunderstanding has led me down such a confusing road! I assume it is also true that the appearance in the preview window of a computer rgb video in computer rgb color space with its ends clipped (0-16, 235-255) is not how it would ultimately look on youtube, vimeo, etc.
Unfortunately, though my video is all from the same camera, there is tremendous variation between tracks- inside, outside, dark, bright. I will have to tailor my workflow to this consideration.
My attempt to summarize:
CHOICE 1: What color space do I edit in?
The goal here (at least for me) is to use the preview window (as opposed to secondary preview) as a somewhat reliable guide for color correction and levels. If editing in computer rgb, no problem. If editing in studio rgb, you must apply a studio to computer conversion at the video output fx level while editing so that the preview window is not grayed out. Most of the time, it is probably easiest to edit in the color space that matches the majority of the content. What you do NOT want to do is convert twice in the RENDER (i.e. make each event legal at the event level, then apply another conversion at video output fx).
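For anyone following along, the studio/computer remap being discussed is just a linear stretch of 16-235 out to 0-255 (and the reverse squeeze going the other way). A minimal sketch of the math in plain Python, with hypothetical function names, not anything Vegas actually exposes:

```python
def studio_to_computer(v):
    """Stretch a studio-range (16-235) 8-bit value out to full range (0-255)."""
    out = round((v - 16) * 255 / 219)
    return min(max(out, 0), 255)  # clamp anything that was outside 16-235

def computer_to_studio(v):
    """Squeeze a full-range (0-255) 8-bit value into studio range (16-235)."""
    return round(v * 219 / 255 + 16)

# Black and white map exactly between the two ranges' endpoints:
print(studio_to_computer(16), studio_to_computer(235))  # 0 255
print(computer_to_studio(0), computer_to_studio(255))   # 16 235
```

Applying the appropriate direction once, at one level (event, track, or video output fx), is the whole game; the posts below are about what goes wrong when it happens twice.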
Choice 2: What color space do I render in?
It seems to me there are advantages to having the capability of rendering to either computer or studio rgb- different jobs, different requirements. As long as you’ve done all your editing in the same color space, this is a fairly straightforward conversion as a video output fx.
Do you think I’m starting to get a handle on all this?!
Once again, many thanks for your help John.
James
-
John,
Many thanks for your extremely helpful response. When I wrote my first post, I had no idea the amount of discussion these issues had inspired on forums like this:
https://www.sonycreativesoftware.com/forums/ShowMessage.asp?MessageID=754200
I don’t have the technical background to fully understand this stuff (even the very well informed appear to have different views!), I don’t have the time to stop everything and earn multiple degrees, and my work simply does not demand (nor will my clients be interested in paying for) the highest levels of color correction and fidelity. I am simply trying to formulate a survival strategy that will let me live to film and edit another day!
All the files I’m working with, be they still images or avchd files, are in computer rgb (more on that in a second), so these comments apply to a workflow based on that fact. Also, my projects, though relatively short in length, typically contain numerous short events- therefore anything that has me working event by event is fairly time consuming.
My conclusion is that there appears to be considerable value in being able to render the same video file in both computer and studio color spaces. Youtube or Vimeo can suddenly change their process, new streaming options can come along, you might suddenly have to produce a version on dvd, etc. It seems that the higher up the chain one goes from event to video output fx, the easier it becomes to switch between spaces as long as all track events exist together in the same color space.
This ALSO assumes that if you are converting from computer to studio rgb, you are content with how Vegas applies clipping or compression uniformly to everything depending on the particular plugin and settings.
And here’s the rub. Many of the still images and movie files exist, for lack of a better description, “in between” computer and studio rgb on the histogram. In other words, some would lose crucial detail if 0-16 and/or 235-255 was clipped rather than compressed (or “mapped,” as this tutorial puts it: https://www.glennchan.info/articles/vegas/color-correction/tutorial.htm). Some would look fine if one end was clipped but not the other. Some look just fine with both ends clipped, and horrible if they are compressed. The question of compression, clipping, or some mix of the two, sometimes has to be answered at the event level.
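To make the clip-versus-compress tradeoff concrete, here is a hedged sketch (plain Python with made-up function names, not what Vegas or any plugin does internally) of the two ways an out-of-range value can be made legal:

```python
def legalize_clip(v):
    """Hard clamp to 16-235: any detail outside that range is discarded."""
    return min(max(v, 16), 235)

def legalize_compress(v):
    """Linearly remap 0-255 into 16-235: detail survives, contrast drops."""
    return round(v * 219 / 255 + 16)

# A deep shadow at 5 and a bright highlight at 250:
print(legalize_clip(5), legalize_compress(5))      # 16 20
print(legalize_clip(250), legalize_compress(250))  # 235 231
```

With clipping, everything at or below 16 collapses to a single value (shadow detail gone for good); with compression, 5 and 250 stay distinct but the whole histogram flattens. Which loss is acceptable really can vary event by event.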
If I’m not entirely off base so far, one could in theory clip at the track or even video output fx level to establish legal boundaries, and then decide how much and which part of the histogram to include within those boundaries at the event level. Or simply make all these decisions piece by piece at the event level.
I’m worried, however, that I’m not entirely understanding the rgb conversion tool. In almost all cases, applying computer to studio rgb conversion obviously makes my images look washed out in the preview window on my external LG led monitor hooked up to my laptop. You hate to give up computer rgb! In many cases, clipping looks a lot better than compressing.
I’ve experimented with turning the “adjust levels from studio to computer rgb” option on and off in preferences > preview device. It makes no apparent difference in my preview window. Should it? And if so, am I not seeing the conversion correctly?
Perhaps my question is this: are the levels problems introduced by conversion from computer to studio an exposure of levels issues that were already existing in the event BEFORE the conversion, or does the conversion create its own problems even on a perfectly tuned event?
If it’s the former, one could rest easy as long as the levels were sound before the conversion. Color correct and adjust levels in computer rgb at the event level, then throw a conversion on the whole project when needed. If it’s the latter, you are pretty much left with no choice but to edit two separate versions of the same project going event by event. That’s a lot of work.
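One way to reason about the either/or question above: if the conversion is a fixed linear remap (as the endpoint math suggests), it cannot invent new level problems, it can only expose what was already in the event, plus a small 8-bit rounding cost, since 256 input codes must share 219 output codes and a few neighbors collide. A round-trip sketch under that assumption, with hypothetical function names:

```python
def computer_to_studio(v):
    """Squeeze a full-range (0-255) value into studio range (16-235)."""
    return round(v * 219 / 255 + 16)

def studio_to_computer(v):
    """Stretch a studio-range (16-235) value back out to 0-255."""
    return min(max(round((v - 16) * 255 / 219), 0), 255)

# Example collision: 3 and 4 both land on studio code 19,
# and the round trip returns 3 for both.
print(computer_to_studio(3), computer_to_studio(4))  # 19 19
print(studio_to_computer(19))                        # 3
```

If that model is right, the washed-out look after conversion is the compression itself (plus the preview question above), not damage to an otherwise well-tuned event.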
Well, it’s late and I might well be over-thinking this. Heck, my videos were serviceable even before I knew levels or color correction existed.
Thank you for your time!
James
-
Mike,
My understanding about color issues with streaming video comes from here
https://www.bubblevision.com/underwater-video/YouTube-Vimeo-levels-fix.htm
and here
https://www.youtube.com/watch?v=rWMX5lSvEgY
If you engage the computer to studio fx at the track level, you can always toggle it on and off as you edit at the event level within the track. I suppose that is personal preference. What you would not want to do is convert computer to studio at the event level (using whatever fx combination), then do that again at the track level- if you look at the histogram, this would squash it far within the boundaries of 16-235. You could (in theory) convert at the event level, then apply the broadcast filter at the track level to catch any stray bits- this simply cuts off illegal colors rather than compressing the spectrum.
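The “don’t convert twice” point can be checked with simple arithmetic: applying the computer-to-studio squeeze a second time pushes the endpoints well inside 16-235. A quick sketch (plain Python, floats kept unrounded so the squash is visible; the function name is made up):

```python
def computer_to_studio(v):
    """One computer-to-studio squeeze: map 0-255 linearly into 16-235."""
    return v * 219 / 255 + 16

# The first pass lands exactly on the legal endpoints...
print(computer_to_studio(0), computer_to_studio(255))       # 16.0 235.0
# ...but a second pass squashes the histogram further inward:
print(computer_to_studio(16.0), computer_to_studio(235.0))  # ~29.7 ~217.8
```

That narrowed ~29.7-217.8 band is exactly the “squashed far within 16-235” histogram described above.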
James
-
Danny,
Thanks for the response!
Am I reading correctly that you’ve had good results keeping the project settings in HD while rendering to an SD mpeg for output to DVDA and viewing on DVD?
I am anxious about allowing Vegas to adjust media because I have a few different tracks carefully coordinated together involving text, arrows, etc.- if one track changes while the other stays the same, it may cause more problems than it solves. I have also considered rendering the entire project as a neoscene AVI, bringing that file into a new project, then applying certain effects such as color correction to the entire track, and perhaps zooming out a bit to correct the side pillar issue and account for overscanning.
Best,
James
-
John,
When I deinterlace the Canon PF 30 footage through neoscene, Vegas does not recognize the resulting AVI as progressive- it does, however, recognize some footage from a GoPro camera as progressive without a problem. If neoscene is indeed able to deinterlace PF 30 footage, shouldn’t Vegas see this? Any ideas on how to diagnose where I’m going wrong?
Most of this material is destined for web distribution, and it will include still photos as well as various text and graphics. To help in the decision of whether to shoot/edit/render in progressive, perhaps my question is this: if editing both 60i and 30p footage within the same project, is it preferable to set project properties and render settings to 60i or 30p (knowing one or the other will clash)?
Thanks for your ongoing efforts!
James
-
At the risk of saying “just one more question” too many times, I have another one related to this workflow. My goal is to record at 1920×1080 resolution, using the vixia hf g10’s PF30 mode. This is described in the manual as shooting 30 frames per second progressive, recorded as 60i (I’ve heard it called a 60i “shell”?).
My understanding is that I can ask neoscene to “maintain source frame format” while checking the “progressive source” box, and the resulting AVI files will be progressive. Vegas, however, did not recognize them as such, and I had to manually change the properties of each file in my media window.
Is it possible I am doing something wrong in the neoscene conversion? Or am I simply running up against the limits of Vegas Movie Studio- I know, I know, I need the pro version but don’t have it yet!
Best,
James
PS I am shooting progressive because end products will be destined for web distribution, and I would like the footage to play nicely with still photos and graphics. Is this a solid plan?
-
Thanks, John. I think I am up to speed now. Your help has been invaluable!
James
-
John,
Great tutorial! Just what I needed!
I did a bit more testing. When viewing MTS files from the camera and AVI cineform files of the same footage in windows media player, I feel as though the AVI files are of noticeably lower quality when dealing with subjects moving quickly- a bit blurry/jagged. I compared the footage side by side in the Vegas preview window, but could not see the difference there.
Am I seeing things, or could the differences be real? Are fast motion shots situations where I would want to set the render with high rather than medium encoding quality?
Thanks again for all your help- I want to be sure I understand all this before I start deleting the source material!
James
-
Thank you for the reply, John. One more question, if I may. It appears that if I wish to render into AVI, I have the option of using the “Default Template (uncompressed)” or “HD 1080-60i YUV” templates. Both allow me to choose the cineform codec as a video format in the custom settings, and both appear to create similarly sized files. Are there advantages and disadvantages to either?
James
PS Most of my source footage is 60i.
-
Scott,
I will give your recommendations a serious look! It seems HD is the way to go, no question. So many opportunities to spend so much money…
Thanks again for your help.
James