Forum Replies Created
-
By sheer coincidence, I wrote an article on my blog outlining how groups are best used in Resolve. You can access it here:
https://vanhurkman.com/wordpress/?p=692
Hope that helps.
http://www.alexisvanhurkman.com | http://www.correctionforcolor.com
-
I wrote a blog article about groups last year that should make Resolve’s group functionality clear. Hope it’s helpful.
https://vanhurkman.com/wordpress/?p=692
-
There’s an unobvious way of doing this:
1) Add a second serial node, and connect the key output of the previous node to the key input of the new one.
1b) Alternately, use the “Add Outside Node” command to do the same thing.
2) Open the Key tab, and in the “External Key” section, uncheck “Invert” and raise the Blur Radius parameter.
That will blur the key further without affecting RGB. Unfortunately, the External Key / Blur Radius slider also has a maximum level, so depending on how much blur you want, you may need to repeat this a few times. It’d be nice if DaVinci raised the maximum levels on blurs.
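The idea behind those steps can be sketched outside of Resolve: a key is just a grayscale mask, so blurring it feathers the edge of a correction while leaving the RGB image untouched. Here's a minimal Python sketch, using a simple 1-D box blur as a stand-in for Resolve's blur (an assumption; the actual filter Resolve uses isn't documented here):

```python
# Conceptual sketch only -- not Resolve's implementation. The key is a
# grayscale mask (values 0.0-1.0); blurring it softens where a correction
# lands without touching the image's RGB data at all.

def box_blur(key, radius):
    """Average each sample with its neighbors within 'radius'."""
    out = list(key)
    n = len(key)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out[i] = sum(key[lo:hi]) / (hi - lo)
    return out

# A hard key edge...
key = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
soft = box_blur(key, 1)
# Stacking passes (like chaining serial nodes when one Blur Radius
# slider maxes out) widens the falloff further:
softer = box_blur(soft, 1)
```

Each additional pass widens the soft falloff, which is effectively what chaining the key through another node's External Key blur does.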
-
Since my book (the Color Correction Handbook) is quoted, allow me to make a slight correction and to elaborate on Mike Most’s reply. At the time I wrote that, I was going by information regarding the inverse of the gamma encoded by Rec 709-compliant cameras, and so the display gamma I derived was 2.5. It was difficult to find a quoted gamma standard for Rec 709, so that’s what I went with. Unfortunately for me, in light of new information, it appears I was off by 0.1 in that section.
It has since been brought to my attention, via Charles Poynton’s own open letter to the industry, that there has in fact never been a formal display gamma standard for Rec 709; gamma was an implicit characteristic of CRT displays that was simply “built in.” The actual gamma employed by CRT displays is quoted by Poynton as falling between 2.3 and 2.4.
In his open letter, Poynton advocates a published display gamma for digital broadcast displays (which by themselves have no implicit gamma) of between 2.35 and 2.4 (he seems to be hoping that SMPTE will pick one), and a peak white of 80-120 cd/m².
Lastly, the display gamma for projected digital cinema is 2.6 (with a peak white of 48 cd/m²), but that assumes a completely dark viewing environment. My understanding is that a higher display gamma represents scenes better in darker environments, whereas a lower display gamma represents scenes better in brighter environments (which explains sRGB’s gamma standard of 2.2 for lit office/computer environments).
With that rationale, 2.4 falls in the middle for a muted “evening living room” environment. This all reinforces the importance of a carefully controlled viewing environment, where your display settings match the characteristics of the display surround, for doing color-critical work. Personally, I hope Poynton is successful and that a single gamma standard is published, as this is a confusing topic that engenders a lot of disagreement and doubt.
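To make the effect of those different gammas concrete, here's a minimal Python sketch using a pure power-law display model (an assumption for illustration; real camera and display transfer functions are piecewise and more involved):

```python
# Pure power-law model of display gamma, for illustration only.
# The same code value decodes to a different relative luminance
# depending on the display's gamma -- higher gamma digs shadows
# and midtones deeper, which suits darker viewing surrounds.

def display_luminance(code_value, gamma):
    """Map a 0.0-1.0 code value to 0.0-1.0 relative luminance."""
    return code_value ** gamma

mid_gray = 0.5
for gamma in (2.2, 2.4, 2.6):  # sRGB office / evening living room / cinema
    print(f"gamma {gamma}: 50% code value -> "
          f"{display_luminance(mid_gray, gamma):.3f} relative luminance")
```

Running this shows the 50% code value landing progressively darker as gamma rises from 2.2 to 2.6, which is the numerical version of the surround rationale above.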
-
Regarding the first part of your question, a better approach to making multiple adjustments with a single key, window, or matte is to do the key or matte in a separate node, and feed that key to as many other correction nodes as you like via the key input/outputs (the little triangles at the bottom left and right of each node). That way you only need to adjust or alter a single qualifier, window, or matte, and the result will automatically apply to whatever other nodes you want to use to make corrections.
Regarding your second question, I’ve found node-to-node connections to be really wiggly myself; it would be nice if the mechanism were a bit more forgiving.
-
Page 134 in the revised Manual has a list of the effects that currently are and are not supported in DaVinci Resolve’s XML import. Currently, Still images and Freeze Frames are unsupported.
-
There’s a “TRACK” tab at the top of the node graph, between the CLIP and UNMIX tabs. When you open the TRACK tab, you can create a node tree that’s applied to every clip in the session (in combination with each clip’s individual grade), and you can freely alter and add to this node tree at any time, just like any other grade.
-
My understanding is that the YSFX controls essentially let you invert the signal, creating negative effects. It’s cool, but I’ve not yet had a job where this type of look would be appropriate. On the other hand, this comment is likely to be followed by 12 others who’ve used this for interesting things. It is neat to have this kind of control available on a per-channel basis, however.
-
There’s only one ProRes 4444 codec, but it has flexibility in terms of the color space used to encode the data, similar to other media formats. For example, DPX files can be encoded as Y’CbCr, RGB, or XYZ. While RGB is most common, the other colorspaces are possible even though you’re still using the DPX format.
My understanding is that, while ProRes 4444 is capable of either RGB or Y’CbCr encoding, the way the data is encoded depends on the application doing the encoding. In other words, the data you get isn’t unpredictable; it simply depends on which application you’re writing the files with.
For example, Final Cut Pro renders or outputs ProRes 4444 files that are Y’CbCr encoded because that’s the color space that FCP “likes” best. Apple Color, on the other hand, will render ProRes 4444 files that are RGB-encoded, because that’s Color’s color space. I’m relatively positive that After Effects creates ProRes 4444 files that are RGB-encoded as well.
I have been told by those who know well that transformations between the two color spaces are of high enough quality not to introduce artifacts, but I also suspect this has a lot to do with the color science employed by your application of choice.
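For the curious, the Rec 709 luma coefficients make the RGB/Y’CbCr relationship concrete. Here's a hedged Python sketch of the full-range, floating-point conversion (real encoders also quantize to integer code values and may subsample chroma, which is where round-trip loss would actually come from):

```python
# Full-range, floating-point R'G'B' <-> Y'CbCr conversion using the
# Rec.709 luma coefficients. Illustrative sketch: actual codecs quantize
# and may subsample chroma, so their round trips aren't this clean.

KR, KG, KB = 0.2126, 0.7152, 0.0722  # Rec.709 luma weights, sum to 1.0

def rgb_to_ycbcr(r, g, b):
    y = KR * r + KG * g + KB * b        # luma: weighted sum of R'G'B'
    cb = (b - y) / (2.0 * (1.0 - KB))   # blue-difference chroma
    cr = (r - y) / (2.0 * (1.0 - KR))   # red-difference chroma
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = (y - KR * r - KB * b) / KG      # recover green from the luma sum
    return r, g, b

# In pure float math the round trip is exact to machine precision:
rgb = (0.25, 0.50, 0.75)
back = ycbcr_to_rgb(*rgb_to_ycbcr(*rgb))
assert all(abs(a - b) < 1e-12 for a, b in zip(rgb, back))
```

The math itself is lossless; it's the integer quantization and chroma handling on either side of it, which differ by application, that determine whether a conversion is visually transparent.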
Gamma handling is entirely up to the application you’re working in (one reason I always loved Shake: it made absolutely no gamma assumptions whatsoever). I can’t speak to the gamma issues folks have been having with DPX media, as I don’t have DPX-using clients. However, my current understanding is that ProRes 4444 to ProRes 4444 workflows (inputting 4444 media and outputting 4444 media) have no untoward gamma processing issues, unlike, for example, the pain experienced when using the Animation codec in FCP. That said, it’s been a while since I’ve done extensive testing on this topic, so I’d be curious to hear from anyone who’s having ProRes 4444 to ProRes 4444 problems.
-
Alexis Hurkman
December 6, 2010 at 3:12 pm in reply to: Is it normal to have 10 frames of latency on the monitoring output?
I’ve got consistent latency between the video-out and the onscreen GUI; it’s normal. Regarding transport controls, I find the keyboard has a lot more latency than the transport controls on the Wave. Practically speaking, hitting the spacebar to stop playback takes a second and a half, while pressing the “stop” button on my Wave is much more responsive (though it still lags a bit).