In this day and age, is it really relevant to worry about safe zones on the screen, or to avoid using the first or last of 256 levels for pixels? Since the demise of CRTs, I haven’t seen any flat-panel displays that don’t show every single pixel on the screen, nor have I seen any problems displaying pixels with levels below 16 or above 235. It might have been an issue in 1950, with analog TV and projected film or screens with fuzzy borders, but it seems a bit archaic today.
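For context on where those 16/235 numbers come from: they are the 8-bit "legal" or video-range limits, where black sits at code 16 and reference white at 235 rather than 0 and 255. A minimal sketch of the standard linear scaling between full range and legal range (assuming the usual BT.601/709 limited-range mapping for 8-bit luma; function names are my own):

```python
def full_to_legal(level: int) -> int:
    """Map an 8-bit full-range level (0-255) into 8-bit video/legal
    range (16-235) using the standard linear scaling: 16 + 219/255 * v."""
    return 16 + round(level * 219 / 255)

def legal_to_full(level: int) -> int:
    """Inverse mapping back to full range, clamped to 0-255 so that
    sub-black (below 16) and super-white (above 235) inputs don't overflow."""
    full = round((level - 16) * 255 / 219)
    return max(0, min(255, full))
```

So full-range black (0) lands on 16 and full-range white (255) on 235; the codes outside that band are reserved headroom/footroom, which is exactly the dynamic range being "given up."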
A great deal of material is no longer projected on screens at all, and it is captured by sensors with working pixels all the way to the edges. So what cogent arguments remain for sacrificing a huge chunk of screen real estate to stay inside safe zones, or for doing without the blackest blacks and whitest whites? Are people still respecting these limitations, and if so, why and in which contexts? Do modern television sets (not monitors) still simulate overscan? Aren’t digital cinema projectors capable of aligning edge pixels with the screen edges?
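For anyone wondering how much real estate the classic conventions actually give up: the traditional SD guidelines inset action-safe to 90% of each dimension and title-safe to 80% (HD standards use tighter insets, but the idea is the same). A quick sketch of computing a centered safe rectangle (the function name and return layout are my own):

```python
def safe_zone(width: int, height: int, fraction: float) -> tuple:
    """Return (x, y, w, h) of a centered rectangle covering `fraction`
    of each frame dimension, e.g. 0.9 for classic action-safe or
    0.8 for classic title-safe."""
    w = round(width * fraction)
    h = round(height * fraction)
    # Center the inset rectangle in the frame.
    return ((width - w) // 2, (height - h) // 2, w, h)
```

On a 1920x1080 frame, the 80% title-safe box covers only 64% of the total pixel area, which is the "huge chunk of screen real estate" in question.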
It just seems that it’s already hard enough to preserve image quality from end to end of any workflow, so deliberately handicapping resolution and dynamic range even further because of the way things used to be seems counterproductive.
I still respect safe title out of habit. That being said, for aesthetic reasons it’s nice not having text at the very edge of the screen. I do a lot of videos for trade shows, and I’ve run into issues where the A/V techs and their system of scalers and USB players resized my videos a little, so I was glad I had some leeway. But I do agree with you: most content is for the web nowadays, so it’s largely a non-issue.