NTSC black and white signal
I’m working on an unconventional project, which I had better explain at the outset.
In 1999 the Leonid meteors stormed, producing a peak rate of 4,000 meteors per hour, or better than one per second. I was on a NASA research aircraft recording the event. We had banks of Hi-8 camcorders hooked up to image intensifiers looking in all directions. We took about 80 hours of video in that one night.
That video has never been digitized because the chief scientist could never get funding. I’m a volunteer, and I’ve been nagging the chief scientist to somehow dig up the funding. I finally lost my temper, saved up my pennies, and cobbled together a system just barely capable of doing the job. It uses an Aurora Igniter-X board.
I’ve been writing software for six months now to prepare for the big digitization push this summer. And we’ve run into a tricky little problem.
The image intensifiers are strictly monochrome devices: they show their image in green and white. The camcorders recorded this green-and-white image in color. We don’t care about the color information, because it’s meaningless. The ONLY thing we care about is getting an accurate measure of the brightness of each pixel. However, for a variety of reasons, we can’t afford to digitize and store the data at 32-bit resolution. Besides, the luminance resolution of the image intensifiers is only about 1%, so a single 8-bit representation of the image is all that’s needed.
Now, the obvious way to do this is to digitize in black and white. That is, we tell the Igniter X software to record the video stream in black and white.
However, I thought it would be a good idea to check out alternatives before we start digitizing and archiving 80 hours of video. So I recorded some of the video in 32-bit color, then ran all sorts of statistical analyses on the results, and some of them surprised me.
Let’s start with a single frame from the video. I have four versions of that frame:
A. The version that was digitized in 32 bits of resolution.
B. The version that was produced by QuickTime storing the original data in 8-bit grayscale format.
C. The version that was digitized in 8 bits of grayscale.
D. A version produced by my own software that rendered the 32-bit data from A into 8-bit grayscale by the formula (R+G+B)/3.

Here’s the surprise: B and C are very similar to each other, but both are very different from D. I expected the grayscale to be just (R+G+B)/3, but it definitely isn’t.
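For what it’s worth, one plausible explanation for the B/C-versus-D discrepancy is that broadcast-style grayscale is not a simple average: the NTSC (Rec. 601) luma signal is a weighted sum that counts green far more heavily than red or blue. A quick sketch (Python; the sample pixel values are made up for illustration, chosen to be green-dominant like an intensifier image) shows how far apart the two formulas can land:

```python
# Compare simple-average grayscale with the NTSC/Rec. 601 luma weighting.
# The pixel values here are hypothetical, but green-dominant like a
# green-and-white image intensifier frame.
r, g, b = 40, 200, 60

simple_avg = (r + g + b) / 3                    # the (R+G+B)/3 formula (version D)
ntsc_luma = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luma weights

print(round(simple_avg))  # 100
print(round(ntsc_luma))   # 136
```

On green-heavy pixels the weighted luma comes out well above the simple average, which would make a D-style conversion systematically darker than whatever B and C are doing.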
Here’s another surprise: the ratios between the three channels are not fixed; they vary quite a bit between different pixels. Now, we KNOW that the ratios have GOT to be fixed on the image intensifier. Apparently there’s either some sort of change in ratio with higher luminance values, or the NTSC color signal is so messed up that it can’t maintain the color channel ratios uniformly. From what I’ve heard of the signal design of NTSC, this wouldn’t surprise me.
My best guess is that this has something to do with the NTSC signal. I recall that the NTSC signal was originally black and white, and then had the three color signals stuffed into it at higher frequencies. This suggests to me that the black-and-white information is separate from the color information, and for black-and-white digitization the Igniter X board reads the black-and-white signal, not the color signals.
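If that guess is right, then a software path for the second option would just apply the luma weights to each decoded pixel. Here’s a minimal sketch (Python); the 0xAARRGGBB packing and the Rec. 601 weights are my assumptions, not anything verified against the Igniter X output:

```python
# Hypothetical sketch: convert 32-bit ARGB pixels (0xAARRGGBB packing
# assumed) to 8-bit grayscale using the Rec. 601 luma weights. This is
# meant to approximate the black-and-white signal path, not reproduce
# any particular board's behavior.
def argb_to_gray(pixels):
    gray = []
    for p in pixels:
        r = (p >> 16) & 0xFF
        g = (p >> 8) & 0xFF
        b = p & 0xFF
        y = 0.299 * r + 0.587 * g + 0.114 * b
        gray.append(min(255, round(y)))
    return gray

# One made-up green-dominant pixel: R=0x28, G=0xC8, B=0x3C.
print(argb_to_gray([0xFF28C83C]))  # [136]
```

Whether this matches the board’s hardware grayscale path is exactly the question, so it would be worth running it on the 32-bit test frame (A) and comparing the output against C pixel by pixel.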
My questions are:
1. Is my hypothesis total bullshit?
2. What’s the best way to digitize this data: digitize in 8-bit grayscale, or digitize in 32-bit resolution and then write software to calculate the grayscale information from the R, G, and B channels?