December 24, 2019 at 10:40 am
Just curious really. I was reading up on Optical Printers (circa 1980s) and every article seems to skip over the intermediate step of how an image on a computer was transferred to film for optical compositing.
One article alluded to lasers and that’s about it. Was it “just” that the images were rendered at 4K+ resolutions and displayed on massive high-resolution monitors? Or was there some kind of actual printer writing directly onto film, line by line?
If anyone knows about it, I’d love to learn more.
December 24, 2019 at 2:05 pm
I am not an expert here – just someone who (many years ago) had some of my video transferred to film.
There are several processes to transfer from video to film. One of the most widely used was called kinescope and basically involved aiming a film camera at a “picture tube” and then making some adjustments for the differing aspect ratios and frame rates. Early network news programs sometimes used this to get “video” out to the rest of the world very quickly. The film was then put on a local telecine and converted back to video for over-the-air broadcast.
Kinescopes were also used by early television series. I believe that the earliest “I Love Lucy” shows were one of the first to be kinescoped. Numerous techniques were introduced to minimize the effect of scanning lines, ghosting, and other artifacts. As color video came along, the kinescope process was incrementally improved to accommodate the color information.
There was some experimentation using an electron-beam to write directly onto movie film, but I’m not certain which systems, if any, even made it into production.
The Society of Motion Picture and Television Engineers (SMPTE) would have all sorts of technical details in their journals from the 1950s and 60s. I have not yet found a free archive for that publication, but you might try a large university library.
December 24, 2019 at 3:12 pm
Thank you Jim,
That’s given me a new term to search for. I find this stuff fascinating; growing up, laypeople talked about CGI as if it were just a matter of plugging in numbers, yet most of the time the biggest movies were inventing the technology they needed as they went along.
December 28, 2019 at 3:14 pm
If you are in the mood for researching, you might try the Paley Center for Media (paleycenter.org).
JSTOR (and others) probably can provide “Kinescope Recording and Technical Considerations in Films Produced for Television Transmission” for some technical details.
January 2, 2020 at 2:26 am
There is more than one method that is/was used, just as there is more than one method used to go from film to video/digital.
The earliest and lowest quality was the kinescope, which was along the lines of pointing a film camera at a video monitor and shooting it. Before video TAPE, this was the only way to preserve a live television program. While there was signal processing and various innovations over the years, the quality suffered, as you might imagine. “I Love Lucy” was one of the first shows to adopt a “shoot on film, broadcast later” workflow, still used today for sitcoms — this eliminated the need for kinescopes, improving quality.
Over the years there have been many methods of film recording, not necessarily for motion film. For instance, electron beam recording for computer output microfilm.
As digital filmmaking progressed in the 1990s, recording back to film demanded a high-quality method. One method used a “flying spot” type CRT with color filters that rotated in place, so that the flying spot exposed the C, Y, or M records of the film negative in turn.
ARRI Laser is “probably” the most common laser-type film recorder, at least here in Hollywood at the few places that still do film-out.
But as far as “early artists” how early do you mean? Do you have some examples?
VFX & Title Supervisor
January 8, 2020 at 2:10 am
The first systems used high-resolution grayscale monitors with RGBW filter packs that moved in and out between the CRT and the lens during the exposure, creating an RGB record from the grayscale image (so multiple passes for the same frame). The problem (initially) was that film is differently sensitive to different wavelengths of light, so the red record would expose more slowly than the blue, for instance. This approach also let every pixel on the screen contribute to the image, instead of only one out of every three “sub” pixels, as would be the case with an RGB display.
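Not from any real recorder’s software — just a toy sketch of the multi-pass scheme described above. One grayscale frame is exposed three times through R, G, and B filters, with a hypothetical per-channel exposure multiplier (all the numbers here are made up) standing in for the film’s uneven spectral sensitivity:

```python
# Hypothetical per-channel exposure multipliers: red is the slowest-exposing
# record in this toy example, so it gets the longest exposure.
EXPOSURE_TIME = {"R": 1.4, "G": 1.0, "B": 0.8}

def record_frame(gray_passes):
    """gray_passes: dict of channel -> grayscale frame (2D list, 0.0-1.0).
    Returns an 'exposed' RGB frame as a 2D list of (r, g, b) tuples."""
    height = len(gray_passes["R"])
    width = len(gray_passes["R"][0])
    frame = []
    for y in range(height):
        row = []
        for x in range(width):
            # Each channel is a separate pass through its own filter;
            # exposure scales with brightness * time, clipped at full density.
            pixel = tuple(
                min(1.0, gray_passes[ch][y][x] * EXPOSURE_TIME[ch])
                for ch in ("R", "G", "B")
            )
            row.append(pixel)
        frame.append(row)
    return frame

# A 1x2 test frame: the same grayscale image fed to all three passes.
passes = {ch: [[0.5, 1.0]] for ch in "RGB"}
print(record_frame(passes))
```

Note how a mid-gray pixel comes out warm-shifted here: the longer red pass puts more exposure on the red record. A real recorder would calibrate those times so the passes balance out on the actual film stock.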
Lasers came on to the scene much later.
The biggest issue was that the gamut of film was so much wider than that of the computer imagery or the CRTs of the time. It was really hard to get clean whites without blowing everything else out, and hard to get dark shadows without fogging the rest of the image, because the exposure times were long in photographic terms. (Plus, a CRT is never really off, even where the image is black, so the dark areas were constantly exposing too. Try it: turn off all the lights in a room, close your eyes to get used to the dark, then switch off a CRT TV and open your eyes. That tube will glow for minutes!)
Some systems used a moving physical mask and scanned the image on the screen line by line. This was much slower, but it kept the contrast up because the mask stopped the constant glow of the rest of the screen from exposing the film.
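To put toy numbers on why the mask helps (all values here are invented for illustration, not measured from any real tube or stock): a black CRT pixel still glows faintly, so its fog on the film grows with however long it sits uncovered. Unmasked, that is the whole frame time; with a moving slit, it is only that line’s own slot.

```python
BASE_GLOW = 0.002      # hypothetical residual glow per unit time at "black"
LINES = 500            # scan lines in the frame
TIME_PER_LINE = 1.0    # exposure time spent writing each line

def shadow_fog(masked: bool) -> float:
    """Fog accumulated by a black pixel over one frame's exposure."""
    total_time = LINES * TIME_PER_LINE
    # Unmasked: a black pixel sees the glow for the entire frame.
    # Masked: the slit only uncovers that pixel's line during its own slot.
    exposed_time = TIME_PER_LINE if masked else total_time
    return BASE_GLOW * exposed_time

print(f"fog without mask: {shadow_fog(False):.3f}")
print(f"fog with mask:    {shadow_fog(True):.3f}")
```

With these made-up numbers the unmasked shadow collects 500 times the fog of the masked one — which is the contrast advantage the post describes, bought at the cost of a much slower, line-at-a-time exposure.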