Activity › Forums › Creative Community Conversations › How much longer for 720p?
Walter Graff
February 26, 2013 at 8:15 pm
“This is not correct if you are talking about PsF recording. The whole frame – as an NLE sees it – is the “same” (outside of spatial differences) in 23.98, 24, 25, 50, 59.94 P or PsF recordings. So scaling a 1080p frame to a 720p frame of the same frame rate DOES NOT introduce interlace, motion or interpolation artifacts.”
I never said with scaling. It’s inherent in PsF recording. Sony took a Betacam transport, doubled the sampling frequency and doubled the speed of the tape. You get HD without having to reinvent the wheel. And to protect themselves from interlace they created a record system that is two half frames that can either be combined for 1080 or split for interlace, or their fancy name for it, PsF. Nice, but what you give away is the same problems inherent in interlace in certain situations, mostly motion blurring and edge issues, since 1080 catered to the old interlace when they should have dumped it all for progressive in the first place. If you edit the stuff as I do each week, I’m sure you are familiar with the jitter issues. It’s the reason why ESPN went with 720p over 1080. In tests, some of which I was involved with, we had serious issues with sports motion and 1080.
“By your reasoning, should he shoot 720p and blow up some shots? Is this better? That would introduce obvious image degradation by comparison.”
You are correct. I didn’t read what he posted correctly about moving pictures in the edit. Sorry. So he’s shooting 1080 and putting that in a 720 frame so he can move pictures around. Works. And if the 720 footage is good you can blow up your picture pretty well too.
Walter Graff
BlueSky Media, Inc.
walter@bluesky-web.com
http://www.bluesky-web.com
Offices in NYC and Amherst Mass.
Oliver Peters
February 26, 2013 at 8:28 pm
[Walter Graff] “If you edit the stuff as I do each week, I’m sure you are familiar with the jitter issues. It’s the reason why ESPN went with 720p over 1080. In tests… some I was involved with we had serious issues with sports motion and 1080.”
I get what you are saying, but you are mixing apples and oranges. The issue you describe is a frame rate issue: 30 unique images as captured at the sensor versus 60 unique images. That’s what defines the motion resolution.
– Oliver
Oliver Peters Post Production Services, LLC
Orlando, FL
http://www.oliverpeters.com
Oliver Peters
February 26, 2013 at 8:41 pm
[Walter Graff] “Sony took a betacam transport, doubled the frequency sampling and doubled the speed of the tape. You get HD without having to reinvent the wheel. And to protect themselves from interlace they created a record system that is 2 half frames that can either be combined for 1080 or split for interlace or their fancy name PsF.”
PS: Don’t you think that’s a huge oversimplification of the engineering side? While it is true that you can “overlay” progressive capabilities on an engineering infrastructure built on interlace (including switchers, routers, monitors, etc.), the key difference is the temporal relationship of when the two fields (or half-frames) are captured. If the imager is capturing progressively, both fields occur at the same instant in time and therefore the complete frame is a de facto progressive image. If the fields are captured at different points in time, the whole frame is split-field, i.e. interlaced.
The key difference between Sony’s PsF “progressive segmented” frames and Panasonic’s “true progressive” frames in 720p is how the lines are read out – resulting in the incompatibility of pumping 720p signals through an otherwise interlaced facility.
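To put that temporal relationship in code form, here is a minimal sketch (purely illustrative, with made-up function names, not any camera’s actual readout logic): in a PsF capture both line sets carry the same capture instant, while a true interlaced capture offsets the second field by half a frame period.

```python
# Hypothetical model of field timing. In PsF both segments come from one
# progressive exposure; in true interlace the two fields are exposed half
# a frame period apart, which is where split-field motion comes from.

FRAME_RATE = 25.0  # e.g. a 1080/25 system

def capture_psf(frame_index):
    """Both line sets share one capture instant -> de facto progressive."""
    t = frame_index / FRAME_RATE
    return {"odd_lines_t": t, "even_lines_t": t}

def capture_interlaced(frame_index):
    """Fields are exposed 1/(2 * rate) seconds apart -> split-field frame."""
    t = frame_index / FRAME_RATE
    return {"odd_lines_t": t, "even_lines_t": t + 1.0 / (2 * FRAME_RATE)}

psf = capture_psf(0)
ilc = capture_interlaced(0)
print(psf["even_lines_t"] - psf["odd_lines_t"])  # 0.0: no temporal offset
print(ilc["even_lines_t"] - ilc["odd_lines_t"])  # 0.02 s offset at 25 fps
```

On a static scene the two cases are indistinguishable; only once the subject moves between the two capture instants does the interlaced frame show combing.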
– Oliver
Oliver Peters Post Production Services, LLC
Orlando, FL
http://www.oliverpeters.com
Walter Graff
February 26, 2013 at 10:18 pm
“PS: Don’t you think that’s a huge oversimplification of the engineering side?”
Of course it is, but not untrue. Sony didn’t want to spend money on R&D. Make a new system with an old transport and you make lots of profit. But the downside is you have to figure out how to make things work within limitations. And do you make it for the future, or do you hang on to the past just in case? They chose the latter. They are not alone. Panasonic made a whole line of “HD” prosumer cameras based on line doubling so they would not have to use an HD chipset. No one noticed. They had already lost their shirts inventing P2, and another costly mistake like M2 would have been a disaster.
“If the imager is capturing progressively, both fields occur at the same instance in time and therefore the complete frame is a de facto progressive image.”
It is, mostly. Again, it has some limitations which are inherent to that method of “line doubling”. Lots of motion is not something 1080i likes. I know that all too well when houses I work at are 1080 and I have a lot of motion and someone wants slow-mo. Rhymes with “oh no”.
“The key difference between Sony’s PsF “progressive segmented” frames and Panasonic’s “true progressive” frames in 720p, is how the lines are read out – resulting in the incompatibility of pumping 720p signals through an otherwise interlaced facility.”
Interlace is dead in the markets I work in. It took about five years. So while segmented frame served a purpose initially, it’s a dead horse in terms of interlace now mostly anywhere except one-horse markets. Sony should have looked forward, not backward.
Walter Graff
BlueSky Media, Inc.
walter@bluesky-web.com
http://www.bluesky-web.com
Offices in NYC and Amherst Mass.
Walter Graff
February 26, 2013 at 10:31 pm
[Oliver Peters] “I get what you are saying, but you are mixing apples and oranges. The issue you describe is a frame rate issue. 30 unique images as captured at the sensor versus 60 unique images. That’s what defined the motion resolution.”
Frame rate is part of it, but that is tied to the method of capturing two “half fields” and combining them, as segmented frame does, as opposed to true P, which captures one whole bucket at once. Not saying this for you, Oliver, but for those that might not be familiar. It’s basically a pulldown. Recorded independently of each other but able to make interlace or progressive, as you know. Not apples and oranges, more like navels and navels, except Sony didn’t want to buy new ones, so used old technology to make a new format.
Walter Graff
BlueSky Media, Inc.
walter@bluesky-web.com
http://www.bluesky-web.com
Offices in NYC and Amherst Mass.
Oliver Peters
February 26, 2013 at 11:16 pm
[Walter Graff] “Of course it is, but not untrue. Sony didn’t want to spend money on R&D. Make a new system with an old transport and you make lots of profit.”
What you are describing is the process that got us HDCAM VTRs. Are you forgetting that Sony had at least three HD formats before that? Including uncompressed open-reel analog, uncompressed open-reel digital and compressed U-matic-style analog. These were horrendously expensive, so HDCAM came about, built on the previous R&D of Digital Betacam. So I think Sony put in quite a lot of effort. FWIW – the PsF solution came from a collaboration with Laser Pacific, IIRC. It was not simply Sony’s solution.
[Walter Graff] “It is mostly. Again it has some limitations which ar inherent to that method of “line doubling”. “
There is no “line doubling” in PsF. To call it that adds confusion. Odd lines are filled in with even lines. Both sets are captured at the same point in time and combined for a frame. There are 1080 unique vertical pixels’ worth of data. Doubling implies that something is made out of half the image to fill in the missing half, and that’s simply not the case. If there’s any perceived “doubling” occurring, it’s a function of persistence of phosphors, lag in LCDs and persistence of vision.
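A quick way to see that nothing is doubled is this hypothetical sketch: split one progressive frame into its two line sets (the PsF “segments”) and recombine them. All 1080 unique lines survive; nothing is synthesized from half the image.

```python
# Illustrative only: model a progressive frame as a list of 1080 unique
# lines, segment it the way PsF does, then recombine the segments.

frame = [f"line {n}" for n in range(1080)]  # one progressive frame

segment_a = frame[0::2]   # odd-numbered line set (segment 1)
segment_b = frame[1::2]   # even-numbered line set (segment 2)

# Recombine the two segments back into a full frame.
rebuilt = [None] * len(frame)
rebuilt[0::2] = segment_a
rebuilt[1::2] = segment_b

print(rebuilt == frame)   # True: segmentation is lossless, nothing doubled
```

Since both segments came from the same capture instant, the round trip is exact; the same split on two temporally offset fields would be interlace, not PsF.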
[Walter Graff] “So while segmented frame served a purpose initially it’s a dead horse in terms of interlace now mostly anywhere except one horse markets. Sony should have looked forward, not backward.”
Actually they have. PsF is purely a function of recording to tape and a method to pass a signal down a cable. It isn’t a factor in the pick-up system within the camera and it isn’t a factor within an NLE until you have to pass the signal through a Kona, BMD or other I/O card. I work largely in a total file-based world and hardly ever deal with tape anymore. PsF or P makes no difference, because it’s all P inside the computer.
[Walter Graff] “It’s basically a pulldown”
Hmm… Not sure I would say that. Again, “pulldown” implies doubling and that’s not the case. A camera sensor, like a CMOS, captures the whole frame at once, regardless of whether the end signal is P or PsF. It’s really a matter of whether the data is read out one way or another.
It is true that if you take a 1080/29.97 signal (P or PsF) and convert it to a 720p/59.94 file, then yes, either doubling (1 field to 1 frame) or pulldown (1 frame repeated for two frames) does occur. So if that’s what you mean, then I guess, yes.
As an example, I work with 1080/23.98 media all the time. I can guarantee you that each individual frame is indistinguishable (in terms of progressive versus interlace characteristics) whether it was recorded on film, a 5D, an Alexa or re-ingested from HDCAM. But, when I convert that to 720p/59.94 for delivery, pulldown is added. Each frame is clean in regards to motion, but there are added redundant frames, due to the 24-to-60 conversion.
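The 24-to-60 conversion described above can be sketched like this (an illustration of the alternating 3:2 repeat cadence, not any NLE’s actual code): each clean progressive source frame is simply repeated, so the output gains redundant frames but never mixed fields.

```python
# Hedged sketch of 23.98p -> 59.94p delivery (treated here as 24 -> 60):
# repeat each source frame 3 times, then 2, alternating, so every second
# of 24 unique frames becomes 60 output frames.

def pulldown_24_to_60(frames):
    """Expand 24 progressive frames into 60 by a 3:2 repeat cadence."""
    out = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2   # 3,2,3,2,... averages 2.5x
        out.extend([frame] * repeats)
    return out

second_of_film = list(range(24))       # 24 unique progressive frames
video = pulldown_24_to_60(second_of_film)
print(len(video))                      # 60 frames out
print(len(set(video)))                 # still only 24 unique images
```

This matches the point in the post: each output frame is clean with respect to motion, and the only cost is redundancy, because the motion resolution is still set by the 24 unique captures.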
– Oliver
Oliver Peters Post Production Services, LLC
Orlando, FL
http://www.oliverpeters.com
Walter Graff
February 27, 2013 at 12:59 am
“What you are describing is the process that got us HDCAM VTRs. Are you forgetting that Sony had at least three HD formats before that?”
And I worked on NHK’s test footage in 1984 at the LA Olympics with NBC. Apples. Sony also created what is still, to this day, the best consumer format in Betamax. JVC beat them. I’m not talking about equipment. I’m talking about a consumer electronics company who invents professional equipment so they can make consumer equipment. And in making the method of recording HD, Sony was thinking about how they could use it across the spectrum of their products. In fact PsF was invented specifically for the motion picture industry and 24fps so that, as Sony thought, they could have seamless integration across the board from pro to consumer viewing.
“FWIW – the PsF solution came from a collaboration with Laser Pacific, IIRC. It was not simply Sony’s solution.”
It WAS Sony’s solution to something it desperately wanted: the film industry. Laser P was in because they were Hollywood’s go-to place. I’ll let you in on the origin as told to me by Takeo Eguchi of Sony. Back when George Lucas was the hot technology guy and a film guy, something Sony wanted to conquer with videotape, Sony wooed him to make movies with video. Lucas wasn’t interested in new technology, as he knew the problems of getting there, and promised Sony he’d start making Sony’s digital system part of his moviemaking if they could come up with something with “today’s” technology. They did. And that is why Laser Pacific was involved. In a way George Lucas was a big part of HD.
Morita was always jealous of film. He was desperate for Sony to own the film industry and he made it very clear that engineers were to do just that. Did you know that he thought the invention of Betacam would be the format that would replace film? I know, crazy. In fact, as told to me by Michael Schulhof, the former head of Sony US, Morita once saw a film crew in the building in Japan in the 80s shooting an internal piece on Sony. A 35mm crew. He asked why it wasn’t being shot on Betacam. The project had shot 95% of the film and there was no real reason. That wasn’t good enough for Morita. The million-dollar project was scrapped and reshot on Beta. Of course, after his death Sony not only got the film business but the company. It was one of the things that helped take Sony down, actually.
“There is no “line doubling” in PsF. To call it that adds confusion. ”
Of course not, hence my quotation marks. I’m just saying that segmented frame allowed you to have a variety of options depending on what you wanted it to be, SD or HD. Yes, again a simplistic answer, but you get it.
“Doubling implies that something is made out of half the image to fill in the missing half and that’s simply not the case.”
To use your own words: “Odd lines are filled in with even lines. Both sets are captured at the same point in time and combined for a frame.” Hence my liberty to say line doubling. Yes, recorded as two sets independently, but still an interlaced way of recording, to keep that possibility open. It might be confusing to someone who doesn’t know, but you and I understand it well enough.
“I work largely in a total file-based world and hardly ever deal with tape anymore. PsF or P makes no difference, because it’s all P inside the computer.”
I too am all digital. I haven’t used a tape since last year’s Oscar show, where all we could get for one of the low-budget entries was an HDCAM tape. And we didn’t have an HDCAM player because it was a 720p house. Everything else came off the internet.
Today I worked on an 87-terabyte server. I remember my first nonlinear system was a DPS Velocity in the late 90s. 10-bit before anyone knew what it meant. I had four 60-gig hard drives. I think each one cost me $700. 87 terabytes would have cost gold bars back then.
Thanks for sharing your thoughts.
Walter Graff
BlueSky Media, Inc.
walter@bluesky-web.com
http://www.bluesky-web.com
Offices in NYC and Amherst Mass.
Oliver Peters
February 27, 2013 at 1:49 am
[Walter Graff] “Thanks for sharing your thoughts.”
And you, too. Thanks.
– Oliver
Oliver Peters Post Production Services, LLC
Orlando, FL
http://www.oliverpeters.com
Chris Harlan
February 27, 2013 at 1:53 am
[Walter Graff] “Hence my liberty to say line doubling.”
Not so much.
Herb Sevush
February 27, 2013 at 4:30 am
[Gary Huff] “I saw a PBS station yesterday in HD with Julia Child doing a black and white cooking show. Was it shot on 16 or 35mm though? May not have been an uprez.”
Julia’s shows were all shot on tape, no film, ever.
Herb Sevush
Zebra Productions
—————————
nothin’ attached to nothin’
“Deciding the spine is the process of editing” F. Bieberkopf