Forum Replies Created

Page 2 of 9
  • Dennis Couzin

    July 16, 2010 at 12:30 am in reply to: Workflow: 1080 60p from Panasonic HDC-TM700 camera

    [gary adcock]: “If you don’t understand how actual 1080 50/60p content is supposed to work in the professional sense, based on the real tools that were designed to handle this format- how do you know that what you are doing is accurate?”
    With your notions of “actual”, “supposed to”, “professional sense”, “real tools”, and “accurate” you are cutting yourself off from an elegant little camera and from the future. The future of video is file-based rather than signal-based. How file data is transported is less and less important as buffer size increases.

    [gary adcock]: “Just because some manufacturer puts something in a manual something is does not make it so, I need go no farther that P vs PsF.”
    This continues your insinuation that the HDC-TM700’s 1080 60p is not really 1080 60p. How can we ever be sure that a camera is shooting 60p rather than:
    (A) shooting 60i and outputting it as 60i but with an instruction to the decoder to apply a nice (time flow) deinterlacing?
    or
    (B) shooting 60i and outputting it as 60p after a nice (time flow) deinterlacing?
    We can eliminate possibility (A) by examining the H.264 file, but this does not eliminate possibility (B).

    One way to be sure that the camera is shooting 60p is to aim it at a target consisting of detail fine enough, and changing fast enough, that either the camera sees it with 1080 lines before there is change or it doesn’t. The target can be a single image illuminated by a flashlamp. Either all of its detail (down to 1/1080 of the frame height) is captured in a frame, or only half of it is captured in a field. The “rolling shutter” effect of this camera’s CMOS sensors can be ignored, since it is enough to examine a few neighboring lines to make the determination.

    Since I don’t doubt Panasonic’s claim that the HDC-TM700 shoots 1080 60p (which is not just printed 50 times in the manual but also printed on the camera body beside a dedicated button) I’m not going to do the experiment. Maybe some readers have a teenager seeking a science fair project.

    [gary adcock]: “Since you did not cite the Appendix in your initial reference- I mistook what you were saying.”
    The only places the White Paper gives data rates are in the Appendix and in two graphs. The graphs give data rates only for 23.976 fps and for 29.97 fps. So how did you find your 60i data rates? They look like the 29.97 fps data rates, which of course they should, contrary to your final argument. For 1080×1920 for each ProRes flavor the 30p data rate equals the 60i data rate and is half the 60p data rate. This is simple and should not be obfuscated.

  • You ask: “Have you ever worked with 1080 50P/ 60P material captured in the conventional manner using a professional camera and recording system?”

    Answer: Certainly not, and your example is irrelevant to the Panasonic HDC-TM700’s H.264 compressed 1080 60p material. Your example does not define 1080 60p. The fact that renting just a part of your example for one day costs as much as the HDC-TM700 is irrelevant to the question whether the little camera records and outputs genuine 1080 60p. Don’t you accept that there can be H.264 compressed 1080 60p?

    It is amazing that after I point out your 60p/60i error concerning data rates you still object to my original simple statement:
    “Apple ProRes for 1080 60p uses 293 Mb/s”.
    Now you object that I didn’t specify which of the 5 flavors of ProRes was meant. Apple’s ProRes White Paper of July 2009 names the 5 flavors:
    “ProRes 422 (Proxy)”
    “ProRes 422 (LT)”
    “ProRes 422”
    “ProRes 422 (HQ)”
    “ProRes 4444”
    By “ProRes” I meant ProRes 422. You prefer to call this “PRSQ”.

    OK then my original statement becomes:
    “Apple PRSQ for 1080 60p uses 293 Mb/s”.
    You believe this should be 145 Mbps. Again, you are mixing the 60p ProRes compression rates with 60i rates. They are in separate rows in the table of Target Data Rates in the White Paper, and the 60p rates are virtually double the 60i rates.

    But you assert:
    “The compression level of a codec like ProRes does not change with the type of data being sent it, and there for negates that part the argument- the codec compression only scales with the type of codec-not with the content inside.”

    Not at all. ProRes is a frame-by-frame or a field-by-field codec depending on whether the video is 60p or 60i. If it is 1080 60p the frames have 1080 horizontal rows of pixels. If it is 1080 60i the fields have just 540 horizontal rows of pixels. Then of course the ProRes compression (of a given degree, dependent on the flavor) yields approximately twice the data rate for 60p as for 60i. (I say approximately because the ProRes codec can apportion its compression between the horizontal and the vertical a little differently in the two cases.) When transcoding from, say, H.264 video, H.264 decoding produces the 60 frames or fields to which the ProRes coding is applied.
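    The row-counting argument above works out in a few lines of Python. (This is my illustrative sketch only: the assumption that a given ProRes flavor targets a roughly constant number of compressed bits per coded pixel is a simplification, not a claim about Apple’s actual rate control.)

    ```python
    # Illustrative arithmetic: if a ProRes flavor targets roughly constant
    # bits per coded pixel, data rate scales with the pixel rate actually coded.
    WIDTH = 1920

    def coded_pixels_per_second(rows_per_unit, units_per_second):
        """Pixels the codec compresses each second (frames for p, fields for i)."""
        return WIDTH * rows_per_unit * units_per_second

    p60 = coded_pixels_per_second(1080, 60)  # 60 full frames of 1080 rows
    i60 = coded_pixels_per_second(540, 60)   # 60 fields of 540 rows each
    p30 = coded_pixels_per_second(1080, 30)  # 30 full frames

    print(p60 / i60)  # 60p codes twice the pixel rate of 60i
    print(p30 / i60)  # 30p and 60i code the same pixel rate
    ```

    This is exactly the pattern in the White Paper’s table: the 60p rows are virtually double the 60i rows, and the 30p rows equal the 60i rows.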

    Read the White Paper at
    https://images.apple.com/br/finalcutstudio/docs/Apple_ProRes_White_Paper_July_2009.pdf ,
    especially the Appendix.

  • Gary, I’m afraid you are confusing output/transmission (e.g. HDMI) where line doubling is reasonable, with recording (e.g. this Panasonic camera’s) where line doubling is insane. This Panasonic’s 1080 60p recording is compromised, but not insane.

    I will do further experiments to see why FCP7 does not “log and transfer” my 1080 50p AVCHD. It is likely that FCP7 finds the AVCHD improper, because it is improper: not in compliance with the AVCHD standard.

    I said Apple ProRes for 1080 60p uses 293 Mb/s based on the Apple ProRes White Paper dated July 2009. That paper incidentally uses the notation “Mb/s” rather than your preferred “Mbps”. The numbers you cite are almost exactly the White Paper’s numbers for 60i. We’re discussing 60p here, not 60i.

    Interlaced video is a horrible relic of early television and CRT’s, a blot on digital imaging which I’m glad Panasonic has lurched forward to eliminate at the popular end of the user scale.

  • Gary Adcock can’t believe that the $1000 Panasonic HDC-TM700 shoots 1080 60p, so he makes up a story about it line-doubling. No, Panasonic isn’t so stupid as to fill up memory twice as fast with “padding” which is no more than the most primitive deinterlace will achieve in playback. Fact is: the Panasonic is shooting 1080 60p; there are many passionate believers in 1080 60p among Japanese image scientists, and Panasonic couldn’t hold back. Panasonic disturbed Sony by this mutation of their AVCHD standard and is now disturbing FCP7 users (since while the ProRes codec allows 1080 60p, FCP7 refuses to transcode the improper Panasonic AVCHD 1080 60p to ProRes 1080 60p). Clipwrap to the rescue.

    The Panasonic HDC-TM700’s 1080 60p is no cheat, but it’s a cheesy 1080 60p because of the high factor of interframe compression. Uncompressed 8-bit 4:2:2 1080 60p requires 1898 Mb/s. The Panasonic HDC-TM700 is recording 28 Mb/s. The 68:1 compression factor is the cheese. From the way it looks, the intraframe compression is perhaps around 7:1 and the interframe compression is perhaps around 10:1. When action (or camera movement) is slow the picture looks great because 7:1 intraframe compression, with all those pixels, is hardly visible. But when action (or camera movement) is fast the image is jumpy, unrealistic, and looks nasty. (These are my preliminary observations — we’ve had an HDC-HS700 for just a few days.) Panasonic wanted the camera to work reliably with all Class 4 SDHC flash cards, so their maximum 28 Mb/s was reasonable and an essential limitation of the camera. According to my image tastes they should have set the H.264 parameters to greater intraframe compression with less interframe, but something had to be sacrificed.

    Apple ProRes for 1080 60p uses 293 Mb/s. (It’s 10-bit.) Comparing this with Uncompressed 10-bit 4:2:2 1080 60p, the ProRes compression is 8:1, purely intraframe. Based on the previous paragraph’s estimates, the intraframe compression of ProRes is comparable to the intraframe compression of the Panasonic HDC-TM700. This implies that the huge 9x or 10x increase in file size when transcoding the HDC-TM700 to ProRes is due to the elimination of all the interframe compression in the former. So it is a mistake to use ProRes LT transcoding with this original, because it will compromise the I-frames. There could even be a slight benefit to using ProRes HQ.
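    The compression-factor arithmetic in the last two paragraphs can be checked directly. (A sketch of mine, following the post’s own figures of 28 Mb/s and 293 Mb/s, and its use of binary megabits, which is what makes 1898 Mb/s come out.)

    ```python
    # Mb here means binary megabits (2**20 bits), matching the 1898 Mb/s figure.
    Mb = 2 ** 20

    def uncompressed_rate_mb(width, height, fps, bits_per_pixel):
        """Uncompressed data rate in binary megabits per second."""
        return width * height * fps * bits_per_pixel / Mb

    r8 = uncompressed_rate_mb(1920, 1080, 60, 16)   # 8-bit 4:2:2 = 16 bits/pixel
    r10 = uncompressed_rate_mb(1920, 1080, 60, 20)  # 10-bit 4:2:2 = 20 bits/pixel

    print(round(r8))         # 1898 Mb/s uncompressed 8-bit 1080 60p
    print(round(r8 / 28))    # 68: the camera's overall compression factor
    print(round(r10 / 293))  # 8: the ProRes compression, purely intraframe
    ```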

    I’m voting with Panasonic and getting into 1080 60p (really 50p here) now, before affordable and less cheesy 1080 60p cameras become available. They surely will.

    I wish never to be in K. Darvich’s situation of having to intercut 30p footage having no interframe compression (shot with a still camera) with 60p footage having high interframe compression (shot with the Panasonic HDC-TM700). If the release will be 30p, the transcode from 60p to 30p should be better than that from 60i to 30p. But if the release will be 60i, then the 30p footage should be transcoded to 60i, and the Panasonic HDC-TM700 can shoot 60i (in mode HA). If the release format is unknown or various, then shoot 60p and transcode the 30p to 60p, and edit so all cuts are at even frame numbers.

  • Dennis Couzin

    June 16, 2010 at 3:46 am in reply to: video noise reduction

    Innobits’ Purifier really works! Its denoising function does not soften the image at all. It is extremely beneficial when MPEG-2 compression will later be applied: it saves bandwidth for the image and reduces MPEG-2 artifacts.
    I’m investigating whether Purifier rescales and deinterlaces better than Compressor’s “optical flow” algorithms.
    Purifier’s user interface is a little nutty but the developers are smart and accessible.
    Thanks Michael for a great tip.

  • Dennis Couzin

    May 19, 2010 at 4:32 am in reply to: video noise reduction

    Michael, thanks for these leads. Innobits’ Purifier indeed is spatio-temporal. I’m trying the demo. Quite strange software. It might be fun (and might work).

  • Dennis Couzin

    July 9, 2009 at 7:36 pm in reply to: big 4 second video

    This is the FCP board. Nothing is quite “professional” in FCP and Mac OS X. And now I add three dangerously undocumented elements between the image file and its display: QuickTime v.7.6; Nvidia 7300GT; Samsung 305T. They can do anything at all to the images. As you say: “you are telling the computer that you did not need it to play with accuracy”.

    Yes, the consumer-grade player, graphics card, and monitor can sometimes amaze us with their engineering crudity, even perversity. But they are just concoctions of men. We can study their behavior and learn to work around their weaknesses as they would affect our use. The examples you cite don’t seem to apply to my visual experiment. My “None” file will have 25 full frames per second, and I do expect QT will display 25 full frames per second. A stopwatch will show if it plays at 25 fps rather than, say, 24 fps. I don’t care if it is playing at, e.g., 24.99 fps; such small errors aren’t visually relevant. I can make a simple animation of a dot taking 100 frames to make a circle and look for any irregularities in the display. This will show that the frames are presented regularly. I can mask off a small part of the screen in order to see an individual dot tachistoscopically and verify that it is sharp as it should be. Even “professional” equipment must be verified before use in a scientific experiment. The difference is that if I find that I don’t get 25 full frames per second displayed (if, for example, the graphics card asserts itself by interpolating additional pixels) I can’t complain to Nvidia.
    Your other example, raster size, is also simple to verify. When I set QT to play “actual size” there are very nearly the true number of pixels displayed on the monitor. I haven’t counted the 1500 and 2000, but I can. It doesn’t matter to the experiment if QT crops off a row or column at an edge, but I can check this too. I mostly care that QT doesn’t scale, that where my bitmap image fills exactly N pixels of height the screen does the same. I will definitely check this. (There might be workarounds to tease it back to scaling factor 1.000.)
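    The dot-circle test pattern described above is easy to generate. Here is a minimal sketch: 100 grayscale frames in which a bright dot steps around a circle, so any dropped, repeated, or blended frame shows up as a gap or smear. The frame size, dot size, and filenames are arbitrary choices of mine, not from the original post.

    ```python
    # Generate 100 grayscale test frames with a dot stepping around a circle.
    # Frames are written as binary PGM files, which most viewers can open.
    import math

    W, H, FRAMES, RADIUS, DOT = 400, 400, 100, 150, 3

    def make_frame(i):
        """Return one 8-bit grayscale frame (raw bytes) with the dot at step i."""
        angle = 2 * math.pi * i / FRAMES
        cx = W // 2 + int(RADIUS * math.cos(angle))
        cy = H // 2 + int(RADIUS * math.sin(angle))
        pixels = bytearray(W * H)                  # black background
        for y in range(cy - DOT, cy + DOT + 1):
            for x in range(cx - DOT, cx + DOT + 1):
                if 0 <= x < W and 0 <= y < H:
                    pixels[y * W + x] = 255        # white dot
        return bytes(pixels)

    for i in range(FRAMES):
        with open(f"dot_{i:03d}.pgm", "wb") as f:  # one file per frame
            f.write(b"P5\n%d %d\n255\n" % (W, H))  # binary PGM header
            f.write(make_frame(i))
    ```

    A frame sequence like this can then be wrapped into a “None”-codec clip and played back to look for irregular presentation.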

    [gary adcock] “This is the main reason that I always talk about the hardware output, since the the software and the OS do not always tell the truth to the user.”

    I fully agree, but believe that frame rate and raster size are the (manageable) tip of the iceberg.
    I am more concerned that the display be pixel-by-pixel with no spatial or temporal dithering.
    (The luminance in the display of pixel P in frame F should depend only on the value of pixel P in bitmap F.)

  • Dennis Couzin

    July 9, 2009 at 4:29 am in reply to: big 4 second video

    Make RAM Disk 1.0 by Peter Hosey works beautifully in my 2007 Mac Pro running OS 10.4.11.

  • Dennis Couzin

    July 8, 2009 at 9:47 pm in reply to: big 4 second video

    Gary Adcock,
    Your original answer consisted of three parts:

    Part 1: “Max frame size in FCP is currently 2048 x 1152 so you will not be able to output the [2000×1500] files as video.”

    This part was off due to your misunderstanding of my use of the word “video”, though I did mention QuickTime player in my original post.

    Part 2: “Did you miss a zero in that number? My calculator says that “None”file is closer to 750MB/s at that frame size.”

    This part was off due to your making a 32 bit None file although my original post clearly says my images are 8 bit grayscale.

    Part 3: “…if you are doing a scientific analysis – no- [QuickTime] will self compensate for variance in the frame rate and image size. if you need correct and steady playback you will need to modify your procedures to handle the file correctly as video (not as a computer monitor as indicated)”

    I’m not sure what this part means but will take it seriously. What variance in frame rate are you referring to? My “None” file will have 25 full frames per second. Won’t QT display 25 full frames per second? It’s a Samsung 305T monitor and an Nvidia 7300GT graphics card. What funny things can these do temporally? As for image size, I intend to display the 2000×1500 on exactly 2000×1500 pixels. QuickTime and the 305T allow that. What can go wrong there? Concerning “scientific analysis” (your term), it will be a purely visual experiment. No measurement will be taken off the screen. But even for visual appearance, if the system introduces dithering or sharpening or noise reduction, or anything that is not there in the bitmaps, my experiment could be in trouble. (I am prepared to go back and play with the bitmaps’ gamma curve in Photoshop after seeing what FCP/Compressor does in the None conversion.)

  • Dennis Couzin

    July 8, 2009 at 4:08 am in reply to: big 4 second video

    Gary, what I called a video was also described as being played by QuickTime on a monitor, so it was a leap to assume it to be “SMPTE video signal to be passed over dual link.” I won’t contribute to the semantic debate over what is video and what is electronic cinema, except to note that the narrow usage of “video” is sure to die as video appears everywhere: even cellphones make videos.

    Concerning the file sizes and byte rates my calculations are correct. You speak of 2000×1500 RGB. That requires at least 24 bits per pixel — see below. I’m speaking of 8-bit grayscale requiring just 8 bits per pixel. As noted, Compressor implements the “None” codec with a 256 graytone depth option. That’s 8-bit grayscale. For verification I just now made a 2000×1500 8-bit grayscale clip in “None” using Compressor v.2.3. It’s 1.88 seconds long and 134.5 MB. That’s 71.5 MB/sec, exactly as was calculated.
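    The byte-rate check above is one line of arithmetic. (My sketch, using binary megabytes, 2**20 bytes, which is what makes the 71.5 MB/s figure come out.)

    ```python
    # Checking the "None"-codec byte rate for 2000x1500 8-bit grayscale at 25 fps.
    MB = 2 ** 20  # binary megabyte

    bytes_per_frame = 2000 * 1500 * 1  # 8-bit grayscale: 1 byte per pixel
    rate = bytes_per_frame * 25 / MB   # at 25 fps

    print(round(rate, 1))         # 71.5 MB/s
    print(round(rate * 1.88, 1))  # 134.5 MB for the 1.88-second clip
    ```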

    It would be absurd if the “None” codec produced files substantially larger than the sum of the bitmap files for its frames, providing the pixel count and the bit depth didn’t change.

    The reason your 4-second “None” clip from 2048×1566 RGB was so large is that 4 sec × 24 fps × 2048 × 1566 pixels × 32 bits = 9.85 billion bits, which is 1.23 billion bytes. You made what Compressor calls “Millions of Colors+” for 32-bit pixel depth. If AE can only make 32-bit “None” files, that’s a problem of AE. Compressor’s implementation of codec “None” allows the following depths:

    1-bit (Black/White)
    2-bit (4 greys)
    2-bit (4 colors – doesn’t work!)
    4-bit (16 greys)
    4-bit (16 colors – has a bug!)
    8-bit (256 greys)
    8-bit (256 colors)
    16-bit (thousands of colors)
    24-bit (millions of colors)
    32-bit (millions of colors + alpha)
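    The 4-second clip arithmetic above checks out. (My sketch; I am reading the first two factors as 4 seconds at 24 fps, which is what makes the stated numbers agree.)

    ```python
    # Reproducing the 32-bit "None" clip size: sec * fps * width * height * bits/pixel.
    bits = 4 * 24 * 2048 * 1566 * 32

    print(bits)       # 9852420096 -> "9.85 billion bits"
    print(bits // 8)  # 1231552512 -> "1.23 billion bytes"
    ```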

    All of this was discussed, including the calculation of file sizes, in my Feb 20, 2009 post “funny codec named ‘None’” and in the ensuing thread, to which you contributed.

