More HVX200 info
-
Toke
November 19, 2005 at 6:06 pm
[Jan Crittenden Livingston] “I have a feeling this is another one of those discussions that we will never come to agree on, as frankly it is not scaling. It is using information that is there, and using all of it, without making the pixels so small that it suffers in low light or has no dynamic range.”
You are telling why it is done, not how.
Scaling means altering a picture’s resolution.
E.g. when you have 2 CCDs with 1280×720 resolution and H&V spatial offset, they will create a picture with a resolution of 2560×1440. Then pixels are interpolated so that the missing subpixels are filled in, and then the picture is scaled down to the final resolution.
How can you say that the last action is not done?
[Jan Crittenden Livingston] “But then you do not have to be disappointed, because it does use horizontal and vertical spatial offset.”
Ok, thank you for leaking this information. Japanese advertising was misleading by saying that it has 1080p chips.
[Jan Crittenden Livingston] “Since the initial capture from the chip set is a 1080p/60 capture and it downconverts or crossconverts from there, there is not the limitation that you suggest here.”
I hope the initial capture can also be 1080p24/25/30, since the motion blur of combined 1080p60 frames cannot match the film cadence. You also get more light into the chips with longer exposure time and that means less noise from the chips.
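The offset-then-downscale pipeline described above (two offset sensors → dense intermediate grid → scale down) can be sketched as a toy one-dimensional model in Python. This is purely illustrative and not Panasonic’s actual signal path; the half-pixel offset and box-average downscale are assumptions made for the sketch:

```python
import numpy as np

def combine_offset_samples(sensor_a, sensor_b):
    """Interleave two equal-length rows captured with a half-pixel
    offset, doubling the sample density (a 1-D stand-in for the
    2560-wide intermediate grid in the example above)."""
    out = np.empty(sensor_a.size + sensor_b.size)
    out[0::2] = sensor_a   # samples at the original pixel centres
    out[1::2] = sensor_b   # samples at the half-pixel-shifted centres
    return out

def scale_down(row, factor):
    """Scale back to the final resolution by box-averaging
    groups of adjacent samples."""
    return row.reshape(-1, factor).mean(axis=1)

a = np.array([0.0, 1.0, 0.0, 1.0])    # one sensor row
b = np.array([0.5, 0.5, 0.5, 0.5])    # the offset sensor row
dense = combine_offset_samples(a, b)  # 8 real, measured samples
final = scale_down(dense, 2)          # 4 output pixels
```

Note that every value in `dense` was actually measured by one of the two sensors; only the final box-average mixes them together.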
-
Blub06
November 20, 2005 at 1:07 am
toke lahti, you might not be aware of this, but your questions (and self-conscious answers) have a harsh, accusatory, gotcha quality which tells me that you are young. Do you want us to know that much about you?
Chris
-
Jan Crittenden Livingston
November 20, 2005 at 1:11 pm
[toke lahti] “You are telling why it is done, not how.”
Toke, the how is with a spatial offset. The Red and Blue chips see information that the Green cannot see, and thus when the signals are combined, their sum of information is greater. Maybe this will be one of those posts for the Tosh board, because I need pictures to fully explain this. But you can add to the resolution significantly by doing offset. We pioneered the offset back in 1987 with the introduction of our WV-F300 camera, and it has been consistently improved ever since.
[toke lahti] “Scaling means altering pictures resolution.
Eg. when you have 2 ccd’s with 1280×720 resolution and v&h spatial offset, they will create a picture with resolution of 2560×1440. Then pixels are interpolated so that missing subpixels are filled and then picture is scaled down to final resolution.
How can you say that the last action is not done?”
Scaling is taking a signal and telling a 960 X 1280 to become 1920 X 2560 with no additional information. It interpolates information to do so. This is not what happens at all; nothing is made up, it is taking information that is there and using it. That is why I am saying that it is not scaling. The camera starts with a 1080P resolution, and from there it is scaled down to the other formats, and that is the only scaling the camera does.
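Jan’s definition of scaling (stretching a signal and interpolating values that carry no new information) can be shown in a couple of lines. This is a sketch with NumPy, not the camera’s algorithm:

```python
import numpy as np

def upscale_2x(row):
    """Double the pixel count by linear interpolation. Every new
    sample is computed from existing ones, so no scene information
    is added -- this is scaling in Jan's sense."""
    positions = np.arange(2 * row.size - 1) / 2.0
    return np.interp(positions, np.arange(row.size), row)

row = np.array([0.0, 1.0, 0.0])
up = upscale_2x(row)  # the in-between values are just neighbour averages
```

With spatial offset, by contrast, the in-between positions are filled with samples a second CCD actually measured, which is why Jan calls it an additive process rather than scaling.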
[toke lahti] “[Jan Crittenden Livingston] ‘But then you do not have to be disappointed because it does use horizontal and vertical spatial offset.’
Ok, thank you for leaking this information. Japanese advertising was misleading by saying that it has 1080p-chips.”
Unfortunately, the only way you were able to read it was that AltaVista translation, and there isn’t a good translation from Japanese into English; the nuance of the Japanese Kanji is subtle. What it said was that the images are captured at 1080P and it works into the various formats from there. BTW, I did leak information on the H & V Spatial Offset; I have stated this before somewhere.
[toke lahti] “I hope the initial capture can also be 1080p24/25/30, since the motion blur of combined 1080p60 frames cannot match the film cadence. You also get more light into the chips with longer exposure time and that means less noise from the chips.”
Well of course if the camera is at 24P capture the motion blur will match that of a 24P capture.
Best,
Jan
Jan Crittenden Livingston
Product Manager, DVCPRO, DVCPRO50, AG-DVX100
Panasonic Broadcast & TV Systems
-
Toke
November 20, 2005 at 11:50 pm
[Jan Crittenden Livingston] “Scaling is taking a signal and telling a 960 X 1280 to become 1920 X 2560 with no additional information. It interpolates information to do so. This is not what happens at all, nothing is made up, it is taking information that is there and using it. That is why I am saying that it is not scaling.”
Ok, maybe this is again a language issue, or we should define what these words mean.
https://en.wikipedia.org/wiki/Scaling
https://en.wikipedia.org/wiki/Interpolation
AFAIK, scaling goes both ways. It is resizing. Like with spatial offset: the camera has a 2560×1440 image, and it is scaled to 1920×1080.
Interpolation (AFAIK again) is calculating new samples between the old ones. So, in a way, it is making new data out of nowhere. With spatial offset, the RGB pixels are not on top of each other, so the camera has to calculate the missing data, usually by averaging the neighbor pixels’ values (with no additional information).
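The neighbour-averaging described above, estimating the missing subpixels between a chip’s real samples, might look like this in a 1-D sketch (simple two-neighbour averaging is an assumption; real cameras may use fancier filters):

```python
import numpy as np

def fill_midpoints(samples):
    """Estimate a value halfway between each pair of real samples
    by averaging its two neighbours (no additional information)."""
    mids = (samples[:-1] + samples[1:]) / 2.0
    dense = np.empty(samples.size * 2 - 1)
    dense[0::2] = samples   # real, measured samples
    dense[1::2] = mids      # interpolated in-between values
    return dense

green = np.array([0.2, 0.8, 0.4])  # hypothetical green-channel row
dense = fill_midpoints(green)
```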
Am I right or missing something, anyone?
Is this conversation boring or does not give any new or additional knowledge about camera technology?
I can stop this, if I want, really 🙂
-
Jan Crittenden Livingston
November 21, 2005 at 12:00 am
[toke lahti] “Am I right or missing something, anyone?
Is this conversation boring or does not give any new or additional knowledge about camera technology?”
Actually, you are missing something: an understanding of 1) the fact that the CCD is an analog device, and 2) how spatial offset works.
You will have to wait and see it on Tosh’s blog, because I cannot explain it here well enough for you to know that neither of your look-ups defines what is going on in spatial offset.
Later,
Jan
Jan Crittenden Livingston
Product Manager, DVCPRO, DVCPRO50, AG-DVX100
Panasonic Broadcast & TV Systems
-
Toke
November 21, 2005 at 12:03 am
Gotcha!
You guessed wrong!
I’m really not that young any more. 34 years, but I’m trying to stay youthful 😉
And I’m sorry if I’m too harsh and that offends someone.
But to be serious, when somebody says
“Without pixel shift/spatial offset, you would have vertical stripes in the pictures, which once the light became low you would see.”
I need an explanation.
I’ve been shooting for a decade with all sorts of cameras, and almost none of them were using pixel shift, and most of them give a beautiful picture when used with skill. No vertical stripes anywhere.
Then either I didn’t understand what Jan explained, or Jan wasn’t very clear about how this pixel shift happens at the bottom technical level. I thought it would be good for all of us to understand this thing a bit better by having a discussion.
Maybe that was a bad idea…
-
Toke
November 21, 2005 at 1:52 am
[Jan Crittenden Livingston] “Actually you are missing something. An understanding of 1. the fact that the CCD is an analog device. 2. how spatial offset works.”
Well, this wasn’t so friendly. I’ve said many times that I know CCDs are analog, and it wouldn’t make any difference regarding the spatial offset if we were talking about “digital” CMOS chips.
Constructing the final YCbCr image from the offset RGB chips is done after the A/D converter anyway, isn’t it?
It would have been nicer if you had pointed out the things in my explanation of spatial offset that you think are wrong. You are not giving any arguments or explanation to support your statement.
But I’ll wait for Tosh’s blog then…
-
Jan Crittenden Livingston
November 21, 2005 at 11:53 am
[toke lahti] “Well, this wasn’t so friendly. I’ve said many times that I know ccd’s are analog and it wouldn’t make any difference regarding the spatial offset if we were talking about ‘digital’ cmos chips.”
Hi Toke, sorry, you are right, it wasn’t friendly, but I have to say that your wording of things is not so friendly either. There is a tone that feels like somehow we are trying to pull a fast one, that somehow the spatial offset is a bad thing. Even though I said that it is hard to explain without pictures, I have tried. And because you couldn’t understand it, you say that the reason you didn’t understand it is that I didn’t explain it very well. I have said that it was not scaling, as the information that the red and blue chips see is information that the green one cannot see. It is an additive process; you come back and say it is scaling. No, I haven’t found this discussion very friendly at all. So I guess I snapped. Sorry.
The fact is, Toke, that because they are analog, it would make a difference if we were talking about CMOS chips, as they work very differently.
>Constructing the final YCbCr-image from offsetted rgb-chips is anyway done in after ad-converter, isn’t it?
No it is not.
>It would have been nicer if you would have pointed out those things from my explanation of spatial offset that you think are wrong.
I have tried, but you just turn around and say I am wrong.
>You are not giving any arguments or explanation to support your statement.
It isn’t an argument, I have tried to explain. It is an additive process. I will save any more words for the blog.
Best,
Jan
Jan Crittenden Livingston
Product Manager, DVCPRO, DVCPRO50, AG-DVX100
Panasonic Broadcast & TV Systems
-
Toke
November 21, 2005 at 4:20 pm
[Jan Crittenden Livingston] “you say that the reason you didn’t understand it is because I didn’t explain it very well.”
Now you are cutting corners again. I said I didn’t understand OR you explained it wrong or just explained other things. The usual thing in conversation when this happens is that you check your own data and try to tell it in a way that I would understand. You chose not to.
[Jan Crittenden Livingston] “…that somehow the spatial offset is a bad thing.”
I think that spatial offset is a great thing as long as we are recording in YCbCr mode.
Without spatial offset, high-resolution and high-sensitivity cameras at this small CCD size and price range wouldn’t be possible. And here Panny has done just right, unlike many others.
And if I did think that spatial offset is a bad thing, why couldn’t we have a friendly conversation about that? I’m here just for the knowledge, and I hope that we could also question Panny’s decisions without you getting angry.
Btw, there are lots of people who think that spatial offset is a bad thing, because it uses averaging: it gives you a bit more detail, but all the details are a bit more blurred. This debate is quite like 1-CCD vs. 3-CCD, because with spatial offset a 3-CCD system acts quite like a 1-CCD system.
[Jan Crittenden Livingston] “I have said that it was not scaling as the information that the red and blue chips see is information that the green one can not see. It is an additive process, you come back and say it is scaling.”
RGB chips always see what the other two cannot see, because each of them can only see one color.
With spatial offset (in this case), one color sees information that differs both spatially and in color.
The problem is that in the final picture all color pixels have to be spatially aligned, so they have to be re-calculated. This can be done in the digital domain (raw conversion) or, as in this case, in the analog domain.
Spatial offset is an additive process in the sense that you get better luminance resolution in the end picture, but the pixels are also a bit blurred because of all the averaging. And the downside is that the color components’ pixels are also a bit more blurred, because they have to be aligned back into the same registration, but they don’t get any additional resolution.
In a mathematical sense, as a whole, it isn’t an additive process, because you still get the same amount of data out of the CCDs.
So basically with spatial offset, you get more resolution in luminance and lose some in chrominance. And that suits component video recording (and also human vision) well.
[Jan Crittenden Livingston] “Toke: ‘Constructing the final YCbCr-image from offsetted rgb-chips is anyway done in after ad-converter, isn’t it?’
No it is not.”
You are really an ace with short answers!
So the whole re-aligning is an analog process, and the final YCbCr image is then digitized.
So for the color and gamma corrections, the image is then re-converted to RGB in the DSP?
Generally, re-aligning could be done with better quality digitally, because then you could average over all neighboring pixels, whereas in the analog stage you can only average with neighbors in the same row (line).
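The difference between same-row (analog-style) and full-neighbourhood (digital-style) averaging can be sketched with two hypothetical filters on a tiny 2-D plane. Both functions are toy models made up for illustration, and `np.roll` wraps around at the edges, which a real camera would not do:

```python
import numpy as np

def realign_1d(plane):
    """Average only with the horizontal neighbour in the same
    row (line) -- the analog-stage limitation described above."""
    return (plane + np.roll(plane, -1, axis=1)) / 2.0

def realign_2d(plane):
    """Average over a 2x2 neighbourhood, using vertical
    neighbours too -- possible once the signal is digital."""
    return (plane
            + np.roll(plane, -1, axis=1)
            + np.roll(plane, -1, axis=0)
            + np.roll(np.roll(plane, -1, axis=0), -1, axis=1)) / 4.0

plane = np.array([[1.0, 0.0],
                  [0.0, 0.0]])   # a single bright pixel
row_avg = realign_1d(plane)      # spreads only along the row
full_avg = realign_2d(plane)     # spreads in both directions
```

The single bright pixel leaks only sideways in the 1-D case, but into all four output positions in the 2-D case, which is the quality difference being argued here.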
Btw, Jan you said earlier that most of 2/3″ cameras and especially SD cameras use spatial offset.
Can you give couple of examples what cameras?2/3″ cameras usually have more pixels in their ccd’s than they record, so I can’t understand what do they benefit from spatial offset. The whole idea is to get more luminance resolution and those cameras already have more than they can record.
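The point above about component recording privileging luma matches how YCbCr is formed. For reference, the standard Rec.709 conversion (just the standard coefficients, written out in plain Python, with no claim about where the HVX200 actually performs this step) is:

```python
def rgb_to_ycbcr(r, g, b):
    """Rec.709 RGB (0..1) -> YCbCr. Luma Y is a weighted sum
    dominated by green and carries most of the spatial detail;
    Cb/Cr are difference signals that tolerate lower resolution."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556   # scaled blue-difference
    cr = (r - y) / 1.5748   # scaled red-difference
    return y, cb, cr
```

One sanity check: a neutral grey has zero chroma, so `rgb_to_ycbcr(0.5, 0.5, 0.5)` returns (approximately) `(0.5, 0.0, 0.0)` and all the signal ends up in Y.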
-
Randall3
November 21, 2005 at 5:13 pm
Toke, give it a rest. You keep implying that Panasonic has intentionally engineered an inferior camera when market share is so obviously at stake. Why would they do that?
The other implication is that they are just stupid for doing things this way or that. Send them your resume. Frankly, I haven’t got a clue about much of what you are talking about – I don’t know if Jan is an engineer or not, but any engineer out of the lab for a while is going to lose touch with all the intricacies of every facet of a design – if they even knew all that in the first place. I don’t expect a marketing person to be an engineer of the same caliber as an engineer who is…well, engineering day to day.