From: Steve on

On Thu, 17 Jul 2008 23:06:48 GMT, John Sheehy <JPS(a)no.komm> wrote:

>Steve <steve(a)example.com> wrote in news:hhdu741e94q168kgsfi3m8e79prn535eid@
>4ax.com:
>
>> That may be what you did, but it doesn't prove what you're trying to
>> prove, i.e., stuffing more pixels into a given sensor size doesn't
>> degrade noise or DR. In doing what you did above, the sensor from the
>> 400D would be capturing an image that is overall several times the
>> area of the FZ50. No one in their right mind would consider that a
>> fair comparison of different sensor sizes when what you're trying to
>> prove is whether stuffing in more pixels does not degrade noise and DR
>> for a "given sensor size."
>
>I don't know what to say to you. You have a very warped view of reality;
>you are just totally and completely wrong. The proper way to compare pixel
>densities is to compare the same magnification of the same sensor area.
>Anything else is a measure of something else.

You said you're trying to measure the noise and DR of the sensor as
pixel density changes. But the only thing you're measuring above is
the resolution of a given area of the focal plane. I don't think
anyone is going to argue that higher pixel density gives better
resolution at the focal plane. But that is absolutely meaningless if
the size of the sensor is different. Almost nobody, apart from you
and people with special applications that require a small sensor,
cares about resolution at the focal plane. The measure of resolution
the rest of us care about is the resolution of the image captured by
the entire sensor.

Your test of DR and noise vs. pixel density is not valid because you
did not take the variable of resolution out of the equation by
keeping it constant for the captured images. That is, if you have two
10MP sensors of different sizes, the only meaningful real-world way
to measure noise and DR vs. pixel density is to let both 10MP sensors
capture a similar scene across the entire sensor and then compare the
results at 100%. If you don't do it that way, you are adding another
variable to the equation (different magnifications, even though the
sensors have the same total number of pixels). When you do that, you
are no longer measuring the noise and DR of a sensor when only the
pixel density changes.

I'm pretty sure you understand this and are just trying to troll for
an argument.

Steve
From: Steve on

On Thu, 17 Jul 2008 22:15:06 GMT, John Sheehy <JPS(a)no.komm> wrote:

>That makes no sense at all. I am not comparing cameras; I am comparing
>the effects of pixel density. I am fighting the myth that pixel noise

Actually, you're not doing that at all. What you are doing is
comparing the effects of resolution at the focal plane. No one
disputes that higher pixel density gives higher resolution at the
focal plane. And that's the only thing your test proves ... the
obvious. Only you would argue that this somehow translates into
higher pixel density giving better noise and DR performance in any
real-world image. It doesn't, and the rest of the world knows that to
be true no matter how much you profess otherwise.

Steve
From: Steve on

On Thu, 17 Jul 2008 20:37:49 GMT, John Sheehy <JPS(a)no.komm> wrote:

>John Sheehy <JPS(a)no.komm> wrote in news:Xns9ADE17B816Fjpsnokomm@
>199.45.49.11:
>
>> Anyone with any amount of experience in these matters would recognize
>> that even a magic lens with no softness of any kind
>
>Sorry, I hit send before I finished that post. It should have read "Anyone
>with any amount of experience in these matters would recognize that even a
>magic lens with no softness of any kind would pale next to the FZ50 lens
>and pixel density here, using the 400D and scaling 289%.

There's your problem... you're scaling the 400D image by 289% even
though it has better sensor resolution.

Just as a diversion (even though the results would be just as
meaningless if you want to compare only noise and DR vs. pixel
density, and not image resolution), why don't you try keeping the
400D image at 100% but scaling the FZ50 image down to 35% (the
reciprocal of 2.89), and compare those results? Doing so would be in
the same spirit as your warped test of pixel density, but would not
over-resolve the 400D.
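
If you actually want to try it, here's a rough Python sketch of that
comparison. The file names, the flat-patch location, and the integer
bin factor of 3 standing in for 2.89 are placeholders, and it assumes
both crops have already been converted to linear grayscale arrays:

import numpy as np

def downscale(img, factor):
    """Average-bin an image down by an integer factor (box filter)."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    img = img[:h, :w].astype(np.float64)
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def patch_noise(img, y, x, size=64):
    """Standard deviation of a nominally flat patch."""
    return img[y:y + size, x:x + size].std()

fz50 = np.load("fz50_crop_linear.npy")   # hypothetical pre-converted FZ50 crop
d400 = np.load("400d_crop_linear.npy")   # hypothetical pre-converted 400D crop

# Keep the 400D at 100%; bin the FZ50 down by ~2.89 (3 is the nearest integer).
fz50_small = downscale(fz50, 3)

print("400D flat-patch noise:", patch_noise(d400, 100, 100))
print("FZ50 flat-patch noise:", patch_noise(fz50_small, 100, 100))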

Steve
From: Steve on

On Thu, 17 Jul 2008 21:25:16 -0400, ASAAR <caught(a)22.com> wrote:

>On Thu, 17 Jul 2008 21:09:14 GMT, John Sheehy wrote:
>
>>> I was on your side for the most part originally but sorry to say
>>> you've lost me. I just hope you are not in science because this
>>> is NO way to run an experiment.
>>
>> This is the best possible way to run this experiment, as there are no
>> possible pairings out there of cameras with the same mount and the same
>> size sensors with vastly different pixel densities of the same era.
>
> How about Nikon's 6mp D40 vs. the 10mp D40x? Or Canon's A610 vs.
>A620? Announced at the same time, they used the same size 1/1.8"
>sensors, where the former had 5mp vs. the latter's 7mp. There are
>more such pairings, but I'd be surprised if you'd accept any of
>them, as they wouldn't aid your agenda.

Or Nikon's D3 vs. Canon's 1Ds MkIII. Both have very similar sensor
sizes but very different pixel densities. Both are pretty much the
top of the heap as far as what either manufacturer is capable of in
every other respect, so it's a fair comparison of what pixel density
does to noise and DR.

The problem there is that, just as with comparing almost identical
low-end cameras with different pixel densities, the results would not
support his hypothesis that increasing pixel density increases DR and
decreases noise for a given sensor size.

Steve
From: Roger N. Clark (change username to rnclark) on
John,
Your post is full of inaccuracies. I'll hit a few of the
highlights.

John Sheehy wrote:
> Paul Furman <paul-@-edgehill.net> wrote in news:r8Ofk.16605$89.6967
> @nlpi069.nbdc.sbc.com:
>> Sure, but if you have bright highlights also, the small pixels
>> are badly crippled.
>
> How so? Smaller pixels can keep more highlight detail, like film,
> because the higher shot noise in each pixels guarantees that a percentage
> of pixels will not clip when the mean clip, so levels above the clipping
> point can be discerned at reduced resolution.

Apparently you do not understand film. As more grains become
exposed, the probability of exposing an additional one drops, so the
response grows as the log of exposure. This property gives the
shoulder you see in characteristic curves, and the generous highlight
headroom.

Electronic sensors clip. The effect you describe, shot noise leaving
a few pixels unclipped, scales as the square root of the full-well
capacity, which gives a percent or two more dynamic range, or about
0.03 stop over a single large pixel. Big deal.
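
To put a rough number on that, here's a back-of-the-envelope check
(the ~40,000-electron full well is just the DSLR-class figure quoted
in this thread; everything else follows from it):

import math

full_well = 40_000             # electrons, roughly the DSLR figure quoted in this thread
sigma = math.sqrt(full_well)   # shot noise at saturation, about 200 electrons

for k in (1, 2, 3):
    extra = k * sigma / full_well    # fractional headroom from k-sigma outliers
    stops = math.log2(1.0 + extra)   # the same headroom expressed in stops
    print(f"{k} sigma: {100 * extra:.1f}% -> {stops:.3f} stop")

The headroom works out to a percent or so, a few hundredths of a
stop, which is the point: it's negligible.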

The problem is on the bottom end. Let's say the pixel read noise is
about the same regardless of pixel size (which is true in real-world
sensors). Let's say the pixel density of the small-pixel sensor is
100 times that of the large-pixel sensor. The total read noise from
summing those 100 small pixels, to get a signal covering the same
area as one large pixel, is 10 times (the square root of 100) that of
the single pixel. So in the end you lose. The noise floor is 10x
higher than the single pixel's, so you lose both sensitivity and
dynamic range. Before you think you can correct me on this, read
below where I show your math mistake.
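
If anyone doubts the square-root scaling, a tiny Monte Carlo makes
it plain (a sketch only, with an arbitrary per-pixel read noise of 1
unit and the 100-pixels-per-large-pixel figure from the paragraph
above):

import numpy as np

rng = np.random.default_rng(0)
r = 1.0          # read noise per small pixel, arbitrary units
N = 100          # small pixels covering the area of one large pixel
trials = 50_000

# Sum N independent read-noise samples per trial; variances add, so
# the standard deviation of the sum is r * sqrt(N).
summed = rng.normal(0.0, r, size=(trials, N)).sum(axis=1)

print("measured noise of the 100-pixel sum:", round(summed.std(), 2))  # ~10.0
print("predicted r * sqrt(N):              ", r * np.sqrt(N))          # 10.0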

> Shot noise per unit of area is lower in the FZ50, for a given exposure.
> It has one of the highest QEs in the industry. Per pixel, the 400D
> collects up to about 43K photons 4 stops above metered middle grey, the
> FZ50, about 4800 photons 2.5 stops above metered middle grey.

First, you believe a low-end, low-cost camera has one of the highest
QEs in the industry? What's wrong with this picture?

Second, the FZ50 has a 1.97 micron pixel pitch. One needs about a
micron between pixels to separate the active areas, so the active
area is about (1.97 - 1)^2 = 0.94 square micron. The electron density
in silicon (a technology limitation, with drawbacks as the density
increases) is about 1000 to 3000 electrons per square micron. Your
numbers imply over 5000 electrons/sq micron, the highest of any
camera.

What's wrong with this picture? Your numbers seem off.
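
For reference, the arithmetic behind that estimate, using only the
figures quoted in this thread (1.97 micron pitch, roughly 1 micron of
dead space, ~4800 electrons full well):

pitch = 1.97                    # microns, FZ50 pixel pitch
active_side = pitch - 1.0       # microns left after ~1 micron between pixels
active_area = active_side ** 2  # about 0.94 square microns
full_well = 4800                # electrons, the FZ50 figure quoted above

print(f"active area:      {active_area:.2f} square microns")
print(f"electron density: {full_well / active_area:.0f} electrons per square micron")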

> 4800 * 8.3521 = 40,090, so the FZ50 sensor collects a maximum number of photons
> per unit of area slightly less than the 400D. The FZ50 captures those
> 40,090 photons with 1.5 stops less exposure than the 400D, though, so it
> has an area-based shot noise of about 0.7 stops less than the 400D, for
> the same exposure.

How does the FZ50 collect 2.8 times (1.5 stops) more light in the
same exposure? Fill factor decreases as pixel size decreases because
there must be space between pixels. The QE of CCD and CMOS sensors
runs in the 30 to 35% range. Your implied QE must be near 1. Only
thinned, back-side-illuminated CCDs are that good, and they are so
expensive they are not found in any high-end DSLR. Are you saying
they are in low-cost P&S cameras?

Another what's wrong with this picture?

> It is a general trend in current sensors that the
> tiny pixels in P&S sensors have from 0.5 to 1.5 stops more area-based
> photon sensitivity than DSLRs (closer to the 0.5 for more recent DSLRs).

The physics of sensors does not agree with this assertion. Smaller
pixels have a greater fraction of inactive area, and greater read
noise per unit area, because you must sum more of them, each
contributing read noise.

<sarcasm on>
Your implication is that astronomers are all wrong. They should be
using tiny pixels so they can record fainter stars. Instead they seem
to be using large pixels; twenty-plus-micron pixels are commonly used
in astronomical applications. I guess you should show them up with
your FZ50. Why don't you take some stunning astrophotos with your
FZ50 to prove everyone wrong?

I guess all those people complaining about high noise, low dynamic
range in their small sensor cameras are wrong. They must also be wrong
when their images get worse with new generations of tinier pixel cameras.
They insist older cameras did better, but you must be right.
<sarcasm off>
>
>> Read noise -
>
> Read noise per pixel is about 2.8 12-bit ADU in the FZ50, and about 1.65
> 12-bit ADU in the 400D, both at ISO 100.

But this is NOT sensor read noise from the 400D. It is post-read
electronic noise. The gains are so low in DSLRs at low ISOs that the
downstream electronics, in order to capture the high end, can't
currently reach the low end. Again, you compare the small sensor,
where the true read noise is the dominant noise source, with the
DSLR, where the sensor read noise is not dominant.

> Scaled by pixel pitch, the read
> noise of the FZ50 is about 2.8/2.89 = 0.97 ADU, in 400D pixel terms. And
> that's not even including the fact that the FZ50 is 1.5 stops more
> sensitive in RAW numbers (not that this would affect DR, but it does
> affect real sensitivity).

Here is where you made a major math mistake. You sum the pixels to
get your per-unit-area total photon count, but you average the read
noise.

If you sum pixels to get a total photon count comparable to the large
pixel's, then on the low end the read noise adds in quadrature: it
grows as the square root of the number of small pixels per large
pixel, and that square root is just the pixel-pitch ratio, 5.7/1.97 =
2.89x. So your 2.8 ADU becomes 2.8 * 2.89 = 8.1 ADU, not your 0.97.
You divided by 2.89 when you should have multiplied. You are in error
by 2.89 * 2.89 = 8.3 times, or 3 stops.
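
Spelled out in Python, with the same figures quoted in this thread
(5.7 and 1.97 micron pitches, 2.8 ADU FZ50 read noise):

import math

k = 5.7 / 1.97       # pixel-pitch ratio, about 2.89
r_small = 2.8        # FZ50 read noise, 12-bit ADU at ISO 100 (as quoted)

n_small = k ** 2                       # small pixels per large-pixel area, ~8.4
summed = r_small * math.sqrt(n_small)  # quadrature sum = r_small * k, ~8.1 ADU
claimed = r_small / k                  # ~0.97 ADU, the disputed figure

print(f"read noise summed over one large-pixel area: {summed:.1f} ADU")
print(f"figure claimed above:                        {claimed:.2f} ADU")
print(f"ratio: {summed / claimed:.2f}x, about {math.log2(summed / claimed):.1f} stops")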

> DR would be directly proportional to the photon count at
> saturation at any given ISO, at the pixel level, *if* there was no other
> noise than shot noise. There is read noise and all its associated post-
> read noises (and dark current noise when applicable), which lowers the DR
> of individual pixels, and the DR of an image is *NOT* the DR of a pixel;
> that is one of the biggest pieces of nonsense propagated as obvious
> truth.

Could you please cite where people are saying that
the DR of an image IS the DR of a pixel?
Perhaps you are making up a problem that does not exist, and
applying math mistakes to prove your idea?

Roger