From: ejmartin on
On Jul 17, 10:56 pm, "Roger N. Clark (change username to rnclark)"
<usern...(a)> wrote:
> John,
> Your post is full of inaccuracies.  I'll hit a few of the
> highlights.

> The problem is on the bottom end.  Let's say the pixel read noise is about
> the same regardless of pixel size (which is true in real world sensors).
> Let's say the pixel density of the small pixel sensor is 100 times the
> large pixel sensor.  Summing the signal from 100 small pixels to match
> the area of one large pixel sums their read noises in quadrature,
> giving a total read noise 10 times (square root of 100) that of the
> single large pixel.
> So in the end you lose.  The noise floor is 10x higher than the single
> pixel, thus you lose both sensitivity and dynamic range.
> Before you think you can correct me on this, read below
> where I show your math mistake.

I fully agree about read noise; but I'm not sure that John is making
the elementary math errors you infer. In part this is because John
hasn't fully explained the context of his various measurements here,
though he has in various threads over at DPReview.

I think John's error lies in using low ISO read noise figures to infer
the properties of the photosites, which incorrectly inflates the read
noise of DSLRs relative to digicams. The former's low-ISO DR is
limited by the DR of the downstream circuitry, while the latter have
so little photosite DR that the downstream processing can easily
accommodate it.

> > Shot noise per unit of area is lower in the FZ50, for a given exposure.  
> > It has one of the highest QEs in the industry.  Per pixel, the 400D
> > collects up to about 43K photons 4 stops above metered middle grey, the
> > FZ50, about 4800 photons 2.5 stops above metered middle grey.  
> 1st, you believe a low end low cost camera has one of the highest
> QE in the industry?  What's wrong with this picture?

It does seem quite high, but I think I'm with John on this one. I did
a crude analysis of a publicly available raw file, since I don't
have access to an FZ50. Using the difference of the two
green subarrays as an ersatz difference file, the standard deviation
relative to the mean of uniform bright areas yielded a gain of 1.2
electrons/12-bit ADU. Note that this is a lower bound, since by
subtracting values from two different pixels there are other noise
sources that contribute besides just photon shot noise. Since raw
saturation is at 4095, this is consistent with John's figure of 4800
electrons at raw saturation.
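For concreteness, that photon-transfer estimate can be sketched in
Python. All numbers here are assumptions for illustration: the 1.2
e-/ADU gain and a synthetic uniform patch stand in for the real FZ50
raw data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the FZ50 measurement (all numbers assumed):
# a uniform patch at 2000 electrons, digitized at 1.2 e-/ADU.
true_gain = 1.2            # electrons per 12-bit ADU
mean_electrons = 2000.0
n = 200_000                # pixels in the uniform patch

# The two green subarrays of a Bayer CFA see nearly the same light but
# carry independent shot noise; differencing them cancels fixed pattern.
g1 = rng.poisson(mean_electrons, n) / true_gain   # ADU
g2 = rng.poisson(mean_electrons, n) / true_gain   # ADU
diff = g1 - g2

# The difference variance is twice the per-frame shot-noise variance,
# and for shot-noise-limited data gain = mean / variance (in ADU).
gain_estimate = g1.mean() / (diff.var() / 2.0)    # e-/ADU
full_well = gain_estimate * 4095                  # electrons at saturation

print(f"gain ~ {gain_estimate:.2f} e-/ADU, full well ~ {full_well:.0f} e-")
```

As in the real measurement, the estimate is only a lower bound on the
gain once other difference-frame noise sources contribute.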


> How does the FZ50 collect 2.8 times (1.5 stops) more light in the
> same exposure?  Fill factor decreases as pixel size decreases
> because there must be space between pixels.
> QEs of CCD and CMOS sensors run in the 30 to 35% range.
> Your QE must be near 1.  Only thinned back side illuminated CCDs
> are that good, and are so expensive they are not found in any
> high end DSLR.  Are you saying they are in low cost P&S cameras?
>    Another what's wrong with this picture?

Dunno, but that's what the measurements seem to say.

> > It is a general trend in current sensors that the
> > tiny pixels in P&S sensors have from 0.5 to 1.5 stops more area-based
> > photon sensitivity than DSLRs (closer to 0.5 for more recent DSLRs).
> Physics of sensors does not agree with this assertion.  Smaller sensors
> have greater inactive area, and greater read noise per unit area
> because you must sum more pixels each contributing read noise.

I don't think John was referring to read noise here; his basic point
is that the metering of middle grey on digicams is about 2.5 stops
down from raw saturation, while on DSLR's it is about 3.5 stops.
Indeed, my DSLRs shot side-by-side with my Panasonic LX1 at the same
exposure (Tv/Av) show the raw level from 0.5 to 1 stop closer to raw
saturation for the LX1 (it was a quick test, hence the largish error
bars).

> >> Read noise -
> > Read noise per pixel is about 2.8 12-bit ADU in the FZ50, and about 1.65
> > 12-bit ADU in the 400D, both at ISO 100.
> But this is NOT sensor read noise from the 400D.  It is post read
> electronic noise.  The gains are so low in DSLRs that at low ISOs
> the post-sensor electronics, set to capture the high end, can't
> currently reach the low end.  Again, you compare the small sensor
> where the true read noise is the dominant noise source, and the
> DSLR where the sensor read noise is not dominant.

Yes, and this is where I think the error in the analysis lies.

> > Scaled by pixel pitch, the read
> > noise of the FZ50 is about 2.8/2.89 = 0.97 ADU, in 400D pixel terms.  And
> > that's not even including the fact that the FZ50 is 1.5 stops more
> > sensitive in RAW numbers (not that this would affect DR, but it does
> > affect real sensitivity).
> Here is where you made a major math mistake.  You sum the pixels to
> give you your per unit area total photon count, but you average the
> read noise.

No, I've gone around this with John before. He likes to normalize the
combined pixels' value to a 12-bit range, and so is not adding the raw
levels of the combining pixels; when you do that, the read noise in
ADU goes down by the square root of the number of pixels combined (so
by the ratio of pixel pitches), while the exposure in raw levels stays
constant. You are probably more accustomed to working in photon/
electron equivalents, in which case the signals add and the read
noises combine in quadrature (so both go up). Indeed, your analysis
below confirms as much.

> If you sum to get total photon count to compare with the large pixel
> photon count, then for the low end you sum the square root of the
> number of small pixels per large pixel (5.7/1.97 micron pixel pitch)=
> 2.89x.  So your 2.8 ADU becomes 2.8*2.89 = 8.1 ADU, not your 0.97.
> You divided by 2.89 when you should have multiplied.  You are in error
> by 2.89 * 2.89 = 8.3 times or 3 stops.

I think John only differs from this calculation by a different choice
of what he's holding fixed, ie the normalization.
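A quick numerical sketch of the two conventions, with assumed per-pixel
figures (the 2.8 ADU read noise and 5.7/1.97 pitch ratio from the
thread; the signal level is made up):

```python
import math

# Sketch of the two bookkeeping conventions in the thread.  Assumed
# per-pixel figures: 2.8 ADU read noise (FZ50), and the 5.7/1.97 micron
# pitch ratio, so ~8.4 small pixels cover one large-pixel area.
N = (5.7 / 1.97) ** 2      # small pixels per large-pixel area
signal_adu = 1000.0        # per small pixel, on its own 12-bit scale
read_adu = 2.8             # read noise per small pixel, in ADU

# Electron-style bookkeeping: signals add, read noises add in quadrature.
sum_signal = N * signal_adu
sum_read = read_adu * math.sqrt(N)

# John's bookkeeping: renormalize the sum back to a 12-bit range.
norm_signal = sum_signal / N     # = signal_adu, unchanged
norm_read = sum_read / N         # = read_adu / sqrt(N), ~0.97 ADU

# Both conventions give the same signal-to-read-noise ratio.
print(norm_read)
print(sum_signal / sum_read, norm_signal / norm_read)
```

Roger's multiply-by-2.89 and John's divide-by-2.89 are the same physics
under different normalizations; only the signal-to-noise ratio is
invariant.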

> >   DR would be directly proportional to the photon count at
> > saturation at any given ISO, at the pixel level, *if* there was no other
> > noise than shot noise.  There is read noise and all its associated post-
> > read noises (and dark current noise when applicable), which lowers the DR
> > of individual pixels, and the DR of an image is *NOT* the DR of a pixel;
> > that is one of the biggest pieces of nonsense propagated as obvious
> > truth.
> Could you please cite where people are saying that
> the DR of an image IS the DR of a pixel?
> Perhaps you are making up a problem that does not exist, and
> applying math mistakes to prove your idea?

There has been a heated discussion over at DPReview forums, where the
proprietor of the review site has introduced a 'figure of merit' for
cameras, the pixel density -- pixel count divided by sensor area. He
thinks it is an appropriate measure of image quality. The motivation
behind the introduction of this figure is the site's testing
methodology for its camera reviews, which measures pixel level noise
and doesn't properly scale it to image level noise. By ignoring the
fact that the noise spectrum depends on spatial frequency, and always
measuring noise at the Nyquist frequency, they don't properly compare
noise at the same spatial frequency in lp/ph when comparing cameras
with different pixel counts. So it's the folks over at DPR and their
legions of readers that think that pixel noise and image noise are
strongly correlated. You might be amused to go over to DPR and read
any of the numerous recent threads in the News Discussion and Open
Talk forums with "Pixel Density" as the subject.
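The effect of comparing noise at each sensor's own Nyquist rather than
at a common spatial frequency can be illustrated with a synthetic
sketch (Gaussian noise stands in for real sensor noise; the 4x
pixel-count ratio and the sigmas are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sketch (made-up sigmas, Gaussian noise as a stand-in): a
# high-pixel-count sensor with sigma 10 per pixel vs a low-count sensor
# with sigma 5 per pixel, both imaging the same flat grey scene.
hi = rng.normal(100.0, 10.0, (512, 512))   # small pixels, 4x as many
lo = rng.normal(100.0, 5.0, (256, 256))    # big pixels

# Per-pixel (each sensor's own Nyquist): the big pixels look 2x cleaner.
print(hi.std(), lo.std())

# Resample to a common output size (same lp/ph): 2x2 averaging cuts the
# small-pixel sigma by sqrt(4), and the two sensors then measure alike.
hi_down = hi.reshape(256, 2, 256, 2).mean(axis=(1, 3))
print(hi_down.std())
```

Measured per pixel, the denser sensor looks twice as noisy; measured at
the same spatial frequency, the gap closes, which is the scaling the
pixel-density figure of merit ignores.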

From: ejmartin on
On Jul 17, 12:59 am, "Roger N. Clark (change username to rnclark)"
<usern...(a)> wrote:
> John Sheehy wrote:
> > Scott W <biph...(a)> wrote in
> >news:381a062a-105d-4b40-92e7-08c023b59bf2(a)
> >> John, you should really redo the test with the ISO set higher on the
> >> 400D, clearly you can also set it higher on the FZ50 as well.
> > No, I shouldn't, because that would be a different issue altogether.  
> > I've already said, many times, that the highest ISOs in some models of
> > DSLRs have area-based read noise as good or slightly better than clusters
> > of current P&S pixels in aggregate.  It's a given that the 400D would
> > give less noise if it were set to ISO 1600.
> > This test is *NOT* about shooting at high ISOs.  It is about the shadows
> > (the weakest areas) at base or low ISOs.
> If that is your test, you biased the result in the FZ50 images by a factor
> of more than 20.  The FZ50 has a max signal at ISO 100 of about 2000 electrons
> or so and the 400D over 40,000.  So the FZ50 saturates some 4.3 stops
> lower than the 400D.  So if you equalize the high end, the 400D reaches
> some 4.3 stops lower.
> All this comes down to the electronic gains in the two cameras are different
> in your test.  Unless you make those close to equal, your test is
> biased.
> Roger

Huh? Now that is an elementary math error. You shouldn't equalize the
saturation values of different sized pixels, since for the same photon
flux the smaller pixels don't have to capture as many photons (that is
accomplished by having more of the smaller pixels to capture the
light, assuming equal QE).
From: ejmartin on
On Jul 18, 11:24 pm, "Roger N. Clark (change username to rnclark)"
<usern...(a)> wrote:
> Ray,
> I think you are missing the point.  John  is trying to make the
> case that more pixels can be better.
> Example:  Which would you rather have:
> APS-C DSLR with   3 megapixels?
> APS-C DSLR with   6 megapixels?
> APS-C DSLR with   8 megapixels?
> APS-C DSLR with  10 megapixels?
> APS-C DSLR with  20 megapixels?
> APS-C DSLR with  40 megapixels?
> APS-C DSLR with  80 megapixels?
> APS-C DSLR with 120 megapixels?
> APS-C DSLR with 200 megapixels?
> APS-C DSLR with 500 megapixels?
> APS-C DSLR with 999 megapixels?
> Assume all cameras delivered the same frames per second.
> John is trying to simulate those high megapixel sensors,
> since they do not yet exist.  On the surface I agree
> with his idea, but I have a problem with his execution
> and have shown math errors in his analysis.
> In the above list, my sensor performance model, shown in
> Figure 9, predicts that between 10 and 20 megapixels would deliver the highest
> quality images if one had diffraction limited lenses at f/8.

From the description of the model, I'm not sure it properly accounts
for diffraction limitation effects. Once things become diffraction
limited, are you using the effective resolution as determined by the
size of the Airy disk? Are you also using the S/N ratio at that same
spatial frequency, or are you using the S/N at Nyquist? They are not
the same.

> John used the same focal length, f/stop and exposure time on 2 sensors.
> He needs to show a series of images up to the saturation point.
> He needs to show where each sensor saturates and then measure
> the floor relative to the saturation point to illustrate his
> dynamic range.  But he is also using a property of low ISO settings
> on DSLRs where the noise is dominated by system electronics after
> the sensor.  Even so I believe he will find the large pixel sensor
> will have a larger dynamic range.  When camera electronics improve,
> that difference will be larger.

I don't think one needs to go to all that trouble; one knows all the
relevant parameters for the sensors in question. Take his measured
figures: the FZ50 saturates at 4800 electrons at base ISO with a read
noise of 2.8 ADU, and the pixel pitch is 1.97µ. Now, 4800/4095 = 1.17
e-/ADU, and so 2.8 ADU = 3.3 electrons.

The 1D3 has a read noise of 4.0 electrons at ISO 1600; this yields an
upper bound of the sensor read noise per pixel. Full well at ISO 100:
71000 electrons. These are my figures; Roger, you quote 70200
electrons FWC, and also 4.0 electrons read noise (at ISO 3200). Pixel
pitch: 7.2µ.

To compare the sensor read noises on a per area basis, we should scale
the FZ50 read noise by the ratio of the pixel pitches, and the FWC's
by the ratio of the square of the pixel pitches:

3.3 * 7.2/1.97 = 12.1 electrons FZ50 read noise at 1D3 pixel size

4800 * (7.2/1.97)^2 = 64000 electrons FZ50 FWC at 1D3 pixel size

So, binning FZ50 pixels to the size of 1D3 pixels gives a DR of
64000/12.1 = 5290, or 12.4 stops. On the other hand, the 1D3 *sensor*
has a DR of 71000/4.0 = 17750, or 14.1 stops. John only gets the FZ50 to be
competitive because he uses the read noise that the 1D3 delivers at
base ISO, around 20 electrons, rather than the sensor read noise of 4
electrons. Doing so, it looks like the FZ50 wins since its measured
base ISO DR is about 11.7 stops. But that is misleading, because the
1D3 read noise at that ISO has almost nothing to do with the
properties of the sensor, but as you say it is coming from the
downstream electronics.
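The arithmetic above can be reproduced in a few lines (figures as
quoted in the post; log2 converts a ratio to stops):

```python
import math

def stops(ratio):
    return math.log2(ratio)

# Figures as quoted in the post.
fz_fwc, fz_read_adu, fz_pitch = 4800.0, 2.8, 1.97   # FZ50, base ISO
d3_fwc, d3_read, d3_pitch = 71000.0, 4.0, 7.2       # 1D3 sensor

fz_gain = fz_fwc / 4095.0            # ~1.17 e-/ADU
fz_read = fz_read_adu * fz_gain      # ~3.3 electrons per pixel

scale = d3_pitch / fz_pitch
fz_read_binned = fz_read * scale     # ~12 e- at 1D3 pixel size
fz_fwc_binned = fz_fwc * scale ** 2  # ~64000 e- at 1D3 pixel size

print(f"FZ50 binned DR: {stops(fz_fwc_binned / fz_read_binned):.1f} stops")
print(f"1D3 sensor DR:  {stops(d3_fwc / d3_read):.1f} stops")
```

Swapping the 1D3's 4-electron sensor read noise for its ~20-electron
base-ISO delivered read noise is what flips the comparison in the
FZ50's favor.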

From: David J Taylor on
Roger N. Clark (change username to rnclark) wrote:
> Once again, let's work a simple example.
> Sensor A has 100 pixels for every pixel in sensor B.
> Both have pixels with the same read noise = 4 electrons.
> Assume 100% fill factors and same QE.
> Compute dynamic range for the area of pixel in sensor B.
> sensor A: X = Signal A = Signal B if we sum the signal from each
> pixel in A (sum(100* signal in one pixel).
> read noise: sensor A = 4*sqrt(100) = 40 sensor B= 4 electrons.
> Dynamic range, DR: Sensor A DR= X/40, Sensor B: X/4
> The large pixel has higher DR.
> Roger

... but sensor A is delivering a much higher spatial resolution. Depending
how the image is displayed, and how the eye/brain interprets the signal
and the noise, it is not impossible that the perceived image from the
higher resolution sensor will be preferable to that from the lower
resolution sensor ....

Having said that, it strikes me that for a particular
scene/light-level/lens/display/viewing-distance combination there will be
an optimum sensor size, and the different sensor-size cameras we use
simply cover a different subset of the total picture-taking conditions.


From: ASAAR on
On Sat, 19 Jul 2008 04:25:24 GMT, John Sheehy wrote:

>>> This is the best possible way to run this experiment, as there are no
>>> possible pairings out there of cameras with the same mount and the same
>>> size sensors with vastly different pixel densities of the same era.
>> How about Nikon's 6mp D40 vs. the 10mp D40x? Or Canon's A610 vs.
>> A620? Announced at the same time, they used the same size 1/1.8"
>> sensors, where the former had 5mp vs. the latter's 7mp. There are
>> more such pairings, but I'd be surprised if you'd accept any of
>> them, as they wouldn't aid your agenda.
> Why don't you show us, with homogenous RAW conversions, how they
> compare, both upsampled, and downsampled?
> If I did it, you would say I'm lying or rigged something if you didn't
> like the results.
> . . .
> I remember looking at the RAW data from the Nikons a while back, and I
> remember that the two cameras had very different QEs, with the D40X
> being more sensitive, photon-wise, but the D40 having lower read noise
> at higher ISOs.
> . . .
> Well, can you get someone to provide the RAWs (not RAW conversions!)
> from the two Nikons, same scene and lens, and make sure the lighting and
> the exposure is the same?

Gee, what makes you think that anyone would think that you might
lie about anything? Hmm. You admit being familiar with the D40,
the D40x and their RAW data, which are very similar contemporary
cameras ("from the same era") that have vastly different numbers of
pixels on similar sensors and which use the same mount, yet as we
see above, you insisted that:

> This is the best possible way to run this experiment, as there are no
> possible pairings out there of cameras with the same mount and the same
> size sensors with vastly different pixel densities of the same era.

In fact as others have noted, there are many such pairings.
Refusing to admit such an obvious mistake tells us much about
whether we should entertain the notion that you're capable of lying
or rigging something if *you* don't like what you see. Refusing to
address Roger's points only adds to the suspicion that you can't be
trusted.