From: Roger N. Clark (change username to rnclark) on
ejmartin wrote:

> Well, that's the tradeoff isn't it? At least, that's the tradeoff for
> current implementations at high ISO, where it's the sensor properties
> that control DR -- big photosites for more sensitivity in shadows, or
> small photosites for more resolution. At low ISO, due to the
> limitations of current implementations in the ISO amplifier/ADC, one
> actually does better per unit area with small photosites, because they
> place less demands on the dynamic range of that downstream
> electronics. Small photosites will be better at low ISO until camera
> companies start delivering all the DR that the sensor is offering.

Actually, it's not simply low ISO, it's the lowest ISO, e.g. ISO 100.
Try again at ISO 200: most modern DSLRs have about the same dynamic
range at ISO 200 as at ISO 100, and nearly as much at ISO 400. So this
biased test with small-sensor P&S cameras is only valid at the lowest
ISO, where the other camera electronics can't match the huge dynamic
range of the large pixels.

You can see some of these effects in Figures 4 and 5 at:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary

In the future I expect this low-ISO limitation to be corrected; consumer
astrophoto cameras do not have this problem, but they use 16-bit converters.
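
To make the effect concrete, here is a minimal sketch of the idea in
Python. The numbers are assumptions chosen to be representative, not
measurements of any particular camera:

# Why dynamic range plateaus at the lowest ISOs on current DSLRs:
# a fixed downstream noise in ADU costs more electrons at low ISO,
# where the gain (e-/ADU) is largest. All values below are assumed.
import math

full_well_iso100 = 50000.0  # electrons at ISO 100 (assumed)
sensor_read_e    = 4.0      # sensor read noise, electrons (assumed)
downstream_adu   = 3.0      # downstream electronics noise, ADU (assumed)
adc_levels       = 4095     # 12-bit converter

for iso in (100, 200, 400, 800, 1600):
    full_well = full_well_iso100 * 100.0 / iso    # e- at saturation
    gain = full_well / adc_levels                 # e-/ADU at this ISO
    downstream_e = downstream_adu * gain          # downstream noise, e-
    total_read = math.hypot(sensor_read_e, downstream_e)  # quadrature
    dr_stops = math.log2(full_well / total_read)
    print(f"ISO {iso:4d}: read noise {total_read:5.1f} e-, "
          f"DR {dr_stops:4.1f} stops")

# The sensor alone at ISO 100 would give log2(50000/4) ~ 13.6 stops;
# the downstream electronics cap it at roughly 10.4 in this model.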

Roger
From: Roger N. Clark (change username to rnclark) on
ejmartin wrote:

>> In the above list, my sensor performance model, shown in
>> Figure 9 at:http://www.clarkvision.com/imagedetail/digital.sensor.performance.sum...
>> shows between 10 and 20 megapixels would deliver the highest
>> quality images if one had diffraction limited lenses at f/8.
>
> From the description of the model, I'm not sure it properly accounts
> for diffraction limitation effects. Once things become diffraction
> limited, are you using the effective resolution as determined by the
> size of the Airy disk? Are you also using the S/N ratio at that same
> spatial frequency, or are you using the S/N at Nyquist? They are not
> the same.

I use the dropping MTF as diffraction overtakes the pixel pitch: the
image becomes softer, so effective resolution drops. But I assume S/N
per effective resolution unit is constant, so you are just slicing the
same total photon count into more pixels. Below a certain pixel size,
however, the DR of individual pixels becomes too small and limits
image dynamic range, so image quality drops faster as pixel size
continues to shrink. One can derive various subjective weighting
functions that change the exact shape of the curves, but they don't
change the fact that there is an optimum.
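
A toy version of that calculation may help; the specific MTF and DR
penalty functions below are my own illustrative assumptions, not the
model behind the clarkvision.com figures:

# Toy model of the optimum-pixel-pitch argument: resolution gains from
# smaller pixels roll off once diffraction dominates, while per-pixel
# DR keeps falling with pixel area. All functions/constants assumed.
import math

sensor_width_mm  = 36.0                 # full-frame width (assumed)
airy_diameter_um = 2.44 * 0.55 * 8.0    # Airy disk, f/8, 550 nm

def image_quality(pitch_um):
    # Effective blur: pixel pitch and diffraction added in quadrature.
    blur = math.hypot(pitch_um, airy_diameter_um)
    resolution = sensor_width_mm * 1000.0 / blur
    # Per-pixel DR: full well scales with pixel area (assumed density
    # 1500 e-/um^2, 4 e- read noise); discount below 14 stops (assumed).
    dr_stops = math.log2(1500.0 * pitch_um**2 / 4.0)
    return resolution * min(1.0, dr_stops / 14.0)

for pitch in (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0):
    print(f"{pitch:4.1f} um pitch: quality {image_quality(pitch):6.0f}")

# Quality rises, peaks around 5-6 um here, then falls: an optimum
# exists regardless of the exact subjective functions chosen.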

>> John used the same focal length, f/stop and exposure time on 2 sensors.
>> He needs to show a series of images up to the saturation point.
>> He needs to show where each sensor saturates and then measure
>> the floor relative to the saturation point to illustrate his
>> dynamic range. But he is also using a property of low ISO settings
>> on DSLRs where the noise is dominated by system electronics after
>> the sensor. Even so I believe he will find the large pixel sensor
>> will have a larger dynamic range. When camera electronics improve,
>> that difference will be larger.
>
> I don't think one needs to go to all that trouble; one knows all the
> relevant parameters for the sensors in question. Take his measured
> full well and read noise, that the FZ50 saturates at 4800 electrons at
> base ISO, with a read noise of 2.8 ADU; the pixel pitch is 1.97 µm. Now,
> 4800/4095 = 1.17 e-/ADU, and so 2.8 ADU = 3.3 electrons.
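
(For reference, that last step is a straightforward unit conversion;
the 4095 comes from assuming a 12-bit ADC:)

# Check of the quoted FZ50 numbers: gain implied by full well and a
# 12-bit ADC, then read noise converted from ADU to electrons.
full_well_e    = 4800.0   # electrons at saturation (quoted)
adc_max_adu    = 4095.0   # 12-bit converter (assumption)
read_noise_adu = 2.8      # quoted read noise

gain = full_well_e / adc_max_adu       # ~1.17 e-/ADU
read_noise_e = read_noise_adu * gain   # ~3.3 electrons
print(f"gain {gain:.2f} e-/ADU, read noise {read_noise_e:.1f} e-")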

Those claimed numbers are what I have a problem with. There is no
evidence in the sensor industry for 80+% QE unless one is using
thinned, back-side-illuminated sensors, which is a custom process that
is currently quite expensive.

On the contrary, there have been numerous claims of great performance
over the years, but when the details are measured, one finds
performance pretty close to what is predicted. In the case of the
claimed FZ50 performance, I have yet to see a full sensor analysis of
this sort:
http://www.clarkvision.com/imagedetail/evaluation-1d2

Unless one completes the full range of measurements, including getting
data at multiple ISOs, one can't be sure of the full well, gain, and
read noise. For example, look at the procedures that test for various
conditions:
http://www.clarkvision.com/imagedetail/evaluation-1d2/howtotakedata.html

For example, tests of the Canon 1D Mark III initially showed
anomalously different full wells and gains, but it was then discovered
that the lowest two ISOs really shared the same analog gain, with the
higher ISO's data simply scaled by 2.
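
A simple way to catch that kind of anomaly in measured data is to
check whether gain halves with each doubling of ISO; the gain values
below are hypothetical, for illustration only:

# Flag ISO pairs whose gain ratio departs from the expected ISO ratio,
# which suggests two settings share one analog gain with scaled data.
gains = {50: 12.0, 100: 12.0, 200: 6.1, 400: 3.0}  # e-/ADU (hypothetical)

isos = sorted(gains)
for lo, hi in zip(isos, isos[1:]):
    expected = hi / lo
    ratio = gains[lo] / gains[hi]
    if abs(ratio - expected) > 0.2 * expected:
        print(f"ISO {lo}->{hi}: gain ratio {ratio:.2f} vs expected "
              f"{expected:.1f} -- likely the same gain, scaled digitally")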

So let's see a real analysis with all the data presented before
concluding the FZ50 is out of the ordinary. Also, be aware that not
all "raw" data from cameras are actually raw data from the sensor.
Perhaps the raw FZ50 data files have had processing applied.

Roger
From: Roger N. Clark (change username to rnclark) on
ejmartin wrote:
> On Jul 17, 12:59 am, "Roger N. Clark (change username to rnclark)"
> <usern...(a)qwest.net> wrote:
>> John Sheehy wrote:
>>> Scott W <biph...(a)hotmail.com> wrote in
>>> news:381a062a-105d-4b40-92e7-08c023b59bf2(a)r66g2000hsg.googlegroups.com:
>>>> John, you should really redo the test with the ISO set higher on the
>>>> 400D, clearly you can also set it higher on the FZ50 as well.
>>> No, I shouldn't, because that would be a different issue altogether.
>>> I've already said, many times, that the highest ISOs in some models of
>>> DSLRs have area-based read noise as good or slightly better than clusters
>>> of current P&S pixels in aggregate. It's a given that the 400D would
>>> give less noise if it were set to ISO 1600.
>>> This test is *NOT* about shooting at high ISOs. It is about the shadows
>>> (the weakest areas) at base or low ISOs.
>> If that is your test, you biased the result in the FZ50 images by a factor
>> of more than 20. The FZ50 has a max signal at ISO 100 of about 2000 electrons
>> or so and the 400D over 40,000. So the FZ50 saturates some 4.3 stops
>> lower than the 400D. So if you equalize the high end, the 400D reaches
>> some 4.3 stops lower.
>>
>> All this comes down to the fact that the electronic gains in the two
>> cameras are different in your test. Unless you make those close to
>> equal, your test is biased.
>>
>> Roger
>
> Huh? Now that is an elementary math error. You shouldn't equalize the
> saturation values of different sized pixels, since for the same photon
> flux the smaller pixels don't have to capture as many photons (that is
> accomplished by having more of the smaller pixels to capture the
> light, assuming equal QE).

Yes, I agree.
I misunderstood the full intent of the test. John should not have
stated ISO 13,500. It would have been better to state ISO 100 and look
at the signal X stops and more below saturation.

What needs to be done is to understand the upper end as well as the
lower end. We need images up to saturation on both sensors. Only then
would we know the full dynamic range of the two systems. It is not
enough to look at the low end with one standard exposure, because
different manufacturers set meter levels differently, the different
lenses used may have different transmissions, and where the ISO
definition is set can also differ. So one must determine the full
dynamic range, from the high end to the low end, to show which is
doing better.
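
In code, that procedure amounts to something like the following
sketch; the data loading is left out and hypothetical, but the steps
(bracket to saturation, find the clip point, measure the floor against
it) are the point:

# Sketch: measure full DR from a bracketed exposure series of a flat
# target. Each entry is (exposure_time, raw_patch as a numpy array).
import numpy as np

def dynamic_range_stops(exposures):
    means = [patch.mean() for _, patch in exposures]
    saturation = max(means)              # clipping level, in ADU
    # Noise floor: std. dev. of the darkest patch in the series, ADU.
    darkest = min(exposures, key=lambda e: e[1].mean())[1]
    read_noise = darkest.std()
    return np.log2(saturation / read_noise)

# Comparing dynamic_range_stops() for both cameras, each bracketed to
# its own saturation point, removes the metering, lens-transmission,
# and ISO-definition differences described above.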

Then, to be objective, it should be done as a function of ISO. As ISO
increases, the smaller-pixel sensor will lose ground faster; in the
case shown, the difference should be a lot larger at ISO 200, and not
in favor of the small pixels. This effect is again due to the
electronics limiting the large-pixel sensor at ISO 100, not the sensor
itself.

The effect of the dynamic range impacting results is illustrated in
Figure 5 at:
http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary
At ISO 50, the dynamic range difference between the small sensor
(S70 on that figure) and the large 1D sensors is only about 1/2 stop.
But at ISO 100 the S70 is worse by a stop, at ISO 200, worse by 2 stops,
and at ISO 400 worse by over 3 stops.

Roger
From: John Sheehy on
Steve <steve(a)example.com> wrote in
news:gq0484hskr2tuscb9n4e4pknmo52sa10de(a)4ax.com:

> You'd be wrong if you are comparing what you say you are trying to
> compare, S/N and DR. In any valid test, the D3 would do better if you
> look at a crop at 100% for the D3 and 50% for the 1DsMkIII to account
> for the fact that the D3 has lower spatial resolution. The D3 would
> do better specifically because it has larger pixels.

No. The SNR and DR of an image are not the SNR and DR of the individual
pixels! That is a baseless myth; improperly applied arithmetic. There
is no "noise floor"; that is just a metaphor. The so-called "noise
floor" is not an opaque barrier below which nothing can be seen; it is
merely an indication of the signal level at which SNR is 1:1, or signal
is equal to the standard deviation of read noise. Signal levels and
standard deviations are apples and oranges. There is no difference
between an SNR of 1.1:1 and 1:1.1, except that the latter is slightly
noisier, by a ratio of 1.21:1. There is nothing intrinsically
significant about a 1:1 SNR, any more than there is about a speed of a
mile per minute, or a fuel efficiency of one mile per milliliter.

The more uniform the so-called noise floor is, and/or the finer its
grain relative to subject detail, the better you can see through it.
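
That is easy to demonstrate numerically: a signal at half the
per-pixel "noise floor" is plainly recoverable once it extends over
many pixels. A minimal sketch with synthetic data:

# A uniform patch at SNR 0.5:1 per pixel is detected at high SNR when
# averaged: the standard error of the mean falls as sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
read_noise = 4.0    # e- per pixel
signal = 2.0        # e-, i.e. half the "noise floor"
patch = signal + rng.normal(0.0, read_noise, size=(64, 64))

snr_pixel = signal / read_noise                 # 0.5:1
snr_patch = patch.mean() / (read_noise / 64.0)  # 4096-pixel average
print(f"per-pixel SNR {snr_pixel:.2f}:1, patch SNR ~{snr_patch:.0f}:1")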

> Of course, you
> could also look at 100% crops for both. But that would be incorrectly
> handicapping the 1DsMkIII,

Yes, unless you are comparing pixel-level performance; IOW, assuming that
a sensor with 21MP is going to be displayed at 175% the area of an image
with 12 MP, or how a 12MP crop from the 1Ds3 would compare to a D3.

> as you have done to the 400D in your test
> against the FZ50.

Not at all. I was not demonstrating the cameras; I was BORROWING their
pixels for a demonstration of the effects of pixel density. It has
*NOTHING*, I repeat *NOTHING* to do with the cameras, except for the read
and shot noise of their individual pixels and how they image per unit of
area. If you are reading anything else into my demo, you are over-
reading.

> Now, if you wanted to measure spatial resolution, of course the
> 1DsMkIII would do better. But that's not what you say you're trying
> to measure.

Well, it is one of the benefits of higher pixel density, and the only one
that applies across the board (it would be the only benefit if there were
no read noise).

--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
From: Bob Newman on
On Jul 19, 4:19 am, "Roger N. Clark (change username to rnclark)"
<usern...(a)qwest.net> wrote:
> John Sheehy wrote:
> > Then again, I'm
> > the guy who got 100% on all his math and most of his science tests,
> > without studying, so maybe I'm expecting too much.
>
> So what happened? I showed you some of your math and conceptual
> errors but you failed to recognize them.
I don't think John has shown any maths errors, simply because he
hasn't shown any maths. He's performed an experiment and presented the
results. The discussion is about what that experiment actually tells
us.

One of the strangest ideas in this discussion is that the experiment
should be 'fair'. It shows what it shows, and what it shows is that,
at ISO 100 and with equal sensor areas, an electronic imaging system
using ~2um pixels can produce results superior in several accepted
parameters of IQ to one using ~6um pixels. It may be that there are
circumstances in which this would not hold true. However, many would
have held that it could never be the case, and John has disproved
this.

Your assertion that the test would be fairer if John equalised the
electronic gain might make sense for astronomical applications, in
which you are interested in optimum capture of point sources of low
intensity, and therefore per-pixel metrics are a major concern. In
general photography we are not concerned with point sources but with
extended ones, and are more interested in integrated metrics. This
applies, as Emil pointed out, to the total photon flux captured during
an exposure, whether it be collected in a few big pixels or many small
ones. Similarly, we are not so interested in per-pixel read noise as
in the integrated read noise over an area. What actual and possible
performance metrics are achievable is still a matter for discussion.
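
The area-level arithmetic is worth making explicit; the pitches and
noise figures below are assumptions in the spirit of the numbers
quoted in this thread, not measurements:

# Read noise integrated over equal sensor area: N small pixels binned
# in software add read noise in quadrature, so compare
# sqrt(N) * small-pixel noise against one large pixel's noise.
import math

big_pitch_um,   big_read_e   = 6.0, 25.0  # DSLR at base ISO (assumed)
small_pitch_um, small_read_e = 2.0, 3.3   # small-pixel P&S (assumed)

n = (big_pitch_um / small_pitch_um) ** 2      # 9 small px per big px
aggregate = math.sqrt(n) * small_read_e       # e- per large-pixel area
print(f"{n:.0f} small pixels per large-pixel area: "
      f"{aggregate:.1f} e- aggregate vs {big_read_e:.1f} e-")

With base-ISO read noise dominated by the downstream electronics, the
aggregated small pixels come out ahead, which is just what John's test
showed.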
This also applies to the downstream electronics. Surely it is an
advantage of small pixels with lower per-pixel DR that we can use less
highly specified, lower-cost electronics to produce equivalent or
better image-level results. In that sense, by including the noise
contribution of the downstream electronics, John's test is rather
informative.