From: Roger N. Clark (change username to rnclark)
acl wrote:
> On Mar 21, 7:15 am, "Roger N. Clark (change username to rnclark)"
> <usern...(a)qwest.net> wrote:
>> John Sheehy wrote:
>>
>>>> Nikon D50 1.8 4.0
>>>> Nikon D200 1.3 2.0 3.8 7.4 15.
>>> I don't recall seeing values this low at the low ISOs in the Nikon RAW
>>> files I had. These are probably taken literally from the RAW blackframe,
>>> so they are automatically reduced to about 60% of what they'd be if they
>>> weren't black-clipped, like the Canons.
>> Well, perhaps you could examine the real data, e.g.: http://www.clarkvision.com/imagedetail/evaluation-nikon-d200
>>
>> I don't just do a dark frame measurement; I analyze the
>> noise and response over the entire range of the sensor and model
>> the results. See Figure 1 on the above web page. You'll see the
>> largest deviation from the model is less than 10%, and I have
>> light levels down to DN 16 (out of 4095). Where is your data
>> that proves this is wrong?
>
> But if indeed the signal is clipped to what would have been zero had
> there been no noise, then this would start to be visible only when the
> signal and the standard deviation are roughly equal (since if the
> signal is higher, the noise doesn't reach zero so doesn't get
> clipped).
>
> This would be invisible on the scale of figure 1. But if I understand
> correctly, you obtained the values for the read noise by measuring the
> output s and the noise n and fitting the noise curve to
> n(s)=sqrt(f^2+m) where m is the number of electrons and f the fixed
> noise? [that is, you determine f from this]? In that case indeed your
> value for f would be the true read noise, because the deviation from
> the model caused by this clipping would be over a tiny range of values
> to the extreme left of your plot and wouldn't affect the fitting
> appreciably.
>
> Anyway, my D200 does clip the noise at zero (ie the stdev is
> abnormally low for very low signals). Not that this contradicts your
> results or has any practical significance (that I can tell).

I use the following noise model:

N = (P + r^2 + t^2)^(0.5),

Where N = total noise in electrons, P = number of photons,
r = read noise in electrons, and
t = thermal noise in electrons (effectively zero for short exposures).
Noise from a stream of photons, the light we all see and image
with our cameras, is the square root of the number of photons,
so that is why the P in the equation above is not squared ((sqrt(P))^2 = P).
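
As a rough illustration, the model in Python (numpy assumed; the
read noise number here is made up for the example, not a measured
camera value):

import numpy as np

def total_noise(photons, read_noise_e=4.0, thermal_noise_e=0.0):
    # N = (P + r^2 + t^2)^0.5: photon noise sqrt(P) enters squared
    # as P; read and thermal noise add in quadrature.
    return np.sqrt(photons + read_noise_e**2 + thermal_noise_e**2)

for p in [0, 10, 100, 1000, 10000]:
    print(p, round(float(total_noise(p)), 2))
# read noise dominates near zero; photon noise dominates at high signal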

I track the signal and noise as a function of intensity, and watch for
deviations from the model. Deviations indicate other noise sources
are present, or other issues in the testing or in the camera and its
processing. At low signals, if the read noise were clipped significantly,
it would become obvious in the data as it would not fit well, showing
a change in read noise as a function of intensity.
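
A sketch of that kind of fit in Python (not my exact procedure;
scipy is assumed, and synthetic data stand in for measured patch
statistics):

import numpy as np
from scipy.optimize import curve_fit

def model(signal_e, read_noise_e):
    # n(S) = sqrt(S + r^2), signal and noise both in electrons
    return np.sqrt(signal_e + read_noise_e**2)

# Synthetic (signal, noise) pairs, with a few percent of scatter
# such as real patch measurements would show.
rng = np.random.default_rng(0)
signal = np.array([10., 30., 100., 300., 1000., 3000., 10000., 30000.])
noise = model(signal, 4.0) * (1 + 0.02 * rng.standard_normal(signal.size))

(r_fit,), _ = curve_fit(model, signal, noise, p0=[1.0])
print("fitted read noise: %.2f electrons" % r_fit)

# Clipped blacks would show up as residuals that grow systematically
# at the lowest signals instead of scattering around zero.
print(noise / model(signal, r_fit) - 1)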

Details are given here:
Procedures for Evaluating Digital Camera
Sensor Noise, Dynamic Range, and Full Well Capacities;
Canon 1D Mark II Analysis
http://www.clarkvision.com/imagedetail/evaluation-1d2

Roger
From: Roger N. Clark (change username to rnclark)
John Sheehy wrote:
> "Roger N. Clark (change username to rnclark)" <username(a)qwest.net> wrote
> in news:4600A9A2.6020506(a)qwest.net:
>
>> Here is a demo: See figure 9 at:
>> http://www.clarkvision.com/photoinfo/night.and.low.light.photography
>
>> Here is the original raw data converted linearly in IP, scaled by 128:
>> http://www.clarkvision.com/photoinfo/night.and.low.light.photography/nightscene_linear_JZ3F7340_times128-876px.jpg
>
>> Now here is the same data with the bottom 4 bits truncated:
>> http://www.clarkvision.com/photoinfo/night.and.low.light.photography/nightscene_linear_JZ3F7340-lose-4bits_times128-876px.jpg
>
>> You lose quite a bit in my opinion.
>> It would be a disaster in astrophotography.
>
> *You* do. I never have the blackpoint drift up like that when I
> truncate/quantize data; the effect is usually subtle. The overall
> intensity should remain almost the same. You are doing something wrong,
> I think.
>
> Part of the problem might be that you are using tools that hide what
> they're really doing from you. I see references to "linear conversions"
> in your texts. You should do all the steps yourself, under your control,
> so you know *exactly* what is happening to the data at every step of the
> way. IRIS, DCRAW with the "-D" parameter, and loading the RAW images
> from uncompressed DNGs are the only ways I know of that get
> you the real RAW data. (MaximDL, as well, I think.)
>
> Note, I didn't say that an ISO 1600 suffers nothing at all from 8-bit
> quantization; I said that it is still better than ISO 100, pushed to the
> same EI.
>

Well, let's look at this another way. Go to:
http://www.clarkvision.com/imagedetail/dynamicrange2

4 bits is DN = 16 in the 0 to 4095 range. In a 16-bit
data file, that would be 16*16 = 256.

Now go to Figure 7 and draw a vertical line at 256 on the
horizontal axis. Now note all the data below that line that
you cut off. Now go to Figure 8b and draw a vertical line
at 4 stops, and note all the data you cut off. Now go to
Figure 9D and draw the vertical line at 256 and
note all the data you cut off. (Note too how noisy the
8-bit jpeg data are.)

Pretty obvious.
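
The cutoff is easy to reproduce numerically. A sketch with
synthetic 12-bit data (numpy assumed; the Poisson "scene" is only
a stand-in for real raw values):

import numpy as np

rng = np.random.default_rng(0)
scene = rng.poisson(lam=12, size=(200, 200)).astype(np.uint16)  # dim frame

truncated = (scene >> 4) << 4   # throw away the bottom 4 bits

print("pixels below DN 16, forced to zero: %.0f%%"
      % (100 * (scene < 16).mean()))
print("mean before / after: %.1f / %.1f DN"
      % (scene.mean(), truncated.mean()))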

Roger
From: acl
On Mar 22, 6:39 am, "Roger N. Clark (change username to rnclark)"
<usern...(a)qwest.net> wrote:
> acl wrote:
> > On Mar 21, 7:15 am, "Roger N. Clark (change username to rnclark)"
> > <usern...(a)qwest.net> wrote:
> >> John Sheehy wrote:
>
> >>>> Nikon D50 1.8 4.0
> >>>> Nikon D200 1.3 2.0 3.8 7.4 15.
> >>> I don't recall seeing values this low at the low ISOs in the Nikon RAW
> >>> files I had. These are probably taken literally from the RAW blackframe,
> >>> so they are automatically reduced to about 60% of what they'd be if they
> >>> weren't black-clipped, like the Canons.
> >> Well, perhaps you could examine the real data, e.g.: http://www.clarkvision.com/imagedetail/evaluation-nikon-d200
>
> >> I don't just do a dark frame measurement; I analyze the
> >> noise and response over the entire range of the sensor and model
> >> the results. See Figure 1 on the above web page. You'll see the
> >> largest deviation from the model is less than 10%, and I have
> >> light levels down to DN 16 (out of 4095). Where is your data
> >> that proves this is wrong?
>
> > But if indeed the signal is clipped to what would have been zero had
> > there been no noise, then this would start to be visible only when the
> > signal and the standard deviation are roughly equal (since if the
> > signal is higher, the noise doesn't reach zero so doesn't get
> > clipped).
>
> > This would be invisible on the scale of figure 1. But if I understand
> > correctly, you obtained the values for the read noise by measuring the
> > output s and the noise n and fitting the noise curve to
> > n(s)=sqrt(f^2+m) where m is the number of electrons and f the fixed
> > noise? [that is, you determine f from this]? In that case indeed your
> > value for f would be the true read noise, because the deviation from
> > the model caused by this clipping would be over a tiny range of values
> > to the extreme left of your plot and wouldn't affect the fitting
> > appreciably.
>
> > Anyway, my D200 does clip the noise at zero (ie the stdev is
> > abnormally low for very low signals). Not that this contradicts your
> > results or has any practical significance (that I can tell).
>
> I use the following noise model:
>
> N = (P + r^2 + t^2)^(0.5),
>
> Where N = total noise in electrons, P = number of photons,
> r = read noise in electrons, and
> t = thermal noise in electrons (effectively zero for short exposures).
> Noise from a stream of photons, the light we all see and image
> with our cameras, is the square root of the number of photons,
> so that is why the P in the equation above is not squared ((sqrt(P))^2 = P).
>
> I track the signal and noise as a function of intensity, and watch for
> deviations from the model. Deviations indicate other noise sources
> are present, or other issues in the testing or in the camera and its
> processing. At low signals, if the read noise were clipped significantly,
> it would become obvious in the data as it would not fit well, showing
> a change in read noise as a function of intensity.
>

What I mean is this. As you say in your webpage
http://www.clarkvision.com/imagedetail/evaluation-nikon-d200/
the read noise at ISO 100 corresponds to about 1 DN, or roughly 10
electrons. So unless the signal itself is of the order of 10 electrons,
almost no clipping will occur. In other words, we're talking about a
deviation from your noise model only at 1 DN or thereabouts,
which basically means no deviation. This would be completely invisible
on the graph and missed by any fitting procedure I know of (and
rightly so).

Another way to put it: this thing would occur when s ≈ n, with s
the number of electrons and n the "noise electrons". This could not
possibly affect the fitting unless you only include a very small range
of data, nor would it be visible unless you specifically looked for it
(or noticed it by chance).
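
A quick simulation shows the size of the effect (a sketch, numpy
assumed; 10 electrons of read noise as above):

import numpy as np

rng = np.random.default_rng(1)
read_noise = 10.0   # electrons, about 1 DN at ISO 100

for s in [0, 5, 10, 30, 100]:
    x = s + read_noise * rng.standard_normal(100_000)
    clipped = np.clip(x, 0.0, None)   # black level clipped at zero
    print("signal %3d e-: stdev %5.2f unclipped, %5.2f clipped"
          % (s, x.std(), clipped.std()))

# At zero signal the clipped stdev comes out near 60% of the true
# read noise; by 3-4 sigma of signal the two agree, so a fit over
# the whole range barely notices.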

Now it may be that what I saw in my blackframes is because of the way
dcraw outputs "raw" data; maybe it subtracts an offset. I don't know,
and this effect, whatever is causing it, is so inconsequential that I
did not try to find out.

But all this has made me doubt myself, so I'll take some blackframes
and check again. I'll try to find and use a program not based on dcraw
to read the raw files (if such a thing exists).

From: Roger N. Clark (change username to rnclark)
John Sheehy wrote:
> "Roger N. Clark (change username to rnclark)" <username(a)qwest.net> wrote
> in news:45FE0C82.5010703(a)qwest.net:
>
>> John Sheehy wrote:
>>
>>> The Panasonic FZ50 collects as many photons at ISO 100 saturation,
>>> per unit of sensor area, as the 1DmkII. This is a real-world fact,
>>> that shows that your concern is pretty much a boogey-man story, in
>>> the range of current pixel sizes. And, even when miniaturization of
>>> the sensel *does* lead to photon loss per unit of area, it takes a
>>> huge difference in photon collection to make a difference in shot
>>> noise. Shot noise is not proportional to signal; it's proportional
>>> to its square root.
>> There is a simple reason for this "real-world fact."
>> The 1D Mark II is a CMOS sensor; CMOS sensors have lower fill
>> factors than CCDs. The FZ50 is a CCD, which generally have
>> larger fill factors.
>
> This I know.
>
>> You are comparing apples and
>> oranges.
>
> I am not "comparing" in the context you suggest. I am simply trying to
> demonstrate the fact that small pixels are not necessarily the bad thing
> they are made out to be by big pixel fanatics. Maybe you're not
> concerned, but I get very concerned about false information circulating
> as fact, or half-truths taken out of context like an evangelist quoting
> scripture for his own gain. There is a growing cult of people who
> believe that small pixels can not give good image quality, and your work
> is the most often-quoted Bible.

Me too!

>> The on pixel support electronics is why
>> there are no small pixel size CMOS sensors, because once
>> pixel size drops below about 4 microns, the active area
>> drops too much. CCD encounter similar problems around
>> 2 microns, only due to the inactive area between pixels.
>
> You don't need all of the amplification levels, though. If the pixel
> pitch halves to 4u, you can eliminate the ISO 100- and ISO 200-related
> circuit components.

This makes NO sense. As pixel size and active area drop,
the unity gain ISO drops. You don't need ISOs above the unity gain ISO,
so it is the high ISOs that are not needed. The low ISOs give
you the full well range of the sensor. Drop those low ISOs and
you just lose dynamic range, which you've already reduced by
using a smaller pixel.
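
To make "unity gain ISO" concrete, a toy calculation (illustrative
full-well numbers, not measurements; it assumes the 12-bit ADC just
reaches full well at base ISO):

def unity_gain_iso(full_well_e, base_iso=100, max_dn=4095):
    # Gain at base ISO in electrons per DN; unity gain is the ISO
    # at which 1 electron corresponds to 1 DN.
    gain_at_base = full_well_e / max_dn
    return base_iso * gain_at_base

for full_well in (80000, 40000, 20000, 10000):  # shrinking pixels
    print("%6d e- full well -> unity gain near ISO %d"
          % (full_well, unity_gain_iso(full_well)))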

> When you go smaller yet, there may be no more
> benefit in Canon's current technology at all. What if you could read 2u
> pixels with 4800 photons each with a single amplification with only 1.5
> electrons of read noise; what would be the point in having bigger pixels,
> especially if you had the option of the firmware downsampling or binning
> for you, if you didn't want all that data?

The problems with this scenario are multiple:
1) reduced dynamic range.
2) you want many more pixels, so the readout is slower and you lose
frames per second. You lose with fast action photography.
3) you lose high ISO performance.

> My main concern is that companies don't want to be bothered with higher
> pixel densities in DSLRs, and big-pixel fanaticism is exactly what they
> want people to believe, so that they don't have to move in the right
> direction for maximum IQ, or niche products.

Image quality is more than just megapixels. Signal-to-noise ratio
is very important, and that is what you are sacrificing with
smaller pixels. However, the one thing you have not thought of
that does change the equation is QE. If QE could be increased
while maintaining full well with smaller pixels, then
we would have a winner. See below.

> AFAIAC, there are huge gaps
> in current offerings. Where is the camera that takes EOS lenses that has
> a small sensor like the one in the FZ50? Imagine an FZ50 sensor
> capturing the focal plane of a 500mm or 600mm f/4L IS!

No, it would not be very good. See below.

> Imagine a more
> professional version with lower read noise. No bokeh-destroying TCs
> necessary; you can leave them home and get as much or better detail, with
> better bokeh.

The factors in image quality include resolution and signal-to-noise ratio.
With current QE and full wells, getting that wonderful quality puts
the sweet spot at about 6 to 8 microns. And that sweet spot also corresponds
to the sweet spot in 35mm camera lenses. WE ARE AT THE SWEET SPOT TODAY!

If you changed DSLR sensors to 4 microns, to give good image quality,
you would need to maintain full wells, increase QE by 3x (basically
to max: >90% QE), and improve all the lenses by about 2x in MTF
response. While all of this might happen, and I hope it does,
there are no indications of sensors that meet that criteria, and
lens designs for that improved MTF would not be cheap.

It's nice to dream of the future, but don't forget we have wonderful
performance right now. I imagine a 30D class full frame sensor,
about 22 megapixels, 5 frames per second.
That should come out soon. ;-)

Roger
From: Roger N. Clark (change username to rnclark)
acl wrote:

> What I mean is this. As you say in your webpage
> http://www.clarkvision.com/imagedetail/evaluation-nikon-d200/
> the read noise at ISO 100 corresponds to about 1 DN; 10 electrons.

Remember, a standard deviation of 1 means peak-to-peak variations of about
4 DN. It is not that you simply get 1 and only 1 all the time.
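
A one-liner makes the point (numpy assumed):

import numpy as np

x = np.random.default_rng(2).standard_normal(10_000)  # stdev = 1 DN
print("%.1f%% of samples fall within a 4 DN peak-to-peak band"
      % (100 * (np.abs(x) <= 2).mean()))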

There is another issue with the Nikon raw data: it is not true raw, but
decimated values. I think they did a good job in designing the
decimation, as they kept it below the photon noise.
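
To see how decimation can stay below the photon noise, here is a toy
square-root companding table (not Nikon's actual curve; the ~10 e-/DN
gain and the 683-entry table size are assumptions for the example):

import numpy as np

gain = 10.0                   # electrons per DN, rough ISO 100 figure
entries = 683                 # hypothetical lookup-table size
k = entries / np.sqrt(4095)   # stored code ~ round(k * sqrt(DN))

for dn in (16, 64, 256, 1024, 4095):
    step = 2 * np.sqrt(dn) / k  # spacing of adjacent codes, in DN
    shot = np.sqrt(dn / gain)   # photon noise at this level, in DN
    print("DN %4d: table step %5.2f DN, photon noise %5.2f DN"
          % (dn, step, shot))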

Roger