From: John Sheehy on
"Roger N. Clark (change username to rnclark)" <username(a)qwest.net> wrote
in news:4600A9A2.6020506(a)qwest.net:

> Here is a demo: See figure 9 at:
> http://www.clarkvision.com/photoinfo/night.and.low.light.photography

> Here is the original raw data converted linearly in IP, scaled by 128:
> http://www.clarkvision.com/photoinfo/night.and.low.light.photography/nightscene_linear_JZ3F7340_times128-876px.jpg

> Now here is the same data with the bottom 4 bits truncated:
> http://www.clarkvision.com/photoinfo/night.and.low.light.photography/nightscene_linear_JZ3F7340-lose-4bits_times128-876px.jpg

> You lose quite a bit in my opinion.
> It would be a disaster in astrophotography.

*You* do. I never have the blackpoint drift up like that when I
truncate/quantize data; the effect is usually subtle. The overall
intensity should remain almost the same. You are doing something wrong,
I think.
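One way a truncation step can move the blackpoint is the floor-versus-
round choice. A minimal sketch with made-up numbers (not either
poster's actual pipeline), assuming linear values in DN:

```python
import numpy as np

rng = np.random.default_rng(0)
dn = rng.normal(300.0, 25.0, 100_000)   # hypothetical shadow values, in DN
step = 16                               # dropping the bottom 4 bits

floored = np.floor(dn / step) * step    # truncation: biases every value down ~step/2
rounded = np.round(dn / step) * step    # rounding: mean is nearly unchanged

print(dn.mean(), floored.mean(), rounded.mean())
```

With rounding, the quantization error averages to zero, so the overall
intensity of the frame is preserved; flooring shifts every value down
by half a step on average, which shows up as a blackpoint offset after
any subsequent black subtraction or scaling.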

Part of the problem might be that you are using tools that hide what
they're really doing from you. I see references to "linear conversions"
in your texts. You should do all the steps yourself, under your control,
so you know *exactly* what is happening to the data at every step of the
way. IRIS, DCRAW with the "-D" parameter, and loading the RAW images
from uncompressed DNGs are the only ways I know of to get the real RAW
data (MaximDL as well, I think).

Note, I didn't say that ISO 1600 suffers nothing at all from 8-bit
quantization; I said that it is still better than ISO 100 pushed to the
same EI.

--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
From: acl on
On Mar 21, 7:15 am, "Roger N. Clark (change username to rnclark)"
<usern...(a)qwest.net> wrote:
> John Sheehy wrote:
>
> >> Nikon D50 1.8 4.0
> >> Nikon D200 1.3 2.0 3.8 7.4 15.
>
> > I don't recall seeing values this low at the low ISOs in the Nikon RAW
> > files I had. These are probably taken literally from the RAW blackframe,
> > so they are automatically reduced to about 60% of what they'd be if they
> > weren't black-clipped, like the Canons.
>
> Well, perhaps you could examine the real data, e.g.:
> http://www.clarkvision.com/imagedetail/evaluation-nikon-d200
>
> I don't just do a dark frame measurement; I analyze the
> noise and response over the entire range of the sensor and model
> the results. See Figure 1 on the above web page. You'll see the
> largest deviation from the model is less than 10%, and I have
> light levels down to DN 16 (out of 4095). Where is your data
> that proves this is wrong?

But if indeed the signal is clipped to what would have been zero had
there been no noise, then this would start to be visible only when the
signal and the standard deviation are roughly equal (since if the
signal is higher, the noise doesn't reach zero so doesn't get
clipped).

This would be invisible on the scale of figure 1. But if I understand
correctly, you obtained the values for the read noise by measuring the
output s and the noise n and fitting the noise curve to
n(s)=sqrt(f^2+m) where m is the number of electrons and f the fixed
noise? [that is, you determine f from this]? In that case indeed your
value for f would be the true read noise, because the deviation from
the model caused by this clipping would be over a tiny range of values
to the extreme left of your plot and wouldn't affect the fitting
appreciably.
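The fit described here can be sketched numerically. This is an
illustrative reconstruction with hypothetical numbers, not Clark's
actual analysis: synthesize measurements from n(s) = sqrt(f^2 + m),
then recover f by least squares (a simple grid search, to keep it
self-contained):

```python
import numpy as np

def noise_model(m, f):
    # n(m) = sqrt(f^2 + m): shot-noise variance m (electrons) plus fixed noise f
    return np.sqrt(f ** 2 + m)

rng = np.random.default_rng(1)
f_true = 5.0                          # hypothetical read noise, electrons
m = np.logspace(1, 4, 20)             # signal levels, electrons
n_meas = noise_model(m, f_true) * rng.normal(1.0, 0.01, m.size)  # 1% scatter

# least squares over a grid of candidate f values
f_grid = np.linspace(0.0, 20.0, 2001)
sse = np.array([((noise_model(m, f) - n_meas) ** 2).sum() for f in f_grid])
f_fit = f_grid[sse.argmin()]
print(f_fit)
```

Because f only matters where m is comparable to f^2, a small
black-clipping deviation at the extreme low end barely moves the
fitted value, which is the point being made above.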

Anyway, my D200 does clip the noise at zero (ie the stdev is
abnormally low for very low signals). Not that this contradicts your
results or has any practical significance (that I can tell).
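The clipped-stdev effect is easy to reproduce with a toy model
(hypothetical numbers): clip a Gaussian "signal plus read noise" at
zero, and the measured standard deviation only drops once the signal
falls to within about one sigma of zero:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 3.0                                  # hypothetical read noise, DN
for signal in (30.0, 3.0, 0.0):
    samples = rng.normal(signal, sigma, 100_000)
    clipped = np.clip(samples, 0.0, None)    # black-clip at zero
    print(signal, samples.std(), clipped.std())
```

At 10 sigma of signal the clip does nothing; at zero signal the
distribution is half-normal and the stdev is abnormally low, matching
the behavior described for the D200.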


From: John Sheehy on
"acl" <achilleaslazarides(a)yahoo.co.uk> wrote in
news:1174524772.426623.293260(a)l77g2000hsb.googlegroups.com:

> This would be invisible on the scale of figure 1. But if I understand
> correctly, you obtained the values for the read noise by measuring the
> output s and the noise n and fitting the noise curve to
> n(s)=sqrt(f^2+m) where m is the number of electrons and f the fixed
> noise? [that is, you determine f from this]? In that case indeed your
> value for f would be the true read noise,

Not necessarily. Many cameras have fairly significant noise that is
neither in the blackframe nor shot noise. There are basically three
components I've seen: the fixed, blanket noise (blackframe noise), the
shot noise, and noise that is directly proportional to signal. If the
latter type is significant, the camera will fail to reach the maximum
S/N dictated by the photon count. My XTi is certainly like this; the
top 1.5 stops or so at ISO 100 have the same S/N, about 100:1. Failure
to account for it leads to an estimate of a lower photon count than the
actual one. When I measure shot noise in low-ISO highlights, I divide
the signal in DN by the standard deviation in DN of a completely OOF
patch of a solid area with no texture or shadows (like the color
checker squares), in a single color channel of the RAW data (treating
green as two different, but similar, channels), and square the result
of the division. I consider this number to be the *minimum* number of
photons, not *the* number.
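A toy version of this measurement, with hypothetical gain and photon
numbers (the gain cancels in the ratio, which is what makes the
DN-based measurement work):

```python
import numpy as np

rng = np.random.default_rng(3)
true_photons = 10_000                 # hypothetical mean photon count per pixel
gain = 0.25                           # hypothetical DN per photon

# a smooth, defocused, shot-noise-limited patch in one channel, in DN
patch = rng.poisson(true_photons, (64, 64)) * gain

snr = patch.mean() / patch.std()      # signal / standard deviation, in DN
min_photons = snr ** 2                # minimum photon count estimate
print(round(min_photons))
```

Any extra noise source (PRNU, target texture, read noise) lowers the
measured S/N and therefore lowers (S/N)^2, which is why the result is
a minimum rather than the actual photon count.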

There may be some other noise correlations as well, which I have not
noticed yet (albeit low in intensity).


--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
From: John Sheehy on
"Roger N. Clark (change username to rnclark)" <username(a)qwest.net> wrote
in news:4600B417.9010205(a)qwest.net:

> The ADUs (DNs) are errors introduced by 1) sensor noise + 2) analog
> gain amplifier noise + 3) A/D converter noise and converter errors.
> It's not a straight line increase because one of those three dominates
> at one end and another dominates at the other end of the ISO.
> #1 and 2 are strongly coupled. 1+2 dominates at the high ISO,
> #3 dominates at the low ISO in the above sensors. We are all
> hoping that #3 will be less in the new canon 1DMIII with the
> 14-bit converter. And Canon says that is the case.
> I hope they are right.

Here's the shadow area of a 1DmkIII ISO 100 RAW, at the original 14 bits,
and at quantizations to 12, 11, and 10 bits:

http://www.pbase.com/jps_photo/image/76001165

The demosaicing is a bit rough; it's my own quick'n'dirty one, but it
is applied homogeneously at all quantization levels, and I gave the
three with extra quantization the same bit depth for demosaicing as the
14-bit version (they all have the same precision for processing). I
gave them all a little USM (0.5 px / 120%), which emphasizes the noise
a little. These are all pushed from ISO 100 to 3200; the full tonal
range of these images is linear, and represents the lowest 256 photonic
levels (1024 through 1279) of the 15,280 usable levels of the ISO 100
RAW.
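The extra-quantization step can be sketched like this (an illustrative
reconstruction with made-up numbers, not the actual code): round the
14-bit DNs to the step size of the coarser depth while keeping the
14-bit scale, so the images stay directly comparable:

```python
import numpy as np

def requantize(dn14, bits):
    """Round 14-bit DN values to the nearest step of a coarser bit
    depth, keeping the original 14-bit scale."""
    step = 1 << (14 - bits)
    return np.round(dn14 / step) * step

rng = np.random.default_rng(4)
shadow = rng.normal(1100.0, 20.0, 100_000)  # hypothetical deep-shadow DNs
for bits in (12, 11, 10):
    q = requantize(shadow, bits)
    print(bits, q.mean(), q.std())
```

Because the noise here (sigma of roughly 20 DN) exceeds even the
10-bit step (16 DN), the mean is preserved and quantization only adds
about step/sqrt(12) of extra noise, which is why pushed shadows can
look so similar across bit depths.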

--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
From: Roger N. Clark (change username to rnclark) on
John Sheehy wrote:
> "acl" <achilleaslazarides(a)yahoo.co.uk> wrote in
> news:1174524772.426623.293260(a)l77g2000hsb.googlegroups.com:
>
>> This would be invisible on the scale of figure 1. But if I understand
>> correctly, you obtained the values for the read noise by measuring the
>> output s and the noise n and fitting the noise curve to
>> n(s)=sqrt(f^2+m) where m is the number of electrons and f the fixed
>> noise? [that is, you determine f from this]? In that case indeed your
>> value for f would be the true read noise,
>
> Not necessarily. Many cameras have fairly significant noise that is
> neither in the blackframe nor shot noise. There are basically three
> components I've seen: the fixed, blanket noise (blackframe noise), the
> shot noise, and noise that is directly proportional to signal. If the
> latter type is significant, the camera will fail to reach the maximum
> S/N dictated by the photon count. My XTi is certainly like this; the
> top 1.5 stops or so at ISO 100 have the same S/N, about 100:1. Failure
> to account for it leads to an estimate of a lower photon count than the
> actual one. When I measure shot noise in low-ISO highlights, I divide
> the signal in DN by the standard deviation in DN of a completely OOF
> patch of a solid area with no texture or shadows (like the color
> checker squares), in a single color channel of the RAW data (treating
> green as two different, but similar, channels), and square the result
> of the division. I consider this number to be the *minimum* number of
> photons, not *the* number.
>
> There may be some other noise correlations as well, which I have not
> noticed yet (albeit low in intensity).

You are limiting the signal-to-noise ratio you derive because of
variations in the target you are imaging. E.g., the Macbeth color
checker is made of paper, which has a fine texture. Illuminate
the chart at a low angle and this becomes more obvious.
Those small variations in the target translate to small
variations in intensity from pixel to pixel and result
in your lower S/N. I initially tried to do this too, in order
to speed up testing, but hit this problem. I've encountered
this problem at work in testing sensors too (it is more difficult
when you are trying to evaluate sensors in flight on aircraft
and spacecraft). I have found the only reliable way is
the method used by the sensor manufacturers, which uses
pairs of full-field illumination. That method also avoids
scattered light, which can influence the lowest-signal
measurements and thus correct dynamic-range evaluation.
Details are available on my website and references therein:

Procedures for Evaluating Digital Camera
Sensor Noise, Dynamic Range, and Full Well Capacities;
Canon 1D Mark II Analysis
http://www.clarkvision.com/imagedetail/evaluation-1d2
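The pairs-of-flats method referenced above is standard photon-transfer
practice; a toy sketch with hypothetical numbers shows why it removes
target texture and fixed-pattern variation:

```python
import numpy as np

rng = np.random.default_rng(5)
fixed_pattern = rng.normal(0.0, 5.0, (256, 256))   # per-pixel PRNU/texture, DN

def flat_exposure(level):
    # one full-field flat: signal + fixed pattern + temporal noise (3 DN)
    return level + fixed_pattern + rng.normal(0.0, 3.0, (256, 256))

a = flat_exposure(1000.0)
b = flat_exposure(1000.0)

single_std = a.std()                    # inflated by fixed pattern: ~sqrt(5^2 + 3^2)
pair_std = (a - b).std() / np.sqrt(2)   # fixed pattern cancels: ~3 DN
print(single_std, pair_std)
```

Subtracting two identical flat exposures cancels everything that is the
same in both frames, so the difference frame's stddev over sqrt(2)
isolates the true temporal noise.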

Roger