From: acl on
On Mar 22, 7:22 am, "Roger N. Clark (change username to rnclark)"
<usern...(a)qwest.net> wrote:
> acl wrote:
> > What I mean is this. As you say in your webpage
> >http://www.clarkvision.com/imagedetail/evaluation-nikon-d200/
> > the read noise at ISO 100 corresponds to about 1 DN; 10 electrons.
>
> Remember, a standard deviation of 1 means peak-to-peak variations of about
> 4 DN. It is not simply that you get 1 and only 1 all the time.
>

I've written papers on stochastic processes, and I know perfectly well
what a standard deviation is; the point is that if this thing occurs,
it is confined to extremely low signals. Maybe I should have replaced
"when s=n" by "when the signal is of the order of the noise", to
prevent this. Anyway, not much point in talking about this, as I think
it's gotten to the point where everybody is talking past each other
and we're just creating noise ourselves [which by now exceeds the
signal, methinks :) ]. I'll take some blackframes tomorrow and check
again.

> There is another issue with the Nikon raw data: it is not true raw, but
> depleted values. I think they did a good job in designing the
> decimation, as they made it below the photon noise.

The D200 (and the more expensive models) has an option to save
uncompressed raw data. And yes, the resolution loss is indeed below
the shot noise (using your measured values for the well depth).
Although I guess it's now my turn to point out that this noise
obviously isn't always exactly sqrt(n), so shot noise can exceed the
resolution limit by a large margin (e.g. for a uniform subject you could
get zero photons in one pixel and 80,000 in another; not terribly likely,
though), but never mind.
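
To put a number on "below the shot noise": the quick check below compares
the step size of a companding curve against sqrt(n). The square-root curve,
the 80,000 e- well depth and the 683 output levels are placeholder
assumptions for illustration, not the camera's actual lookup table.

# Compare the quantization step of an assumed square-root companding curve
# with photon shot noise. Well depth and level count are placeholders.
import numpy as np

FULL_WELL = 80_000   # electrons at saturation (assumed)
N_LEVELS = 683       # number of output code values (assumed)

codes = np.arange(N_LEVELS)
electrons = FULL_WELL * (codes / (N_LEVELS - 1)) ** 2   # electrons per code value

step = np.diff(electrons)            # electrons spanned by one code step
shot_noise = np.sqrt(electrons[1:])  # photon noise at each level, in electrons
ratio = step / shot_noise            # > 1 would mean the step exceeds the noise

print(f"worst-case step/noise ratio: {ratio.max():.2f}")

With these particular numbers the ratio stays below 1 everywhere, which is
the sense in which a square-root-shaped curve can hide its quantization
under the photon noise.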

But keep in mind that Nikons do process their "raw" data. I once wrote
a short program to count the number of pixels above a given threshold
in the data dumped by dcraw. I ran it on some blackframes. For a given
threshold, the number of these pixels increases as the exposure time
increases, up to an exposure time of 1s. At and above 1s, the number
drops immediately to zero for thresholds of x and above (I don't
remember what x was for ISO 800), except for a hot pixel which stays
there. So obviously some filtering is done starting at 1 s (maybe those
pixels are being mapped out, I don't know).
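
For the record, the counting program can be very short. A minimal sketch,
assuming the black frame has been dumped to a 16-bit PGM with something
like dcraw's document mode (dcraw -D -4); file name and threshold come
from the command line:

# Count raw pixels above a threshold in a 16-bit PGM black frame.
import sys
import numpy as np

def read_pgm16(path):
    """Read a binary (P5) PGM such as dcraw -D -4 produces (no comment lines)."""
    with open(path, "rb") as f:
        data = f.read()
    tokens, pos = [], 0
    while len(tokens) < 4:                       # magic, width, height, maxval
        while data[pos:pos + 1].isspace():
            pos += 1
        start = pos
        while not data[pos:pos + 1].isspace():
            pos += 1
        tokens.append(data[start:pos])
    pos += 1                                     # single whitespace before the raster
    width, height, maxval = int(tokens[1]), int(tokens[2]), int(tokens[3])
    dtype = ">u2" if maxval > 255 else "u1"      # 16-bit PGM samples are big-endian
    pixels = np.frombuffer(data, dtype=dtype, count=width * height, offset=pos)
    return pixels.reshape(height, width)

if __name__ == "__main__":
    path, threshold = sys.argv[1], int(sys.argv[2])
    frame = read_pgm16(path)
    print(f"{path}: {int((frame > threshold).sum())} pixels above {threshold}")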

It also looks to me (by eye) like more filtering is done at long
exposure times, but I have not done any systematic testing. Maybe
looking for correlations in the noise (in blackframes, for instance)
will show something, but if I am going to get off my butt and do so
much work I might as well do something publishable, so it won't be
this :)

Well, plus I am rubbish at programming and extremely lazy.
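
In case anyone else wants to try the correlation check, a minimal sketch;
synthetic white noise stands in for a real black frame (a real frame,
loaded as a 2-D array, would go in its place), and markedly nonzero
neighbour correlations would be one hint of filtering:

# Nearest-neighbour correlation in a (black) frame; untouched read noise
# should give values near zero, smoothing/filtering pushes them positive.
import numpy as np

def neighbor_correlations(frame):
    x = frame.astype(np.float64)
    x -= x.mean()
    var = (x * x).mean()
    right = (x[:, :-1] * x[:, 1:]).mean() / var   # correlation with pixel to the right
    down = (x[:-1, :] * x[1:, :]).mean() / var    # correlation with pixel below
    return right, down

# Stand-in data: pure white noise, so both numbers should be near zero.
fake_black = np.random.default_rng(0).normal(100.0, 2.0, size=(2000, 3000))
print(neighbor_correlations(fake_black))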

From: John Sheehy on
"Roger N. Clark (change username to rnclark)" <username(a)qwest.net> wrote
in news:4602047C.9020303(a)qwest.net:

> There is another issue with the Nikon raw data: it is not true raw,
> but depleted values. I think they did a good job in designing the
> decimation, as they made it below the photon noise.

The Leica M8 does something similar, but a little different. It writes out
8-bit gamma-adjusted RAWs as uncompressed DNG files. The RAW image is
sitting neatly in the DNGs; any program that opens ".raw" files (the kind
from before the era of digital cameras) can read them.

--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
From: John Sheehy on
"Roger N. Clark (change username to rnclark)" <username(a)qwest.net> wrote
in news:4601F854.30701(a)qwest.net:

> You are limiting the signal-to-noise ratio you derive because of
> variations in the target you are imaging.

No, that is not the problem. I am quite aware of texture; that is why I
defocus the chart heavily and use diffuse light. I also window the
visible luminance range to exaggerate contrast within the squares, so I
can clearly see any dust or texture that might be present. I look for
areas that only vary at high frequency due to noise, and create a
rectangular selection (and try others) of sufficient size to get a good
sample, but small enough that it is less likely to include a problem area.
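
For anyone who wants to reproduce this kind of measurement, a bare-bones
version of the selection statistics might look like the sketch below; the
frame is assumed to already be a 2-D array of linear raw values with the
black level subtracted, and the coordinates are placeholders.

# Mean, noise and S/N over a rectangular selection of a raw frame.
# For a Bayer mosaic you would normally sample one channel, e.g. frame[::2, ::2].
import numpy as np

def region_snr(frame, top, left, height, width):
    patch = frame[top:top + height, left:left + width].astype(np.float64)
    mean = patch.mean()
    noise = patch.std(ddof=1)        # sample standard deviation
    return mean, noise, mean / noise

# Usage (placeholder coordinates):
# mean, noise, snr = region_snr(frame, top=1200, left=1800, height=100, width=100)
# print(f"mean={mean:.1f} DN, noise={noise:.2f} DN, S/N={snr:.1f}")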

The results vary from camera to camera as well; my 20D and my FZ50 have no
such limit to S/N, but the XTi does.

--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
From: Roger N. Clark (change username to rnclark) on
John Sheehy wrote:
> "Roger N. Clark (change username to rnclark)" <username(a)qwest.net> wrote
> in news:4601F854.30701(a)qwest.net:
>
>> You are limiting the signal-to-noise ratio you derive because of
>> variations in the target you are imaging.
>
> No, that is not the problem. I am quite aware of texture; that is why I
> defocus the chart heavily and use diffuse light. I also window the
> visible luminance range to exaggerate contrast within the squares, so I
> can clearly see any dust or texture that might be present. I look for
> areas that only vary at high frequency due to noise, and create a
> rectangular selection (and try others) of sufficient size to get a good
> sample, but small enough that it is less likely to include a problem area.

Perhaps you need to look at this issue a little more closely. There are
very difficult problems in getting uniformity better than ~1%.
Here are some of the issues:

1) Even with diffuse light, it is very difficult to produce a field of
illumination that is uniform to better than a percent. Try computing
the effect of light-source distance and angle at different spots on
the target; 1/r^2 has a big impact (a quick numerical illustration
follows after this list). Scrambling the light may help, but it also
scrambles your knowledge of the field: if one part of the diffuser
has a fingerprint or a slightly different reflectance for some reason,
it produces a different field, and at the <1% level that becomes
important. I have several diffuse illuminators and have run uniformity
tests on them; none pass the 1% test in my lab.

2) At the <1% level, few targets are truly uniform. I have tested multiple
surfaces in my lab for just this issue and most fail. There are
large (many-mm) variations in Macbeth targets at the ~1% level.
Here, for example, is the Macbeth color chart:
http://www.clarkvision.com/imagedetail/evaluation-1d2/target.JZ3F5201-700.jpg
Now here is the same chart with the top and bottom rows stretched
panel by panel to show the variations:
http://www.clarkvision.com/imagedetail/evaluation-1d2/target.JZ3F5201-700-str1.jpg
There are variations over a few-mm range, small spots (those are not
sensor dust spots--they are too sharply in focus), and gradients from
one side of a patch to the other. The variations are typically a
couple of percent (which, in my opinion, is actually very good for
such a low-cost, mass-produced test target).

3) The light field projected by the lens onto the focal plane is not
uniform, even given a perfectly uniformly lit test target. You have
a) the cosine of the off-axis angle changing the apparent size of the
aperture, b) 1/r^2 changes from the center to the edge of the frame,
c) variations in optical coatings and surfaces that translate into
small variations in system throughput, and d) center rays passing
through more glass than edge rays, so light at different angles to the
optical axis passes through different amounts of glass and suffers
different absorption.
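
As the numerical illustration promised under point 1: for a single small
source at height h over a flat target, the illuminance at a lateral offset
x goes as cos(theta)/r^2 = h/(h^2 + x^2)^(3/2), so the fall-off at the
target edge is easy to compute (the distances below are arbitrary):

# Fall-off of illuminance across a flat target from a single point source.
def edge_falloff_percent(source_height_m, target_half_width_m):
    h, x = source_height_m, target_half_width_m
    e_center = 1.0 / h ** 2                      # on-axis illuminance (relative)
    e_edge = h / (h ** 2 + x ** 2) ** 1.5        # cos(theta)/r^2 at the edge
    return 100.0 * (1.0 - e_edge / e_center)

for height in (0.5, 1.0, 2.0, 4.0):
    print(f"source at {height} m, 0.15 m to target edge: "
          f"{edge_falloff_percent(height, 0.15):.2f} % darker at the edge")

Even a source a metre away leaves a few percent of fall-off across a 30 cm
target, before any of the target and lens effects above are counted.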

All of these effects may be small in photographic terms (although light
fall-off is commonly seen), but at the percent level, even the few-percent
level, they become important. Some cameras collect enough photons
that photon noise alone gives S/N > 200. With your method
you are likely limiting what you can achieve.

Try replacing the Macbeth chart with several sheets of white paper.
Take a picture and stretch it. Can you see any variation in level?
If you can't see any variation, please tell us how you compensated
for all the above effects; that would require a careful balance of
increasing the illumination off axis to counter the light fall-off
of your lens, let alone the other non-symmetric effects.
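
One way to put a number on that stretch test (the block size is an
arbitrary assumption, and the frame is assumed to be a linear 2-D array
of raw values):

# Low-frequency non-uniformity of a defocused "flat" frame: block-average
# to suppress pixel noise, then report the peak-to-peak spread in percent.
import numpy as np

def flat_field_variation(frame, block=64):
    h, w = frame.shape
    h, w = h - h % block, w - w % block          # trim to whole blocks
    blocks = frame[:h, :w].reshape(h // block, block, w // block, block)
    means = blocks.mean(axis=(1, 3))             # one mean per block
    return 100.0 * (means.max() - means.min()) / means.mean()

# Usage (frame loaded elsewhere):
# print(f"low-frequency variation: {flat_field_variation(frame):.2f} %")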

If you are testing sensors and want answers better than 10%, your
method requires the illumination field to be uniform to ten times
better than the photon noise, i.e. roughly 0.05% when the photon S/N
is 200. There is a reason why sensor manufacturers have adopted the
methods in use today. Your method, even with the target defocused
(which introduces other issues), probably can't even meet a 2%
uniformity requirement.

(Actually I tried this too, thinking I could speed up the
testing. It became obvious in my first tests it didn't work.)

(I've designed illumination sources for laboratory spectrometers
for 25+ years, where requirements are quite tight.)

Roger

>
> The results vary from camera to camera as well; my 20D and my FZ50 have no
> such limit to S/N, but the XTi does.
>
From: John Sheehy on
Lionel <usenet(a)imagenoir.com> wrote in
news:9si60396qh59d9uno14lcmevko3f8n4gqa(a)4ax.com:

> PS: I've stopped responding to John's posts on this topic, because the
> weird misconceptions he's expressing about data acquisition technology
> are getting so irritating that I feel more like flaming him than
> educating him.

What misconceptions?

Almost every reply you or Roger has made to me has ignored what I have
actually written, and assumed something else entirely.

Look at the post you just replied to; I made it quite clear in the post
Roger responded to that the effect only happens with *ONE CAMERA*,
yet Roger replied as if my technique were at fault in some elementary
way. He didn't pay attention, and *YOU* didn't pay attention either, as
is obvious from your ganging up with him and failing to point out to him
that it only happened with one camera.

Did you even notice that fact? (That post wasn't the first time I said
it was only one camera, either).

Did you point out to Roger that when he wrote that ADCs have an error of
+/- 1 DN, with no range of errors quoted amongst different ADCs and the
figure coming out to exactly 1 DN, it would seem that he was writing
about the rounding or truncation aspect of the quantization itself, but
mistakenly doubled? Surely, if he were talking about ADC noise not due
directly to the mathematical part of quantization, he would have given
the range of error for the best, the worst, or a typical ADC, none of
which would likely be exactly +/- 1.
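
For reference on the numbers in that paragraph: the purely mathematical
part of quantization (ideal rounding to whole DN) gives errors bounded by
+/- 0.5 DN with a standard deviation of 1/sqrt(12), about 0.29 DN. A quick
simulation, nothing camera-specific:

# Error from ideal rounding quantization alone: bounded by +/- 0.5 DN,
# standard deviation 1/sqrt(12). Truncation shifts the range to (-1, 0]
# DN but leaves the same spread.
import numpy as np

signal = np.random.default_rng(1).uniform(0.0, 4095.0, size=1_000_000)
error = np.round(signal) - signal

print(f"max |error|:  {np.abs(error).max():.3f} DN")   # ~0.5
print(f"std of error: {error.std():.3f} DN")           # ~0.289
print(f"1/sqrt(12):   {1.0 / np.sqrt(12.0):.3f} DN")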

It was not my fault that I thought he was talking about the mathematical
aspect; he, as usual, is sloppy with his language, and doesn't care that
it leads to false conclusions. He is more interested in maintaining his
existing statements than seeking and propagating truth.

If anyone is weird here, it is you and Roger. You agree with and support
each other when an "adversary" appears, no matter how lame your
statements or criticisms.

Where was Roger when you implied that microlenses can affect dynamic
range, without qualifying that you meant mixing sensor well depths *and*
microlenses? Or perhaps you didn't even have that in mind the first time
you did; you came up with that exceptional, non-traditional situation to
make yourself right, without giving me a chance to comment on such an
unusual arrangement, to which I would have immediately said that
different well depths and/or sensitivities would affect overall system
DR. Your use of different well depths in the example brings things to
another level of dishonesty on your part. That was nothing short of
pathetic.

--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><