From: John Sheehy on
"Roger N. Clark (change username to rnclark)" <username(a)qwest.net> wrote
in news:4601FEBA.4080801(a)qwest.net:


> Well, let's look at this another way. Go to:
> http://www.clarkvision.com/imagedetail/dynamicrange2
>
> 4 bits is DN = 16 in the 0 to 4095 range. In a 16-bit
> data file, that would be 16*16 = 256.
>
> Now go to Figure 7 and draw a vertical line at 256 on the
> horizontal axis. Now note all the data below that line that
> you cut off. Now go to Figure 8b and draw a vertical line
> at 4 stops, and note all the data you cut off. Now go to
> Figure 9D and draw the vertical line at 256 and
> note all the data you cut off. (Note too how noisy the
> 8-bit jpeg data are.)

You can't just divide by 16 to drop 4 LSBs; 0 through 15 all become 0. You
have to add 8 first, then divide by 16 (integer division), then multiply
by 16, and subtract the 8, to get something similar to what you would get
if the ADC were actually doing the quantization. The ADC is working with
analog noise that dithers the results; you lose that benefit when you
quantize data that is already quantized. You won't notice the offset when
the full range of DNs is high, but for an image where a small range of DN
covers the full scene DR, it is essential. I am amazed that you didn't
stop and say to yourself, "I must have done something wrong" when you saw
your quantized image go dark. That's what I said to myself the first time
I did it. I looked at the histograms, saw the shift, and realized that an
offset is needed unless it is very small relative to the full range of
the scene.
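The difference between plain truncation and the add-half-the-step method can be sketched in a few lines of numpy; the signal level, noise, and sample count below are illustrative, not taken from any of the cameras discussed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 12-bit shadow data: a true level of 100 DN plus ~20 DN of
# analog noise, rounded by the ADC and clipped to the 0..4095 range.
raw = np.clip(np.round(100 + rng.normal(0, 20, 100_000)), 0, 4095).astype(int)

step = 16  # dropping 4 LSBs

truncated = (raw // step) * step              # 0..15 collapse to 0; mean shifts down ~8 DN
rounded = ((raw + step // 2) // step) * step  # add 8 first, then integer-divide
# (the method described above also subtracts the 8 again afterward to
# re-center the levels)

print(raw.mean(), truncated.mean(), rounded.mean())
```

Truncation biases every pixel downward by half a step on average, which is why the quantized image goes dark; adding half the step before dividing rounds to the nearest coarse level and leaves the mean essentially unchanged.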

In the case of the mkIII image at 14, 12, 11, and 10 bits in another
post, I used PS' Levels, because it simplifies the process by doing the
necessary offset to keep the distribution of tones constant.


--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
From: ASAAR on
On Sat, 24 Mar 2007 13:46:43 GMT, John Sheehy wrote:

> Almost every reply you or Roger has made to me has ignored what I have
> actually written, and assumed something else entirely.

. . .

> It was not my fault that I thought he was talking about the mathematical
> aspect; he, as usual, is sloppy with his language, and doesn't care that
> it leads to false conclusions. He is more interested in maintaining his
> existing statements than seeking and propagating truth.

Ah, déjà vu, yet again. You've distilled l'essence du Roger.


> made obvious by your ganging up with him

That too is reminiscent of one of Lionel's bizarre, out-of-the-blue,
unprovoked attacks, coming across almost as an RNC sock puppet. I
wouldn't be surprised if it were true. It may be in the stars!

From: Paul Furman on
acl wrote:

> On Mar 22, 7:22 am, "Roger N. Clark (change username to rnclark)"
> <usern...(a)qwest.net> wrote:
>
>>acl wrote:
>>
>>>What I mean is this. As you say in your webpage
>>>http://www.clarkvision.com/imagedetail/evaluation-nikon-d200/
>>>the read noise at ISO 100 corresponds to about 1 DN; 10 electrons.
>>
>>Remember, a standard deviation of 1 DN means peak-to-peak variations of
>>about 4 DN. It is not simply that you get 1 and only 1 all the time.
>>
>
>
> I've written papers on stochastic processes, and I know perfectly well
> what a standard deviation is; the point is that if this thing occurs,
> it is confined to extremely low signals. Maybe I should have replaced
> "when s=n" by "when the signal is of the order of the noise", to
> prevent this. Anyway, not much point in talking about this, as I think
> it's gotten to the point where everybody is talking past each other
> and we're just creating noise ourselves [which by now exceeds the
> signal, methinks :) ]. I'll take some blackframes tomorrow and check
> again.
>
>
>>There is another issue with the Nikon raw data: it is not true raw, but
>>decimated values. I think they did a good job in designing the
>>decimation, as they kept it below the photon noise.
>
>
> The D200 (and more expensive models) have an option to save
> uncompressed raw data. And yes, the resolution loss is indeed below
> the shot noise (using your measured values for the well depth).
> Although I guess it's now my turn to point out that this noise
> obviously isn't always sqrt(n) so shot noise can exceed the resolution
> limit (eg for a uniform subject it could be that you get zero photons
> in one pixel and 80000 in the other; not terribly likely, though), but
> never mind.
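The claim that the compressed-NEF level gaps sit below the photon noise can be sketched quickly. Nikon's lossy NEF actually uses a lookup table; the square-root curve and code count below are a stand-in for it, and the well depth is just the figure quoted above:

```python
import numpy as np

# Hypothetical square-root style encoding into 683 codes; the real lossy
# NEF table differs, so these numbers are illustrative only.
full_well = 80_000                 # electrons, the figure quoted above
codes = 683

levels = np.linspace(0, np.sqrt(full_well), codes) ** 2  # decoded electron values
steps = np.diff(levels)                                  # gap between adjacent codes
shot_noise = np.sqrt(levels[1:])                         # photon noise at each level

# fraction of codes whose gap to the next level stays below the shot noise there
print(np.mean(steps < shot_noise))
```

With a square-root spacing, the gap grows roughly in proportion to the shot noise, so the ratio stays below 1 across the whole range; that is the sense in which the lost resolution "just records noise more accurately".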

I finally took a shot where I wished I'd turned off RAW compression on
my D200. It was the new moon, shot mid-day almost straight up, kinda
hazy, at +2 EC just before blowing out, then darkened in PP to a black
sky, and the remaining moon detail was pretty badly posterized. I
actually got it to look good with a lot of PP work, so I can't easily
show the problem, but I guess that was the cause. A rather unusual
situation.


> But keep in mind that Nikons do process their "raw" data. I once wrote
> a short program to count the number of pixels above a given threshold
> in the data dumped by dcraw. I ran it on some blackframes. For a given
> threshold, the number of these pixels increases as the exposure time
> increases, up to an exposure time of 1s. At and above 1s, the number
> drops immediately to zero for thresholds of x and above (I don't
> remember what x was for ISO 800), except for a hot pixel which stays
> there. So obviously some filtering is done starting at 1s (maybe
> they're mapped, I don't know).
>
> It also looks to me (by eye) like more filtering is done at long
> exposure times, but I have not done any systematic testing. Maybe
> looking for correlations in the noise (in blackframes, for instance)
> will show something, but if I am going to get off my butt and do so
> much work I might as well do something publishable, so it won't be
> this :)
>
> Well, plus I am rubbish at programming and extremely lazy.
>
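The blackframe test described above can be reproduced in a few lines; the offset, noise level, and frame size here are made-up stand-ins, and real data would be loaded from a raw dump (e.g. `dcraw -D -4` output) instead:

```python
import numpy as np

def count_above(frame, threshold):
    """Count pixels above a DN threshold, as in the blackframe test described."""
    return int(np.count_nonzero(frame > threshold))

# Synthetic stand-in for a blackframe: a bias offset of 128 DN with ~3 DN
# of read noise (illustrative numbers, not measured from any camera).
rng = np.random.default_rng(1)
blackframe = np.round(rng.normal(128, 3, (2000, 3000))).astype(int)

print(count_above(blackframe, 140))  # only the far noise tail survives
```

Running this over blackframes at increasing exposure times, as described, would show the tail count rising with exposure; a sudden drop to zero above some threshold is the signature of in-camera filtering.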
From: acl on
On Mar 25, 6:52 am, Paul Furman <p...@-edgehill.net> wrote:

>
> I finally took a shot where I wished I'd turned off RAW compression on
> my D200. It was the new moon, shot mid-day almost straight up, kinda
> hazy at +2 EC just before blowing then darkened in PP to a black sky and
> the remaining moon detail was pretty badly posterized. I actually got it
> to look good with a lot of PP work so I can't easily show the problem
> but I guess that was the cause. A rather unusual situation.

That's interesting; I never managed to see any difference between
compressed and uncompressed raw. Even when I tried to force it (by
unrealistically extreme processing) I couldn't see it, even by
subtracting the images in photoshop. Is it easy for you to post this
somewhere? From what you say, it sounds like you did some heavy
processing; did you do it in 16 bits or 8 (I mean, after conversion)?
This sort of extreme adjustment is just about the only place where I
can see a difference between 8 and 16 bit processing (or 15 bit or
whatever it is that photoshop actually uses).

On the one hand, I find it hard to believe it's the compression: the
gaps between the levels that are present are smaller than the
theoretical photon noise, so basically the extra tonal resolution of
uncompressed raw just records noise more accurately [and since you
can't really see shot noise in reasonably high-key areas, that tells
you it's irrelevant resolution anyway]. On the other hand, who knows?
Maybe there is some indirect effect.
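The kind of extreme adjustment where 8-bit versus 16-bit processing becomes visible can be sketched like this; the gradient range and push factor are arbitrary illustrative choices:

```python
import numpy as np

# A smooth deep-shadow ramp pushed with an extreme brightening, once in
# floating point and once after quantizing to 8 bits first.
grad = np.linspace(0.0, 0.05, 1024)        # 0..1 scale shadow gradient

def push(x):
    return np.clip(x * 18.0, 0.0, 1.0)     # extreme exposure push

hi_prec = push(grad)                        # high-precision pipeline
eight = push(np.round(grad * 255) / 255)    # quantize to 8 bits, then push

# the 8-bit path collapses the smooth ramp into a handful of posterized steps
print(len(np.unique(hi_prec)), len(np.unique(eight)))
```

The float path keeps every distinct input level; the 8-bit path has only about a dozen codes spanning the whole shadow ramp, which shows up as banding once the push spreads them across the tonal range.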


From: Roger N. Clark (change username to rnclark) on
John Sheehy wrote:
> Almost every reply you or Roger has made to me has ignored what I have
> actually written, and assumed something else entirely.
>
> Look at the post you just replied to; I made it quite clear in my post
> that Roger responded to, that the effect only happens with *ONE CAMERA*,
> yet Roger replied as if my technique were at fault, in some elementary
> way. He didn't pay attention, and *YOU* didn't pay attention, made
> obvious by your ganging up with him and failing to point out to him that
> it only happened with one camera.
>
> Did you even notice that fact? (That post wasn't the first time I said
> it was only one camera, either).

I look at the big picture. It's not just one line of one of
your responses that I have been responding to.
Here are some of your posts, which involve MULTIPLE cameras:

You said:
> The results vary from camera to camera as well; my 20D and my FZ50 have no
> such limit to S/N, but the XTi does.
and responding to data I've presented:
> Those 10D figures are way off. They are 1.9, 2.8, 4.9, 9.0, and 18.0.
> Perhaps your figures were taken from a blackpointed RAW blackframe.
and:
> I don't recall seeing values this low at the low ISOs in the Nikon RAW
> files I had.
and data others have derived using the same methods I use:
> The 5D figure is very high for ISO 1600, also. The 5D ISO 1600
> blackframes I have here are all 4.6.
and then you discuss conclusions from other cameras:
> Here's the shadow area of a 1DmkIII ISO 100 RAW, at the original 14 bits,
> and at quantizations to 12, 11, and 10 bits:
> http://www.pbase.com/jps_photo/image/76001165
> The demosaicing is a bit rough; it's my own quick'n'dirty one,

and these are just from a couple of your many posts in this thread.

What I see is you attacking the data on multiple cameras from multiple
sources, all of which paint a consistent picture. But as the details
of your own testing come out and are shown to be inadequate,
you start the personal attacks. A more appropriate response
would be to 1) verify that your methods actually do not suffer
from the problems I outlined, and 2) then explain why your results
with your methods are actually correct and why they are better
than those using industry standards.

Roger