From: John Sheehy on
Paul Furman <paul-@-edgehill.net> wrote in
news:nizLh.8469$yW.716(a)newssvr11.news.prodigy.net:

> The read noise (rounding errors)

Read noise is not rounding errors. The blackframe read noise in Canons
is mostly real, analog noise picked up somewhere between amplification
at the sensor wells and digitization.

I'm not sure if Roger is suggesting that the read noise is just something
that happens in general before digitization or is part of the
quantization itself, but it is most certainly *NOT* the quantization. It
is analog noise, digitized.

The idea that current blackframe read noises are a hard mathematical
result of quantization is nonsense. In the absence of any analog
noise, quantization alone contributes less than 0.3 ADUs of noise (and
0.3 is a worst-case figure, one that requires a complex signal to
appear all over the image).
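That ceiling is easy to check numerically; a minimal Python sketch (my own illustration, not from any measured camera data) rounds a noiseless analog signal to integer ADUs and measures the error, which comes out near 1/sqrt(12), about 0.29 ADU:

```python
import math
import random

# Simulate quantizing a noiseless analog signal: round each value to the
# nearest integer ADU and measure the noise introduced by rounding alone.
random.seed(42)
analog = [random.uniform(0, 1000) for _ in range(100_000)]  # arbitrary clean levels
errors = [round(v) - v for v in analog]

mean = sum(errors) / len(errors)
std = math.sqrt(sum((e - mean) ** 2 for e in errors) / len(errors))

print(f"quantization-only noise: {std:.3f} ADU")  # near 1/sqrt(12) ~ 0.289
```

The errors are uniform over a one-ADU range, whose standard deviation is 1/sqrt(12), well under the 0.3-ADU worst case.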

Do you remember my images of the dock pilings, shot with the same
absolute exposure at ISOs 100 and 1600 a couple of years ago?

I quantized the ISO 1600 image to the same level as the ISO 100, and its
noise did not increase visibly at all. I had to subtract one from the
other and multiply the result greatly to even see the difference! The
ISO 100, however, looked quite noisy compared to either the quantized or
unquantized ISO 1600. Conclusion: bit depth and quantization are *NOT*
the limiters of shadow quality; analog read noise is.

Another point in this regard is that the DR of the 1DmkIII is exactly
the same as that of the 1DmkII; if the standard deviation of the
blackframe were somehow tied to the least significant bits, you would
expect the values (in native ADUs) to remain fairly constant with the 2
extra bits, but they do not - they quadruple, meaning that they have
*NOTHING* whatsoever to do with quantization, and everything to do with
analog read noise.
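The arithmetic behind that quadrupling can be sketched with hypothetical numbers (the read-noise and full-well figures below are illustrative stand-ins, not measured Canon values):

```python
# If read noise were a quantization artifact, its standard deviation in ADUs
# would stay roughly constant when bits are added. If it is analog noise,
# the same noise in electrons spans 4x as many ADUs at 14 bits as at 12.
read_noise_e = 24.0      # hypothetical analog read noise, in electrons
full_well_e = 50_000.0   # hypothetical full-well capacity, in electrons

gain_12bit = full_well_e / 2**12   # electrons per ADU with a 12-bit ADC
gain_14bit = full_well_e / 2**14   # electrons per ADU with a 14-bit ADC

noise_adu_12 = read_noise_e / gain_12bit
noise_adu_14 = read_noise_e / gain_14bit

print(noise_adu_14 / noise_adu_12)  # 4.0: the same analog noise quadruples in ADUs
```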

--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
From: Paul Furman on
John Sheehy wrote:

> Paul Furman wrote
>
>>The read noise (rounding errors)
>
> Read noise is not rounding errors.

Maybe a semantics problem? Are you two talking about the same thing?

> The blackframe read noise in Canons
> is mostly real, analog noise picked up somewhere between amplification at
> the sensor wells, and digitization.

Urgh, what is 'blackframe'?


> I'm not sure if Roger is suggesting that the read noise is just something
> that happens in general before digitization or is part of the
> quantization itself, but it is most certainly *NOT* the quantization. It
> is analog noise, digitized.

I think analog noise is what he calls plain old noise??? Rounding
errors were an issue for him in explaining why the high-bit ImagePlus
raw converter produced cleaner images, though I'm not convinced
rounding errors are significant.


> The idea that current blackframe read noises are a hard mathematical
> result of quantization is nonsense. In the absence of any analog noises,
> the quantization only makes noises of less than 0.3 ADUs (and 0.3 is a
> worst-case scenario, and requires a complex signal to appear all over the
> image).

ADU = Analog to Digital Unit?
electrons, photons, bits???


> Do you remember my images of the dock pilings, shot with the same
> absolute exposure at ISOs 100 and 1600 a couple of years ago?

Yes, that was the result of getting the detail into the higher part of
the counts so that it doesn't get trashed when the gamma curve is
applied: there's more detail in the highlights than the shadows due to
the linear-to-normal-gamma conversion, which happens after A/D
conversion, in the raw conversion step. Roger's argument is to add more
bits to the raw conversion and get more detail in the shadows that way.

> I quantized the ISO 1600 image to the same level as the ISO 100, and its
> noise did not increase visibly at all. I had to subtract one from the
> other and multiply the result greatly to even see the difference! The
> ISO 100, however, looked quite noisy compared to either the quantized or
> unquantized ISO 1600. Conclusion: bit depth and quantization are *NOT*
> the limiters of shadow quality; analog read noise is.

I'm not sure what you mean by 'quantized'. Is that the application of
normal gamma curves during raw conversion? Sorry if I'm not using the
right terms.

> Another point in this regard is that the DR of the 1DmkIII is exactly the
> same as the 1DmkII; if the standard deviation of the blackframe were
> somehow correlated to the least significant bits, you would expect the
> values to remain fairly constant with the 2 extra bits (in native ADUs),
> but they do not - they quadruple, meaning that they have *NOTHING*
> whatsoever to do with quantization, and everything to do with analog read
> noise.

You lost me here. Standard deviation refers to noise level deviating
from what it should be? I don't even really know what standard deviation
is, honestly.
From: John Sheehy on
"Roger N. Clark (change username to rnclark)" <username(a)qwest.net> wrote
in news:45FE0C82.5010703(a)qwest.net:

> John Sheehy wrote:
>
>> The Panasonic FZ50 collects as many photons at ISO 100 saturation,
>> per unit of sensor area, as the 1DmkII. This is a real-world fact,
>> that shows that your concern is pretty much a boogey-man story, in
>> the range of current pixel sizes. And, even when miniaturization of
>> the sensel *does* lead to photon loss per unit of area, it takes a
>> huge difference in photon collection to make a difference in shot
>> noise. Shot noise is not proportional to signal; it's proportional
>> to its square root.
>
> There is a simple reason for this "real-world fact."
> The 1D Mark II is a CMOS sensor; CMOS sensors have lower fill
> factors than CCDs. The FZ50 is a CCD, which generally have
> larger fill factors.

This I know.

> You are comparing apples and
> oranges.

I am not "comparing" in the context you suggest. I am simply trying to
demonstrate the fact that small pixels are not necessarily the bad thing
they are made out to be by big pixel fanatics. Maybe you're not
concerned, but I get very concerned about false information circulating
as fact, or half-truths taken out of context like an evangelist quoting
scripture for his own gain. There is a growing cult of people who
believe that small pixels can not give good image quality, and your work
is the most often-quoted Bible.

> The on-pixel support electronics are why
> there are no small-pixel CMOS sensors: once
> pixel size drops below about 4 microns, the active area
> drops too much. CCDs encounter similar problems around
> 2 microns, due only to the inactive area between pixels.

You don't need all of the amplification levels, though. If the pixel
pitch halves to 4 microns, you can eliminate the ISO 100- and ISO
200-related circuit components. If you go smaller yet, there may be no
further benefit in Canon's current technology at all. What if you could
read 2-micron pixels holding 4800 photons each, with a single
amplification and only 1.5 electrons of read noise? What would be the
point of bigger pixels, especially if you had the option of the
firmware downsampling or binning for you when you didn't want all that
data?
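The SNR arithmetic for that hypothetical 2-micron pixel, and for binning four of them into a 4-micron "super-pixel", can be sketched like this (independent noise sources add in quadrature; the figures are the hypothetical ones above):

```python
import math

# Hypothetical small pixel from the text: 4800 photons at saturation,
# read with 1.5 electrons of read noise.
signal = 4800.0
read_noise = 1.5

shot_noise = math.sqrt(signal)                    # shot noise = sqrt(signal)
total_noise = math.sqrt(signal + read_noise**2)   # shot + read, in quadrature
snr_single = signal / total_noise

# Binning 4 such pixels: signal adds linearly, noises add in quadrature.
binned_signal = 4 * signal
binned_noise = math.sqrt(binned_signal + 4 * read_noise**2)
snr_binned = binned_signal / binned_noise

print(f"single 2-micron pixel SNR: {snr_single:.1f}")  # ~69
print(f"4-pixel binned SNR:        {snr_binned:.1f}")  # ~139, like one big pixel
```

At these noise levels the binned result is essentially shot-noise-limited, which is the point: the small pixels give up almost nothing when combined.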

My main concern is that companies don't want to be bothered with higher
pixel densities in DSLRs, and big-pixel fanaticism is exactly what they
want people to believe, so that they don't have to move in the right
direction for maximum IQ, or niche products. AFAIAC, there are huge gaps
in current offerings. Where is the camera that takes EOS lenses that has
a small sensor like the one in the FZ50? Imagine an FZ50 sensor
capturing the focal plane of a 500mm or 600mm f/4L IS! Imagine a more
professional version with lower read noise. No bokeh-destroying TCs
necessary; you can leave them home and get as much or better detail, with
better bokeh.

--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
From: John Sheehy on
Paul Furman <paul-@-edgehill.net> wrote in
news:zIELh.1358$rj1.662(a)newssvr23.news.prodigy.net:

> John Sheehy wrote:
>
>> Paul Furman wrote
>>
>>>The read noise (rounding errors)
>>
>> Read noise is not rounding errors.
>
> Maybe a semantics problem? Are you two talking about the same thing?

I don't think so, based on Roger's hope that the extra 2 bits in the
mkIII will increase DR (lower read noise), which they fail to do.

>> The blackframe read noise in Canons
>> is mostly real, analog noise picked up somewhere between
>> amplification at the sensor wells, and digitization.
>
> Urgh, what is 'blackframe'?

That an "exposure" that really has no exposure; going through the motion
of an exposure, with the lens cap on. Most if not all digital cameras
have pixels that are covered, and therefore capture "blackframe pixels"
in every exposure.

>> I'm not sure if Roger is suggesting that the read noise is just
>> something that happens in general before digitization or is part of
>> the quantization itself, but it is most certainly *NOT* the
>> quantization. It is analog noise, digitized.

> I think analog noise is what he calls plain old noise??? Rounding
> errors was an issue for him in explaining why the high bit ImagePlus
> raw converter produced cleaner images, though I'm not convinced
> rounding errors are significant.

They aren't all that significant in capture, not with current read noise
levels. They are a little more significant in conversion and PP, though,
AFAICT.

>> The idea that current blackframe read noises are a hard mathematical
>> result of quantization is nonsense. In the absence of any analog
>> noises, the quantization only makes noises of less than 0.3 ADUs (and
>> 0.3 is a worst-case scenario, and requires a complex signal to appear
>> all over the image).
>
> ADU = Analog to Digital Unit?
> electrons, photons, bits???

There is no fixed relationship between ADUs and photons or electrons.
They can be expressed as a ratio to each other (the gain), but that
ratio is arbitrary and varies from system to system.

>> Do you remember my images of the dock pilings, shot with the same
>> absolute exposure at ISOs 100 and 1600 a couple of years ago?
>
> Yes, that was the result of getting the detail into the higher part of
> the counts so that when the gamma curve is applied,

Not exactly. It's not about the higher parts of the counts, per se;
it's about signal-to-noise ratios. Michael Reichmann's explanation of
"exposing to the right" introduced that vocabulary of more levels to
the right, but in real-world cameras the levels aren't nearly as
important as the S/N ratios, which increase as you expose to the right
with a given camera and ISO. When you compare one ISO against another,
or against another camera, exposing to the right in one is not
necessarily better than exposing to the left in the other.
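The S/N gain from exposing to the right can be put in numbers (a shot-noise-limited sketch; the photon counts are hypothetical):

```python
import math

# With a given camera and ISO, each stop of extra exposure doubles the
# photon count but raises shot noise only by sqrt(2), so S/N improves
# by sqrt(2) per stop -- regardless of how many levels are recorded.
for stops, photons in [(0, 1000), (1, 2000), (2, 4000)]:
    snr = photons / math.sqrt(photons)  # shot-noise-limited S/N = sqrt(photons)
    print(f"+{stops} stop(s): {photons:5d} photons, S/N ~ {snr:.1f}")
```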

> it doesn't get
> trashed: more detail in the highlights than the shadows due to linear
> conversion to normal gamma. After A/D conversion, in the raw
> conversion step. Roger's argument is to add more bits to the raw
> conversion and get more detail in the shadows that way.

I have nothing against precision; more precision is always better, even
if only by a tiny amount. In my own hand-conversions, I try to use the
full range of precision available to me: I promote RAW files to 16-bit,
multiplying the values by 16, before doing any white balancing or
demosaicing, and I even downsample at this bloated precision before
clipping the blackpoint.

My point in playing down bit depth in this thread is that it is not the
main source of shadow noise in current cameras; analog read noise is.
Roger's opinion on this is incorrect, IMO, and I have proven it by
quantizing RAW data myself. Quantization does not rear its ugly head,
in clear visibility, until you quantize so far that the standard
deviation is a bit below 1 ADU. IOW, you can turn the four least
significant bits of any ISO 1600 RAW from a current Canon into zeros
and gain only a tad of noise, and still be quite a bit cleaner than ISO
100 under-exposed by 4 stops, even though both are then quantized
exactly the same.
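That four-LSB experiment can be mimicked with simulated data (a sketch only; the 25-ADU blackframe noise is a hypothetical stand-in, not a measured Canon value):

```python
import math
import random

# Simulate an ISO 1600 blackframe: ~25 ADU of analog read noise
# (hypothetical), digitized around a blackpoint offset.
random.seed(1)
blackpoint = 1024
frame = [round(random.gauss(blackpoint, 25)) for _ in range(100_000)]

# Zero the four least significant bits (i.e. quantize to 16-ADU steps).
coarse = [v & ~0xF for v in frame]

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

s1, s2 = std(frame), std(coarse)
print(f"before: {s1:.2f} ADU, after: {s2:.2f} ADU")
# Quadrature prediction: sqrt(25^2 + (16/sqrt(12))^2) ~ 25.4 ADU -- a tad more.
```

Because the added quantization noise (16/sqrt(12) ≈ 4.6 ADU) adds in quadrature to the much larger analog noise, the visible penalty is tiny.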

We need less analog read noise, much more than we need >12-bit depths.

>> I quantized the ISO 1600 image to the same level as the ISO 100, and
>> its noise did not increase visibly at all. I had to subtract one
>> from the other and multiply the result greatly to even see the
>> difference! The ISO 100, however, looked quite noisy compared to
>> either the quantized or unquantized ISO 1600. Conclusion: bit depth
>> and quantization are *NOT* the limiters of shadow quality; analog
>> read noise is.

> I'm not sure what you mean by 'quantized'. Is that the application of
> normal gamma curves during raw conversion. Sorry if I'm not using the
> right terms.

Quantization is just the act of converting analog data to digital
integers. If no noise is added in the process, then any analog range of
values spanning one ADU winds up at that single ADU value. For systems
where absolute values matter, this means errors over some one-ADU
range, like -0.999 to 0, or -0.5 to +0.499, or 0 to +0.999; never +/- 1
as Roger suggests in other posts. In systems like RAW data, where the
blackpoint is movable and arbitrary, there is no point in viewing the
errors as anything but +/- 0.5. Of course, the analog part of the read
noise can make the spread wider than that (and always does in current
consumer products), but that's the most that quantization in and of
itself will do.

>> Another point in this regard is that the DR of the 1DmkIII is exactly
>> the same as the 1DmkII; if the standard deviation of the blackframe
>> were somehow correlated to the least significant bits, you would
>> expect the values to remain fairly constant with the 2 extra bits (in
>> native ADUs), but they do not - they quadruple, meaning that they
>> have *NOTHING* whatsoever to do with quantization, and everything to
>> do with analog read noise.

> You lost me here. Standard deviation refers to noise level deviating
> from what it should be? I don't even really know what standard
> deviation is, honestly.

It's what you get when you take a number of samples, subtract each one
from the average of them all, square each difference, average those
squares together, and take the square root of that new average.
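That recipe, written out directly (this is the population standard deviation, as you would apply it to a patch of pixel samples):

```python
import math

def standard_deviation(samples):
    """Population standard deviation, exactly as described above."""
    mean = sum(samples) / len(samples)              # average of all samples
    squares = [(s - mean) ** 2 for s in samples]    # square each deviation
    return math.sqrt(sum(squares) / len(squares))   # root of the mean square

print(standard_deviation([2, 4, 4, 4, 5, 5, 7, 9]))  # -> 2.0
```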

There is no way to tell what value a pixel is supposed to be, so the
individual deviation of a particular pixel is generally unknown. If,
however, you photograph something like a ColorChecker card, out of
focus, in even lighting, then the average within each square can be
treated as the fixed value from which everything deviates. In a real
camera, though, the light is never perfectly even and may inflate the
standard deviation for non-noise reasons. So you can instead subtract
one RAW image from another, properly registered, and measure the
standard deviation a little more accurately, as you're only measuring
what changes between frames. This ignores fixed-pattern noises, of
course, which repeat from frame to frame. Adding noise to independent
noise of equal intensity multiplies it by the square root of two, so
with this subtractive method you have to divide the measured standard
deviation by the square root of two to get the single-frame deviation
of the non-repeating noises.

With a blackframe, you know exactly what the signal is supposed to be;
nothing.

--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
From: David J. Littleboy on

"Scott W" <biphoto(a)hotmail.com> wrote:
"Roger N. Clark (change username to rnclark)" wrote:

> Where is your test with the picket fence? As I recall,
> all attempts to downsample without artifacts pretty much failed.

Here is the test image.
http://www.pbase.com/konascott/image/69543104/original
<<<<<<<<<<<<<<<<<

How's this for a first shot?

http://www.pbase.com/davidjl/image/75917810/original

David J. Littleboy
Tokyo, Japan