From: John Sheehy on
"Roger N. Clark (change username to rnclark)" <username(a)qwest.net> wrote in
news:45F21915.5090409(a)qwest.net:

> I too agree that pattern noise is more obvious than random noise.
> Probably by at least a factor of ten. It is our eye+brain's
> ability to pick out a pattern in the presence of a lot
> of random noise that makes us able to detect many things
> in everyday life. It probably developed as a necessary
> thing for survival. But then it becomes a problem when we try
> and make something artificial and we see the defects in it.
> It gives the makers of camera gear quite a challenge.

How does that co-exist with your conclusion that current cameras are
limited by shot noise?

Saying that current cameras are limited by shot noise means that all future
improvements lie purely in well depth, quantum efficiency, fill factor, and
sensor size (you'd probably add "large pixels", but I'd disagree). The
fact is, a 10:1 S:N on the 1DmkII at ISO 100 would be 1.5 stops further
below saturation, and 1:1 would be 4.3 stops further below it, if there were
no blackframe read noise

http://www.pbase.com/jps_photo/image/75392571

and that is only statistically, without consideration for the pattern noise
effects, which widen the visual gap even further.
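As a rough numerical sketch of that gap (the 53,000-electron full well and the ~17-electron combined read + A/D noise below are illustrative round numbers, not the measured values behind the plot, so the stop counts come out slightly different):

```python
import math

FULL_WELL = 53_000   # electrons, illustrative full-well capacity
READ_NOISE = 17.0    # electrons, assumed read + A/D noise at ISO 100

def signal_for_snr(snr, read_noise):
    # Solve S / sqrt(S + read_noise^2) = snr for the signal S (electrons):
    # S^2 - snr^2 * S - snr^2 * read_noise^2 = 0
    a, b, c = 1.0, -snr ** 2, -(snr ** 2) * read_noise ** 2
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

for snr in (10.0, 1.0):
    with_read = math.log2(FULL_WELL / signal_for_snr(snr, READ_NOISE))
    shot_only = math.log2(FULL_WELL / signal_for_snr(snr, 0.0))
    print(f"S:N {snr:>4}: {with_read:.1f} stops below saturation with "
          f"read noise, {shot_only:.1f} without "
          f"({shot_only - with_read:.1f} stops lost)")
```

With these assumed numbers the 10:1 point moves about 1.2 stops and the 1:1 point about 4.1 stops, the same ballpark as the measured 1.5 and 4.3 stops quoted above.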

--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS(a)no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
From: Bart van der Wolf on

"John Sheehy" <JPS(a)no.komm> wrote in message
news:Xns98F06D6F99D10jpsnokomm(a)130.81.64.196...
> "Roger N. Clark (change username to rnclark)" <username(a)qwest.net>
> wrote
> in news:45F160FC.5020001(a)qwest.net:
>
>> The problem is that our eyes plus brain are very good at
>> picking out patterns, whether that pattern is below random
>> noise, or embedded in other patterns.

What's worse, we see non-existent patterns (e.g. a triangle in the
following link) because we want to:
<http://www.xs4all.nl/~bvdwolf/temp/Triangle-or-not.gif>.

> Yes, that is a problem, and that is exactly why you can't evaluate
> noise by standard deviation alone.

That depends on what one wants to evaluate. Standard deviation (together
with mean) only tells something about pixel-to-pixel (or sensel-to-sensel)
performance. It doesn't allow one to make valid judgements about
anything larger. Banding could be either calibrated out of the larger
structure, or an analysis of systematic noise should be done (and care
should be taken to not mistake Raw-converter effects for camera or
sensor array effects).
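As a sketch of what such a systematic-noise analysis could look like (the
frame size, the 5-electron row offsets, and the 3-electron random noise are
all made-up illustration values): averaging along each row separates
horizontal banding from pixel-level random noise, because the random
component averages down by sqrt(width) while the fixed row offset does not.

```python
import random, statistics

random.seed(0)
H, W = 32, 256
# Fixed banding: one offset per row, identical in every exposure
row_offset = [random.gauss(0, 5) for _ in range(H)]
img = [[row_offset[r] + random.gauss(0, 3) for _ in range(W)]
       for r in range(H)]

# Per-pixel standard deviation mixes banding and random noise together...
all_px = [p for row in img for p in row]
print(statistics.pstdev(all_px))     # ~sqrt(5^2 + 3^2) = 5.8

# ...but row means isolate the banding: the random part averages down
# by sqrt(W) while the fixed offsets survive intact
row_means = [statistics.fmean(row) for row in img]
print(statistics.pstdev(row_means))  # ~5: the banding component alone
```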

--
Bart

From: Bart van der Wolf on

"John Sheehy" <JPS(a)no.komm> wrote in message
news:Xns98F06DCDB2811jpsnokomm(a)130.81.64.196...
> "Roger N. Clark (change username to rnclark)" <username(a)qwest.net>
> wrote in
> news:45F21915.5090409(a)qwest.net:
>
>> I too agree that pattern noise is more obvious than random noise.
>> Probably by at least a factor of ten. It is our eye+brain's
>> ability to pick out a pattern in the presence of a lot
>> of random noise that makes us able to detect many things
>> in everyday life. It probably developed as a necessary
>> thing for survival. But then it becomes a problem when we try
>> and make something artificial and we see the defects in it.
>> It gives the makers of camera gear quite a challenge.
>
> How does that co-exist with your conclusion that current cameras are
> limited by shot noise?

Shot noise is a physical limitation, not a man made one. The man made
limitations can be improved upon.

--
Bart

From: acl on
On Mar 12, 2:11 am, "Bart van der Wolf" <bvdw...(a)no.spam> wrote:
> "John Sheehy" <J...(a)no.komm> wrote in message
>
> news:Xns98F06D6F99D10jpsnokomm(a)130.81.64.196...
>
> > "Roger N. Clark (change username to rnclark)" <usern...(a)qwest.net>
> > wrote
> > in news:45F160FC.5020001(a)qwest.net:
>
> >> The problem is that our eyes plus brain are very good at
> >> picking out patterns, whether that pattern is below random
> >> noise, or embedded in other patterns.
>
> What's worse, we see non-existent patterns (e.g. a triangle in the
> following link) because we want to:
> <http://www.xs4all.nl/~bvdwolf/temp/Triangle-or-not.gif>.
>
> > Yes, that is a problem, and that is exactly why you can't evaluate
> > noise by standard deviation alone.
>
> That depends on what one wants to evaluate. Standard deviation (together
> with mean) only tells something about pixel-to-pixel (or sensel-to-sensel)
> performance. It doesn't allow one to make valid judgements about
> anything larger.

As a matter of fact, they don't tell you anything (literally) about
pixel-to-pixel behaviour. If I tell you that a signal has mean zero
and a given standard deviation, what else can you tell me about it?
Nothing. It could be anything from an otherwise random time series to a
sine wave to a series of square waves to anything else. It's like knowing
the first two coefficients of an infinite power series (well, that's
exactly what it is: the first two terms of an infinite series of
moments).

The reason people use the first two moments (mean and standard deviation)
is that the noises under consideration are often assumed to be Gaussian,
in which case these two quantities completely characterise the noise. This
is usually a good approximation when the noise comes from many different
sources.
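To make that concrete, here is a small sketch (all numbers are illustration
values): a sine wave and Gaussian white noise scaled to the same mean and
standard deviation are indistinguishable by the first two moments, but a
lag-1 autocorrelation, one of many possible higher-order statistics,
separates them immediately.

```python
import math, random, statistics

N = 10_000
# A pure sine "pattern": mean ~0, std ~1 (amplitude sqrt(2), period 100)
sine = [math.sqrt(2) * math.sin(2 * math.pi * i / 100) for i in range(N)]
# Gaussian white noise with the same first two moments
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(N)]

def lag1_autocorr(x):
    # Correlation between each sample and its neighbour
    m = statistics.fmean(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

print(statistics.pstdev(sine), statistics.pstdev(noise))  # both ~1.0
print(lag1_autocorr(sine), lag1_autocorr(noise))          # ~0.998 vs ~0.0
```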

> Banding could be either calibrated out of the larger
> structure, or an analysis of systematic noise should be done (and care
> should be taken to not mistake Raw-converter effects for camera or
> sensor array effects).


From: Roger N. Clark (change username to rnclark) on
John Sheehy wrote:
> "Roger N. Clark (change username to rnclark)" <username(a)qwest.net> wrote in
> news:45F21915.5090409(a)qwest.net:
>
>> I too agree that pattern noise is more obvious than random noise.
>> Probably by at least a factor of ten. It is our eye+brain's
>> ability to pick out a pattern in the presence of a lot
>> of random noise that makes us able to detect many things
>> in everyday life. It probably developed as a necessary
>> thing for survival. But then it becomes a problem when we try
>> and make something artificial and we see the defects in it.
>> It gives the makers of camera gear quite a challenge.
>
> How does that co-exist with your conclusion that current cameras are
> limited by shot noise?
>
> Saying that current cameras are limited by shot noise means that all future
> improvements lie purely in well depth, quantum efficiency, fill factor, and
> sensor size (you'd probably add "large pixels", but I'd disagree). The
> fact is, a 10:1 S:N on the 1DmkII at ISO 100 would be 1.5 stops further
> below saturation, and 1:1 would be 4.3 stops further below it, if there were
> no blackframe read noise
>
> http://www.pbase.com/jps_photo/image/75392571
>
> and that is only statistically, without consideration for the pattern noise
> effects, which widen the visual gap even further.
>
Nice plot. If you look at my past posts, you would also see that
I've said for at least a couple of years 14-bit or higher A/D are
needed too because current DSLRs are limited by 12-bit converters.
Some attacked me in this NG with the idea that "if more than 12 bits were
really needed, then why haven't camera manufacturers done it?"
Well, we now see they have, and I'm sure 14 or more bits will become a
new standard in future DSLRs.

Regarding fixed pattern noise versus photon Poisson noise, your plot
and some simple illustrations show what is dominant. First clue,
look at the thousands of images on the net. How many show fixed
pattern noise? It is very rare. You tend to see fixed pattern noise
at the very lowest lows in an image. Second, if fixed pattern noise
is really a factor, guess what, you can calibrate most of it out with dark
frame subtraction. I think good examples of fixed pattern noise are
illustrated at:
http://www.clarkvision.com/photoinfo/night.and.low.light.photography
Figure 1, for example, shows two merged low-light images, and fixed pattern
noise is not apparent, nor is it the dominant noise source in the image.
Figure 2 shows the black sky above the Sydney opera house in an ISO 100
20 second exposure. Fixed pattern noise is a little over 1 bit out of 12
in the raw data. It simply is not a factor. But where the scene has
signal, e.g. the lit roof, noise is proportional to the square root
of the signal strength, with photon noise up to 18 out of 4095
in the 12-bit raw file. So, over most of the range, photon noise
dominates. The low end, the bottom few values or bottom couple of bits,
is a combination of photon noise, read noise, and fixed pattern noise.
That gives about 10 bits out of 12 with photon noise as the dominant
noise source. Again, if you work at the low end, calibrate out
the majority of fixed pattern noise with dark frames.


Let's work an example.
Let's assume fixed pattern noise is 10 times more objectionable
than random noise (this is a reasonable estimate
for me, and I find fixed pattern noise quite objectionable).
But then with processing, e.g. dark frame subtraction, it can
be reduced about 10x, then filtered and reduced more, all with
minimal impact on resolution. Random photon noise in an image
can only be reduced by pixel averaging, thus reducing spatial
resolution.
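A toy simulation of that dark-frame step (the 10-electron fixed pattern,
3-electron read noise, and 16-frame master dark are made-up numbers, not
measurements from any camera): subtracting a master dark removes the fixed
pattern while adding only a small fraction of the darks' own random noise.

```python
import random, statistics

random.seed(1)
N = 4096
# The fixed pattern: one offset per pixel, identical in every exposure
pattern = [random.gauss(0, 10) for _ in range(N)]

def dark_frame():
    # Each exposure sees the same fixed pattern plus fresh read noise
    return [p + random.gauss(0, 3) for p in pattern]

frame = dark_frame()
# Master dark: averaging 16 darks gives a low-noise estimate of the pattern
darks = [dark_frame() for _ in range(16)]
master = [statistics.fmean(px) for px in zip(*darks)]
corrected = [f - m for f, m in zip(frame, master)]

print(statistics.pstdev(frame))      # ~sqrt(10^2 + 3^2) = 10.4: pattern dominates
print(statistics.pstdev(corrected))  # ~3 * sqrt(1 + 1/16) = 3.1: pattern gone
```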

Let's use your full well depth, rounding off to 53,000 electrons.
Fixed pattern noise in DSLRs like the 20D and 1D Mark II is between 1 and
2 bits in the A/D at low ISOs. At low signal levels, line-to-line
pattern noise is on the order of 7 electrons in the 1D Mark II, with a
low frequency offset of a few tens of electrons (at ISO 100 fixed pattern
noise appears at about the 1-bit level, which is ~13 electrons). The low
frequency fixed pattern noise is entirely eliminated by a dark frame
subtraction, and line-to-line noise (what you call 1D) is reduced by about
10X with dark frame subtraction.

So there are multiple conditions. Here is one example:

ISO 100, 1D Mark II, 53,000 electron full signal:

Signal       Stops   Photon noise   Read + A/D     Fixed-pattern   Dominant noise
(electrons)          (electrons)    noise          noise           (photon, read,
                                    (electrons)    (electrons)     or pattern)

53,000          0        230            17             ~13         photon
12,250         -2        110            17             ~13         photon
 3,312         -4         57            17             ~13         photon
   828         -6         29            17             ~13         photon
   207         -8         14            17             ~13         all 3 similar
    51        -10          7            17             ~13         read + pattern

The above table demonstrates that the sensor's noise is dominated by photon
statistics over most of its dynamic range. Each generation
of cameras that comes out pushes the floor where other noise sources in the
electronics show up. It is likely we'll see the 1D Mark III push those limits
a stop or two lower. But photon noise remains, and is the ultimate
limit.
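The table above can be regenerated in a few lines, taking the 53,000-electron
full well and the 17-electron / ~13-electron read and pattern figures from the
text; photon noise is sqrt(signal) in electrons. (The signals here are exact
power-of-two steps, so they differ slightly from the rounded table values.)

```python
import math

FULL = 53_000     # electrons, full-well signal (from the text)
READ = 17.0       # electrons, read + A/D noise
PATTERN = 13.0    # electrons, residual fixed-pattern noise

print(f"{'signal':>8} {'stops':>6} {'photon':>7}  dominant")
for stops in range(0, 11, 2):
    s = FULL / 2 ** stops
    photon = math.sqrt(s)          # Poisson (photon) noise in electrons
    if photon > max(READ, PATTERN):
        dominant = "photon"
    elif photon < min(READ, PATTERN):
        dominant = "read + pattern"
    else:
        dominant = "all 3 similar"
    print(f"{s:8.0f} {-stops:6d} {photon:7.1f}  {dominant}")
```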

Here is another test series that illustrates the above conclusions:
Digital Camera Raw Converter Shadow Detail and Image Editor Limitations:
Factors in Getting Shadow Detail in Images
http://www.clarkvision.com/imagedetail/raw.converter.shadow.detail

Figure 6 shows areas from +2 to -7.6 stops. But if you look at the different
raw conversions, you'll see widely different results and wildly different
fixed pattern noise. Then look at Figure 16: the camera JPEG looks pretty
clean, with less pattern noise than some of the raw conversions.
So when you weigh photon noise against fixed pattern noise, understand
the effects of the converters too.

Roger