From: Roger N. Clark (change username to rnclark) on
John Sheehy wrote:

> The "one example" was at ISO 1600. Read noise is 3.34 electrons at ISO
> 1600 on the FZ50, and as I already said, I found afterwards that just
> pushing ISO 100 would have been better, with a read noise of 2.7 electrons.
> 3x3 2.7-electron FZ50 ISO 1600 pixels binned together will collect a max of
> 2700 electrons, with a read noise of about 8.1 electrons, quite comparable
> to a DSLR. The best Canons are about half of that; shot noise is
> significant in ISO 1600 shadows, however, and should be similar.
>
> If you don't bin, you have 3x the linear resolution.
>

Your math doesn't add up. If the FZ50 gets 4800 electrons at
ISO 100, then at ISO 1600 the most it will record is
4800/16 = 300. With 3.3 electron read noise, that is only a dynamic
range of 91. VERY poor. But I digress. Your 3 binned pixels
would then have a max signal of 900 electrons and read noise
of 5.8 electrons and a dynamic range of only 155.

The Canon 1D Mark II at ISO 1600 records ~3,300 electrons
with 3.9 electron read noise and has a dynamic range
of 850, or 9.7 stops.
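
A quick sketch of that arithmetic in Python, using the numbers above
(when binning, signals add linearly and independent read noises add
in quadrature):

  import math

  def binned_dr(max_e, read_e, n_pixels):
      """Max signal, read noise, and dynamic range for n binned pixels."""
      signal = n_pixels * max_e                 # signals add linearly
      noise = math.sqrt(n_pixels) * read_e      # noises add in quadrature
      return signal, noise, signal / noise, math.log2(signal / noise)

  print(binned_dr(300, 3.3, 1))    # FZ50 at ISO 1600: DR ~ 91
  print(binned_dr(300, 3.3, 3))    # 3 binned: 900 e-, ~5.7 e-, DR ~ 157
  print(binned_dr(3300, 3.9, 1))   # 1D Mark II, ISO 1600: DR ~ 850, ~9.7 stops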

Roger
From: Roger N. Clark (change username to rnclark) on
John Sheehy wrote:

> The Panasonic FZ50 collects as many photons at ISO 100 saturation, per unit
> of sensor area, as the 1DmkII. This is a real-world fact that shows that
> your concern is pretty much a boogey-man story, in the range of current
> pixel sizes. And, even when miniaturization of the sensel *does* lead to
> photon loss per unit of area, it takes a huge difference in photon
> collection to make a difference in shot noise. Shot noise is not
> proportional to signal; it's proportional to its square root.

There is a simple reason for this "real-world fact."
The 1D Mark II is a CMOS sensor, and CMOS sensors have lower fill
factors than CCDs. The FZ50 uses a CCD, which generally has
a larger fill factor. You are comparing apples and
oranges. The on-pixel support electronics are why
there are no small-pixel CMOS sensors: once the
pixel size drops below about 4 microns, the active area
drops too much. CCDs encounter similar problems around
2 microns, though due to the inactive area between pixels
rather than support electronics.
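
A toy model of that fill-factor squeeze (the dead-border widths here
are assumed numbers, purely to illustrate why CMOS runs out of active
area around 4 microns and CCDs around 2):

  def fill_factor(pitch_um, dead_um):
      """Fraction of the pixel that stays light-sensitive if a fixed
      dead border (support electronics or inter-pixel gap) is lost."""
      active = max(pitch_um - dead_um, 0.0) ** 2
      return active / pitch_um ** 2

  for pitch in (8.0, 6.0, 4.0, 3.0, 2.0, 1.5):
      print(pitch,
            round(fill_factor(pitch, 1.0), 2),   # assumed CMOS border, um
            round(fill_factor(pitch, 0.5), 2))   # assumed CCD border, um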

Roger
From: Scott W on
On Mar 18, 5:05 pm, "Roger N. Clark (change username to rnclark)"
<usern...(a)qwest.net> wrote:
> Hey Scott,
> Where is your test with the picket fence? As I recall,
> all attempts to downsample without artifacts pretty much failed.
> Quite interesting.
>
> Roger

Here is the test image.
http://www.pbase.com/konascott/image/69543104/original

What looks like a fence is a test pattern that is just past Nyquist
when downsampled to 25%.
What I find is that mostly we try to avoid frequencies that are
twice the Nyquist limit, as these are the ones that make strong
moiré patterns. Frequencies that are just past Nyquist create much
more subtle artifacts, and in a normal photo they are not all that
visible; the test pattern, however, shows the artifacts pretty
strongly with any of the downsample methods that people put forth.

In a perfect world we would not have any information past Nyquist, but
given that we are often left with a limited number of samples, like
what a computer screen can display, we are forced to push things a bit
if we want the photo to look at all sharp.

Now if I could have a 20-inch monitor with something like 3000x2000
pixels, life would be a lot easier.
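
Here is a 1-D stand-in for the same test in Python (not the actual
image, just a tone that lands past Nyquist after a 4:1 downsample):

  import numpy as np

  n = 4096
  t = np.arange(n)
  # 0.14 cycles/sample: after 4:1 downsampling the new Nyquist is
  # 0.125 cycles/sample, so this tone is "just past Nyquist" and aliases.
  sig = np.sin(2 * np.pi * 0.14 * t)

  naive = sig[::4]                     # plain decimation, no filtering
  binned = sig.reshape(-1, 4).mean(1)  # 4x boxcar binning

  print("naive RMS: ", naive.std())    # ~0.71: alias at full strength
  print("binned RMS:", binned.std())   # ~0.41: reduced, but still there

An alias-free downsample would leave almost nothing of that tone in
the output.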

Scott

From: acl on
On Mar 19, 8:46 am, "Scott W" <biph...(a)hotmail.com> wrote:

> In a perfect world we would not have any information past Nyquist, but
> given that we are often left with a limited number of samples, like
> what a computer screen can display, we are forced to push things a bit
> if we want the photo to look at all sharp.

But it's not so simple. Imagine using a square cutoff (a step) in
frequency space to remove all frequencies above Nyquist. We'd get
ringing artifacts, even though they are not actually caused by the
downsampling itself but by the low-pass filter. We need a smooth
rolloff. In fact, the product of the extent of the rolloff in
frequency space and the extent of the artifacts in real space should
be a constant, I think, so it's a tradeoff. Of course it depends on
the constant; if it's 10^-10, who cares. I don't know what it is.

Also, if simply removing all such frequencies (above half the
sampling rate) in any way whatsoever were sufficient to avoid artifacts,
binning 2x2 (i.e. just adding the 4 pixels together) would result in zero
artifacts. I think the point is to avoid creating artifacts by the
process of removing the high frequencies itself.
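
A quick numpy illustration of the step-cutoff ringing (the cutoff and
rolloff widths are arbitrary choices):

  import numpy as np

  n = 1024
  edge = np.where(np.arange(n) < n // 2, 0.0, 1.0)  # an ideal edge

  F = np.fft.rfft(edge)
  f = np.fft.rfftfreq(n)                  # 0 .. 0.5 cycles/sample

  hard = np.fft.irfft(F * (f < 0.125), n)               # square cutoff
  soft = np.fft.irfft(F * np.exp(-(f / 0.08) ** 2), n)  # smooth rolloff

  print("square cutoff overshoot:", hard.max() - 1.0)   # ~0.09 (Gibbs ringing)
  print("smooth rolloff overshoot:", soft.max() - 1.0)  # ~0 (no ringing)

The ringing comes entirely from the low-pass filter's shape, not from
the resampling.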

From: Paul Furman on
Thanks for the reply... a few comments below.

Roger N. Clark wrote:
> Paul Furman wrote:
>> Roger N. Clark wrote:
>>> John Sheehy wrote:
>>>
>>>> Think Canon high ISO. Less noise in electrons, at higher gain.
>>>> That's real world. The small-pixel cameras tend to have very good
>>>> read noise at ISO 100, and poor amplifiers for high ISO; worse than
>>>> pushing. Better can be done.
>>>
>>> John, you are confusing several things in your argument in this thread.
>>> 1) Small pixel size cameras are at near unity gain at low ISO.
>>> (for other readers, unity gain ISO is the ISO where 1 electron
>>> equals one count, or data number, out of the A/D converter).
>>
>> Aughhhh, I hate that I can't understand these discussions.
>>
>> Unity Gain ISO - the ISO where 1 electron = 1 count in the A/D converter
>
> Yes, the smallest integer interval. In a 12-bit A/D there are
> 2^12 = 4096 levels, i.e. 4095 steps. So the finest interval that is
> recorded is max signal into the A/D converter / 4095.
>
>> A/D Converter - analog to digital, where electrons are assigned numbers
>
> Yes. And digital converters always have an accuracy of +/- one number.
> So if the signal is 2/4095 of full signal, the answer the A/D will give
> is sometimes 1, sometimes 2, or sometimes 3.
>
>> So unity gain ISO is where there isn't a rounding error problem.
>> Read noise is the rounding problem; higher bit depth in the raw file
>> lessens read noise.
>
> Yes, assuming one can actually "see" one electron. In electronic
> sensors the noise is only a few electrons, so there is really no benefit
> to gains higher than digitizing one electron. It's really pretty
> impressive when you think about it. We are buying, for a few hundred
> dollars, devices (digital cameras) that directly detect quantum processes!
>
>> P&S cameras don't have this problem because there are so few
>> electrons, they are easy to count?
>
> Effectively, yes. They capture so few photons that 12 bits (4095 steps)
> adequately records the highest signals down to the smallest signals with
> the few-electron noise. Current electronic sensors, CCDs or CMOS, capture
> at most about 1000 to 2000 photo-electrons per square micron.
> So a 2-micron CCD pixel (4 square microns) fills up at only 4,000 to
> 8,000 electrons. 8000/4095 = 1.95 electrons per number out of
> the A/D converter. But a large pixel DSLR can have 60+ square microns of
> collection area. For example, the 1D Mark II stores a maximum of about
> 80,000 electrons (ISO 50), so the 12-bit A/D converter gives
> 80,000/4095 = 19.5 electrons per data number. If you boost the
> gain to a higher ISO, so you look at only the bottom 8,000 electrons,
> then the A/D records 1.95 electrons per number, like the small
> P&S camera, but at a much higher ISO. When you boost the gain
> so that one number in the A/D conversion is equivalent to 1 electron,
> that is the unity gain ISO. That is a factor of more than
> 16 between current small-pixel P&S cameras and large-pixel DSLRs,
> and is the fundamental reason why small sensor cameras have poor
> high ISO performance, and why they always will relative to their
> larger cousins.

OK, so a 1D Mark II can boost gain 19.5x before one A/D number equals
one electron. I see unity on your chart at 1300 (close enough). Still,
it seems like the read noise would be trivial compared to the basic
photon noise at ISO 1300.
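
If I script your numbers (assuming the gain simply scales with ISO):

  full_well_e = 80_000    # 1D Mark II max signal at ISO 50
  steps = 4095            # 12-bit A/D
  base_iso = 50

  def e_per_number(iso):
      """Electrons per A/D data number, if gain scales with ISO."""
      return (full_well_e / steps) * (base_iso / iso)

  print(e_per_number(50))     # ~19.5 e- per number
  print(e_per_number(500))    # ~1.95 e-, like a small P&S at base ISO
  print(base_iso * full_well_e / steps)   # unity gain near ISO ~977

so simple scaling puts unity a bit under ISO 1000; your chart's ~1300
is the measured value, same ballpark.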

>> Does it really matter if there are minor rounding errors? Is it really
>> noise because colors are off by 1 bit? Relevant noise is random wacky
>> hare-brained colors, not minute color shifts, right?
>
> Noise that we view is mostly due to intensity variations.
> Noise due to color shifts is called chrominance noise and is less
> bothersome. Noise in the bright parts of a scene is not objectionable
> to most viewers, but the noise becomes more obvious in night scenes
> or in shadow detail. For example, look at Figure 5 on this page:
> http://www.clarkvision.com/imagedetail/does.pixel.size.matter2
> The Canon S70 image looks pretty noisy, especially in the dark areas,
> and that is due to few-electron noise. So 1-bit noise is usually
> not a factor unless one is pushing limits (as is done in
> high ISO action photography, and night and low light photography).

The read noise (rounding errors) is going to be the difference between
30 & 31 (on a scale from -30 to 4096), and intuitively I'd guess ISO 1600
noise is more like the difference between 30 & 300; isn't that roughly in
the ballpark? Maybe it does make more of a difference in the shadows
because of the linear issue and applying a curve when setting a 'normal'
gamma. When I look at noise I see clear reds, blues & greens in what
ought to be greys, and that looks like a lot more than a few bits to me.
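
Putting rough numbers on it (shot noise = sqrt(signal), and your 3.9 e-
read noise figure for the 1D Mark II at ISO 1600 from upthread):

  import math

  read_e = 3.9   # 1D Mark II read noise at ISO 1600, electrons

  for signal_e in (10, 100, 1000, 3300):
      shot = math.sqrt(signal_e)                  # photon shot noise
      total = math.sqrt(shot ** 2 + read_e ** 2)  # combined noise
      print(signal_e, round(shot, 1), round(total, 1))

so read noise only matters down in the deep shadows, where the signal
is tens of electrons; everywhere else shot noise dominates.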

>> What is Dark Current?
>
> All electronic sensors have some electrons that leak into the
> well with the other electrons from the converted photons.
> The dark current amount is temperature dependent and that adds noise
> equal to the square root of the number of electrons accumulated
> from the dark current over the exposure. For most modern
> digital cameras with exposures less than a few tens of seconds,
> dark current is negligible. For long exposures of
> minutes it can become dominant over read noise.

Oh yeah, now I recall, it's heat-generated noise... background heat
producing apparent detail, and it can be reduced with dark frame subtraction.
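
A little arithmetic on when it starts to matter (the dark current rate
here is an assumed number; it varies a lot with temperature):

  import math

  dark_e_per_s = 0.5   # assumed dark current, electrons/pixel/second
  read_e = 3.9         # read noise from upthread, electrons

  for t in (1, 10, 30, 300, 900):   # exposure time, seconds
      dark_noise = math.sqrt(dark_e_per_s * t)   # sqrt of accumulated e-
      print(t, round(dark_noise, 1),
            "dominant" if dark_noise > read_e else "negligible")

At that assumed rate, dark noise passes read noise around half a
minute, matching the tens-of-seconds rule of thumb.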


>> What's this business of clipping at blackpoint before setting gamma?
>> That means you can set the blackpoint? How could there be negative
>> numbers? Why?
>
> Because there is noise in all signals (e.g. read noise), the natural
> fluctuations can send a measured signal to a negative voltage.
> Manufacturers usually set a small offset in the electronics
> voltage to compensate. Let's say the sensor puts out 1 volt on the
> output amplifier to the analog-to-digital (A/D) converter. Manufacturers
> add a small negative offset, like 0.02 volts, so the A/D converter
> digitizes from -0.02 volt to 1 volt. Thus zero light on the sensor
> gives about number 100 in the output raw file. Some raw converters
> subtract off that level, but some values will be less than 100, and
> in the subtracted image those values would be clipped at zero.

OK this makes sense. So including the negative shadow noise would give
blacker blacks, even though it is just random cloudiness.
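
A quick simulation of why the offset helps (2 numbers of read noise is
an assumed value):

  import numpy as np

  rng = np.random.default_rng(1)
  dark = rng.normal(0.0, 2.0, 100_000)   # zero-light pixels, true mean 0

  no_offset = np.clip(dark, 0, None)        # digitize with no pedestal
  pedestal = np.clip(dark + 100, 0, None)   # with the ~100-number offset

  print(no_offset.mean())        # ~0.8: clipping drags blacks upward
  print(pedestal.mean() - 100)   # ~0.0: subtracting the pedestal keeps
                                 # the true black, since the negative
                                 # excursions were never thrown away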


>> I don't get the charts against pixel pitch. It doesn't matter because
>> there could be some efficiency or inefficiency in the layout; the only
>> thing that matters is full well electrons, right?
>
> Actually, what matters is:
> 1) quantum efficiency of converting photons to electrons
> (typically in the 30 to 50% range in modern digital cameras,
> and that is very good),
> 2) the active area that converts photons to electrons (currently
> effectively in the 80% range, although manufacturers do not
> generally publish that number; Kodak does for its sensors), and
> 3) the full well capacity to hold those electrons.
>
> Quantum efficiency is similar for current consumer devices, so
> within a factor of two they are pretty much the same.

Only 30-50% sounds like tons of room for improvement.


> Full well
> capacity is correlated to pixel pitch, as is active area.
> Full well capacity is about 1000 to 2000 electrons per square
> micron. The vertical scatter in the pixel pitch plots you refer to is
> mostly due to the variations in active area, full well capacity,
> and quantum efficiency between devices.

The pixel pitch is a nice clue but ultimately it's not reliable data and
not really meaningful, except to show how neatly each camera has packed
its sensor. Those charts seem to me like they would be easier to read
as a simple stacked chart or bar chart.
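
Just to check that I follow the rule of thumb (1500 e-/square micron is
the middle of your 1000-2000 range, and 80% active area is from above):

  def full_well(pitch_um, density=1500, active=0.8):
      """Rough full-well estimate from pixel pitch."""
      return pitch_um ** 2 * active * density

  for pitch in (2.0, 4.0, 8.2):   # 8.2 um is roughly the 1D Mark II pitch
      print(pitch, round(full_well(pitch)))

which lands near 4,800 e- for a 2-micron pixel and near 80,000 e- for
the 1D Mark II, consistent with the numbers upthread.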


>> Signal to Noise seems clear enough: shoot a gray card & count the
>> pixels that come out some color other than gray.
>
> Not quite. Not a color, but an intensity in each red, green or
> blue channel.
>
>> What is the significance of raw noise versus final Bayer-interpolated
>> RGB values unless you are doing binning to interpolate by greatly
>> shrinking the pixel count? (if I understand the term 'binning' correctly)
>
> The raw conversion with Bayer interpolation is variable. Some converters,
> like the Canon converter, do minimal sharpening and effectively average
> pixels, reducing noise by about 1.5x. Other converters (in their default
> settings)

Well, default settings don't really matter.


> attempt to increase apparent spatial detail but at the
> expense of increased noise. RawShooter Essentials is one
> such example (that technology is now in the Photoshop CS3 beta), and it
> does very well in my experience.

I tried RSE and was not pleased at all; I like CS3/CS much better. It
may be that RSE just allowed more extreme adjustments, so I was more
likely to create unnatural-looking conversions.


> It is nice to have the high signal to noise
> ratio that large pixels give to play the game in raw conversion:
> do I want a lower-noise, lower-resolution image, or more detail
> at the expense of noise? If the signal-to-noise ratio is high
> to begin with, you can afford to push for more apparent
> spatial detail. You don't have that luxury with smaller pixels
> and the lower signal-to-noise ratios they give.
>
>> And who cares what the characteristics are before white balancing?
>> Nobody is using un-whitebalanced images and the basic WB is fairly
>> dramatic in any lighting. I can see how these things make the math
>> clean but I don't see how they are necessarily relevant in the final
>> product.
>
> Yes, I basically agree. One must have adequate S/N to white balance,
> however. For example, there are very few blue photons from
> an incandescent lamp, so after white balancing the noise in the blue
> channel can be quite large. In that case it might be better to use a color
> correction filter on the lens and a longer exposure to get more blue
> photons.
>
>> Ah, my head hurts... am I understanding?
>
> I think so. It's those who don't know what question to ask that
> are probably not understanding (unless of course they completely
> understand).
>
> Very good questions. I'll probably develop this into a web page and
> add it to my sensor analysis section.

Thanks again for taking the time.
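
P.S. Checking my understanding of the white balance point with made-up
photon counts for a grey patch under incandescent light:

  import math

  counts = {"red": 10000, "green": 5000, "blue": 400}   # assumed values

  for ch, n in counts.items():
      gain = counts["green"] / n     # white-balance multiplier
      snr = math.sqrt(n)             # shot-noise-limited S/N
      # Scaling a channel scales its noise too, so S/N is unchanged,
      # but the blue channel's poor S/N now shows at full visual weight.
      print(ch, round(gain, 2), round(snr))

The blue channel needs a 12.5x boost and only has S/N ~20 to start
with, which is why the blue noise blows up after balancing.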