Why Canon, when Nikon...
Thread started 28 Jul 2014 (Monday) 15:27

 
davesrose
Aug 17, 2014 13:58 | #346

AJSJones wrote in post #17101635
The topic, where I came in, was only the DR of the sensors from Sony and Canon. All other discussions of DR would be the same for both sensors, and therefore not relevant to the difference between them. Those other items are important, but don't affect the comparison of the sensors.

There have been a lot of topics raised in this thread: it's mainly a thread meant to bash Canon and praise Sony on anything :D You have said DR is the saturation limit (or point, whatever semantics you want to get into) vs the noise limit, and that tonal range has nothing to do with DR. How you process a Sony/Nikon RAW vs a Canon RAW has very much to do with the DR of the sensor, the RAW, and the processed image. Tonal range has everything to do with the DR of a processed image.

AJSJones wrote in post #17101635
It sounds like you are talking about how Sony/Nikon cameras set exposure (i.e. how close they come to the saturation limit). Otherwise, what does "a D810's DR when it comes to saturation point" mean? The DR is the ratio of the "saturation point" to the noise. We keep increasing the exposure until we reach the 39,000 electrons (or whatever), and that becomes the digital value of 16383. Sensels that yield a value of 16382 and below will be below clipping IN BOTH cameras. The DR differs because the lowest signal that can be distinguished from noise is much lower in the Sony than in the Canon.

I thought my examples were quite clear. If the saturation limit of the Canon is greater than the Sony's, then you should be exposing further into the highest light intensity. The RAW's effective DR is determined by the sensitivity of the sensor and how the ADC processes the contrast range of the RAW. Sony's main advantage seems to be noise handling in the shadows... what is the difference with highlight recovery?


pwm2
Aug 17, 2014 15:44 | #347

davesrose wrote in post #17101108
No, in audio, DR is the range of volume in dB.

Which was exactly what I wrote - the distance from the hissing noise-floor level up to the maximum volume before clipping. In audio it's expressed in dB, since a bel is a very big unit. In photography, the unit is instead stops.

In photography, it's the range of luminance (be it your scene, what the sensor records, and the contrast range of your output image).

Just that the range doesn't care about the number of steps between lowest and highest - it only cares about the distance. The number of steps is the resolution when talking audio, and the tonality when talking photography. But it is separate from the distance between the noise floor and the max (clipping) level.
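For readers converting between the two units, here is a minimal Python sketch, assuming the 20*log10 amplitude convention commonly used for sensor DR figures, relating a floor-to-clip ratio to stops and dB:

[code]
import math

def ratio_to_stops(ratio):
    """Dynamic range in stops: the number of doublings from noise floor to clip."""
    return math.log2(ratio)

def ratio_to_db(ratio):
    """The same range in dB, using the 20*log10 amplitude convention."""
    return 20 * math.log10(ratio)

r = 4096                  # clip level / noise floor
print(ratio_to_stops(r))  # 12.0 stops
print(ratio_to_db(r))     # ~72.2 dB, i.e. roughly 6.02 dB per stop
[/code]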

As for the rest of your replies, I don't see many contradictions with my original posts. It appears the only thing you're arguing is sensor DR. Never did I say 14-bit processing leaves a clean 14-bit file. You'll see in my last post I prefaced the situation of utilizing the full 14 bits with a hypothetical "magic" sensor that can fill a full 14 stops. A 14-bit processor gives you 14-bit tonal "precision".

Nothing "magic" about a sensor that can fill 14 bits. They already exist.

But it doesn't matter if the sensor can capture enough range for 15 or 16 or 24 bits - the signal conditioning and digitizing parts must also maintain enough margin to capture 14 bits of usable information. And that is where the Canon DSLR bodies currently fall very short.

You're also missing context in some of my posts: my comparison of DVD to Blu-ray was in response to TeamSpeed's assertion that the image quality of an upscaled image can be just as good as a native image.

Not at all. TeamSpeed never implied such a thing. Upscaling an image can't present you with features that weren't captured in the original, low-resolution data. Upscaling will not magically invent new features unless you use one of the special image packages that use fractal pattern functions to invent extra structure in the upscaled image.

The thing being debated here is that the fewer pixels of the 22MP 5D3 represent the same sensor area as the larger number of pixels of the 36MP D800E. The correct way to compare is then to either upscale 22MP -> 36MP or downscale 36MP -> 22MP. Both alternatives give the same result, i.e. they show that the true difference between the 5D3 and D800E is greater than it looks if you just compare two 100% crops. The 100% crops are not relevant because they are not comparable to what you get when you take the same fraction of the image sensor data and create a fixed-size web image or a same-sized print.
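To make that normalization concrete, a minimal Python/Pillow sketch; the file names are hypothetical and the measurement (plain pixel standard deviation) is just a stand-in for whatever you want to compare:

[code]
import numpy as np
from PIL import Image

def noise_at_print_size(path, out_width=2048):
    """Resample a file to a common output width before measuring anything,
    so 22MP and 36MP captures are compared at the same 'print' size."""
    img = Image.open(path).convert("L")
    out_height = round(img.height * out_width / img.width)
    resized = img.resize((out_width, out_height), Image.LANCZOS)
    return np.asarray(resized, dtype=np.float64).std()

# e.g. compare noise_at_print_size("5d3_shot.tif") with
#      noise_at_print_size("d800e_shot.tif") - hypothetical file names
[/code]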

Ditto about Perlin noise in 3D graphics (that's completely irrelevant, as that's intended procedural noise).

Which is why I said that your digitally rendered images are irrelevant: they contain only intended procedural noise and have none of the noise-floor issues that real sensor data has. A rendered image with 8-bit or 16-bit or 32-bit pixel data doesn't have some magic "wet cloth" put over the data, drenching all the weak shadow information.

As I've already stated, the Sony sensor is clearly superior at noise handling and extra resolution.

Just that most of what you think is superior noise handling isn't superior handling, but simply more dynamic range.

The noise you get when using high ISO is noise on top of the image data. But it doesn't really change the intensity of the captured data - it just adds intensity deviations to individual pixels.

With the Exmor sensor, you get additional data that is totally lost in noise with a Canon body. So directly accessible additional dynamic range.

I have yet to see its highlight recovery though: the largest value in tonal DR. With the Sony sensor, you can be more confident there isn't noise while pushing shadows.

This is a side track. The Exmor sensor doesn't get extra shadow stops by losing highlight recovery. It gets extra shadow stops because the total distance from max highlight to weakest shadow is larger. Which means that at any low-ISO exposure setting where the Exmor sensor handles the highlights as well as a Canon body, it will manage extra shadow stops. And at any low-ISO exposure setting where you get similar quality from the shadows, it will support extra stops of highlights without clipping any color channel.

But if we look at tonal DR, that's just 16 shades of grey (if looking at the first 4 stops). Let's say the sensor and situation can record up to 13 stops of light. The last stop (where your highlights are) is much greater than that: over 2048 shades of grey in the last stop alone.

You are forgetting that if a single pixel is limited to 16 shades of gray, that doesn't mean a photo will be limited to those 16 shades of gray. All because of noise. If you have a 10x10 pixel area exposed to the same level, each of the 100 pixels will capture the input signal +/- a bit of noise. This allows the visible area to represent thousands of levels of gray even though each pixel has just 16 shades.
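A toy NumPy demo of this effect, with illustrative numbers: a uniformly lit 10x10 patch, a little per-pixel noise, and a 16-level quantizer. No single pixel can report 7.3, but the patch average can:

[code]
import numpy as np

rng = np.random.default_rng(1)
true_level = 7.3                               # sits between codes 7 and 8
patch = rng.normal(true_level, 0.6, (10, 10))  # 100 equally lit pixels + noise
quantized = np.clip(np.round(patch), 0, 15)    # 4 bits: only 16 shades per pixel
print(quantized.mean())                        # ~7.3: finer than any single pixel
[/code]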

This is also why the right-side image looks like grayscale despite having only black or white pixels:
http://www.visgraf.impa.br …_steinberg_dithering.html
Make the pixel sizes small enough, and you will end up with an image that our eyes find identical to the left image. This is one of the important concepts used in many printers: an ink-jet printer can only manage a limited number of dot sizes, but it can print with very small dots and vary the distribution of smaller and larger dots, or the distance between the dots.
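For the curious, a small plain-NumPy sketch of the Floyd-Steinberg idea behind that linked image (written for clarity, not speed): each pixel is forced to pure black or white, and the rounding error is pushed onto unvisited neighbours so the local density of white pixels tracks the original tone:

[code]
import numpy as np

def floyd_steinberg(gray):
    """1-bit Floyd-Steinberg dither: quantize each pixel to black/white and
    diffuse the rounding error onto neighbours (7/16, 3/16, 5/16, 1/16)."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img.astype(np.uint8)

# a smooth ramp: every output pixel is 0 or 255, yet column averages
# still track the original gray levels
ramp = np.tile(np.linspace(0.0, 255.0, 256), (64, 1))
print(floyd_steinberg(ramp).mean(axis=0)[::32])
[/code]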

A number of signal-capture devices perform multiple samples of the input while adding noise to the input signal, just to measure what percentage of the samples gets bumped one bit value higher - allowing a 16-bit ADC to capture sound or voltage data with 17 or 18 bits of resolution. In the case of photography, random noise in the sensor + photon noise + our eyes manage the same thing - allowing us to see more tonality than the sensor itself is able to capture.

But anyway - this is irrelevant, because we are debating DR while you constantly jump into debating tonality. They are different concepts. And you aren't happy if you get good tonality from the sensor but don't get enough dynamic range, so that the shadows or highlights end up clipped - i.e. with zero tonality.

Effective DR is more significant at the higher stops of light.

You just adjust your exposure to decide if you want more margin for highlights or shadows - as long as the sensor does have enough DR, that's your freedom of choice. Without the DR, it isn't really even worth debating because then you have a tool that isn't up to the task.

From the information I've seen, the Sony sensors really reign supreme for shadow recovery. If we believe the DxO info that the Canons have a higher saturation point (and I suspect so, if their performance exceeds the Sony at high ISO), then it only confirms that you should ETTR more with a Canon.

Note that the Canon sensors can capture more electrons in each well because each well is larger. But that isn't really important. The Sony sensor has two wells for every one well of the Canon sensors, and two Sony wells capture more electrons than one Canon well. Which is why the Canon will end up losing when you make a same-size print. And any exposure where the highlights just barely fill the wells of a Canon or a Sony sensor will give an image where the Sony sensor has several stops extra of shadow detail.

While any exposure where you instead expose for the shadows will end up with the Sony sensor having several stops extra for highlight recovery.

In the end, it's up to the photographer to decide whether to expose to the right or to the left - keep the extra stops for the shadows or for the highlights. The sensor really doesn't care which choice the photographer makes. Extra stops are extra stops - it's just a question of adapting aperture or shutter time. A bit like moving a microphone closer to or further from a sound source to adjust the signal strength that reaches the mixing desk.
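As a worked example of "extra stops are just aperture or shutter time", a small sketch using standard photographic arithmetic (hypothetical settings): doubling the shutter time adds one stop, and each halving of the f-number adds two:

[code]
import math

def exposure_shift_stops(t1, t2, n1, n2):
    """Stops of extra light going from (shutter t1, f-number n1) to (t2, n2)."""
    return math.log2(t2 / t1) + 2 * math.log2(n1 / n2)

# 1/125s at f/8  ->  1/60s at f/5.6: about +2.1 stops more light
print(exposure_shift_stops(1 / 125, 1 / 60, 8, 5.6))
[/code]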


pwm2
Aug 17, 2014 15:49 | #348

davesrose wrote in post #17101123
It is easier to process color: you can't recover blown highlights, though. While you have greater latitude making sure WB is correct, traditional sensors have twice the density of green receptors (where most luminance info is) as the red and blue receptors (which are considered chroma receptors).

The difference in number of red, green or blue receptors isn't relevant to this debate.

What is relevant is that if you clip a single color channel, you do not just clip the intensity - you also change the color. This happens whatever distribution you have of red, green, and blue receptors, and even if you have a Foveon sensor where every pixel is a full RGB triplet.

And the camera is good at lying to us when it shows histograms or zebra patterns because of the rebalancing of the channels that happens when performing WB adjustment.
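A toy example of that "lying", with made-up white-balance gains: a red sensel safely below the 14-bit raw clip point still hits the ceiling once the WB gain is applied for the preview, so the in-camera histogram reports a blown channel that the raw file can actually recover:

[code]
import numpy as np

SAT = 16383                                # 14-bit raw clip level
raw = np.array([15000.0, 9000.0, 4000.0])  # R, G, B sensel values: none clipped
wb_gain = np.array([2.0, 1.0, 1.5])        # hypothetical daylight WB multipliers

preview = np.clip(raw * wb_gain, 0, SAT)   # what histograms/zebras are built from
print(raw < SAT)                           # [ True  True  True ] - raw is fine
print(preview)                             # [16383.  9000.  6000.] - red looks blown
[/code]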


pwm2
Aug 17, 2014 16:00 | #349

davesrose wrote in post #17101168
I don't know why you went back to reply to all my previous posts at once, but if you want to be specific: there is one muscle in the inner ear that attaches to the malleus to dampen vibrations to the inner ear. The human brain is also quite different than a microchip, and can filter out ambient sounds. The eye being able to scan, and having two types of photoreceptors, is just an example of how different a digital system is. Our eyes can accommodate and perceive around 20 stops of light, so digital systems should evolve to render equivalents with their technologies.

I did not go back and reply to all your previous posts at once. I just have not had time to visit POTN since last weekend, so I have a backlog of posts to read.

The eye does not have 20 stops of dynamic range versus some 12 stops for a camera sensor. The eye manages less dynamic range than your digital camera if you consider the static contrast it can handle in a single snapshot. But the eye constantly changes ISO (a chemical reaction) and aperture as you focus on different parts of a scene, making you think that the eye has a huge dynamic range.

So if you want to debate 20 or 22 stops from the eye, then that comes from multi-exposure input, so you must also compare it with a merge of multiple exposures from your camera. Take one exposure at 1/4000 and one at 1/125, change the aperture by a couple of stops and the ISO by a couple of stops at the same time, and you'll quickly notice that your camera has no problem matching and beating your eye when you compare same-for-same - i.e. when the camera is also allowed to change exposure settings to optimize for different parts of a scene.

Our technology can already do what our eyes or ears can do when it comes to dynamic range or absolute maxima/minima. It's just that a print is static while our brain processes dynamic information from our eyes and ears. The issue isn't sensor range here, but the difference between real-time adaptation and capturing static data for presentation at a later time.
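A minimal sketch of such a multi-exposure merge, under simplifying assumptions (linear frames scaled 0..1, clipped sensels simply skipped):

[code]
import numpy as np

def merge_exposures(frames, times, clip=0.95):
    """Naive radiance merge: scale each linear frame by 1/exposure_time,
    average the results, and skip sensels at or near clipping in each frame."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    count = np.zeros_like(acc)
    for frame, t in zip(frames, times):
        valid = frame < clip                    # drop clipped sensels from this frame
        acc += np.where(valid, frame / t, 0.0)
        count += valid
    return acc / np.maximum(count, 1)

# e.g. merge_exposures([short, long], [1/4000, 1/125]): the short frame holds
# the highlights, the long frame lifts the shadows above the noise floor
[/code]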


davesrose
Aug 17, 2014 16:04 | #350

pwm2 wrote in post #17101887
Which was exactly what I wrote - the distance from the hissing noise-floor level up to the maximum volume before clipping. In audio it's expressed in dB, since a bel is a very big unit. In photography, the unit is instead stops.

That's not the same thing: my term for audio DR encompasses the source, the microphone, and the audio system; yours is just the audio system. Same with photography: differences in luminosity, or tonal stops.

pwm2 wrote in post #17101887
Not at all. TeamSpeed never implied such a thing. Upscaling an image can't present you with features that weren't captured in the original, low-resolution data. Upscaling will not magically invent new features unless you use one of the special image packages that use fractal pattern functions to invent extra structure in the upscaled image.

Those posts were arguing against my initial statement that upscaling an image to a higher resolution introduces its own artifacts.


pwm2
Aug 17, 2014 16:08 | #351

mystik610 wrote in post #17101513
Noise banding makes Canon sensors less desirable at high ISO as well. All around, Canon sensors are lacking relative to what others are capable of. In terms of low-light performance, the A7S and Nikon Df perform quite a bit better in low light than any of Canon's offerings.

Canon has some catching up to do.

But somewhere around ISO 1600 the banding issues are gone, because for every stop you increase the ISO, you lose one stop of possible dynamic range. Somewhere between ISO 1600 and ISO 3200 the remaining dynamic range gets too narrow to reach the banding. The banding (pattern noise) in the Canon bodies lives in the least significant bits of the ADC data, and those bits end up containing random noise instead when you step up the ISO far enough. That is why it's only at low ISO that an Exmor sensor wins out: it has usable content in the least significant bits where Canon bodies have pattern noise. Once both sensors have increased the amplification enough - adding enough random noise to the image data to hide a couple of stops of shadow information below the noise - the difference becomes quite small.
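A toy model of why the banding drowns, with made-up magnitudes: hold the fixed-pattern component constant in output units while the random, amplified noise grows with ISO gain, and the pattern's share of the total variance collapses:

[code]
import numpy as np

rng = np.random.default_rng(2)
rows = np.arange(4096)
banding = 2.0 * np.sin(rows / 100.0)     # fixed-pattern component, ~2 DN
for iso_gain in (1, 4, 16):              # roughly ISO 100 -> 1600
    random_noise = rng.normal(0.0, 1.5 * iso_gain, rows.size)
    share = banding.var() / (banding + random_noise).var()
    print(iso_gain, round(share, 3))     # the pattern's share of variance shrinks
[/code]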


AJSJones
Aug 17, 2014 16:35 | #352

davesrose wrote in post #17101689
There have been a lot of topics raised in this thread: it's mainly a thread meant to bash Canon and praise Sony on anything :D You have said DR is the saturation limit (or point, whatever semantics you want to get into) vs the noise limit, and that tonal range has nothing to do with DR. How you process a Sony/Nikon RAW vs a Canon RAW has very much to do with the DR of the sensor, the RAW, and the processed image. Tonal range has everything to do with the DR of a processed image.

What's the difference between tonal range and dynamic range in your vocabulary? People use tonality to describe how well the dynamic range (from lowest to highest recordable level) is divided into steps. Post-processing of the digitized values from the sensor is up to the operator; the dynamic range that can be recorded by the sensor is not. The range of tones (different luminance levels) that can be accurately recorded by the sensor is the dynamic range: from the darkest tone that's just above the noise to the highest one before clipping.
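The ratio definition in one line of arithmetic, with illustrative electron counts (the 39,000 e- figure echoes an earlier post); note that the absolute saturation value alone tells you nothing:

[code]
import math

def sensor_dr_stops(full_well_e, read_noise_e):
    """Engineering DR: saturation over noise floor, expressed in stops."""
    return math.log2(full_well_e / read_noise_e)

print(sensor_dr_stops(39000, 3))   # ~13.7 stops
print(sensor_dr_stops(39000, 30))  # ~10.3 stops: same clip point, far less DR
[/code]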

davesrose wrote in post #17101689
I thought my examples were quite clear. If the saturation limit of the Canon is greater than the Sony's, then you should be exposing further into the highest light intensity. The RAW's effective DR is determined by the sensitivity of the sensor and how the ADC processes the contrast range of the RAW. Sony's main advantage seems to be noise handling in the shadows... what is the difference with highlight recovery?

Apparently not. Highlight recovery is something that happens after the data have been collected from the sensor - it is a processing issue, not a capture issue. (The number of electrons collected IS the saturation limit, but it's not the same as the number of photons collected.) If sensels go over the saturation limit in the raw data, they are clipped and cannot be recovered. Highlight recovery, as you seem to use it, relates to the difference between highlights blown in the heavily processed data seen in the JPEG on the camera's LCD and the actual values in the raw file, which may not have been blown before JPEG processing (in your example you learned that some degree of blown values in the JPEG is OK because you know the raw values were not blown). You keep talking of the "saturation limit" as if its absolute value had some relevance to the DR - if Canon is higher (it's not, when you look at total sensor area, but that's beside the point :( ) then X; if Nikon is higher, then Y. Irrelevant. The only thing that matters for sensor DR is the ratio of the two quantities, saturation value and noise value. "Noise handling" only makes sense when the noise is assessed at a level relative to the saturation (how many stops below it), so there's no such thing as separate "noise handling" and "highlight handling" when discussing raw data from the sensor.


Mornnb
Aug 17, 2014 17:31 | #353

pwm2 wrote in post #17101887
The thing being debated here is that the fewer pixels of the 22MP 5D3 represent the same sensor area as the larger number of pixels of the 36MP D800E. The correct way to compare is then to either upscale 22MP -> 36MP or downscale 36MP -> 22MP. Both alternatives give the same result, i.e. they show that the true difference between the 5D3 and D800E is greater than it looks if you just compare two 100% crops. The 100% crops are not relevant because they are not comparable to what you get when you take the same fraction of the image sensor data and create a fixed-size web image or a same-sized print.


Slightly different topic, but I'd point out that the real advantage of the Exmor is its dynamic range performance.
Most lenses struggle to provide enough resolution for a 22MP sensor, let alone a 36MP sensor.

Take a look at DxOMark - this is a graph of the measured effective resolution of camera/lens combinations. The 5D Mark III performs on par with the D800; Canon's generally sharper lenses make up for the sensor.

IMAGE: http://www.dxomark.com/var/ezwebin_site/storage/images/media/images/graph13/80780-2-eng-US/graph1.jpg

Camera                  Sharpness Mean (P-Mpix)   DxOMark Score Mean
Canon EOS 5D Mark III   15                        24
Nikon D800              14                        26


The point is, unless you have a Zeiss Otus or a 300mm 2.8, you don't really need more than 22MP because you're limited by the lens.


Note that the Canon sensors can capture more electrons in each well because each well is larger. But that isn't really important. The Sony sensor has two wells for every one well of the Canon sensors, and two Sony wells capture more electrons than one Canon well. Which is why the Canon will end up losing when you make a same-size print. And any exposure where the highlights just barely fill the wells of a Canon or a Sony sensor will give an image where the Sony sensor has several stops extra of shadow detail.


To capture low noise at both low and high ISO, you need large and efficient photodiodes. Note that Canon's sensors outperform the D800 in high-ISO noise due to their larger and more efficient photodiodes. At high ISO, the noise produced by the photodiodes becomes the significant issue. At low ISO, the noise issue relates to the analogue circuitry and the A/D converter. Sony's sensor does A/D conversion on-sensor; Canon does it off-sensor in the DIGIC 5+ chip. This means much longer analogue pathways, which pick up more noise.
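A back-of-envelope model of that argument, with made-up electron figures: noise added before the ISO amplifier passes through at full strength, while noise added after it (long analogue path, off-sensor ADC) shrinks by the gain when referred back to the input:

[code]
import math

def input_referred_noise(pre_amp_e, post_amp_e, gain):
    """Total read noise referred to the sensor input, combined in quadrature."""
    return math.sqrt(pre_amp_e ** 2 + (post_amp_e / gain) ** 2)

for gain in (1, 4, 16):  # base ISO -> high ISO
    print(gain, round(input_referred_noise(3.0, 12.0, gain), 2))
# gain  1 -> 12.37: the downstream path dominates at base ISO
# gain 16 ->  3.09: at high ISO only the sensor-side noise is left
[/code]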

The difference in number of red, green or blue receptors isn't relevant to this debate.

What is relevant is that if you clip a single color channel, you do not just clip the intensity - you also change the color. This happens whatever distribution you have of red, green, and blue receptors, and even if you have a Foveon sensor where every pixel is a full RGB triplet.

And the camera is good at lying to us when it shows histograms or zebra patterns because of the rebalancing of the channels that happens when performing WB adjustment.

Generally speaking, from experience, lowering highlights is less effective and less usable than raising the shadows. The reason is that at the top of the highlights the camera picks up a more limited range of tones, and some of the channels will be clipped.


davesrose
Aug 17, 2014 18:10 | #354

AJSJones wrote in post #17101990
Apparently not. Highlight recovery is something that happens after the data have been collected from the sensor - it is a processing issue, not a capture issue. (The number of electrons collected IS the saturation limit, but it's not the same as the number of photons collected.) If sensels go over the saturation limit in the raw data, they are clipped and cannot be recovered. Highlight recovery, as you seem to use it, relates to the difference between highlights blown in the heavily processed data seen in the JPEG on the camera's LCD and the actual values in the raw file, which may not have been blown before JPEG processing (in your example you learned that some degree of blown values in the JPEG is OK because you know the raw values were not blown). You keep talking of the "saturation limit" as if its absolute value had some relevance to the DR - if Canon is higher (it's not, when you look at total sensor area, but that's beside the point :( ) then X; if Nikon is higher, then Y. Irrelevant. The only thing that matters for sensor DR is the ratio of the two quantities, saturation value and noise value. "Noise handling" only makes sense when the noise is assessed at a level relative to the saturation (how many stops below it), so there's no such thing as separate "noise handling" and "highlight handling" when discussing raw data from the sensor.

DR in photography is the maximum intensity of luminance you have: be it the scene or the tonal values in the image. From my understanding of sensors, their maximum DR is the maximum saturation point: which is quite separate from the lowest acceptable noise floor. You say the extra saturation limit of the Canon sensor is irrelevant because essentially the Sony sensors have extra resolution. I'm sounding like a broken record, but the Sony sensor clearly has advantages in resolution and noise handling. DR and resolution are related for detail, but fundamentally they're different. All the single-shot HDR examples I've seen from the Sony sensor are not pushing into the extreme highlights like my example of what I'm used to with Canon. When you factor in that we're working with an interpreted 14-bit file (and the sensor itself isn't recording 14 stops of light), there is leeway in exposure for filling either your shadows or your highlights. From what I've seen, the examples of the Sony DR advantage are about bringing up shadows: but highlight recovery is just as relevant for DR. Especially when you factor in that, tonally, there are potentially way more details in your upper stops than in your lower ones.


AJSJones
Aug 17, 2014 18:25 | #355

davesrose wrote in post #17102156
DR in photography is the maximum intensity of luminance you have: be it the scene or the tonal values in the image. From my understanding of sensors, their maximum DR is the maximum saturation point: which is quite separate from the lowest acceptable noise floor.

I think we have now identified the vocabulary problem.
The DR is the ratio of those two quantities; they cannot be separated when expressing a DR. When data come off a sensor, there is a maximum value (when a sensel cannot accumulate any more charge, expressed in electrons or a derived parameter) and a lowest value that is distinctly above the noise floor (from a sensel upon which not enough photons land to produce a "signal" bigger than the noise). You can't have a dynamic range without specifying both ends of the range.

In synthetic images, you can specify zero/black/pedestal, whatever, as "black" (so in that world one can perhaps think of the highest luminance value as the top of the range, with the bottom already set) - however, in real life, from sensors, there is a value that comes from noise that cannot be ignored when specifying a "range" - it is a key determinant of the actual DR of the sensor.


davesrose
Aug 17, 2014 18:43 | #356

AJSJones wrote in post #17102179
I think we have now identified the vocabulary problem.
...

In synthetic images, you can specify zero/black/pedestal, whatever, as "black" (so in that world one can perhaps think of the highest luminance value as the top of the range, with the bottom already set) - however, in real life, from sensors, there is a value that comes from noise that cannot be ignored when specifying a "range" - it is a key determinant of the actual DR of the sensor.

Maybe our posts have shown a difference in perspective. I'm used to the world of computer graphics, where I can see absolute differences between 8bpc, 16bpc, and 32bpc images, and it's easier to quantify their differences. The final ADC from a sensor is an individual trade-off between what's acceptable noise (where your shadows are) and the maximum saturation point (the highest clipping of light intensity). We can argue till we're blue in the face about how easy it is to recover noisy blacks versus clipped highlights: I'm still waiting for examples of the Sony's clipping characteristics.


davesrose
Aug 17, 2014 19:14 | #357

pwm2 wrote in post #17101926
The eye does not have 20 stops of dynamic range versus some 12 stops for a camera sensor. The eye manages less dynamic range than your digital camera if you consider the static contrast it can handle in a single snapshot. But the eye constantly changes ISO (a chemical reaction) and aperture as you focus on different parts of a scene...

Just to get back to this... since I also have a medical background... when you factor in accommodation, the eye does have around 20 stops of light acuity. The retina by itself isn't even capable of 8 stops... but with all the reasons already mentioned (accommodation, scanning, etc.), the accepted dynamic range of human vision is around 20 stops.

The main thing that fascinates me about human vision is our two types of photoreceptors: they're perfectly distributed to capture color detail in natural daylight and tonal detail at night. Because digital systems are so different, they have to have a WB and an extended dynamic range.


AJSJones
Aug 17, 2014 19:38 | #358

davesrose wrote in post #17102211
I'm used to the world of computer graphics, where I can see absolute differences between 8bpc, 16bpc, and 32bpc images

That is not relevant to real-world analog sensor conversion of light into electrons and subsequent digitization. In computer graphics you can assign a pixel a luminance of 0, or 0,0,0, or whatever, so you can get used to thinking the bottom of the "dynamic range" is fixed. In the analog world of sensors, it doesn't work that way. There is the bottom of the analog range of values. At some point a signal will rise above that and be distinguishable from the noise. That is a concept absent from synthetic image creation, so perhaps this is where your perspective is out of step with the photographers in the thread. (Perhaps you sometimes need to add noise to make things look more natural, and that's because there's noise in the real world, in what we see and in what sensors capture.) Once the system can distinguish some signal from noise, we can make the subjective evaluation of how much noise is (aesthetically) "acceptable" (banding comes in and complicates things), but below some signal value there is only noise. This lowest distinguishable signal is the equivalent of "black" in computer graphics, BUT it has a real, finite value. In converting raw to RGB images, one of the parameters that must be assigned is the black level, and there are black sliders to set what recorded luminance value will be rendered as black - see post #295 for examples of recorded signals at the "white" and "black" ends of the DR. The black end is not zero, as it might be for a pixel in a CG image where no photons had been ray-traced.

davesrose wrote in post #17102211
I'm still waiting for examples of the Sony's clipping characteristics.

You will never get them, because once a sensel is saturated (clipped), only post-processing can do anything to guess a value less than maximum. If an exposure has been made where the highest (desired) luminance value in the scene has been captured at less than well saturation, there is no "clipping". (That was why I made the comment about setting exposure properly to match the full-well capacity.) In such an image, the JPEG will often show what appear to be clipped highlights, and that may not be desirable, so one can go back to the (unclipped) raw and "recover" them for the RGB image (as you demonstrated). They were only "lost" by the JPEG processing. If you hadn't made a JPEG, they would never have been lost, and there would be no such thing as "highlight recovery". You can bring the highlights down if the overall image you wish to present will benefit from that, but it is a post facto tone-mapping operation. Now, if you have pushed the exposure too much, then one or more channels may have clipped, and color can get wacky when trying to guess what the unclipped color might have been. We've come full circle to the point of saying why a histogram based on the raw values from the ADC would ensure that we don't clip any sensels when exposing to the right.
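If you want to interrogate the raw file itself rather than the JPEG preview, something along these lines works - a sketch assuming the third-party rawpy package and a hypothetical file name; it simply counts sensels sitting at the saturation level recorded in the file:

[code]
import numpy as np
import rawpy  # assumption: rawpy is installed

with rawpy.imread("IMG_0001.CR2") as raw:          # hypothetical file name
    sensels = raw.raw_image_visible
    clip = raw.white_level                         # saturation value from the file
    frac = np.mean(sensels >= clip)
    print(f"{frac:.4%} of sensels at saturation")  # >0% means truly blown in raw
[/code]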


Pagman
Aug 17, 2014 19:41 (reply to post 17101050) | #359

One of the reasons I'm happy with Nikon, taken with a budget zoom.

P.


[hosted photo]


AJSJones
Aug 17, 2014 19:45 | #360

davesrose wrote in post #17102211
The final ADC from a sensor is an individual trade-off between what's acceptable noise (where your shadows are) and the maximum saturation point (the highest clipping of light intensity)

Perhaps another vocabulary issue. To me, clipping means that the well has simply exceeded its capacity: more photons have landed on it than it can handle, and it just reads out as "full well capacity" no matter how many more photons fall in. Up to that point the readout is (effectively) proportional to the number of photons collected during the exposure. The concept of "highest clipping" is therefore one that confuses me.

