That's interesting. I guess that implies the vertical axis (number of pixels) maxes out below the sensor's total pixel count, so once the number of pixels in a given tonal bin exceeds the axis maximum, that peak clips 'off the chart'.
I wonder if it's possible, or even useful, for a camera's onboard computer to dynamically scale the histogram so that every bin fits on the chart, even if the entire frame is filled with a single tone. Perhaps we could even experiment with different camera models that share the same sensor size to see whether the max pixel count on the vertical axis is constant.
That would be good. Almost like with digital audio, where you scale or 'normalize' the waveform so the loudest peak just fits.
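
For what it's worth, here's a minimal sketch of what that kind of normalization might look like in software (Python with NumPy; the function name `scaled_histogram` and the `chart_height` parameter are my own inventions, not anything a real camera firmware exposes). The idea is simply to divide every bin by the tallest bin, so even a frame of one flat tone renders on-chart instead of clipping:

```python
import numpy as np

def scaled_histogram(image: np.ndarray, bins: int = 256, chart_height: int = 100) -> np.ndarray:
    """Return per-bin bar heights, scaled so the tallest bin fills the chart.

    `image` is assumed to be a single-channel 8-bit luminance array.
    """
    counts, _ = np.histogram(image, bins=bins, range=(0, 256))
    peak = counts.max()
    if peak == 0:
        return np.zeros(bins, dtype=int)
    # Normalize: the tallest bin maps to chart_height, everything else in proportion.
    return (counts * chart_height // peak).astype(int)

# A frame filled entirely with one tone still renders on-chart:
# the lone bin is scaled down to chart_height rather than spiking off the top.
flat_frame = np.full((3000, 4000), 128, dtype=np.uint8)
heights = scaled_histogram(flat_frame)
print(heights[128])  # -> 100 (chart_height), not the raw 12,000,000-pixel count
```

The trade-off, of course, is that a normalized histogram loses the sense of absolute pixel counts, much like a normalized audio waveform tells you nothing about the original recording level.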


