mystik610 wrote in post #18342290
Well all any autofocus system does is tell the lens where to move based on some sort of input it receives. In the case of PDAF, phase detect sensors calculate the phase distance and tell the lens where to move....contrast detect uses the image sensor data to find peak contrast at a given point.
Again, just to clarify: the phase detect sensors do not tell the lens where to move; that's the job of the autofocus processor. All the phase detect sensors do is report a phase difference.
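To make that division of labor concrete, here's a rough Python sketch of the processor's side of the job. The function name and the calibration constants are made up purely for illustration; a real body calibrates this per lens/body combination and does a lot more (lens communication, confirmation passes, etc.).

```python
def lens_drive_from_phase(phase_shift_px, k_defocus=0.05, k_drive=1.0):
    """Illustrative AF-processor step: turn a reported phase difference
    (separation, in pixels, between the two AF-sensor images) into a
    lens drive command. Constants are invented for this sketch."""
    defocus_mm = k_defocus * phase_shift_px               # estimated focus error
    drive_steps = int(round(k_drive * defocus_mm * 100))  # focus-motor steps
    return drive_steps

# The phase-detect sensor only supplies the phase shift; everything after
# that point is the AF processor's job.
print(lens_drive_from_phase(phase_shift_px=12.0))
```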
mystik610 wrote in post #18342290
A lot of people tend to mystify DPAF because they don't really understand it, but its on-sensor PDAF through and through, and nothing about the way the Canon has explained the system to work indicates that its anything else beyond that.
This is correct: DPAF is just a way of getting a phase difference at every pixel on the sensor while minimizing light loss.
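If it helps, here's a toy numpy sketch (fake data, one row of pixels) of what a dual-pixel row gives you: sum the two photodiodes and you have the ordinary image pixel, cross-correlate the left and right half-images and you get the phase difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dual pixel" data: left and right photodiode readings for one row.
# In a real DPAF sensor the two half-images are shifted copies of each
# other when the scene is out of focus; here we just fake a 3-pixel shift.
signal = rng.random(100)
left = signal
right = np.roll(signal, 3)

# For the final image the two halves are simply summed, so essentially
# no light is thrown away:
image_row = left + right

# For AF, the halves are cross-correlated to find the shift (the phase
# difference) at that location:
shifts = np.arange(-10, 11)
scores = [np.sum(left * np.roll(right, -s)) for s in shifts]
phase_difference = shifts[int(np.argmax(scores))]
print(phase_difference)  # 3 in this toy example
```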
mystik610 wrote in post #18342290
Fundamentally, the only difference between DPAF and the PDAF systems in other mirrorless cameras is that DPAF has two photodiodes per pixel to calculate phase difference, whereas the PDAF systems in other mirrorless systems are splitting the light hitting a photodiode in two, and as such, the sensitivity suffers because the phase detect sensors only see half of the light. Hybrid AF systems augment CDAF to improve the sensitivity of the system... Contrast detect doesn't have a dedicated sensor....it's using the image sensor itself to achieve critical focus.
What you've written isn't quite clear, so just so we're on the same page: every other on-sensor phase detect system uses pixel masking within the CFA grid to block light coming from one side of the lens. These masked pixels are set up in pairs, where one pixel is masked from the left and a corresponding second pixel is masked from the right. The data from these pairs are then read together in order to get at the phase difference. The issue here isn't sensitivity but that the data from these masked pixels can't be used in the final image, so those gaps have to be filled in. Even with something like the A9, where you have ~600 AF points, the number of masked pixels is small enough that it won't have any significant effect on image quality.
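To show where the image-quality cost actually lands, here's a toy numpy sketch with a made-up layout (every 20th pixel as a left/right masked AF pair). The AF-side math is the same correlation idea as in the previous sketch, so this one only shows the image side: the masked positions get interpolated from their unmasked neighbours.

```python
import numpy as np

rng = np.random.default_rng(1)
row = rng.random(200)  # one row of ordinary image pixels

# Hypothetical layout: every 20th pixel is a left-masked AF pixel and the
# pixel next to it is right-masked. The masks mean these pixels see only
# about half the light and can't be used directly in the final image.
left_idx = np.arange(10, 200, 20)
right_idx = left_idx + 1

# The AF processor reads the two masked populations together and looks for
# the relative shift between them (same correlation idea as above), just
# with far fewer samples than DPAF has available.

# For the final image, the masked positions are simply interpolated from
# their unmasked neighbours; with a few hundred AF points on a 24MP+
# sensor this is a vanishingly small fraction of the pixels.
masked = np.concatenate([left_idx, right_idx])
good = np.setdiff1d(np.arange(200), masked)
row_filled = row.copy()
row_filled[masked] = np.interp(masked, good, row[good])
```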
mystik610 wrote in post #18342290
Admittedly I'm using the term "contrast detect" rather loosely because Sony is doing more with the live-view image to focus than simply find peak contrast to focus. I guess you could call it "image sensor focus", because from a broad sense, its using data from the image sensor to tell the lens where to move, and it isn't always based on peak contrast alone. In the case of eye focus, the AF system is doing more than finding peak contrast...its using the live view image data to do object recognition.
I'm not sure what mechanism you're thinking of for "image sensor focus", but Sony has three tools at its disposal to tell the lens where to move to achieve focus. First is the phase difference; this one is relatively obvious. Second is contrast iterations. The third is the time history of known, or at least assumed to be known, focus positions. That's about it; in terms of lens position, object recognition doesn't give you anything other than letting you know where on the sensor to look at the available phase and/or contrast information.

Also, as far as contrast detect goes, while I can certainly believe there's a finishing contrast detect step for S-AF, I really doubt there is one for C-AF. Systems that use contrast detect for C-AF have a very noticeable and regular oscillation, and none of the recent Sony bodies, at least the ones that I've owned (A7RII, RX100IV/V), exhibit that. The idea of contrast detect being the ultimate for accuracy is only true for S-AF; a finishing step in C-AF doesn't really make sense. Finally, just to be clear again, everybody is using the image off the sensor for object recognition (even Nikon); this isn't unique to Sony at all. Sony is just doing it better with their eye C-AF implementation.
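For what it's worth, here's a minimal sketch of a contrast-detect hill climb (made-up contrast function and step sizes). It shows why contrast works fine as a one-shot finishing step but produces the characteristic hunting if you run it continuously: the only way the loop can confirm a peak is by driving past it and coming back.

```python
import numpy as np

def contrast_at(lens_pos, true_focus=50.0):
    """Stand-in for measuring contrast off the image sensor at a given
    lens position (an invented smooth peak at the true focus position)."""
    return np.exp(-((lens_pos - true_focus) / 10.0) ** 2)

def contrast_hill_climb(start, step=4.0, min_step=0.25):
    """Minimal S-AF style contrast search: step toward higher contrast,
    reverse and halve the step whenever contrast drops. It can only
    confirm a peak by moving past it, which is why a pure contrast loop
    run continuously looks like hunting/oscillation."""
    pos, last = start, contrast_at(start)
    direction = 1.0
    while step > min_step:
        pos += direction * step
        c = contrast_at(pos)
        if c < last:              # overshot the peak
            direction *= -1.0     # turn around...
            step /= 2.0           # ...with a smaller step
        last = c
    return pos

print(contrast_hill_climb(start=20.0))  # converges near 50
```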
mystik610 wrote in post #18342290
Using the image sensor data to find the eye doesn't appear to be too demanding on the processor. It was a feature that existed on the first a7r, which had a horridly slow processor, and actually did not have a PDAF AF system...it is contrast detect only. Eye focus does not work with continuous AF mode on the a7r though. According to Sony, continuous eye-focus is achieved from higher sensor read-out speed in the newer copper wire structure image sensors. So its an image sensor dependent system, and sensor design seems to matter more than the image processor does.
I don't think you're giving Sony quite enough credit here. While a few other manufacturers have also introduced eye C-AF (Panasonic, Olympus, Fuji), they all require the face to be much larger in the frame (i.e. they operate on a coarser version of the image streaming off the sensor) than Sony does, nor are they anywhere near as accurate as what Sony has achieved. That comes from some combination of superior software and hardware in the Sony cameras. The A7R had eye AF in S-AF but not C-AF, although I suspect the processor in the A7R is still better than anything Canon has access to unless they ditch TI or somehow force them to step up their game, but I couldn't say that for sure.
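Rough back-of-the-envelope math on the "coarser stream" point, with entirely made-up numbers: if a detector needs some minimum number of pixels across a face to find an eye, then the lower the resolution of the live-view stream it's fed, the larger the face has to be as a fraction of the frame.

```python
def min_face_fraction(stream_width_px, detector_min_face_px=80):
    """Smallest face, as a fraction of frame width, the detector can work
    with, assuming it needs at least `detector_min_face_px` pixels across
    a face. Both numbers are illustrative, not manufacturer specs."""
    return detector_min_face_px / stream_width_px

# A detector fed a coarse 640-pixel-wide live view needs the face to fill
# a much larger share of the frame than one fed a 1920-pixel-wide stream.
print(min_face_fraction(640))   # 0.125  -> face must span ~12.5% of the frame
print(min_face_fraction(1920))  # ~0.042 -> face can be roughly 3x smaller
```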