Deep Learning AF was originally developed for the 1DX series...
The 1DX Mark III, to be specific, and only in Live View, since the algorithm needs the more detailed information coming off the image sensor.
The original 1DX can track certain colors with its 100 kilopixel color-sensitive light meter, and keep focusing as long as the subject stays on one of the 61 AF points.
The 1DX Mark II can track faces with its 360 kilopixel light meter, still limited to the 61 AF points.
But the 1DX Mark III can do some of the tricks you find in the R3, as long as you use Live View.


I don't know why anyone would want the camera to refocus on the wall. Case 2 would give my subject some time to emerge again before focus jumps to the background.
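For what it's worth, here's a minimal Python sketch of the idea behind Case 2 as I understand it: hold the last known subject distance for a while instead of snapping to whatever lands under the AF point. Everything here (the function, the `HOLD_FRAMES` value) is a hypothetical illustration of the behavior, not Canon's actual firmware logic.

```python
# Toy model of "Case 2"-style tracking sensitivity (locked on).
# Hypothetical illustration only -- not Canon's implementation.

HOLD_FRAMES = 12  # assumed: how many frames to wait before giving up


def track_focus(frames):
    """For each frame, decide whether to keep the last subject distance
    or refocus on whatever is under the AF point (e.g. the wall)."""
    held_distance = None   # last distance the subject was focused at
    frames_lost = 0        # consecutive frames without the subject
    decisions = []
    for subject_distance in frames:  # None = subject blocked / out of frame
        if subject_distance is not None:
            held_distance = subject_distance
            frames_lost = 0
            decisions.append(("focus_subject", subject_distance))
        elif held_distance is not None and frames_lost < HOLD_FRAMES:
            frames_lost += 1
            decisions.append(("hold", held_distance))  # don't jump to the wall yet
        else:
            decisions.append(("refocus_background", None))  # gave up waiting
    return decisions


# Subject visible, briefly blocked by the wall, then re-emerging:
print(track_focus([5.0, 5.1, None, None, None, 5.2]))
```

With a short enough obstruction, the sketch never refocuses on the wall, which is exactly why I'd pick Case 2 here.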