nqjudo wrote in post #17557002
I agree with Alan. Someone should be a little concerned about their research funding on this one.

The theory is interesting, even if the current implementation is wonky, or perhaps even misdirected.
First of all, this depends on the phase difference between the reflections off the inner and outer surfaces of the glass, and the image needs high enough resolution for that difference to be detectable in a static image in the first place; I can't imagine the offset being more than a few pixels, even on a 20 MP+ sensor shooting close to the glass surface.
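For what it's worth, that few-pixel offset should show up as a secondary peak in the autocorrelation of the image, since a double reflection is basically the scene plus a faint shifted copy of itself. Here's a rough sketch of that idea on a synthetic 1-D "row of pixels" (everything here is illustrative, not from the actual paper):

```python
import numpy as np

# Synthetic row: a scene plus a faint ghost of itself shifted 6 px,
# standing in for the front- and back-surface reflections of the glass.
rng = np.random.default_rng(0)
scene = rng.standard_normal(512)
true_offset = 6
row = scene + 0.3 * np.roll(scene, true_offset)

# Circular autocorrelation via FFT; a double image puts a secondary
# peak at the ghost's displacement.
spectrum = np.fft.rfft(row)
autocorr = np.fft.irfft(spectrum * np.conj(spectrum))

# Suppress the trivial zero-lag peak, then look for the next one
# (searching only positive lags in the first half).
autocorr[0] = -np.inf
est_offset = int(np.argmax(autocorr[: len(autocorr) // 2]))
print(est_offset)  # recovers the 6 px ghost offset
```

Even in this toy case you can see the problem: a 0.3-strength ghost at 6 px is easy to find in clean synthetic data, but in a real photo the peak has to compete with the scene's own self-similarity.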
However, while deducing the origin of the reflection after the fact may be difficult, if they integrate this algorithm into a camera's firmware and have it dynamically track reflections as the camera shifts in space, the camera might already have the reflection figured out by the time the exposure is taken, possibly even as part of a double exposure. Of course, this will only work on cameras that constantly read the sensor, meaning point-and-shoots and some mirrorless systems, although I'm guessing those are the kinds of cameras most people shoot through glass with in the first place.
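The reason camera motion helps is parallax: as the camera shifts, the reflected layer moves differently than the scene behind the glass, so if you align a burst of frames on the scene, the reflection lands somewhere different in each frame and a pixelwise median mostly votes it out. A toy sketch of that, again with made-up 1-D "frames" rather than anything from the paper:

```python
import numpy as np

# Synthetic setup: a fixed scene behind the glass, plus a reflected
# layer that drifts 4 px per frame due to camera motion (parallax).
rng = np.random.default_rng(1)
scene = rng.standard_normal(256)
reflection = 0.5 * rng.standard_normal(256)

# Five frames already aligned on the scene; only the reflection moves.
frames = np.stack([scene + np.roll(reflection, 4 * i) for i in range(5)])

# Pixelwise median across the burst suppresses the moving layer.
recovered = np.median(frames, axis=0)

err_single = np.abs(frames[0] - scene).mean()   # one frame: full reflection
err_median = np.abs(recovered - scene).mean()   # burst median: much weaker
print(err_median < err_single)
```

This is the crude version; the interesting part of doing it in-camera would be handling the alignment itself, since neither layer sits still in the raw frames.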
Alternatively, this algorithm may be more useful in other applications, like machine vision or other technical settings where reflections are a problem. One that comes to mind is surveillance: since video cameras read the sensor continuously, they would be ideal candidates for reflection removal.