What I *think* Lytro means is that the device takes 11 million simultaneous samples of the light passing through the front lens, and these samples are said to include information about the direction the sampled light was coming from (its angle of incidence). Crucially, this directional information is not used the way a conventional image sensor uses optical information; it plays no part in producing the image itself. Instead, it is used to *edit* the image in software, after and independently of the optical information that produced it. There are already three software applications that I know of which can apply image-independent edits to simulate DOF (e.g. using a depth map). The information the Lytro post-processing software uses serves a similar purpose, but apparently at the cost of much reduced image quality (a 1.2MP JPG at most, possibly not further editable, with unknown values for the IQ measures we pay so much attention to in our gear).
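
To make the depth-map idea concrete, here is a minimal sketch (my own illustration, not Lytro's actual algorithm) of how software can fake DOF after the fact: given an ordinary image plus a per-pixel depth map, pixels far from a chosen focal depth are replaced with progressively more blurred versions of the image. The function name `simulate_dof` and the discrete blur levels are my own assumptions for the sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_dof(image, depth, focal_depth, max_sigma=4.0, levels=5):
    """Fake shallow depth of field using a depth map (illustrative sketch).

    image:       2-D float array (grayscale, for simplicity)
    depth:       2-D float array, same shape, per-pixel scene depth
    focal_depth: depth value that should stay perfectly sharp
    """
    # Precompute a small stack of blurred copies at increasing sigma;
    # sigma 0 is the untouched original.
    sigmas = np.linspace(0.0, max_sigma, levels)
    blurred = [image if s == 0 else gaussian_filter(image, sigma=s)
               for s in sigmas]

    # Normalized distance of each pixel's depth from the focal plane.
    err = np.abs(depth - focal_depth)
    if err.max() > 0:
        err = err / err.max()

    # Pick the nearest blur level per pixel and assemble the output.
    idx = np.round(err * (levels - 1)).astype(int)
    out = np.empty_like(image, dtype=float)
    for i, b in enumerate(blurred):
        out[idx == i] = b[idx == i]
    return out
```

The point is that the depth map is pure edit-time metadata: the blur is synthesized entirely in post, which matches the distinction drawn above between information that forms the image and information that merely edits it.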