I've always wondered about this, and now that I've finally found the right forum, I'm asking.
If you look out the window, you'll notice that both the outdoor and indoor scenes appear properly exposed at the same time. Your eyes render the painting on the wall of a dim room in full color while also taking in the billboard outside on a bright sunny day.
We know that cameras can't do this. Exposing for one side will badly overexpose or underexpose the other, so cameras have to resort to workarounds like merging multiple bracketed shots into a single HDR image.
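To be concrete, here is roughly what I understand that multi-shot merge to be: blend several bracketed exposures, favouring the well-exposed pixels from each shot. This is just a minimal sketch with made-up weights to illustrate the workaround I mean, not any real camera's pipeline.

```python
import numpy as np

def merge_exposures(shots):
    """shots: list of float images in [0, 1], all the same shape (H, W, 3)."""
    weights = []
    for img in shots:
        # "Well-exposedness": pixels near mid-grey get high weight,
        # blown highlights and crushed shadows get low weight.
        w = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2)).mean(axis=2, keepdims=True)
        weights.append(w)
    weights = np.stack(weights)             # (N, H, W, 1)
    weights /= weights.sum(axis=0) + 1e-8   # normalise per pixel across shots
    return (weights * np.stack(shots)).sum(axis=0)

# Toy "dim room + bright window" scene captured twice with different exposures.
shadows = np.full((4, 4, 3), 0.02)          # nearly black
window  = np.full((4, 4, 3), 0.98)          # nearly clipped
under = np.concatenate([shadows, shadows * 10], axis=1)   # exposed for the window
over  = np.concatenate([shadows * 25, window], axis=1)    # exposed for the room
print(merge_exposures([under, over]))       # both halves end up usable
```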
Why do cameras suffer from this while human eyes don't? If our eyes can do it, why can't a camera mimic it? Is it still a mystery, something we understand but cannot overcome, or something we're on the way to overcoming? Naturally, I'm assuming the limitation is on the camera's end, in the sensor and image processor rather than the lenses.
I hope you can explain this, and it would be interesting to see some in-depth technical reasoning behind it as well.

