I'm sure this has been covered before but I'm not quite sure how to find it.
When I open a RAW image file in ACR and adjust exposure using the exposure slider, I get a lot of latitude, up to EV +2.0 if not more. As long as I'm not clipping, the exposure adjustments act just as they would if I had shot at a greater exposure in camera. A little noise, sure, but basically all is good. Where things get ugly is if I forget to do this in ACR and wait until Photoshop proper (CS4 in my case).
When I create an Exposure adjustment layer in PS and raise it even a little, the results are horrible: banding, clipping, weird tone mapping. It's basically not a very useful tool in my book. Even at as little as EV +0.7 it's bad.
Why is this? Or am I doing something wrong? I'm just wondering why I have so much more latitude adjusting exposure in ACR vs. Photoshop when, on the surface, I'd think they'd do the same thing, same bit depth, etc.


What I can clearly see is the non-linear adjustment in PS you talked about vs. the linear one in ACR. In PS, the surfer's skin tones and shirt (midtones and shadows) did not receive the same exposure adjustment as the white water (highlights), whereas everything was adjusted relatively equally using the exposure slider in ACR. I can see further proof of that as I watch the histogram slide to the right while maintaining the same shape.
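To make the linear-vs-gamma difference concrete, here is a minimal sketch of why a +2 EV push behaves so differently depending on where it's applied. It assumes a simple 2.2 gamma curve; the actual ACR and Photoshop pipelines are more involved, and the function names are mine, not Adobe's. The point is just that multiplying linear light values (ACR-style) preserves tonal relationships, while multiplying already-encoded values blows out highlights immediately:

```python
# Hypothetical illustration, not Adobe's actual code.
# Assumes a plain 2.2 gamma transfer curve for encode/decode.

def gamma_to_linear(v, gamma=2.2):
    """Decode a gamma-encoded value (0..1) back to linear light."""
    return v ** gamma

def linear_to_gamma(v, gamma=2.2):
    """Encode a linear light value (0..1) for display."""
    return v ** (1.0 / gamma)

def push_linear(encoded, ev):
    """ACR-style: decode to linear, multiply by 2**ev, re-encode."""
    lin = gamma_to_linear(encoded)
    return min(1.0, linear_to_gamma(lin * 2 ** ev))

def push_encoded(encoded, ev):
    """Naive push applied directly to the gamma-encoded value."""
    return min(1.0, encoded * 2 ** ev)

midtone = 0.46  # roughly middle grey after gamma encoding
print(push_linear(midtone, 2))   # ~0.86: two stops brighter, still below white
print(push_encoded(midtone, 2))  # 1.0: already clipped to pure white
```

Run on a whole image, the naive version is what produces the clipped highlights and compressed tonal range described above, and quantizing the result back to 8 bits is where the banding comes from.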
