It appears that you shoot handheld with potentially large camera movements between shots, and whatever alignment and deghosting algorithm Aurora uses is ill-equipped to deal with image sets shifted that much. If parts of the scene are aligned properly and parts (the clouds, for example) are not, then deghosting is the issue (assuming you are not zooming or moving into the scene between shots). To isolate deghosting as the cause, you would have to shoot a purely static scene with no moving elements but with large camera movements, and see how big those movements can get before the alignment algorithm breaks down. I agree with you that Aurora's current algorithm is not good.
I can see in some of the shots that the foreground (the fence along the water) is aligned but the mountains are not - obviously the mountains are not moving, so in addition to the alignment problem, whatever the deghosting algorithm is choosing to keep and choosing to reject is getting confused and not working. It cannot differentiate scene movement from camera movement in your test image sequence - based on the scatter of the multiple mountain profiles, it appears that the movement between shots is large.
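For anyone curious what the alignment step is actually solving: a common global-alignment technique is phase correlation, which recovers the translation between two frames from their Fourier spectra. The sketch below is a generic illustration in plain NumPy (grayscale, translation only), not Aurora's actual algorithm, which is not public. Real HDR aligners also have to cope with rotation, parallax, and brightness differences between exposures, which is exactly where they start to break down on large handheld shifts.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the (row, col) translation of `moved` relative to
    `ref` via phase correlation."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past the half-frame point wrap around to negative values.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, ref.shape))

# Synthetic check: a random "scene" circularly shifted by (5, -3).
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
moved = np.roll(scene, (5, -3), axis=(0, 1))
print(estimate_shift(scene, moved))          # (5, -3)
```

The synthetic case recovers the shift exactly because the shift is a pure translation; add rotation or scene motion (the deghosting problem) and a single global estimate like this is no longer sufficient.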
Shooting with a tripod, or steadying your handheld shots by leaning against a wall or some other support, would minimize these issues and make it easier for the application to separate true alignment shifts from motion within the scene that requires deghosting. If a tripod is too cumbersome, you could use a monopod, or a hiking pole with a 1/4"-20 threaded post bonded to the top for attaching a ballhead or other compact tripod head, or for threading directly into the base of your camera. If you choose to shoot handheld, then it appears other applications suit your style and workflow better.
Aurora is good at some things, not so good at others. Other applications can deal with larger shifts and potential movement within such sequences; Aurora, at this point, cannot. One problem with Aurora in this regard is that it does not accept 32-bit HDR image files (in Radiance or EXR format) as input, so you cannot merge/align/deghost in the most accurate and versatile application and then feed the resulting 32-bit file into Aurora for creating output. To me, this implies that Aurora performs exposure fusion (like Enfuse) rather than integrating true 32-bit files into its workflow. You are almost better off creating a flat 16-bit TIFF in Photomatix and using that single tonally compressed image as input to Aurora. Another approach would be to bring your raw images into Photoshop as layers, use Photoshop to align them, crop the excess, and then export each layer as a 16-bit TIFF. Then bring the TIFFs into Aurora and proceed as usual. Pretty clunky workflow, though.
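To illustrate the fusion-versus-merge distinction, here is a toy NumPy sketch on a single row of grayscale pixels. Exposure fusion blends the display-referred exposures directly with per-pixel "well-exposedness" weights (the Mertens-style idea behind Enfuse), while a true HDR merge divides each exposure by its exposure time to recover linear radiance, including values above white. The exposure times and the Gaussian weighting sigma are illustrative assumptions, not any application's actual parameters.

```python
import numpy as np

# A single row of true scene radiance, including one value the camera
# cannot capture in a single shot (1.5, i.e. above white).
radiance = np.array([0.05, 0.2, 0.6, 1.5])

# Two simulated exposures at illustrative exposure times; the sensor
# clips everything above 1.0.
times = (0.25, 1.0)
exposures = [np.clip(radiance * t, 0.0, 1.0) for t in times]

# Exposure fusion (Mertens/Enfuse style): weight each pixel by how
# close it sits to mid-grey (sigma 0.2 is an assumption), then blend.
# The output is display-ready LDR; no radiance map is reconstructed.
weights = [np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2)) for img in exposures]
fused = sum(w * img for w, img in zip(weights, exposures)) / sum(weights)

# True HDR merge: divide each exposure by its time to recover linear
# radiance, trusting the shorter exposure where the longer one clips.
merged = np.where(exposures[1] < 1.0,
                  exposures[1] / times[1],
                  exposures[0] / times[0])

print(fused.max() <= 1.0)   # True -- fusion stays display-referred
print(merged.max())         # 1.5  -- merge recovers above-white detail
```

An application built around fusion has no natural place to accept a 32-bit Radiance/EXR file, which would be consistent with Aurora rejecting those formats as input.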
It seems like SNS works for you and that Aurora is probably not going to add anything to your workflow or output.