Hi folks,
I have recently been trying a new approach to working with linear, 32-bit data. Enter "PixInsight", a cross-platform application that appears to have its roots and user base in astrophotography. Because many image processing tasks are universal, it turns out that PixInsight is a full-fledged image processing delight when it comes to making HDR images.
In this mini-tutorial, I would like to demonstrate the approach one may use to merge a sequence of raw images into a single, 32-bit image ready for further processing and finishing in Photoshop.
As some of you may be aware, I have been using an application called Zero Noise, developed by forum member Guillermo Luijk (aka "_GUI_"). I know _GUI_ has been busy, and it appears that Zero Noise has stalled, although it is still usable if you are persistent enough or use the Linux version. Coincidentally, PixInsight uses a very similar approach to making linear HDR data from multiple exposures, and it couldn't be simpler. Also recall, if you ever used Zero Noise, that the resulting image was scaled to the shortest (darkest) exposure of the sequence, so it took some clever post-processing and know-how to bring the full glory of the data back into the range most of us are used to editing in.
Well, I am happy to report that PixInsight merges images a lot like Zero Noise does, produces output that resembles Zero Noise's, and also provides the tools needed to bring the image back into the range most of us are comfortable editing. Here is a brief walkthrough.
1) Merging raw exposures
This is pretty straightforward. PixInsight will accept raw files and process them so that it can compare two images at a time, thresholding between the two and deciding which pixels to keep from the longer exposure and which to take from the shorter exposure. If you like, you can have PixInsight generate a binary map of the threshold - a black-and-white image that is black where the longer-exposure pixels were used and white where the shorter-exposure pixels were used.
Here is the merge interface:
Pretty straightforward. The "binarizing threshold" lets you adjust the level at which the map separates dark-exposure from light-exposure pixels. You press the little circle button in the bottom-left corner of the window, and a terminal window pops up to give you feedback on what is happening. Essentially, PixInsight is invoking dcraw like this:
dcraw -w -q 3 -t 0 -o 0 -4
This is nerd speak for "make me a high-quality (-q 3), linear 16-bit file (-4) with no output color profile (-o 0), white balanced using the camera's white balance setting, if possible (-w) - and don't rotate it (-t 0)."
So, the raw files are decoded and compared to see which pixels get used in the final merge.
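To make the idea concrete, here is a minimal sketch of this kind of pairwise threshold merge, assuming linear data normalized to [0, 1] and exposures a known number of EV apart. The function names and the 0.9 default threshold are my own assumptions for illustration, not PixInsight internals:

```python
import numpy as np

def merge_pair(long_exp, short_exp, ev_gap, threshold=0.9):
    """Merge two linear exposures of the same scene.

    Pixels where the longer exposure is at or above `threshold` (near
    clipping) are taken from the shorter exposure; everything else keeps
    the longer, cleaner exposure, scaled down by the EV gap so the result
    lives on the shorter frame's linear scale - which is one way to see
    why the merged image comes out so dark.  The boolean map returned is
    the "binarized" map: False (black) where the longer exposure was
    used, True (white) where the shorter one was.
    """
    use_short = long_exp >= threshold
    merged = np.where(use_short, short_exp, long_exp / 2.0 ** ev_gap)
    return merged, use_short

def merge_sequence(frames, ev_gaps, threshold=0.9):
    """Progressively merge frames, ordered longest to shortest exposure,
    comparing the running composite against each next frame."""
    composite = frames[0]
    for nxt, gap in zip(frames[1:], ev_gaps):
        composite, _ = merge_pair(composite, nxt, gap, threshold)
    return composite
```

For example, with a 1 EV gap, a pixel reading 0.95 in the long frame is considered clipped and is replaced by the short frame's value, while a 0.5 pixel is kept and halved onto the short frame's scale.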
Here is a binarized map, in this case for the longest shadow exposure compared with the midtones exposure:
In this map, the shadow tones that are kept are shown in black, and the midtones that are used from the next image are shown in white. The merger progressively compares the composite to the next image to combine all of the best pixels.
As we learned from using Zero Noise, this merging process does not require 7 exposures 1 EV apart to get the job done - you could use the first, middle and last to get the full dynamic range represented, with none of the artifacting that can happen with merging algorithms that average pixels in some weighted or non-weighted manner.
2) The Result
Here is what you get from the merge:
Pretty dark. To the right of the image is the "Histogram Transformation" window - for now, just notice that the histogram confirms what we already see in the image: virtually nothing except a spike at the very left edge. How about that?
Below the image is the Statistics window, telling you all about your pixels. This is important: to transform this data into something more usable, we need to stretch the histogram.
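To get a feel for what a midtones stretch does to this kind of data, here is a sketch of the midtones transfer function of the sort PixInsight's Histogram Transformation applies - it pins black and white in place and lifts the midtones. The helper `midtones_for` is my own convenience for picking the balance from a statistic like the median, not a PixInsight function:

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function: fixes 0 and 1, and maps the input
    level m to 0.5.  x is linear pixel data normalized to [0, 1];
    m is the midtones balance."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def midtones_for(level, target=0.25):
    """Solve for the midtones balance m that sends pixel `level` to
    `target` - handy when the merged image's median sits near black."""
    return level * (1.0 - target) / (level * (1.0 - 2.0 * target) + target)
```

For instance, if the Statistics window reports a median of 0.002, `mtf(midtones_for(0.002), image)` would lift that median to 0.25 while leaving pure black and pure white untouched.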