TCampbell wrote in post #18081213
Considering this is a 'straight out of the camera' shot, it looks pretty good.
I can point out the things that I do notice...
You are right in that the noise is a bit much. More on that later...
Something else I'm noticing is some optical trouble near the edges of the frame, where the stars are smearing tangentially. In other words, if you draw a line from the very center of the image to each of these bright stars around the edge, the stars are actually smeared in a direction perpendicular to that line. This is an optical issue with the lens... but don't feel bad... MANY (possibly most) lenses have this problem when you shoot wide-open.
While we typically want to shoot wide-open to collect more data, we can reduce these optical issues by stopping down. But stopping down requires longer exposures to compensate, and that ultimately means you typically need the camera to be mounted on a tracking head to allow for much longer exposure times.
Back to the noise...
There have been numerous discussions on ISO for astrophotography... it turns out that the difference between, say, ISO 1600 and ISO 3200 in camera (just an example -- nothing about your image) is that the camera literally doubles the values it saves to the file. It doesn't actually change the sensitivity of the camera sensor. In other words... if you shot at ISO 1600, simply told your post-processing software to increase the exposure by one full stop, and then compared the two images, you'd see that they look pretty much the same (probably subtle differences in how the algorithms work, but probably nothing you could notice with your eye.)
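Just to make that "doubling" idea concrete, here's a tiny toy sketch (the pixel values are made up, not from any real sensor): multiplying every value by 2 is exactly the same as pushing the exposure +1 stop in software, and it leaves the signal-to-noise ratio unchanged.

```python
# Toy demonstration: digital "ISO doubling" just scales every value,
# which scales the noise by the same factor, so SNR does not improve.
import statistics

iso1600_pixels = [100.0, 104.0, 98.0, 102.0, 96.0]  # hypothetical raw values

def snr(values):
    """Signal-to-noise ratio: mean divided by standard deviation."""
    return statistics.mean(values) / statistics.stdev(values)

# "ISO 3200" in the digital regime: the camera multiplies by 2.
iso3200_pixels = [2 * v for v in iso1600_pixels]

# Pushing +1 stop in post-processing is the very same multiplication.
pushed_pixels = [v * 2 ** 1.0 for v in iso1600_pixels]

print(iso3200_pixels == pushed_pixels)                         # True
print(round(snr(iso1600_pixels), 6) == round(snr(iso3200_pixels), 6))  # True
```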
Given a single unprocessed exposure and the time constraint of the 24mm lens (e.g. the "rule of 600": 600 ÷ 1.6 crop factor = 375, then 375 ÷ 24mm focal length = 15.625 seconds), using ISO 3200 certainly makes sense. But to get the noise levels down when the needs and constraints of the exposure would normally result in noise, you can shoot multiple exposures and then "stack" them.
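That arithmetic is easy to wrap in a small helper. (The constant 600 is a rule of thumb, not a law -- some people use 500 for a stricter trailing limit.)

```python
# The "rule of 600" arithmetic from above, as a tiny helper.

def max_exposure_seconds(focal_length_mm, crop_factor, rule=600):
    """Longest untracked exposure before stars visibly trail."""
    return rule / (crop_factor * focal_length_mm)

# 24mm lens on a 1.6x crop body (e.g. a T3i):
print(round(max_exposure_seconds(24, 1.6), 3))  # 15.625
```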
But it turns out there is a difference in what type of "noise" you get...
Camera sensors use analog amplification of the information on the sensor up to some ISO limit, which varies by camera sensor model... and to go beyond that limit they use simple digital amplification (not analog). The noise buildup from the analog amplification is called "upstream" read noise, and the noise buildup from the digital amplification is called "downstream" read noise.
It turns out that magic limit for your T3i is probably ISO 800 (because every other 18MP Canon sensor seems to be ISO 800 -- including my own 60Da).
Since everything after ISO 800 is just digital amplification (adding "downstream" read noise), it doesn't really make much sense to push the ISO beyond 800. In other words, ISO 800 is probably the ideal ISO for your camera when shooting astrophotography, and to increase the exposure beyond that it's best to increase the exposure time rather than the ISO (any boost beyond 800 you could just as easily apply in your post-processing software.)
Processing astrophotography images has a lot to do with a problem called the "signal to noise ratio". The "signal" is all the good data in your image. The bad data is the "noise". You want your images to have the highest amount of signal and the lowest amount of noise (in other words, you want the ratio of signal to noise to be as large as possible.)
There are several ways to deal with the noise.
One method is to use the "zone" system -- much like Ansel Adams' zone system except this system only considers four zones (Ansel's system uses many more zones.)
This "zone" system works well when you only have ONE image (such as your image here).
The idea of the "zone" system is that noise is the worst (and most noticeable) in the "dark" areas... and the weakest in the bright areas. So imagine creating four categories...
A very dark zone
A moderately dark or "dim" zone
A middle bright zone
and a very bright zone
Basically if you were to look at your image histogram you can divide it into those four parts and process each part a bit differently than the others.
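As a rough sketch of the idea, here's how you might classify pixels into the four zones. (The brightness thresholds below are arbitrary placeholders -- in practice you'd choose them by looking at your actual histogram.)

```python
# Sketch: split pixels into the four "zones" by brightness so each can be
# denoised/sharpened differently. Thresholds are made-up placeholders.

def zone_of(pixel, thresholds=(32, 96, 192)):
    """Classify an 8-bit pixel value into one of four brightness zones."""
    dark_max, dim_max, mid_max = thresholds
    if pixel <= dark_max:
        return "very dark"        # aggressive noise reduction, no sharpening
    if pixel <= dim_max:
        return "moderately dark"  # moderate noise reduction
    if pixel <= mid_max:
        return "middle bright"    # leave mostly alone
    return "very bright"          # safe to sharpen

pixels = [5, 60, 150, 240]
print([zone_of(p) for p in pixels])
# ['very dark', 'moderately dark', 'middle bright', 'very bright']
```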
The "very dark" zone:
You'll get a "bad" signal to noise ratio in the "very dark" zone -- likely the noise will be so great that it may actually overwhelm any possible "signal". If you isolate those "very dark" zone pixels, you can then apply very strong de-noising to them (you could even throw the data away entirely by setting it all to your black point, but there's a reason you might not want to do that -- more on that later) and you really don't need to fear that you are losing any important parts of the image. This data is mostly just the black background of space.
The "moderately dark" zone
This section may have (and probably has) data (signal) that you don't want to lose. I say "may have" because sometimes what you find in this zone is either light pollution or lens vignetting. So you usually try to normalize the image if possible. PixInsight (not free software, unfortunately) can actually build a background "model" that can be subtracted to eliminate light pollution or lens vignetting... but you can also do this by taking "flat" frames (flat frames are created by taking a middle-gray exposure of a perfectly evenly illuminated flat subject.) The reason for the fuss about vignetting or light pollution is that the image out of your camera isn't "stretched" -- but as you process the image, you may start "stretching" the histogram to tease out details and help exaggerate or amplify the interesting subject matter. And when you "stretch" the histogram, you stretch everything in it... including any lens vignetting or light pollution, which may previously have been subtle but becomes very obvious as a result of the stretch.
In any case, the "moderately dark" zone usually does not contain a lot of interesting detail -- it's often faint. So if you can isolate the pixels in this area, you can apply a moderately aggressive amount of noise reduction without fear that you'll lose much that's important.
The "middle bright" zone
This zone actually doesn't need much of anything -- at least, not much would help it. There's good news and bad news here. The good news is that it won't have as much noise as the darker zones... little enough that it doesn't need nearly as much noise reduction. The bad news is that it's not particularly noise-free either -- which means that if you attempted to "sharpen" this zone you'd only amplify the noisy pixels... which you don't want to do.
The "very bright" zone
These parts of the image have the best possible signal to noise ratio ... the lowest amount of noise (which means it doesn't need de-noising) and the highest amount of signal. High amounts of signal are good particularly because it means they can withstand a lot of sharpening to help tease out details... without amplifying noise.
How exactly you go about using photoshop to do all of this is a much much longer conversation -- but this is the general idea.
But there is a better way to deal with the noise... which is to use image stacking.
Image stacking involves actually taking LOTS of images. Ideally you literally let the camera take... say, 25 or so images of the same thing. You can't do this with a camera on a stationary mount because the sky keeps moving (ok, that's not quite accurate... the sky stays put and the Earth keeps moving... but the effect as seen from the ground is that the sky seems to be moving). Anyway, it REALLY helps to have a tracking mount to pull this off.
The tracker has a rotating axis and that axis is basically pointed toward the north celestial pole (that pole is about 2/3º away from Polaris -- the pole star.) You can put a ball-head on the tracker's rotating axis and that will let you point the camera anywhere you want in the sky ... just so the whole thing is rotating about the Earth's polar axis. The reason this works is because as the Earth spins from west-to-east... the tracker head is rotating from east-to-west on a rotating axis which is EXACTLY PARALLEL to the Earth's axis and also at the EXACT SAME RATE. This causes whatever your camera is pointed at to be held-fixed on exactly the same spot in the sky ... literally all night long if that's what you want.
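For a sense of the rate involved: the sky appears to rotate once per *sidereal* day (about 23h 56m, slightly less than 24 hours), so the tracker turns at roughly 15 degrees per hour. A quick back-of-the-envelope check:

```python
# The sky completes one apparent rotation per sidereal day, so a tracking
# mount turns at about 15 degrees per hour to cancel the Earth's spin.

SIDEREAL_DAY_SECONDS = 86164.1  # ~23h 56m 4.1s

degrees_per_hour = 360.0 / SIDEREAL_DAY_SECONDS * 3600.0
print(round(degrees_per_hour, 2))  # 15.04
```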
In addition to lots and lots of these normal exposures (which we call "light" frames), it's good to collect "dark" frames. These are frames taken with identical exposure settings, and even on the very same night (it turns out the amount of "noise" collected is fairly strongly related to the actual temperature of the chip at the time of capture... so it does little good to capture "light" frames on a 90º night outside and then capture the "dark" frames in a 74º air-conditioned room several days later, because the noise buildup simply will not be comparable.) Anyway, the major difference between a "light" frame and a "dark" frame is that to capture the dark frame... you simply put the lens cap on -- thus preventing the collection of any "signal" whatsoever... you now have samples in which everything in the image is, in fact, "noise". Typically you capture roughly half as many "dark" frames as "light" frames, and for statistical reasons it works better if this happens to be an odd number of images. So with 25 "light" frames you'd capture about 13 "dark" frames (not 12, because that's an even number.)
Why so many?
If you capture a few frames, the "stacking" software can align each frame to every other frame by matching up the positions of the stars. This process is called "registration". It can then "stack" the images -- this process is called "integration" -- and if you have just a few frames it will "average" the pixels from the input frames to create that master merged "output" frame. BUT... if you have a LOT of frames, it can use much better statistical methods (it doesn't just "average"). It can use, for example... one of several sigma clipping algorithms. These algorithms basically establish a bell curve for the value of each individual pixel. Suppose I have 11 frames (just an example)... and I've built a statistical model looking at all 11 frames in which I have this nice bell curve... and the value of light from the pixel I'm inspecting falls within a standard deviation on 10 of the 11 frames... but in just ONE of the 11 frames that same pixel appears to be a statistical outlier. The integration algorithm can assume that something is wrong with that pixel in the 11th frame... possibly an airplane flew through your image. Possibly it was noise. Whatever it is... the software can safely ignore the value of the outlier pixel and just go with the pixels that are not outliers.
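Here's a minimal sketch of that sigma-clipping idea for a single pixel position across a stack of frames. (The frame values and the 2-sigma threshold are just illustrative; real integration tools work on whole frames and offer several clipping variants.)

```python
# Sketch of sigma-clipped integration for ONE pixel position across a
# stack of frames: values far from the mean (an airplane trail, a hot
# pixel) are rejected before averaging.
import statistics

def sigma_clipped_mean(values, k=2.0):
    """Average the values after discarding outliers beyond k sigma."""
    mean = statistics.mean(values)
    sigma = statistics.stdev(values)
    kept = [v for v in values if abs(v - mean) <= k * sigma]
    return statistics.mean(kept)

# Ten frames agree; the eleventh caught something bright.
frames = [100.0, 101.0, 99.0, 100.0, 102.0, 98.0,
          100.0, 101.0, 99.0, 100.0, 250.0]

print(round(statistics.mean(frames), 1))     # 113.6 -- plain average is skewed
print(round(sigma_clipped_mean(frames), 1))  # 100.0 -- outlier rejected
```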
When you do this -- even feeding noisy data in to the stacking software -- you can get very silky smooth images out of the system (and here's the best part...) WITHOUT sacrificing any "sharpness" of the image.
It turns out there can be more than just "light" frames and "dark" frames.
There can also be something called "bias" frames. A bias frame is just like a "dark" frame in that you put the cap on the lens and shoot at the same ISO, but the major difference is that it attempts to measure the values from the camera sensor when the camera takes the shortest possible exposure -- at that same ISO setting. In other words, if your "light" and "dark" frames were shot at ISO 800 for 30 seconds, then your "bias" frames would ALSO be shot at ISO 800... but they'd be the shortest possible exposure (e.g. 1/4000th sec.)
There can also be something called "flat" frames.
Imagine taking something like an iPad tablet... setting the entire screen to gray... holding it in front of your camera lens (it's actually better if it's out of focus) and snapping several images of it. What you'd get is a bunch of "gray" images (assuming you didn't over- or under-expose). Those should be perfectly "flat" in that every pixel should be exactly the same brightness as every OTHER pixel. But the truth is that won't happen... you'll get lens vignetting and the corners will be darker. It might be imperceptibly darker... but when you "stretch" the histogram to bring out details while processing the full image, that "slight" variation in brightness will suddenly become very obvious (and annoying). So these "flat" frames allow the software to measure the difference and compensate by removing all vignetting issues from your images.
The "darks" and "bias" images are combined to create a master dark. The "flats" are combined to build a master flat. The software then processes each "light" frame (individually -- before they are combined) against these master dark & flat frames to create something called a "calibrated light" image.
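As a toy sketch of what the calibration math looks like per pixel -- assuming the usual formula of subtracting the master dark and dividing by a normalized master flat (all the pixel values below are invented for illustration; real tools apply this across whole frames):

```python
# Per-pixel calibration sketch:
#   calibrated = (light - master_dark) / normalized_master_flat
# where the flat is normalized so its average value is 1.0.
import statistics

light       = [110.0, 108.0, 95.0, 92.0]      # hypothetical raw pixel values
master_dark = [10.0, 10.0, 10.0, 10.0]        # stacked dark/bias signal
master_flat = [1000.0, 1000.0, 800.0, 800.0]  # vignetting: edge pixels darker

flat_mean = statistics.mean(master_flat)
normalized_flat = [f / flat_mean for f in master_flat]

calibrated = [(l - d) / f
              for l, d, f in zip(light, master_dark, normalized_flat)]
print(calibrated)  # dark signal removed, edge (vignetted) pixels brightened
```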
The "calibrated light" images are then matched up to each other by identifying the locations of stars to make sure they all stack perfectly -- meaning the computer may slightly nudge the image data around so that the stars in each image will be perfectly aligned to the stars in whichever image you picked as your reference (I typically pick the middle image in my series as the reference frame for alignment purposes.) This process is called image registration. Each "calibrated light" goes in... and you get a "registered calibrated light" out.
The last step is to take all of these registered, calibrated "light" images and merge them together (preferably using one of the aforementioned sigma clipping algorithms) to produce the ultimate master image -- the combined result of all the images you shot.
You can NOW go to work on that master image to start doing a few mechanical (non-artistic) steps such as color correction (white balance), background neutralization, etc. And once you complete all those somewhat routine processing steps you can get into the artsy steps in which you tease out the details, saturate the colors, and anything else that appeals to your eye.
It takes a while to learn all of this, so don't feel bad if it seems like a bit much to take in. Maybe you learn to do one new thing each time you process data, and that's progress. Over time the image processing steps actually start to make sense (initially it seems like rote memorization) and it all becomes easier and faster.