Thread started 29 Jul 2016 (Friday) 11:49

First Milky Way SOOC help please

 
dixiedawn
Member
195 posts
Gallery: 19 photos
Likes: 479
Joined Aug 2008
Location: AZ
Post edited over 2 years ago by dixiedawn.
Jul 29, 2016 11:49 |  #1

Single image, resized only. In case EXIF doesn't show: T3i, EF-S 24mm f/2.8 STM, f/2.8, 24mm, 13s, ISO 3200. My only editing software is PS Elements 10 or GIMP. I know focus isn't quite there; I will have to work on that. There is a lot of noise. Do I need to lower the ISO, or go a little longer? In post, am I really going to be able to do much with a single frame like this, or do I need to stack images? Is that something that PSE or GIMP can do? I'd appreciate any and all help.

Satellite lower right quarter, 2 faint meteors on left half. I know a darker location would help. Picture was taken in bright green / dark green zone per DarkSiteFinder website.

IMAGE: https://c4.staticflickr.com/9/8431/28015815563_cb6a67e695_b.jpg
IMAGE LINK: https://flic.kr/p/JFEnUP (external link) 72816milkyway (external link) by *dxd* (external link), on Flickr

debbie

Celestron
Cream of the Crop
8,494 posts
Gallery: 2 photos
Likes: 338
Joined Jun 2007
Location: Texas USA
Jul 29, 2016 12:09 |  #2

You might want to raise the exposure some. Looking at it from my phone it's pretty dark and hard to see. But you have a good capture of that section. Can you open the aperture wider than 2.8? If so, you might want to go to f/1.8 or wider if you can. With a 24mm you should be able to get 30 secs without trailing problems. Another option: while on the tripod, turn off the IS, and in your settings you should be able to turn on long-exposure noise reduction. Hope this helps some. Night photography is all trial and error. Try different settings to see what works best for you and your camera. I'm not familiar with the T3i, other than it's newer than my XSi 450D.




MalVeauX
"Looks rough and well used"
12,819 posts
Gallery: 1239 photos
Best ofs: 3
Likes: 8506
Joined Feb 2013
Location: Florida
Jul 29, 2016 12:36 |  #3

Heya,

In GIMP, you can use layers.

A few things:

Exposure time is fine. 24mm on APS-C gives you about 13 seconds without trails. You could squeeze out a few more seconds maybe, but you will definitely get trailing past 20 seconds on APS-C. F2.8 is the maximum on that lens, so you're done there too. All you had left was ISO. ISO 3200 on the T3i is not bad; it definitely can work. Here you just needed a bit more exposure overall to get more data into it. For this image, shooting several of the same frame (say 6~10) and stacking could clean up the use of ISO 3200 a bit more and let you stretch curves and push the file harder with less noise showing up. Something to try next time.
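
If you want to see what stacking buys you, here's a bare-bones Python/numpy sketch of a straight mean stack. The file names are made up, and it assumes 8-bit frames that are already aligned:

# Minimal mean-stack sketch; file names are hypothetical and the
# frames are assumed to be aligned 8-bit images of the same scene.
import numpy as np
from PIL import Image

files = ["mw_01.tif", "mw_02.tif", "mw_03.tif"]   # your 6~10 identical exposures
frames = [np.asarray(Image.open(f), dtype=np.float32) for f in files]

# Averaging N frames cuts random noise by roughly sqrt(N)
# while the real signal (stars, core) stays put.
stacked = np.mean(frames, axis=0)

Image.fromarray(np.clip(stacked, 0, 255).astype(np.uint8)).save("stacked.tif")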

White balance is up to you. Some use daylight white balance. I like to change temperature to something cooler, and then stretch curves to my desire. That's up to you.

Here, you might want to try making a layer, pushing curves to bring up some of the details in the shadow/darker areas, and blending it in with luminosity masks. I believe GIMP does this with layers just like PS does. From there, you can dodge & burn dust lanes and various areas of the core to bring out details. An unsharp mask can help define it more too.
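
For the luminosity-mask blend, a rough numpy sketch of the math (GIMP's layers and masks do the same thing interactively; the Rec. 709 weights and the inverted mask here are just one reasonable choice, not the only one):

# base and pushed are float RGB arrays in 0..1;
# pushed is a copy of base with curves raised in post.
import numpy as np

def luminosity_mask_blend(base, pushed):
    # Rec. 709 luminance of the base layer, used as the mask source.
    lum = 0.2126 * base[..., 0] + 0.7152 * base[..., 1] + 0.0722 * base[..., 2]
    # Inverted so the *darker* pixels take more of the brightened layer.
    mask = (1.0 - lum)[..., None]
    return base * (1.0 - mask) + pushed * mask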

Very best,


My Flickr (external link) :: My Astrobin (external link)

TCampbell
Senior Member
445 posts
Gallery: 13 photos
Likes: 275
Joined Apr 2012
Jul 29, 2016 13:44 |  #4

Considering this is a 'straight out of the camera' shot, it looks pretty good.

I can point out the things that I do notice...

You are right in that the noise is a bit much. More on that later...

Something else I'm noticing is some optical trouble near the edges of the frame, where the stars are smearing in a meridional direction. In other words, if you draw a line from the very center of the image to each of these bright stars around the edge, the stars are actually smearing in a direction perpendicular to that line. This is an optical issue with the lens... but don't feel bad... MANY (possibly most) lenses have this problem when you shoot wide-open.

While we typically want to shoot wide-open to collect more data, we can reduce these optical issues by stopping down. But stopping down requires longer exposures to compensate, and that ultimately means you typically need the camera to be mounted on a tracking head to allow for much longer exposure times.

Back to the noise...

There have been numerous discussions on ISO for astrophotography... it turns out the difference between shooting at ISO 1600 and ISO 3200 in camera (just to use an example and nothing about your image) is that the camera literally just doubles the values it saves to the file. It doesn't actually change the sensitivity of the camera sensor. In other words, if you shot at ISO 1600 and simply told your post-processing software to increase the exposure by 1 full stop and then compared the two images, you'd see that they look pretty much the same (probably subtle differences in how the algorithms work, but probably nothing you could notice with your eye.)

Given a single unprocessed exposure and the time constraint of the 24mm lens (e.g. 600 ÷ 1.6 crop factor = 375; 375 ÷ 24mm focal length = 15.625 seconds), using ISO 3200 certainly makes sense. But when the needs and constraints of the exposure would normally result in noise, you can get the noise levels down by shooting multiple exposures and then "stacking" them.
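
That arithmetic as a tiny Python helper (remember the 600 figure is a rule of thumb, not a law):

# The "600 rule" from above, as a one-liner.
def max_exposure_seconds(rule=600.0, crop=1.6, focal_mm=24.0):
    return rule / crop / focal_mm

print(max_exposure_seconds())   # 600 / 1.6 / 24 = 15.625 seconds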

But it turns out there is a difference in what type of "noise" you get...

Camera sensors use analog amplification of the signal coming off the sensor up to some ISO limit (which varies by sensor model)... and beyond that limit they use simple digital amplification (not analog). The noise buildup from analog amplification is called "upstream" read noise and the noise buildup from digital amplification is called "downstream" read noise.

It turns out that magic limit for your T3i is probably ISO 800 (because every other 18MP Canon sensor seems to be ISO 800 -- including my own 60Da).

Since everything after ISO 800 is "downstream" amplification, it doesn't really make much sense to push the ISO beyond ISO 800. In other words, ISO 800 is probably the ideal ISO for your camera when shooting astrophotography, and to increase the exposure beyond that it's best to increase the exposure time rather than the ISO (because any boost beyond 800 you could just as well apply in your post-processing software.)
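
A toy numpy demonstration of that point (simulated Poisson shot noise; the numbers are arbitrary): purely digital gain multiplies signal and noise alike, so the signal-to-noise ratio doesn't move.

import numpy as np

rng = np.random.default_rng(0)
frame = rng.poisson(40.0, size=(512, 512)).astype(np.float32)

def snr(img):
    return img.mean() / img.std()

# Digital gain scales every pixel, noise included, so SNR is unchanged:
print(snr(frame), snr(frame * 4.0))   # prints the same value twice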

Processing astrophotography images has a lot to do with a problem called the "signal to noise ratio". The "signal" is all the good data in your image. The bad data is the "noise". You want your images to have the highest amount of signal and the lowest amount of noise (in other words, you want the ratio of signal to noise to be as large as possible.)

There are several ways to deal with the noise.

One method is to use the "zone" system -- much like Ansel Adams' zone system except this system only considers four zones (Ansel's system uses many more zones.)

This "zone" system works well when you only have ONE image (such as your image here).

The idea of the "zone" system is that noise is the worst (and most noticeable) in the "dark" areas... and the weakest in the bright areas. So imagine creating four categories...

A very dark zone
A moderately dark or "dim" zone
A middle bright zone
and a very bright zone

Basically if you were to look at your image histogram you can divide it into those four parts and process each part a bit differently than the others.

The "very dark" zone:

You'll get a "bad" signal to noise ratio in the "dark" zone -- likely the noise will be so great that it may actually overwhelm any possible "signal". If you isolate those "very dark" zone pixels you can then apply very strong de-noising to them (you could even throw away any data by just setting it all to your black point but there's a reason you might not want to do that -- more on that later) and you really don't need to fear that you are losing any important parts of the image. This data is mostly just the black background of space.

The "moderately dark" zone

This section may have (and probably has) data (signal) that you don't want to lose. I say "may have" because sometimes what you find in this zone is either light pollution or lens vignetting. So usually you try to normalize the image if possible. PixInsight (not free software, unfortunately) can actually build a background "model" that can be used to subtract and eliminate light pollution or lens vignetting... but you can also do this by taking "flat" frames (flat frames are created by taking a middle-gray exposure of a perfectly evenly illuminated flat subject). The reason for the fuss about the vignetting or light pollution is that the image out of your camera isn't "stretched" -- but as you process the image, you may start "stretching" the histogram to tease out details and help exaggerate or amplify the interesting subject matter in your image. And when you "stretch" the histogram you stretch everything in it... including any lens vignetting or light pollution, which may have previously been "subtle" but becomes very "obvious" as a result of the stretch.

In any case, the "moderately dark" zone usually does not contain a lot of interesting detail -- it's often faint. So if you can isolate the pixels in this area, you can apply a moderately aggressive amount of noise reduction without fear that you'll lose much that's important.

The "middle bright" zone

This zone actually doesn't need much of anything -- at least not much that can help it. It has both good news and bad news. The good news is that it won't have as much noise as the darker zones... little enough that it doesn't need nearly as much noise reduction. But the bad news is that it's also not particularly noise-free either -- which means that if you attempted to "sharpen" this zone you'd only amplify the noisy pixels -- which you don't want to do.

The "very bright" zone

These parts of the image have the best possible signal to noise ratio ... the lowest amount of noise (which means it doesn't need de-noising) and the highest amount of signal. High amounts of signal are good particularly because it means they can withstand a lot of sharpening to help tease out details... without amplifying noise.
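
If it helps to see the zone idea sketched in code, here is a rough Python/SciPy version. This is only an illustration: the thresholds and blur strengths are arbitrary placeholders you would tune against your own histogram, and real selections would be feathered rather than hard-edged.

# Four-zone denoise sketch; img is a float grayscale array in 0..1.
import numpy as np
from scipy.ndimage import gaussian_filter

def zone_denoise(img):
    zones = [
        (img < 0.10, 2.0),                     # very dark: strong denoise
        ((img >= 0.10) & (img < 0.30), 1.0),   # moderately dark: moderate
        ((img >= 0.30) & (img < 0.60), 0.3),   # middle bright: barely touch
    ]                                          # very bright (>= 0.60): untouched
    out = img.copy()
    for mask, sigma in zones:
        out[mask] = gaussian_filter(img, sigma)[mask]
    return out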

How exactly you go about using Photoshop to do all of this is a much, much longer conversation -- but this is the general idea.


But there is a better way to deal with the noise... which is to use image stacking.

Image stacking involves actually taking LOTS of images. Ideally you literally let the camera take... say 25 or so images of the same thing. You can't do this for long with a camera on a stationary mount because the sky keeps moving (ok, that's not quite accurate... the sky stays put and the Earth keeps moving... but the effect as seen from the ground is that the sky seems to be moving). Anyway, it REALLY helps to have a tracking mount to pull this off.

The tracker has a rotating axis, and that axis is basically pointed toward the north celestial pole (that pole is about 2/3º away from Polaris -- the pole star.) You can put a ball-head on the tracker's rotating axis, and that will let you point the camera anywhere you want in the sky... so long as the whole thing is rotating about the Earth's polar axis. The reason this works is that as the Earth spins from west-to-east... the tracker head is rotating from east-to-west on an axis which is EXACTLY PARALLEL to the Earth's axis and at the EXACT SAME RATE. This causes whatever your camera is pointed at to be held fixed on exactly the same spot in the sky... literally all night long if that's what you want.
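
To put a number on "held fixed": untracked, the sky drifts past the sensor at the sidereal rate, roughly 15 arc-seconds per second of time. A back-of-envelope Python check (the 4.3 µm pixel pitch is my approximation for an 18 MP APS-C sensor like the T3i's):

import math

def trail_pixels(exposure_s, focal_mm, pixel_um=4.3, declination_deg=0.0):
    scale = 206.265 * pixel_um / focal_mm          # arcsec per pixel
    drift = 15.04 * exposure_s * math.cos(math.radians(declination_deg))
    return drift / scale                           # arcsec moved / arcsec per px

print(round(trail_pixels(13, 24), 1))   # ~5.3 px of drift at 13 s, 24mm, equator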

In addition to lots and lots of these normal exposures (which we call "light" frames), it's good to collect "dark" frames. These are frames taken with identical exposure settings, and even on the very same night (it turns out the amount of "noise" collected is also rather strongly related to the actual temperature of the chip at the time of capture... so it does little good to capture "light" frames on a 90º night outside and then capture the "dark" frames in a 74º air-conditioned room several days later, because the noise buildup simply will not be comparable). Anyway, the major difference between a "light" frame and a "dark" frame is that to capture the dark frame... you simply put the lens cap on -- thus preventing the collection of any "signal" whatsoever... you now have samples in which everything in the image is, in fact, "noise". Typically you capture roughly half as many of these "dark" frames as "light" frames, and for statistical reasons it works better if this happens to be an odd number of images. So with 25 "light" frames you'd capture about 13 "dark" frames (not 12, because that's an even number.)

Why so many?

If you capture a few frames, the "stacking" software can align each frame to every other frame by matching up the positions of the stars. This process is called "registration". It can then "stack" the images -- this process is called "integration" -- and if you have just a few frames it will "average" the pixels from the input frames to create that master merged "output" frame. BUT... if you have a LOT of frames, it can use much better statistical methods (it doesn't just "average"). It can use, for example... one of several sigma clipping algorithms. These algorithms basically establish a bell curve for the value of each individual pixel. Suppose I have 11 frames (just an example)... and I've built a statistical model looking at all 11 frames in which I have this nice bell curve... and the value of light from the pixel I'm inspecting falls within a standard deviation on 10 of the 11 frames... but in just ONE of the 11 frames that same pixel appears to be an outlier (statistically). The integration algorithm can assume that something is wrong with that pixel in the 11th frame... possibly an airplane flew through your image. Possibly it was noise. Whatever it is... the software can safely ignore the value of the outlier pixel and just go with the pixels that are not outliers.

When you do this -- even feeding noisy data in to the stacking software -- you can get very silky smooth images out of the system (and here's the best part...) WITHOUT sacrificing any "sharpness" of the image.
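
Here's a minimal sigma-clip integration sketch in Python/numpy, assuming the frames are already registered into one (N, H, W) array. Real integration tools (PixInsight, etc.) are far more careful than this; it only shows the idea:

import numpy as np

def sigma_clip_stack(frames, kappa=2.5):
    # frames: (N, H, W) array of registered exposures.
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    keep = np.abs(frames - mean) <= kappa * std    # flag per-pixel outliers
    kept_sum = np.where(keep, frames, 0.0).sum(axis=0)
    counts = keep.sum(axis=0)
    return kept_sum / np.maximum(counts, 1)        # average only the survivors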

It turns out there can be more than just "light" frames and "dark" frames.

There can also be something called "bias" frames (just like a "dark" frame in that you put the cap on the lens and shoot at the same ISO, but with the shortest possible exposure -- the idea is to measure the values the camera sensor produces with essentially no exposure at all at that ISO setting). In other words, if your "light" and "dark" frames were shot at ISO 800 for 30 seconds, then your "bias" frames would ALSO be shot at ISO 800... but at the shortest possible exposure (e.g. 1/4000th sec.)

There can also be something called "flat" frames.

Imagine taking something like an iPad... setting the entire screen to gray... holding it in front of your camera lens (it's actually better if it's out of focus) and snapping several images of it. What you'd get is a bunch of "gray" images (assuming you didn't over- or under-expose). Those should be perfectly "flat" in that every pixel should be exactly the same brightness as every OTHER pixel. But the truth is that won't happen... you'll get lens vignetting and the corners will be darker. It might be imperceptibly darker... but when you "stretch" the histogram to bring out details when you process the full image, that "slight" variation in brightness will suddenly become very obvious (and annoying). So these "flat" frames allow the software to measure the difference and compensate, removing all vignetting issues from your images.

The "darks" and "bias" images are combined to create a master dark. The "flats" are combined to build a master flat. The software then processes each "light" frame (individually -- before they are combined) against these master dark & flat frames to create something called a "calibrated light" image.
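
Reduced to its arithmetic, the calibration step looks roughly like this (a sketch only; it assumes the master dark already includes the bias level, and conventions vary between tools):

import numpy as np

def calibrate(light, master_dark, master_flat):
    flat_norm = master_flat / master_flat.mean()   # normalize the flat to ~1.0
    return (light - master_dark) / flat_norm       # subtract dark, divide out vignetting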

The "calibrated light" images are then matched up to each other by identifying the locations of stars, making sure they all stack perfectly (meaning the computer may slightly nudge the image data around so that the stars in each image are perfectly aligned to the stars in whichever image you picked as your reference -- I typically pick the middle image in my series as the reference frame for alignment). This process is called image registration. Each "calibrated light" goes in... and you get a "registered calibrated light" out.

The last step is to take all of these registered, calibrated light images and merge them together (preferably using one of the aforementioned sigma clipping algorithms) to produce the ultimate master image -- the combined result of all the images you shot.

You can NOW go to work on that master image to start doing a few mechanical (non-artistic) steps such as color correction (white balance), background neutralization, etc. And once you complete all those somewhat routine processing steps you can get into the artsy steps in which you tease out the details, saturate the colors, and anything else that appeals to your eye.

Overwhelmed?

It takes a while to learn all of this, so don't feel bad if it seems like a bit much to take in. Maybe you learn to do one new thing each time you process data, and that's progress. Over time the image processing steps actually start to make sense (initially it seems like rote memorization) and it all becomes easier and faster.




pdxbenedetti
Senior Member
312 posts
Gallery: 2 photos
Likes: 1019
Joined Jul 2015
Location: Salt Lake City, United States
Jul 29, 2016 14:10 |  #5

Listen to this guy ^^^^^^^^^^^^^^^

If you want to feel even more overwhelmed, read this:

http://www.clarkvision.com/articles/nightscapes/ (external link)

Take your time; getting good at this hobby doesn't happen overnight, but you have a good start. And shoot as much as you possibly can; putting what you learn into practice and then learning more is the only way to get better.


flickr (external link)
SmugMug (external link)
Facebook (external link)

Celestron
Cream of the Crop
8,494 posts
Gallery: 2 photos
Likes: 338
Joined Jun 2007
Location: Texas USA
Jul 29, 2016 14:13 |  #6

TCampbell wrote in post #18081213 (external link)
Considering this is a 'straight out of the camera' shot, it looks pretty good.

...............

That's a long read, will read more when I have more time. Working now.




dixiedawn
THREAD STARTER
Member
195 posts
Gallery: 19 photos
Likes: 479
Joined Aug 2008
Location: AZ
Jul 29, 2016 20:06 |  #7

Thank you everyone, this is all very helpful.

After some reading, and being overwhelmed, and reading some more -- then working with the images I have from last night -- I think I'm starting to get the idea, but something isn't working quite right. The processed image is a stack of 4 or 5 frames (made a copy from visible). I did use a "black" frame taken last night to remove hot pixels before adding each layer to the stack. If I add any more frames, the overall image starts to become blown out. I know some have posted images here that are stacks of 15+.

I did play around with curves, levels, color balance, unsharp etc on the shown stack.

IMAGE: https://c2.staticflickr.com/9/8775/28023954033_4af20d5d3f_b.jpg
IMAGE LINK: https://flic.kr/p/JGo6cg (external link) MilkywayStack (external link) by *dxd* (external link), on Flickr

one original sooc layer

IMAGE: https://c3.staticflickr.com/9/8135/28022919914_aa73ee50ab_c.jpg
IMAGE LINK: https://flic.kr/p/JGhMMC (external link) IMG_3349 (external link) by *dxd* (external link), on Flickr

debbie

Gas Hog
I like a good quack in the morning.
711 posts
Likes: 981
Joined Jan 2014
Location: Lost and found
Jul 30, 2016 03:13 |  #8

TCampbell wrote in post #18081213 (external link)

I was just about to post exactly what Campbell said... you know, if I knew what I was doing, that is.
Excellent read.
Gary


Feel free to not "Like" any of my photos. Until the Like button is corrected..I wont be "liking" Yours!
Davenn
Senior Member
968 posts
Gallery: 32 photos
Likes: 441
Joined Jun 2013
Location: Sydney, Australia
Post edited over 2 years ago by Davenn.
Jul 31, 2016 07:58 |  #9

dixiedawn wrote in post #18081492 (external link)
Thank you everyone, this is all very helpful.

............... I think I'm starting to get the idea, but something isn't working quite right. The processed image is a stack of 4 or 5 frames(made copy from visible) I did use a "black" frame taken last night to remove hot pixels before adding each layer to the stack. If I add any more frames, the overall image starts to become blown out. I know some have posted images here that are stacks of 15+.

.

I am a bit worried about the bolded comment.
Please clarify... 4 or 5 individually taken exposures, or 4 or 5 copies of one exposure?

If the first, that's OK. If the latter, you can't do that, as it defeats the purpose of stacking, which is to enhance objects and decrease noise.


Dave


A picture is worth 1000 words ;)
Canon 5D3, 6D, 700D, a bunch of lenses and other bits, ohhh and some Pentax stuff ;)

dixiedawn
THREAD STARTER
Member
195 posts
Gallery: 19 photos
Likes: 479
Joined Aug 2008
Location: AZ
Jul 31, 2016 11:03 |  #10

Davenn wrote in post #18082522 (external link)
I am a bit worried about the bolded comment
please clarify .... 4 or 5 individually taken exposures or 4 or 5 copies of one exposure

4 or 5 individual; I did my best to align them. The part about the copy of visible is because I made a copy of visible, opened it as a new image, and closed the original stack. I didn't remember how many layers there were LOL. I didn't save that one because I need the practice anyway. Then I did the rest of the post-processing on the new, copied image.


debbie
