pdxbenedetti wrote in post #18064351
How about one sentence: not using my tracker, with my Nikon D600 I use a 24mm f/1.4 lens for the largest aperture possible, set the aperture to f/1.4 and ISO 1600 (the ISO-less point for my camera), and take a 12-15 second exposure.
My reasoning:
The two single most important things for getting the highest quality Milky Way images are 1) clear aperture area and 2) exposure time. Ultra wide angle lenses (think lenses between 10-20mm) don't offer large aperture areas; generally the largest is f/2 or f/2.8. The amount of light your lens collects is proportional to the aperture and focal length. The Tokina 11-16mm f/2.8 (for example) has a clear aperture area of (11/2.8), which equals 3.92mm, and the Rokinon 24mm f/1.4 has a clear aperture area of (24/1.4), which equals 17.14mm. This means the Rokinon collects (17.14/3.92)^2 = 19 times as much light. So even though you COULD take a 54 second exposure with the Tokina (using the rule of 600 and not taking the DSLR crop sensor factor into account) versus a 25 second exposure with the Rokinon without getting star trails, you'd still be collecting 9 times as much light with the 25 second exposure from the Rokinon.
You are absolutely right that the 24mm f/1.4 lens will ultimately collect more light than a 14mm f/2.8 lens, but the math has some errors which overstate the difference.
The area of a circle is computed as: Area = Pi x Radius^2
But it turns out there's more to it than this, because a large opening at the back of a very long "tube" might actually collect less light than a smaller opening at the back of a relatively short tube (maybe... depending on the values). This is because a sensor at the end of a very long tube can only capture light if those photons were already headed nearly straight down the length of the tube; anything at much of an angle will be masked out by the walls of the "tube". A short "tube", on the other hand, lets photons enter and still hit the sensor from a relatively wide angle. Substitute "lens" for "tube". This relationship is succinctly captured in the "focal ratio" of the lens.
A lens with a lower focal ratio captures more light, because the ratio factors in both the "tube length" and the "aperture opening" size.
It also turns out that, because of the formula for the area of a circle, each time the diameter (or radius - it doesn't matter which) of a circle (or in our case a lens aperture opening) is increased by a factor equal to the square root of 2 (roughly 1.41), the "area" of that circle is exactly doubled. This is why the f-stops on a camera that we think of as "full" stops are not neat integer numbers, but rather are based on powers of the square root of 2.
e.g.:
√2^0 = 1 (anything raised to the zero power is 1)
√2^1 = 1.4 (rounded because f-stops on cameras only use the first two digits as significant)
√2^2 = 2
√2^3 = 2.8
√2^4 = 4
√2^5 = 5.6
√2^6 = 8
√2^7 = 11 (a little more rounding here ... technically it would be 11.3, but again, photography only uses the first two digits as being significant)
√2^8 = 16
√2^9 = 22
and so on. The pattern is that the base is always the square root of two (√2) raised to some power, and it works out to all the full f-stops we use in photography.
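If you want to see that pattern for yourself, here's a tiny Python sketch (nothing camera-specific, it just reproduces the list above and compares it to the familiar markings):

```python
# The familiar full stops are just powers of sqrt(2), lightly rounded for marking.
import math

marked = [1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22]   # stops as engraved on lenses
for n, m in enumerate(marked):
    exact = math.sqrt(2) ** n
    print(f"sqrt(2)^{n} = {exact:6.3f}   marked as f/{m}")
```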
So a lens which offers an f/1.4 focal ratio gathers exactly twice as much light as a lens at f/2. It gathers four times more light than a lens at f/2.8.
Suppose we use the rule of 600 (instead of 500... although the specific number doesn't matter for illustrating the point; many photographers prefer the lower number to make sure they don't see any star elongation even when looking very closely). 600 ÷ 24 = 25, so you can do a 25 second shot; 600 ÷ 14 ≈ 43, so roughly a 43 second shot. HOWEVER... the 24mm is an f/1.4 lens and the 14mm is an f/2.8 lens. This means the f/1.4 lens collects four times as much light for each second the shutter is open as compared to the f/2.8 lens. So ultimately it's "as if" you get to quadruple the exposure time when using the f/1.4 lens. In other words, multiply 25 seconds by 4 and you get 100 seconds. So the 24mm lens collects as much light in 25 seconds as it would take the f/2.8 lens 100 seconds to capture (of course neither lens can take a 100 second exposure on a stationary tripod without very noticeable star elongation).
This means that if both lenses were f/2.8 lenses, it would be "as if" the 24mm lens could somehow take a 100 second image whereas the 14mm can only take a 43 second image. So the 24mm f/1.4 is getting to capture an exposure which is literally 2.33 (two and a third) times longer. The magic is in the focal ratio being so very low that it more than makes up for the focal length difference. If it were only an f/2 lens then it wouldn't be so very different (only about a fifth of a stop better, and not really enough to be very noticeable).
At 35mm f/1.4 the lens is still better than a 14mm f/2.8, but now it's "as if" the 35mm can capture an exposure which is about 60% longer -- so it's still more than what the 14mm can do, but only by roughly two-thirds of a stop.
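To put the same arithmetic in one place, here's a small Python sketch of the comparison (the rule-of-600 constant and the f/2.8 baseline are just the assumptions used above, nothing fundamental):

```python
# Rule-of-600 exposure time, scaled by how much more light a faster focal
# ratio delivers per second relative to the f/2.8 baseline used in this post.

RULE = 600          # some prefer 500; the exact constant doesn't change the comparison
REFERENCE_F = 2.8   # the 14mm f/2.8 is the baseline lens in the discussion above

def effective_exposure(focal_length_mm, f_number):
    max_exposure = RULE / focal_length_mm            # longest untrailed exposure, seconds
    light_factor = (REFERENCE_F / f_number) ** 2     # light per second vs. f/2.8
    return max_exposure, max_exposure * light_factor

for name, fl, fn in [("14mm f/2.8", 14, 2.8),
                     ("24mm f/1.4", 24, 1.4),
                     ("24mm f/2.0", 24, 2.0),
                     ("35mm f/1.4", 35, 1.4)]:
    raw, eff = effective_exposure(fl, fn)
    print(f"{name}: {raw:5.1f} s untrailed, 'as if' {eff:6.1f} s at f/2.8")
```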
Astrophotography is all about collecting as many of the few photons of light available at night as possible. To do that you need a lens with a wide aperture and a moderate focal length, and you need to take as long an exposure as possible. Everything else is digitization and manipulation of the signal after it's been converted by your sensor.

ISO is a vestigial term from the days of film photography that means essentially nothing for digital photography. People tend to have the idea that ISO still has some link to sensitivity (as in the days of film); it doesn't. All it is is a digital amplification of the post-sensor signal by your camera. Every modern DSLR behaves the same way: at some point increasing ISO no longer reduces the noise coming off your sensor, and that's the point you should stop increasing ISO. For most cameras these days that point is around ISO 800 or 1600, for some up to 3200.

Shooting higher than that point only decreases the dynamic range of your sensor, which means you start clipping highlights (and the highlights in astro images are stars). Ever wonder why the majority of stars in most astro images are white instead of a wide variety of colors (blue, red, orange, yellow, and white)? It's because people are shooting at too high an ISO, clipping the stars' highlights, and the color data is lost. You're better off shooting at the ISO-less point and then increasing exposure in post; doing that maintains dynamic range and doesn't introduce any more noise than just increasing ISO.
Very good points are made here. This is something that is missed by many photographers who use digital cameras. If you take a shot at, say, ISO 3200 and then take another identical shot at ISO 1600, but pull that second shot into photo adjustment software and boost the ISO 1600 exposure by exactly one full stop, you will end up with identical images (or at least you should, if your image processing software is working correctly). All the camera does when you boost ISO is take the data it reads out from the chip and multiply it by some value based on the ISO (and there's no reason you couldn't just have your computer do that instead of having your camera do it).
Noise can come from a variety of sources, and at low ISOs many chips actually do deliver a better ratio of "signal" to "noise" as you boost the ISO value... but there's a point where all of that ends. On my 60Da it ends just after ISO 800. On my 5D II and 5D III it ends just after ISO 1600. This means it really doesn't matter how much I boost the ISO after that magic value... I'll be boosting the noise every bit as much as I'm boosting the signal data, and the image won't really be better.
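Here's a toy Python model of that "magic ISO" idea. The noise numbers are made up purely for illustration (they are not real specs for any of these cameras); the point is just that once the amplified upstream noise swamps the fixed downstream noise, raising ISO further buys you nothing:

```python
# Toy model: some read noise is added before the ISO amplifier (upstream) and
# some after it (downstream).  Raising ISO only helps while the downstream
# noise still matters; past that point the SNR flattens out.
import numpy as np

rng = np.random.default_rng(0)
signal_e = 20.0          # mean photoelectrons in a faint sky pixel (assumed)
upstream_noise = 3.0     # read noise before amplification, electrons (assumed)
downstream_noise = 10.0  # ADC/downstream noise, output units (assumed)

def snr_at_gain(gain, n=200_000):
    photons = rng.poisson(signal_e, n)                    # shot noise
    pre_amp = photons + rng.normal(0, upstream_noise, n)  # upstream read noise
    output = gain * pre_amp + rng.normal(0, downstream_noise, n)
    return output.mean() / output.std()

for iso, gain in [(100, 1), (200, 2), (400, 4), (800, 8),
                  (1600, 16), (3200, 32), (6400, 64)]:
    print(f"ISO {iso:4d}: SNR ~ {snr_at_gain(gain):.2f}")
```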
So it turns out that what most astrophotographers will learn at some point is that it's really all about trying to improve the signal to noise ratio (SNR). This is why astrophotographers who use telescopes to shoot deep-space objects will take many many images of the same exposure (and may even use dithering) and also take "dark" frames and "bias" frames.
If I take two identical images back to back, the real signal data in each image should be the same... but the background noise should (hopefully) be different. Computer software can then align the frames to match and then "average" the data in the pixels and this will help smooth out the noise (the bad stuff) with the signal data (the good stuff like stars and nebulosity).
There's a square-root relationship here (the same statistics behind Poisson shot noise): your ability to knock back the noise improves with the square root of the number of images you shoot. So if you shoot 4 images (the square root is 2) you can do twice as good a job at getting rid of the noise as with a single frame. If you shoot 9 images you should be able to do 3x better... or shoot 16 or 25 images to do 4 or 5 times better (respectively).
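A quick Python sketch of both ideas (plain averaging, and the square-root improvement) on purely synthetic frames; nothing here depends on any particular camera:

```python
# Average N noisy "frames" of the same flat patch of sky and watch the residual
# noise fall roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
scene = np.full((100, 100), 50.0)            # the "real" signal: a flat patch of sky

def residual_noise(n_frames, noise_level=10.0):
    frames = scene + rng.normal(0, noise_level, (n_frames, *scene.shape))
    stacked = frames.mean(axis=0)            # simple average stack
    return stacked.std()

for n in [1, 4, 9, 16, 25]:
    print(f"{n:2d} frames: noise ~ {residual_noise(n):5.2f}"
          f"  (expected ~ {10.0 / np.sqrt(n):5.2f})")
```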
Something a little more magical can happen if you shoot 10 or more images and then use "sigma clipping" instead of simple "averaging". Sigma clipping is a statistical way to combine the images which works so well that even if a plane or satellite flew through one of the frames it can be completely removed. The idea is to find statistical outliers in the data. It is as if each image gets to "vote" on the value of each pixel. If you shoot 10 frames and a given pixel is "black" in 9 out of 10 of them but "white" in just 1, the "white" value is treated as a statistical anomaly (an outlier) and is simply ignored by the software, while every other pixel in that same frame, which does agree statistically with the corresponding pixels in the other frames, still gets used. So you still use the image where the airplane or satellite flew through, and in the final combined image the satellite or airplane trail is completely gone -- as if by magic (ah, the magic of math!).
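Here's a bare-bones Python/NumPy version of that idea (real stacking programs are more sophisticated - they iterate and work on registered raw data - but the rejection step looks roughly like this):

```python
# Bare-bones sigma-clipped stack: reject pixel values that sit far from the
# per-pixel median across the (already aligned) frames, then average the rest.
import numpy as np

def sigma_clip_stack(frames, kappa=3.0):
    """frames: array of shape (n_frames, height, width), already registered."""
    frames = np.asarray(frames, dtype=float)
    center = np.median(frames, axis=0)            # robust per-pixel centre
    std = frames.std(axis=0)
    outliers = np.abs(frames - center) > kappa * std
    surviving = np.ma.masked_array(frames, mask=outliers)
    return surviving.mean(axis=0).filled(center)  # average of the surviving values

# Ten frames of flat sky, plus one bright "satellite" value in a single frame.
rng = np.random.default_rng(2)
stack = rng.normal(50, 5, (10, 4, 4))
stack[3, 2, 2] = 4000.0                            # the outlier
print(f"plain average : {stack.mean(axis=0)[2, 2]:.1f}")        # dragged way up
print(f"sigma clipped : {sigma_clip_stack(stack)[2, 2]:.1f}")   # back near 50
```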
Camera sensors will have some random noise, but they will also have some pattern noise or stuck pixels. This means noise can pile up in the same spots on every image you shoot, and this would fool the stacking software into believing the "noise" is actually valid data (dim stars), so it gets left in the image instead of being removed.
There are two ways to get around this. One way (the most common) is to shoot "dark" frames. These are identical exposures... except the lens cap is on the camera. The noise still piles up in the frame, but no actual "light" from the subject reaches the sensor. In other words, you have just created a measurement of what is exclusively noise and no good "signal". The computer software can now take numerous samples of this "noise" data to build a statistical model of the noise and then "subtract" that noise from every "light" image that you shot.
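In sketch form, with synthetic data standing in for real frames (a minimal illustration, not a substitute for proper calibration software):

```python
# Dark-frame calibration sketch: average several darks (same exposure/ISO, lens
# cap on) into a master dark, then subtract it from each light frame.
import numpy as np

def make_master_dark(dark_frames):
    # Averaging many darks keeps the fixed pattern but smooths the random part.
    return np.mean(np.asarray(dark_frames, dtype=float), axis=0)

def calibrate(light_frame, master_dark):
    # Subtract the pattern; clamp at zero since pixel values can't go negative.
    return np.clip(np.asarray(light_frame, dtype=float) - master_dark, 0, None)

# Synthetic example: a hot pixel that shows up in darks and lights alike.
rng = np.random.default_rng(3)
hot = np.zeros((4, 4)); hot[1, 1] = 300.0                    # fixed-pattern hot pixel
darks = [hot + rng.normal(0, 2, (4, 4)) for _ in range(16)]
light = hot + 40.0 + rng.normal(0, 2, (4, 4))                # 40 = the real sky signal
master = make_master_dark(darks)
print(f"hot pixel before: {light[1, 1]:.0f}")                     # looks like a bright star
print(f"hot pixel after : {calibrate(light, master)[1, 1]:.0f}")  # back near the sky level
```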
The other way (which is technically even better but requires more equipment to pull off) is to use "dithering". With this technique the camera lens (or telescope) is moved a very tiny amount BETWEEN each frame that you shoot -- only enough to shift the image by a few pixels in a random direction. When the images are "stacked" by the computer, it uses the positions of the stars to register (align) each frame to the others. This means your stars are all neatly aligned, which also means any nebulosity is neatly aligned... but what's now mis-aligned is the pattern noise, which lands in a different random position in every frame because the camera sensor was moved between frames. Since pattern noise and stuck pixels are no longer in the same spots (relative to the stars), the computer can quickly (and accurately) recognize them for what they are... noise... and eliminate them from the image. You get a much cleaner (lower noise) result.
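A toy Python illustration of why that works (a stuck pixel stays put on the sensor, but once the frames are re-registered on the stars it wanders around, and a median combine throws it out):

```python
# Simulate dithering: the framing shifts a few pixels between shots, a stuck
# pixel stays at the same sensor coordinates, and after re-aligning on the
# "stars" a median combine rejects it.
import numpy as np

rng = np.random.default_rng(4)
sky = rng.normal(50, 5, (32, 32))                  # the "real" scene

def shoot(offset):
    dy, dx = offset
    frame = np.roll(sky, (dy, dx), axis=(0, 1))    # scene lands shifted on the sensor
    frame = frame + rng.normal(0, 5, sky.shape)    # per-frame random noise
    frame[10, 10] = 5000.0                         # stuck pixel, always at sensor (10, 10)
    return frame

offsets = [tuple(rng.integers(-4, 5, 2)) for _ in range(12)]   # random dither moves
aligned = [np.roll(shoot((dy, dx)), (-dy, -dx), axis=(0, 1)) for dy, dx in offsets]
stacked = np.median(aligned, axis=0)

print(f"true sky value at (10, 10)  : {sky[10, 10]:.1f}")
print(f"median of the dithered stack: {stacked[10, 10]:.1f}")   # stuck pixel gone
```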
I mentioned that dithering uses some special equipment because typically, to pull this off, the camera is mounted to a telescope which also has a guide scope, a guide camera, and auto-guiding software, while the camera itself is controlled by image capture software. The software coordinates the "guide" system with the "image acquisition" system so that the scope is not nudged while the camera shutter is open; as soon as the shutter closes, the acquisition software notifies the dithering system, which tells the guide software to nudge the scope by a small amount. The guider then reports that the movement is complete... it usually also factors in a "settle down" time (to make sure nothing is vibrating from the recent move). The camera then opens the shutter to capture the next frame. Of course now you have a camera attached to a full-blown equatorial telescope mount (not just a tracker head), and in addition to your "camera" there is a separate guide camera and guide scope (tracking a star), and this is typically connected to a computer (laptop) running image acquisition software as well as guider software. PHD (which really does stand for "Push Here Dummy") is an easy-to-use and free auto-guider application (you still need a guide camera and guide scope), and "Backyard EOS" can fully control the image capture session on nearly any Canon EOS DSLR camera (as long as it isn't an extremely old model... I think it needs a DIGIC II or newer processor, so some of the original EOS models aren't supported).
If you truly want the highest quality astrophotography images you should buy a tracker so you can actually collect long exposures at low ISO.
Yes, absolutely. The iOptron, the Sky-Watcher Star Adventurer, etc. I use a Losmandy StarLapse, but that's a higher-end unit (more expensive because it's machine-tooled instead of mass-produced) and costs around $600. It can handle heavier loads and is designed to hold the camera in such a way as to neutrally balance the load (the load won't "shift" as it slowly rotates, which should minimize any flexure issues that might spoil the image). The Star Adventurer has an optional counterweight bar -- also designed to balance the load to minimize flexure. It's an add-on feature (not included in the base product), but for serious users and longer exposure durations it can help keep the stars sharp by minimizing flexure issues.
Think of it this way: a lot of people spend thousands of dollars on equipment for this hobby. You can spend probably a third of what you'd normally spend, buy the Rokinon 24mm f/1.4 lens for $500 and an iOptron SkyTracker for $300, and produce significantly better results. I kind of laugh thinking about all the money I've spent on various lenses and other photography equipment for this hobby; now I pretty much only use one lens for widefield (the 24mm), one lens for slightly zoomed portions of the sky/landscapes (85mm f/1.4), and my 150-600mm monster for deep space stuff. Pretty much all my other half dozen lenses are now collecting dust unless I want to use them for non-astro purposes. Plus, instead of taking 200+ shots a night I take maybe 40 shots and use all of them, which means much less hard drive space taken up and much less sifting through tons of shots to find the best ones. I never shoot above ISO 800 anymore, and I can stop down to f/2-f/3 to get round stars with a starburst effect thanks to the aperture blades, then take 1-5 minute exposures.
This is just my reasoning, there's more than one way to skin a cat and people are producing quality astro images doing it other ways, but I truly believe that the best image quality (in terms of noise levels, detail, natural looking, accuracy, etc) is produced with the method I described.
I suppose I could sum that up to say "it's not always about the lens -- other factors are important too". This comes up on astrophotography forums (using telescopes instead of camera lenses) all the time. The relative newcomer asks for telescope buying advice and then mentions "I'd also like to do astrophotography with it". But there's a bit of a learning curve and some price sticker shock. The mount is usually even more important than the telescope that sits on it... and it's not like we just attach a camera, take a photo, and a stunning image comes out. The effort put into the "image data acquisition" process is an art unto itself. There's also the "image integration" process (combining all the data to make one image -- even though you haven't even begun processing that image). And then ultimately there's the "image processing" part (making the image visually appealing), which is another huge learning curve. The first two phases, acquisition and integration, are more "science and math" than "art", but that final phase of image processing is basically "art".