MedicineMan4040 wrote in post #18887011
Malveaux, am I right in believing that the smaller sensor cameras afford me the same benefit I see in APS-C over FF in my
birding images, e.g. the ability to crop in more on the bird target with the APS-C over the full frame? In which case I'd
go for the ASI294MC which is (to my elementary understanding) an mFT sensor versus an APS-C.
Also, full understanding on why cooling is a key ingredient in all of this.
OK, thanks again for your efforts in helping me understand the big picture.
Oh, we (the better-half and myself) are just now understanding that with CMOS we'll need a tablet or PC to have a screen
to focus with! Yep, babes in the wilderness here in astroland!
Luckily the same company, ZWO, makes a small computer module, the ASIair, that will wifi to a tablet for control and focusing screens.
Heya,
Astrophotography is different. Sensor size largely just controls field of view. Beyond that, it's all about pixel size and sampling, so that you're actually recording the resolution. Resolution is a function of the aperture (the size of the opening into the instrument, which is 51mm here, not the focal-ratio, which is often called "aperture" on terrestrial photography gear and forums), related by Dawes' limit and limited by the seeing (air turbulence). It is recorded via sampling, which means matching the focal-ratio to a pixel pitch near its ideal sampling size; that match dictates whether you're under- or over-sampling.

dSLR imaging in general is grossly undersampling: lots of data is essentially lost because the pixels are very large and the focal-ratio is short, so detail that could have been resolved across several pixels lands on a single pixel and is literally lost instead of recorded.

To get the most resolution in astrophotography, you match pixel pitch to your focal-ratio for a given wavelength of light (or a close approximation of visible light if doing color). You don't worry about sensor size other than how it affects FOV, and you don't worry about the number of pixels, because you're recording very small, distant objects. You don't have the luxury of putting a few million pixels on a single bird's face; instead you'll be putting a few thousand, or a few hundred, pixels on a deep space object, so every pixel matters, and you want to record the data at the best sampling level you can so the resolution is captured ideally. So don't think about using a small-pixel, small-sensor camera just to be able to crop more. Focus on best sampling the resolution of the data, then do what you wish with it afterwards for display purposes.
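To put rough numbers on that, here's a small sketch. The 116/D formula for Dawes' limit is standard; the "sample at roughly 1/3 to 1/2 of the seeing FWHM" rule is my reading of the sampling range discussed below, so treat it as an illustrative rule of thumb, not gospel:

```python
def dawes_limit_arcsec(aperture_mm):
    # Dawes' limit: theoretical resolving power of the aperture, in arcsec
    return 116.0 / aperture_mm

def sampling_range_arcsec(seeing_fwhm_arcsec):
    # rule-of-thumb sampling window: about 1/3 to 1/2 of the seeing FWHM
    return seeing_fwhm_arcsec / 3.0, seeing_fwhm_arcsec / 2.0

# RedCat 51: 51 mm of aperture
print(f"Dawes' limit: {dawes_limit_arcsec(51):.2f} arcsec")  # ~2.27"

lo, hi = sampling_range_arcsec(2.0)
print(f'2" seeing -> sample about {lo:.2f}" to {hi:.2f}" per pixel')
```

Note how the seeing (2"+ most nights), not the optics, is usually what caps the useful sampling.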
Imaging scale is what matters. The RedCat is for coarse (small) image scale imaging, meaning not high-resolution fine detail of the objects. That's wide-field imaging in general with respect to deep space; most objects through this small instrument will be very tiny. But this coarse imaging scale handles poor seeing conditions without effort, unlike a fine imaging scale, which can be totally limited by seeing.
The RedCat is a 51mm aperture, F4.9 focal-ratio instrument. Say you have average seeing of 2~4" FWHM; you'd ideally sample around 0.67"~2" per pixel. The ASI071MC has 4.8um pixels, which puts this combination at 3.96"/pixel. That is significantly under-sampled (meaning you are losing data that could have been resolved and recorded). The pixels are too large for such a short instrument to sample optimally. This is exactly what goes on with a big-pixel dSLR sensor and a short, fast focal-ratio camera lens. Literally no different. To optimally record resolution at F4.9 you need much smaller pixels, which you'll find are available down to 2.4um (the IMX178 and IMX183 sensors), and 2.4um results in 1.98"/pixel, right in the middle of ideal for the RedCat 51's aperture and focal-ratio. So your ideal pixel size is 2.4um~2.9um, and there are sensors out there for this; your ideal camera is actually the ASI183MC Pro. Going to larger pixels just induces undersampling, and you lose data. The beauty of the 2.4um pixel, being so small, is that these sensors allow binning, where you combine pixels into larger groups: bin 2x2 and you have effective 4.8um pixels from the same sensor, to best record resolution when paired with a longer focal length (such as an F10~F12 instrument, if you ever move to a longer telescope).
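The arithmetic behind those "/pixel figures is just one formula: 206.265 times the pixel size in microns, divided by the focal length in mm. A quick sketch (the 250 mm focal length follows from 51 mm at F4.9):

```python
def pixel_scale_arcsec(pixel_um, focal_length_mm):
    # image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)
    return 206.265 * pixel_um / focal_length_mm

FOCAL_MM = 250  # RedCat 51: 51 mm aperture * 4.9 ~= 250 mm focal length

print(f'{pixel_scale_arcsec(4.8, FOCAL_MM):.2f}"/px')      # ASI071MC, ~3.96
print(f'{pixel_scale_arcsec(2.4, FOCAL_MM):.2f}"/px')      # IMX178/IMX183, ~1.98
print(f'{pixel_scale_arcsec(2.4 * 2, FOCAL_MM):.2f}"/px')  # same sensor binned 2x2
```

Plug in any camera's pixel pitch and your scope's focal length and you can check the 0.67"~2" target window yourself.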
Cooling is very important, especially with color imaging, because the sensor will be active for very long periods and there is significant noise. You're used to imaging bright things, where telling a bright bird feather from a dull background is not confusing. With deep space, the signal of a DSO is not much above the background signal. So you need lots of signal, stacked, to increase the signal-to-noise ratio, so that you can differentiate data from noise. There are several sources of noise in the process: read noise, dark current, shot noise, etc. There's a lot of documentation out there to help you understand what it takes to make sure you're not just recording noise, and are instead swamping the noise with signal so that the data you collect is signal, increasing the signal-to-noise ratio. This is why we take lots of images of the same spot over and over and stack them: each frame added averages out more of the random noise from the static signal (the signal being the light from the DSO). Lots of frames, stacked, do this. This is integration time. To further calibrate, there are flat frames (to remove dust, vignetting, etc.), which are mandatory, plus bias frames, dark frames, and more from there, all working to remove noise and patterns from the data so that your DSO signal is kept while the artifacts and noise from different processes in the camera are not recorded. Cooling helps by significantly lowering the noise; it's dramatic how much cleaner a cooled sensor's data output is than a hot, noisy sensor's. When imaging in color this is especially important, because you'll have to expose the sensor longer to get the same signal as a monochrome sensor (due to the bayer matrix).
Dithering is another strategy to help remove noise such as walking noise, fixed patterns, etc.: the capture software tells the mount to move slightly and randomly between exposures, so pattern noise is never in the same place twice and stacks out like random noise (this helps big time with not needing dark frames). Dark frames can still be important to remove the amp glow induced by a powered sensor. Flat calibration frames are the most important, removing vignetting, dust, etc. And bias frames are needed to calibrate the flat frames correctly. But cooling lowers the overall noise significantly right away, so it's highly recommended.
Exposure in deep space isn't the same either. You have to expose enough to get signal, but not oversaturate stars (they lose their color, and lots of data with it). Under a dark sky this is much easier because you don't have light pollution to fight against. Under a light-polluted sky it takes a LOT more integration time to remove enough noise to get the same quality of signal you'd get from much less integration time under a dark sky. So if you're under a dark sky, you will have a much easier time. In general, without getting into how to measure the ideal exposure for your sensor and conditions, a dirty rule of thumb: expose so that the spike on the histogram sits roughly 1/4 to 1/3 from the left. More than that and you're likely over-saturating stars and just recording light pollution and more noise. A histogram at 1/4 to 1/3 from the left is going to look dark; you may not even see the DSO at all. But it's there, and this is where you build signal-to-noise (through stacking) to differentiate the signal and noise and then process the signal as data. Total integration time depends on your expectations. If you want a coarse, noisy image showing only the brightest parts of a DSO, that happens with less integration time; if you want to get more out of it, it takes more. It depends on how dark the sky is, but under a dark sky you can generally get away with a few hours on lots of targets, no problem. If it's not a dark sky, it can take a lot more time. M42 is very bright by comparison; few DSOs are nearly that bright, so they require a lot more signal-to-noise, which comes from a lot more time. Your exposures with the F4.9 RedCat will likely be on the order of ~2 minutes under a dark sky, depending on the camera and gain parameters you use.
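If you want to sanity-check that histogram rule on a sub, here's a minimal sketch: find where the background peak sits as a fraction of full scale on a 16-bit frame. The simulated frame and its 30% sky level are made-up illustrative numbers, not from any real capture:

```python
import numpy as np

def histogram_peak_fraction(frame, bit_depth=16):
    # locate the background spike and report it as a fraction of full scale
    full_scale = 2 ** bit_depth - 1
    counts, edges = np.histogram(frame, bins=256, range=(0, full_scale))
    peak = np.argmax(counts)
    return (edges[peak] + edges[peak + 1]) / 2 / full_scale

# fake 16-bit sub with the sky background sitting near 30% of full scale
rng = np.random.default_rng(0)
frame = rng.normal(0.30 * 65535, 1500, size=(100, 100)).clip(0, 65535)

frac = histogram_peak_fraction(frame)
print(f"histogram peak at {frac:.0%} of full scale")
if 0.25 <= frac <= 0.34:
    print("roughly in the 1/4 to 1/3 target zone")
```

Most capture software shows you this histogram live, so in practice you eyeball it rather than compute it, but the idea is the same.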
Signal to noise increases by the square root of the number of frames stacked, so if you think about it, you need lots of frames to really build up signal on a dim subject.
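That square-root relationship is worth seeing in numbers, because it means doubling your SNR costs four times the frames, not two. A quick sketch (the single-frame SNR of 5 is just an example value):

```python
import math

def stacked_snr(single_frame_snr, n_frames):
    # random noise averages down as sqrt(N), so stacked SNR grows as sqrt(N)
    return single_frame_snr * math.sqrt(n_frames)

print(stacked_snr(5, 1))   # 5.0  -> one frame
print(stacked_snr(5, 4))   # 10.0 -> 4x the frames doubles SNR
print(stacked_snr(5, 16))  # 20.0 -> 16x the frames quadruples it
```

This is why integration time climbs so fast once you chase faint detail: each doubling of quality costs a quadrupling of time.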
Ultimately, if this doesn't sound like your cup of tea, then just use a camera lens and a dSLR with a tracking head that can track for 2~3 minutes and you'll be fine. Doing the above still requires significant processing of the data to draw the DSO out of it. The M42 you posted is a very bright, large DSO; when you try that on something much dimmer and smaller, you'll see why the above is so important, because stretching the data falls apart when it's full of random noise. And really, unless you're into a dedicated deep space imaging platform, I wouldn't bother with a telescope and dedicated CMOS USB camera if you're just going to dabble in 30-minute sessions here and there, tops, in color (I don't mean for that to sound snobbish or anything; I'm just trying to be realistic about expectations, since most DSO subject matter requires far more integration time and effort, and I'd hate to see you toss money at something you aren't really into). In that case you're probably better off using a nice camera lens and a dSLR/mSLR and calling it a day, because most of the work on a dedicated platform is about differentiating signal and noise and then processing it.
There are headless platforms, miniPCs, Raspberry Pi units, and the like to handle your camera, guiding, and acquisition software if you go that route; you then see what's going on via phone/tablet over wifi and an app.
Very best,