I've been experimenting with a bit of astrophotography (5D4 on a tracking mount, with a 70-200 f/2.8 IS II). Obviously this doesn't give much "reach" for galaxies, so I've knocked together a mount that allows me to put a Raspberry Pi V2 camera module (with its lens removed) on the back of the 70-200 lens. My basic question is: is there any point in continuing with this?
The Pi camera module uses an 8MP Sony IMX219 sensor, with a diagonal size of 4.6mm and a pixel pitch of 1.12um.
The sensor on the 5D4 has a diagonal of around 43mm, and a pixel pitch of 5.36um.
I make the crop factor 43/4.6=9.3x. Therefore I should get a framing roughly equivalent to 200x9.3=1,860mm when I use the 200mm lens on the Pi camera.
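To make that arithmetic explicit, here's the calculation as a quick Python sketch (sensor diagonals are the rounded figures quoted above):

```python
# Crop factor from sensor diagonals, and the resulting equivalent framing.
full_frame_diag_mm = 43.0   # Canon 5D4 sensor diagonal (approx)
imx219_diag_mm = 4.6        # Pi camera V2 (Sony IMX219) diagonal

crop_factor = full_frame_diag_mm / imx219_diag_mm   # ~9.3x
equiv_focal_mm = 200 * crop_factor                  # ~1,870mm framing

print(f"crop factor: {crop_factor:.1f}x")
print(f"equivalent framing at 200mm: {equiv_focal_mm:.0f}mm")
```

(The small difference from 1,860mm is just rounding of the 9.3x figure.)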
However, will I be getting enough detail from the lens to make this worthwhile? Based on my (weak) understanding of MTF data and lines-per-mm (lpmm) values, the gist I get is that the 70-200 resolves roughly 40 lpmm at 200mm and f/2.8. If this (https://www.optowiki.info …millimeter-to-pixel-size/) is accurate, then that 40 lpmm is probably not out-resolving even the 5D4 sensor (40 lpmm "supports" a 6.3um pixel size, and the 5D4 is already finer than that at 5.36um). On that basis, there's no way the 70-200 will resolve enough detail to make use of the Pi camera's 1.12um pixel pitch.
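For reference, here's the conversion as I understand it in Python. The factor of 4 (two pixels per line, two lines per pair) is my assumption, reverse-engineered from the 6.3um figure; the linked page may use a slightly different convention:

```python
# Convert a lens resolution in lpmm to the pixel size it can "support".
# The divisor of 4 is an assumption inferred from the 6.3um figure quoted
# in the post (1000 / (4 * 40) = 6.25um); check the linked page's formula.
def supported_pixel_um(lpmm: float) -> float:
    return 1000.0 / (4.0 * lpmm)

lens_lpmm = 40.0                 # rough figure for the 70-200 at 200mm f/2.8
pixel_um = supported_pixel_um(lens_lpmm)

print(f"supported pixel size: {pixel_um:.2f}um")
print(f"coarser than the 5D4's 5.36um pitch: {pixel_um > 5.36}")
```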
However, I understand that these things aren't black and white, and that having the sensor out-resolve the lens is not necessarily a bad thing.
I could just crop the 5D4 captures to match the sensor area of the Pi camera, giving approx 720x480 (0.35MP) images. What isn't clear to me is whether the equivalent 8MP capture from the Pi camera would actually be better, especially once a stack of images is processed to produce one final output.
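The crop estimate works out like this (the IMX219's active area of roughly 3.68x2.76mm is an assumption taken from the datasheet's 4.6mm diagonal and 4:3 aspect ratio):

```python
# Cropping the 5D4 frame down to the area the IMX219 would see.
pitch_5d4_um = 5.36              # 5D4 pixel pitch
imx219_w_mm, imx219_h_mm = 3.68, 2.76   # assumed IMX219 active area

crop_w = imx219_w_mm * 1000 / pitch_5d4_um   # ~690 px wide
crop_h = imx219_h_mm * 1000 / pitch_5d4_um   # ~515 px tall
mp = crop_w * crop_h / 1e6                   # ~0.35 MP

print(f"crop: {crop_w:.0f} x {crop_h:.0f} ({mp:.2f} MP)")
```

So the same patch of sky is sampled by ~0.35MP on the 5D4 versus 8MP on the Pi camera, which is the comparison I'm asking about.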