The ideal features for a camera that can do 'planetary' imaging are different from those needed for 'deep sky' objects.
Deep sky objects require several very long exposures (often 5-10 minutes each, though they can be as short as 30 seconds to 2 minutes). Such long exposures will have many stars in them, and the stacking software uses the star positions to 'register' (align) each frame.
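To make the "why stack at all" part concrete, here's a toy numpy sketch (all the numbers are hypothetical, not from any real camera): averaging N registered frames knocks the random noise down by roughly the square root of N, which is why many exposures beat one.

```python
import numpy as np

rng = np.random.default_rng(42)
true_signal = 100.0    # hypothetical sky-pixel brightness (ADU)
noise_sigma = 10.0     # hypothetical per-frame random noise
n_frames = 25

# Simulate 25 already-registered exposures of the same patch of sky.
frames = true_signal + rng.normal(0.0, noise_sigma, size=(n_frames, 100, 100))

single_frame_noise = frames[0].std()
stacked = frames.mean(axis=0)     # the "stack" is just an average here
stacked_noise = stacked.std()

# Averaging 25 frames should cut the random noise by roughly sqrt(25) = 5x.
print(round(single_frame_noise, 1), round(stacked_noise, 1))
```

The signal stays put while the noise averages down, which is the whole point of gathering lots of frames.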
As distant and as small as these objects may seem, they're "large" compared to planets. Most deep-sky objects are anywhere from several arc-minutes to a few degrees across.
The free software that does the stacking is called Deep Sky Stacker. It doesn't do tethering or image acquisition (it does not control the camera), but once the images are imported, it will stack them.
For planets, it's a bit different... planets are "bright" compared to deep sky objects, but planets are also "small".
Jupiter is currently very bright and at its largest (the opposition of Jupiter was just last Friday. Opposition is the point when the Earth is between Jupiter and the Sun -- which generally means we're the closest we'll be to Jupiter all year, and that also means it's the largest and brightest we'll see Jupiter all year.)
And yet... Jupiter is a mere 44 arc-seconds (angular size) from edge to edge.
That tiny size means that only a tiny area on your camera sensor has the image of the planet and the rest of the camera frame is basically wasted data that won't be used.
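To put rough numbers on that (these are assumed values -- a 4.3 µm pixel pitch for Canon's 18 MP APS-C sensor and a hypothetical 1200 mm focal length telescope, not anyone's actual rig), the standard plate-scale formula (arc-seconds per pixel = 206.265 × pixel pitch in µm ÷ focal length in mm) shows just how little of the frame Jupiter occupies:

```python
# Back-of-the-envelope: how much of the sensor does Jupiter cover?
pixel_um = 4.3          # assumed pixel pitch (Canon 18 MP APS-C sensor)
focal_mm = 1200.0       # hypothetical telescope focal length
jupiter_arcsec = 44.0   # Jupiter's disk near opposition

scale = 206.265 * pixel_um / focal_mm   # image scale, arcsec per pixel
disk_px = jupiter_arcsec / scale        # pixels across Jupiter's disk

sensor_px = 5184 * 3456                  # full 18 MP frame
disk_area = 3.14159 / 4 * disk_px ** 2   # area of the disk, in pixels
fraction = disk_area / sensor_px

print(round(disk_px), f"{fraction:.4%}")
```

The disk is only about sixty pixels wide -- a tiny fraction of one percent of the frame -- which is exactly the "wasted data" problem PIPP (below) was written to address.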
There's a program called PIPP (Planetary Image Pre-Processor) which was written by a gentleman who noticed that all that extra data (having full-size frames even though we're only interested in the tiny piece of frame that has our planet) was really bogging down the stacking process. PIPP is able to help you crop the video so that it can speed up the stacking process (it also does format conversions and other handy tasks).
Planetary imaging typically involves recording many frames of video (preferably at a high frame rate). The planets are bright enough that you generally won't see background stars in the same frame (the stars would have required much longer exposures -- so long that the planet would have been over-exposed.)
Having these video frames, the stacking software typically asks you to skim through and find examples of some of the clearest frames (the atmosphere is a bit like imaging through water -- it'll distort the image in most frames, but a few frames may be steady). The software then looks for other frames of similar quality and uses just the best frames for processing (the rest are ignored.)
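The "score every frame, keep only the sharpest" idea can be sketched like this (a toy gradient-based metric of my own -- not what the real stacking software actually uses): a crisp frame has hard edges and therefore strong local gradients, while a seeing-blurred frame scores lower.

```python
import numpy as np

def sharpness(frame):
    """Toy quality score: mean squared gradient (sharper = higher)."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx**2 + gy**2))

# Toy demo: the same "planet" (a bright disk), once crisp, once blurred.
yy, xx = np.mgrid[:64, :64]
crisp = ((xx - 32)**2 + (yy - 32)**2 < 15**2).astype(float)

# Cheap blur: average each pixel with shifted copies of itself.
blurred = crisp.copy()
for shift in (-2, -1, 1, 2):
    blurred += np.roll(crisp, shift, axis=0) + np.roll(crisp, shift, axis=1)
blurred /= 9.0

# Rank the "frames" best-first, the way stacking software would.
ranked = sorted([("crisp", sharpness(crisp)), ("blurred", sharpness(blurred))],
                key=lambda kv: kv[1], reverse=True)
print(ranked)
```

In practice the software scores thousands of video frames this way (with far more sophisticated metrics) and stacks only the top percentage you ask for.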
Since there are generally no stars, we can't use stars as reference points to register each frame. Instead, the planet's disk is used for alignment, with planetary features (points of contrast on the object) used to help out.
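A bare-bones version of disk alignment looks like this (a toy centroid-and-shift sketch -- real stacking software does sub-pixel, multi-point alignment on surface features): shift each frame so the brightness centroid of the disk lands in the same spot.

```python
import numpy as np

def centroid(frame):
    """Brightness-weighted centre of the frame (the planet dominates)."""
    total = frame.sum()
    ys, xs = np.indices(frame.shape)
    return (ys * frame).sum() / total, (xs * frame).sum() / total

def register(frame, target_yx):
    """Integer-pixel shift so the centroid lands on target_yx."""
    cy, cx = centroid(frame)
    dy = int(round(target_yx[0] - cy))
    dx = int(round(target_yx[1] - cx))
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

# Toy frames: the same disk wandering around due to seeing and drift.
yy, xx = np.mgrid[:64, :64]
def disk(cy, cx):
    return ((xx - cx)**2 + (yy - cy)**2 < 10**2).astype(float)

frames = [disk(30, 30), disk(33, 28), disk(29, 35)]
target = (32.0, 32.0)
aligned = [register(f, target) for f in frames]
stack = np.mean(aligned, axis=0)   # disks now overlap exactly
```

After registration all three disks sit on top of each other, so the average sharpens rather than smears.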
I use a tethering cable that cost less than $10 on Amazon (the "Tether Tools" brand cable is around $35-40). Having both and comparing the two side by side, I could identify no difference in quality -- though the Tether Tools cable is orange.
In terms of software... one of the most popular programs for image acquisition (camera control) is something called Backyard EOS. For Nikon owners it's a program called Backyard NIKON (a warning to Nikon owners... D3xxx series cameras are not supported, last I checked. It requires a D5xxx series or above due to Nikon SDK issues.) These programs require Windows (they do not run on a Mac, but they will run in a virtual machine -- I've used the software that way with VMware Fusion on my Mac.)
The software allows you to control the camera remotely from your computer, but it also allows you to control the imaging runs (whether planetary or deep-sky object). It also coordinates with other software. Deep-sky object imaging generally requires auto-guiding, and the capture software interacts with the PHD auto-guiding software (PHD really does stand for "push here dummy"). You can enable "dithering" for your image capture -- the telescope is nudged very fractionally between frames so that any pattern noise or stuck pixels will not land in the same spot in each frame and can more easily be cleaned up in the stacking process. With the two programs coordinating, the nudge is only performed while the camera shutter is closed (thus not smearing your images). But the capture software and guide software must "talk" to make this work.
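Here's a quick simulation of why dithering helps (all toy numbers -- in real life the capture and guiding software handle the nudges): a hot pixel is fixed on the *sensor*, so after the frames are registered back onto the sky, it lands somewhere different in every frame, and a median stack throws it away.

```python
import numpy as np

rng = np.random.default_rng(7)
n, size = 9, 32
sky = np.full((size, size), 50.0)   # hypothetical flat sky background (ADU)
sky[16, 16] = 500.0                 # a real feature actually on the sky

frames, offsets = [], []
for _ in range(n):
    dy, dx = rng.integers(-3, 4, size=2)   # dither nudge between frames
    # The sky shifts on the sensor by the dither amount...
    f = np.roll(np.roll(sky, dy, axis=0), dx, axis=1)
    f += rng.normal(0.0, 2.0, (size, size))
    f[10, 12] = 5000.0                     # ...but a hot pixel stays put
    frames.append(f)
    offsets.append((dy, dx))

# Registration shifts each frame back so the sky lines up again;
# the hot pixel now falls in a *different* spot in each frame.
registered = [np.roll(np.roll(f, -dy, axis=0), -dx, axis=1)
              for f, (dy, dx) in zip(frames, offsets)]

# A median stack keeps the real feature but rejects the hot-pixel outlier.
stack = np.median(np.stack(registered), axis=0)
```

Without dithering, the hot pixel would land in the same spot in every frame and survive the stack as a bright artifact.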
Once the planetary data is captured (video data), the most popular programs for doing planetary processing are AutoStakkert and Registax. AutoStakkert is mostly *just* about the stacking. Registax does stacking but offers more in the way of post-processing tools that AutoStakkert lacks. This might lead one to conclude that Registax is 'better' (because it does more), but I find a number of planetary imagers prefer AutoStakkert just to do the "stacking" and then turn to Registax for some post-processing (and then often pull the image into other programs such as WinJUPOS ... and ultimately into Photoshop, etc. for some final finish work.)
You don't technically "need" any special software to do astrophotography image capture... but it is more convenient than manually operating the camera.
If you're planning to do planetary image capture, I'd seek out a Canon model that supports the "crop" mode video capture.
I know the 60D (and 60Da) as well as the Canon T2i (aka 550D) support this feature. MOST OTHER MODELS DO NOT! The T2i, T3i, T4i, T5i, 60D, 60Da, and 7D all use the same Canon 18-megapixel sensor (but the firmware is different), so the quality of image capture will generally be about the same. While camera "body" features get a bit nicer as you move to higher-end bodies, the sensors make little difference, so there isn't much point in spending too much money here. The 60Da is a special case... it was optimized for astrophotography and has a different internal filter which makes the camera significantly more sensitive to Hydrogen-alpha light (a wavelength extremely common in deep-sky emission nebulae, but one that most stock cameras struggle to capture.)
Camera "sensors" are sensitive to full-spectrum light, but human eyeballs are NOT equally sensitive to all wavelengths in the visible spectrum. Our eyes are most sensitive to the greens in the middle of the spectrum and significantly less sensitive to the reds near the long-wavelength end. So camera sensor filters are designed to trim the light transmission to mimic the sensitivity of the human eye, so that the photos we take will resemble what our eyes could see. For astrophotography, imagers prefer to remove those filters and replace them with full-spectrum filters so that they aren't limiting the amount of light the camera can collect (thus getting more data in shorter exposure times.)