For full resolution colour detail the Nyquist limit for both the 7DII and the 5DS is around 65 LP/mm; for the 5DSr anything above that level is likely to induce colour moire effects from aliasing. Admittedly, because you can play tricks with the fact that colour is sampled by the RGGB sensel quartets, you can get away with a bit more resolution from the lens and still recover full colour detail. Even so, I would not want to present more than about 80 LP/mm of colour signal to the sensor. In almost all other situations where you are converting an analogue signal to a digital one it is considered bad form to closely approach the Nyquist limit; it is much better to leave some buffer room. Although you don't get aliasing if you stay under Nyquist, you can still get some serious distortions of the signal if you run close to it. For example, if you sample a signal at exactly the Nyquist frequency, you can recover the frequency correctly, but the recorded amplitude will depend on the phase relationship between the signal and the sampling points: you could record the signal with an amplitude anywhere between 0 and 100% of its actual level.
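As a sanity check on those numbers, the colour Nyquist limit can be worked out from the pixel pitch. A minimal sketch; the sensor widths and pixel counts below are my assumed figures for the 7DII and 5DS, not taken from this post:

```python
# Nyquist limits from pixel pitch: the luminance (grayscale) limit is
# 1/(2 * pitch); with an RGGB Bayer CFA the red (and blue) sensels sit
# on a grid with twice the pixel pitch, so the strict full-colour limit
# is 1/(4 * pitch).  Sensor dimensions below are assumptions.
def nyquist_lp_per_mm(sensor_width_mm, pixels_wide):
    pitch_mm = sensor_width_mm / pixels_wide
    luma = 1.0 / (2.0 * pitch_mm)    # grayscale Nyquist, LP/mm
    chroma = 1.0 / (4.0 * pitch_mm)  # strict full-colour Nyquist, RGGB
    return luma, chroma

for name, width_mm, px in [("7D Mark II", 22.4, 5472),
                           ("5DS",        36.0, 8688)]:
    luma, chroma = nyquist_lp_per_mm(width_mm, px)
    print(f"{name}: {luma:.0f} LP/mm grayscale, {chroma:.0f} LP/mm colour")
```

Both cameras come out at roughly 120 LP/mm grayscale and 60 LP/mm strict colour Nyquist; the green-channel tricks mentioned above are what let you push the colour figure a little higher.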
When you look at older cameras their resolution is even lower; a 300D has a Nyquist frequency about half that of the modern cameras. It is only the inclusion of the analogue anti-aliasing filter on these sensors that stops us from suffering moire from aliasing, be it colour or simply grayscale, even with relatively cheap lenses. IMO we really need to get to at least 40 Megapixels on APS-C, and 100 MP on 35mm, before removing the AA filter becomes viable with the current best lenses, which seem to be capable of resolutions between 120 and 130 LP/mm. For full colour resolution from a lens resolving 120 LP/mm you need to be using a 91.3 MP APS-C sensor, if utilising an RGGB Bayer Colour Filter Array. That jumps to 234 MP for the 35mm format.
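The megapixel requirement can be sketched the same way. Assuming strict 2× Nyquist sampling per colour channel and sensor dimensions of 22.3×14.9 mm for APS-C and 36×24 mm for 35mm format (all my assumptions), the strict minimums come out somewhat below the figures above, which therefore appear to include a little sampling headroom on top of the bare Nyquist requirement:

```python
# Minimum sensor resolution to record full colour detail from a lens
# resolving lens_lp_mm line pairs per mm through an RGGB Bayer CFA.
# Strict Nyquist: the red/blue grids (pitch = 2 * pixel pitch) must
# sample at >= 2 samples per line pair.  Sensor sizes are assumptions;
# headroom > 1.0 adds a linear safety margin over strict Nyquist.
def required_megapixels(lens_lp_mm, width_mm, height_mm, headroom=1.0):
    red_samples_per_mm = 2.0 * lens_lp_mm * headroom  # red-channel rate
    px_per_mm = 2.0 * red_samples_per_mm  # pixel grid is twice as fine
    return (width_mm * px_per_mm) * (height_mm * px_per_mm) / 1e6

print(required_megapixels(120, 22.3, 14.9))  # APS-C, strict minimum
print(required_megapixels(120, 36.0, 24.0))  # 35mm format, strict minimum
```

The strict minimums are roughly 77 MP and 199 MP; scaling up to the 91.3 MP and 234 MP quoted implies a linear buffer of a little under 10%, in line with not wanting to run right at the Nyquist limit.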
The other big problem with digital imaging seems to be the constant desire to display the resultant image data on a digital output device*, at a relatively low fixed output resolution. Not only do we do this, but we tend to look at only a small area of the image, and at stupidly close viewing distances. The vast majority of computer screens run at around 100 PPI, so current generation cameras, with sensors that contain in excess of 5500 pixels on the long side, are effectively being displayed at over 55" wide. For a camera with a 35mm format sensor that is an enlargement factor of at least 39×, and on APS-C it is a 62× enlargement. If you take, say, a 6 Megapixel camera like the 300D, with only 3072 pixels on the long edge, your 100 PPI monitor is displaying the image as if it were only about 31" wide, which for APS-C is an enlargement factor of only about 34×.
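Those enlargement factors are easy to verify. A quick sketch, assuming a 100 PPI display, a 36 mm wide 35mm-format sensor, 22.3 mm for current APS-C and 22.7 mm for the 300D (the sensor widths are my assumptions):

```python
# Effective enlargement when an image is shown pixel-for-pixel on a
# display: displayed physical width divided by sensor width.
MM_PER_INCH = 25.4

def enlargement_factor(pixels_wide, sensor_width_mm, display_ppi=100):
    displayed_width_mm = pixels_wide / display_ppi * MM_PER_INCH
    return displayed_width_mm / sensor_width_mm

print(enlargement_factor(5472, 36.0))  # modern 35mm format, ~39x
print(enlargement_factor(5472, 22.3))  # modern APS-C, ~62x
print(enlargement_factor(3072, 22.7))  # 300D, ~34x
```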
To make a proper comparison of the images it should be done at a fixed output size, with a variable output resolution to match the physical sizes. So if we pick 30"×20" as our output size, your 7DII is going to have about twice the resolution of your 300D. Size matched like this with unsharpened images, the newer sensor will show almost twice as much detail, i.e. resolution, while the apparent sharpness of discontinuous signals, such as edges between large areas of differing tone/colour, will actually look the same. As long as you are comparing output at the same physical size and using the same sized sensor, the effects on image quality from things like lens aberrations (where you use the same lens) and camera shake (where the amplitude stays constant) will also appear to be the same, regardless of sensor resolution.
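For the size-matched comparison, here is a quick sketch of the output resolution each camera delivers at that 30"×20" print size (the 7DII long-edge pixel count is my assumed figure):

```python
# Output resolution at a fixed print size: with the physical size held
# constant, delivered PPI scales directly with the sensor's pixel count.
def output_ppi(pixels_wide, print_width_inches):
    return pixels_wide / print_width_inches

ppi_7d2 = output_ppi(5472, 30)   # assumed 7D Mark II long-edge pixels
ppi_300d = output_ppi(3072, 30)  # 300D long-edge pixels
print(ppi_7d2, ppi_300d, ppi_7d2 / ppi_300d)  # ratio close to 2x
```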
*Computer screens are still digital devices when considering the spatial domain; even CRT screens are digital in the spatial domain, at least vertically, since they draw a discrete set of scan lines. When we view a computer or TV screen it is actually our eyes that integrate the digital spatial-domain signal into an analogue signal that our brains can interpret.