Ok, here are my thoughts on this:
Motion blur in general – whether from camera movement on any axis, subject movement, or both – simply has the effect that a point in the subject plane no longer ends up as a point in the rear focal plane (= sensor plane).
While misfocus results in the well-known circle of confusion, which lowers contrast and in turn resolution uniformly in all directions, motion blur will ideally turn a point into a straight line and thus has a well-defined direction. Of course, in the case of camera shake this could just as well be a zig-zag or curved path, if the shutter is still open while the direction of the camera movement changes. But the worst case is unidirectional movement, because the end point of the movement is then farthest away from the starting point, hence the error is most obvious.
As with the CoC, we need to define an error margin that still gives acceptable results. The classical definition is related to the resolution capability of the human eye at “normal viewing distance”, which is supposed to be about equal to the diagonal dimension of the picture. So size does not matter, and in the case of an 8 x 12” print (20 x 30 cm) it is simply assumed that it will be viewed at a distance of 36 cm (14.2”). So if we are looking for the error margin according to the classical definition, we can use the respective CoC: the classical full-frame value of about 30 µm divided by the crop factor, i.e. roughly 19 µm for a 1.6-crop sensor.
Of course we have to adapt this value depending on what we are looking for: if we are looking for the margin beyond which an image or image detail will be perceived as blurry for sure, we need an error margin that is equal to or slightly higher than the aforementioned 19 µm (I will concentrate all further comments on the 1.6x crop sensor).
If, on the other hand, we are looking for a margin that makes sure the image (subject) is perceived as sharp no matter what, we cannot assume any particular viewing distance, especially since nowadays it is en vogue to view images on a monitor at 100% pixel view. In this case we cannot base the error margin on the resolution of the human eye, but must look at the image sensor instead. A safe bet is about 1.4 times the pixel pitch, which gives e.g. 8 µm for an EOS 40D.
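Just to show where that number comes from, here is a quick sanity check in Python (assuming the 40D's nominal 22.2 mm sensor width and 3888 horizontal pixels):

```python
# Sanity check for the "1.4 x pixel pitch" margin on an EOS 40D.
# Assumes a sensor width of 22.2 mm and 3888 horizontal pixels.
sensor_width_mm = 22.2
pixels_horizontal = 3888

pixel_pitch_um = sensor_width_mm / pixels_horizontal * 1000  # ~5.7 µm
error_margin_um = 1.4 * pixel_pitch_um                       # ~8 µm

print(f"pixel pitch:  {pixel_pitch_um:.1f} µm")
print(f"error margin: {error_margin_um:.1f} µm")
```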
For linear movement (either the camera or the subject), the calculation is fairly easy. The resulting error will simply be the distance of movement multiplied by the magnification.
Say you take a picture with a 1:1 macro lens: the captured field is 22.2 x 14.8 mm, and if the subject – assume it is a single point – moves 1 mm during the exposure, the sensor will record it as a line 1 mm long.
If the magnification were not 1:1 as in the previous example but 1:10 (i.e. 0.1), the captured field would be 22.2 x 14.8 cm instead of mm. Hence, if the subject or the camera moves 1 mm, a single point would result in a line that is only 0.1 mm long.
We can conclude that with linear movement the shooting distance is important, because it defines the field of view and hence the magnification. 1 mm of linear camera shake will clearly show in a macro shot, while it will be unobjectionable in a landscape image, where the focus plane is at or near infinity.
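A minimal sketch of that relationship (the function and variable names are just my own illustration):

```python
def blur_on_sensor_mm(relative_movement_mm: float, magnification: float) -> float:
    """Length of the streak recorded on the sensor for a given linear relative movement."""
    return relative_movement_mm * magnification

# The two examples from above: 1:1 macro and a 1:10 reproduction ratio.
print(blur_on_sensor_mm(1.0, 1.0))  # 1.0 mm streak at 1:1
print(blur_on_sensor_mm(1.0, 0.1))  # 0.1 mm streak at 1:10
```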
So we can now develop the rules for motion blur. We need the error margin; let's assume we absolutely want to freeze motion, so we use 8 µm for this exercise.
We also need the magnification, which in turn is a function of sensor size, focal length and distance. For the 1.6x crop sensor, a 22 mm focal length makes the picture width (at the subject plane) equal to the focus distance. So, if we use a 220 mm lens at a sports event, the image width will be 1/10 of the distance. If we stand 30 metres away from the action, the width of the subject plane will be 3 m. We now determine the magnification, which is sensor width divided by subject-plane width: 22.2 mm / 3000 mm = 0.0074.
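The same estimate in code – a sketch assuming the simple approximation that the field width at the subject plane is distance x sensor width / focal length (good whenever the distance is much larger than the focal length):

```python
def magnification(sensor_width_mm: float, focal_length_mm: float,
                  distance_mm: float) -> float:
    """Approximate magnification for distances much larger than the focal length."""
    field_width_mm = distance_mm * sensor_width_mm / focal_length_mm
    return sensor_width_mm / field_width_mm  # simplifies to focal_length / distance

m = magnification(sensor_width_mm=22.2, focal_length_mm=220.0, distance_mm=30_000.0)
print(f"magnification: {m:.4f}")  # ~0.0073, in line with the 0.0074 above
```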
Now, if we know the permissible error and want to calculate the subject movement necessary to induce this error, we divide the error margin by the magnification: 8 µm / 0.0074 = 0.00108 metres, rounded to 1 mm. The relative movement between subject and camera (it does not matter whether the subject, the camera or both are moving) must not exceed 1 mm while the shutter is open. To be more precise, the peak value of the relative position shift must not exceed 1 mm, but for strictly monotonic linear movement the first definition is sufficient.
This means that in the example above, 1 mm of movement of subject, or camera, or both – in any case relative to each other – is allowable during the exposure. A normal walker is supposed to move at a speed of 1-2 m/s, so the shutter speed at this magnification needs to be 1/1000 to 1/2000 s, respectively. These appear to be pretty tough requirements for a relatively slow walker, but keep in mind that we have him fill the frame (the real-world image size is 2 x 3 metres) and it takes him only a second or two to walk completely out of the frame. If we choose a longer shooting distance or a shorter focal length, the magnification drops and we can use slower shutter speeds to capture a sharp image of the walker.
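Putting the last two steps together – a sketch under the same assumptions (8 µm budget, magnification 0.0074):

```python
def max_shutter_time_s(error_margin_um: float, magnification: float,
                       relative_speed_m_per_s: float) -> float:
    """Longest exposure for which the streak on the sensor stays within the error margin."""
    allowed_movement_m = (error_margin_um / 1e6) / magnification  # blur budget at the subject
    return allowed_movement_m / relative_speed_m_per_s

for speed in (1.0, 2.0):  # walking speeds in m/s
    t = max_shutter_time_s(8.0, 0.0074, speed)
    print(f"{speed} m/s -> max. 1/{1 / t:.0f} s")  # roughly the 1/1000 and 1/2000 s from above
```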
If we try to track the walker with our camera, we finally have the case where both subject and camera are moving. The more accurate the tracking, the lower the speed difference will be. If we manage to follow the walker's movement with a 10% error, i.e. the speed difference never exceeds 10% of the walker's speed, we can get away with a shutter speed of 1/100 to 1/200 s. This assumes that we use a tripod to fight the angular camera shake, which would clearly show at shutter speeds this low.
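The same calculation for the panning case – only the residual relative speed matters (here assumed to be 10% of the walker's speed):

```python
# Panning: with a 10% tracking error only the residual speed difference causes blur.
allowed_movement_m = (8.0 / 1e6) / 0.0074  # ~1 mm blur budget at the subject plane
for walker_speed in (1.0, 2.0):            # m/s
    residual_speed = 0.1 * walker_speed    # speed difference left after tracking
    t = allowed_movement_m / residual_speed
    print(f"{walker_speed} m/s walker -> max. 1/{1 / t:.0f} s")  # roughly 1/100 and 1/200 s
```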
One downside of tracking is that it only works if the subject does not change its shape during the movement. This would apply to a car, but the walker's legs move at different speeds, so we can hopefully get his face sharp, while the legs (and also the swinging arms) will remain blurry. This example shows how difficult it is to get a perfectly sharp, tight action shot in some fast sports, where the competitors move much faster (and maybe more erratically) than the above-mentioned walker.
I am too lazy to do the complete math for angular camera shake, and I do not see the point in doing it, because how shaky one's hands are is fairly individual. So everyone must gain practical experience with this. But here are the fundamentals:
Angular camera movement is not governed by magnification or shooting distance, but by the angle of view, hence the focal length. The math is simple if we assume that the subject plane is a sphere and ignore the resulting error towards the edges of the frame (real-world lenses try to correct this and have more or less flat focus planes). To keep it simple: if we have a standard lens with a 45-degree angle of view and the camera shake is 1 degree, a point will turn into a line 1/45 of the image diagonal long.
So we can try to work out what amount of shakiness was the basis for the traditional 1/f rule. A lens with a 45-degree angle of view on 35 mm film has a focal length of approximately 43 mm, so the shutter speed according to the rule should be at least 1/43 s = approx. 23 ms. As far as I remember, the CoC for classical 35 mm film is 30 µm (this value has been changed at least once during the past decades), so the relative error (related to the film diagonal) due to shake must not exceed 30 µm / 43 mm = approx. 0.0007. The permissible angular shake is therefore 45 deg x 0.0007 = 0.0315 deg = 0 deg 1 min 53 sec.
If I've made no mistake, this is the angular shake that was expected during a 1/43 s period, or approx. 1.35 deg/s. Wow – now we know it.
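The same derivation as a small sketch, assuming a 45-degree diagonal angle of view, a 43.3 mm film diagonal (rounded to 43 mm above) and a 30 µm CoC:

```python
# Reconstructing the shake budget behind the 1/f rule (35 mm film, "normal" lens).
angle_of_view_deg = 45.0        # diagonal angle of view of a ~43 mm lens
film_diagonal_mm = 43.3         # diagonal of a 24 x 36 mm frame
coc_mm = 0.030                  # classical CoC for 35 mm film
exposure_s = 1.0 / 43.0         # shutter speed suggested by the 1/f rule

relative_error = coc_mm / film_diagonal_mm            # ~0.0007 of the diagonal
max_shake_deg = angle_of_view_deg * relative_error    # ~0.031 deg during the exposure
shake_rate_deg_s = max_shake_deg / exposure_s         # ~1.3-1.4 deg/s

print(f"permissible shake: {max_shake_deg:.4f} deg per exposure")
print(f"implied shake rate: {shake_rate_deg_s:.2f} deg/s")
```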
Now I hope everyone can derive all the formulas, tables and diagrams they need or want from the suggestions above.