Tom, as far as I am aware there is only one program that stitches RAW files, and that is Lightroom CC. When stitching RAW files, though, it doesn't offer any manual options for laying them out. Joining images this way is usually done by manipulating layers of RGB pixels, and when you are working with RAW files you don't yet have RGB pixels to work with. I think many people were quite surprised when Adobe released the RAW stitching and RAW HDR features in LR CC. Compared to the stitching tool in PS CS5, the RAW stitch in LR CC seems to produce the better automated results, at least for the sets of images I have processed in both programs.
When it comes to the final image, even if you set everything up correctly, with the exposures right, the camera perfectly level and the system rotating around the lens's correct nodal point (which for some lenses can even lie outside the physical lens), the fact that you are mapping a series of planes onto either a cylinder or a sphere means you will almost certainly end up with a few out-of-place pixels. These errors arise in two ways: the first is the mapping from many planes onto the cylinder or sphere, and the second is mapping the cylinder or sphere back onto the plane that will be used for display.
The first will happen even with 100% overlap, i.e. where every point in the scene appears in at least two frames. In that situation every other frame would just touch its neighbours, and the frame in between would overlap half of each of the other two. When unwrapping back to the plane you face the same issue map makers have faced for centuries: there is no way to flatten a spherical projection that doesn't cause distortion of one kind or another. Using a cylindrical projection in the first place can help to a degree in this second phase, but it may cause more problems in the first stage, which is why it is often best to try both if you can.
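To see why the first stage alone guarantees some distortion, here is a minimal sketch (in Python, not taken from any stitching program) of the standard plane-to-cylinder mapping. The focal length of 1000 pixels and the frame corner coordinates are just assumed numbers for illustration; the point is that the top edge of a frame, a straight line on the image plane, bows when projected onto the cylinder, which is where misaligned pixels along seams come from.

```python
import math

def plane_to_cylinder(x, y, f):
    """Project a point (x, y) on the image plane (origin at the
    optical centre, focal length f in pixels) onto a cylinder of
    radius f, then unroll it. Returns unrolled (x', y')."""
    theta = math.atan2(x, f)      # angle around the cylinder axis
    h = y / math.hypot(x, f)      # height, normalised by ray length
    return f * theta, f * h

f = 1000.0  # assumed focal length in pixels

# Three points along the (straight) top edge of one frame:
left   = plane_to_cylinder(-800, -600, f)
centre = plane_to_cylinder(   0, -600, f)
right  = plane_to_cylinder( 800, -600, f)

# The centre of the edge lands farther from the horizon line than
# the corners do, so the straight edge becomes a curve on the
# cylinder, and neighbouring frames can only agree approximately.
print(left[1], centre[1], right[1])
```

Running this, the centre point's height on the cylinder is larger in magnitude than the corners', so two overlapping frames whose edges were straight lines no longer match exactly after warping; that residual mismatch is what the blender has to hide.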
These out-of-place pixels are usually small enough that, at normal viewing distances (i.e. the diagonal of the print or longer), they won't be visible to the average viewer. The problem you eventually run into is that the corrective action you take to fix one error simply induces an error somewhere else. Of course, at that point you still have the option of using cloning and other painting techniques to hide the final errors if you really cannot live with them. From the way you describe it, it sounds as though you could not live with even the smallest error and would be inspecting a very large final print with a magnifying glass. That is really an impossible goal.
About the only way I can think of to shoot a panoramic image without stitching and other errors is to use a 6×17 panoramic camera, or a 6×17 panoramic back on a 10×8 large format camera, shooting 120/220 film. That will at least give you a decent-sized 3:1 ratio negative or transparency to work from. Although it solves pretty much all of the issues, it is an expensive option for a few casual panoramas. That said, it seems that if you want to do good stitched panoramas you will need to spend around $500 or so on the correct mounting equipment to interface the camera and lens to the tripod.
Practically speaking, though, if you are willing to accept the odd small patch of out-of-place pixels, all but imperceptible at normal viewing distances, most of the software that has been suggested will do a very good job, even when working from good quality JPEG files, let alone 16-bit TIFF/PSD files. Using RAW files is mostly unnecessary from a quality point of view, although the DNG file you get from LR is quite a lot smaller than even an 8-bit TIFF/PSD file. What is more, it will work even with hand-held shots, although those of course tend to need quite high levels of overlap.