It seems you may be misunderstanding the RAW conversion process a bit. First of all, the camera's sensor has cyan, yellow, green and magenta colored pixels, and each pixel records only one of those four colors, but you most likely want to see them mixed to make all possible colors. In order to do this, the converter has to "guess" (interpolate) the missing color components for each pixel from the surrounding pixels. So some processing has to be done to see a "normal" image.
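If you're curious what that "guessing" can look like, here is a minimal sketch in Python/NumPy. It is not any particular converter's algorithm, and the 2x2 CYGM layout used here is just an assumed example; it simply fills each missing channel value with the average of the nearest pixels that did sample that channel:

# Minimal demosaicing sketch, assuming a repeating 2x2 CYGM mosaic.
# The real layout and interpolation method differ per camera and converter.
import numpy as np

def demosaic_nearest_average(raw, pattern):
    """Fill in missing channel values by averaging sampled neighbours.

    raw     : 2-D array of sensor counts (one value per pixel)
    pattern : 2x2 list of channel names, e.g. [['C', 'Y'], ['G', 'M']]
              (a hypothetical layout, purely for illustration)
    """
    h, w = raw.shape
    channels = {}
    for name in {p for row in pattern for p in row}:
        # Mark which pixels actually sampled this channel.
        mask = np.zeros((h, w), dtype=bool)
        for dy in range(2):
            for dx in range(2):
                if pattern[dy][dx] == name:
                    mask[dy::2, dx::2] = True

        sampled = np.where(mask, raw, 0.0)
        # Average over the 3x3 neighbourhood, counting only sampled pixels.
        # (np.roll wraps around at the edges -- good enough for a sketch.)
        total = np.zeros((h, w))
        count = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += np.roll(np.roll(sampled, dy, axis=0), dx, axis=1)
                count += np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
        channels[name] = total / np.maximum(count, 1)
    return channels  # full-resolution C, Y, G and M planes

# Tiny fake exposure, just to show it runs:
raw = np.random.default_rng(0).uniform(0, 4095, (8, 8))
planes = demosaic_nearest_average(raw, [['C', 'Y'], ['G', 'M']])
print({name: plane.shape for name, plane in planes.items()})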
Furthermore, using the 'camera selected settings' means that the software will try to use the same white balance settings as would have been used in the camera if you had shot in JPEG format. The white balance settings influence how the CYGM channels are mixed together to form red, green and blue, the color space that computer monitors work with.
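Very roughly, that mixing step could look like the sketch below: per-channel white balance gains followed by a 3x4 CYGM-to-RGB matrix. Every number in it is a made-up placeholder; the real gains and matrix come from the camera's calibration and the chosen white balance, which is exactly what 'camera selected settings' reuses:

# Rough sketch of white balancing and CYGM-to-RGB mixing.
# All numeric values below are placeholders for illustration only.
import numpy as np

def cygm_to_rgb(planes, wb_gains, color_matrix):
    """planes: dict of full-resolution C/Y/G/M arrays (e.g. after demosaicing).
    wb_gains: per-channel multipliers that neutralise the light source.
    color_matrix: 3x4 matrix mixing the balanced CYGM channels into RGB."""
    order = ['C', 'Y', 'G', 'M']
    stacked = np.stack([planes[c] * wb_gains[c] for c in order], axis=-1)
    rgb = stacked @ np.asarray(color_matrix).T  # shape (h, w, 3)
    return np.clip(rgb, 0.0, None)

# Placeholder numbers, purely for illustration:
wb_gains = {'C': 1.0, 'Y': 0.9, 'G': 1.1, 'M': 1.0}
color_matrix = [
    [-0.5,  1.0, -0.3,  0.9],   # R from C, Y, G, M
    [ 0.6,  0.5,  0.8, -0.4],   # G from C, Y, G, M
    [ 0.9, -0.3,  0.2,  0.4],   # B from C, Y, G, M
]

h, w = 8, 8
planes = {c: np.random.default_rng(1).uniform(0, 1, (h, w)) for c in 'CYGM'}
rgb = cygm_to_rgb(planes, wb_gains, color_matrix)
print(rgb.shape)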
Finally, the capture device is linear, as it essentially is counting incoming photons: twice the exposure time results in twice the light and twice the number read out from the CCD. Computer monitors are not linear; doubling the signal roughly quadruples the emitted light. In order to compensate for this, converted pixel values are 'gamma-corrected', which means the darker and mid-tone values are boosted so that your screen shows them at the right brightness.
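A tiny example of that correction, assuming the common gamma 2.2 approximation (real converters may use the exact sRGB curve or a camera-specific tone curve instead):

# Gamma encoding sketch: linear sensor values in, display-ready values out.
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Map linear values (0..1) to display values (0..1).
    gamma=2.2 is a common approximation; sRGB actually uses a slightly
    different piecewise curve."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

# Half the light does NOT become half the encoded value:
print(gamma_encode(0.5))    # about 0.73 -- darker tones are lifted
print(gamma_encode(0.25))   # about 0.53
print(gamma_encode(0.73) ** 2.2)  # the monitor's response undoes it, back near 0.5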
So a lot of processing is done anyway before you get your TIFF file. If you want to learn more about these processes and get the most out of your 'digital negatives', I recommend visiting http://www.aim-dtp.net, which has more information than you could wish for.