I finally have my first version of processing code written!
It's a three-step process, and I'm working on automating the whole thing.
Step 1: A Windows batch file copies the .CR2 files onto my laptop's hard drive, sorted into YYYY\YYYY-MM-DD directories, and duplicates the same structure to my server volume.
Step 2: The server uploads any .CR2 files that have changed to my website on a nightly basis.
Step 3: Browse! The website automatically creates thumbnails (from CR2 files!!!), automatically generates cached JPG files, and displays them when you click a thumbnail, with full EXIF extraction on the same page.
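Step 1's sorting logic can be sketched in Python (my actual version is a Windows batch file; only the YYYY\YYYY-MM-DD layout and the laptop-plus-server duplication come from the workflow above, the function names are mine):

```python
import shutil
from datetime import datetime
from pathlib import Path

def dated_dest(src: Path, root: Path) -> Path:
    """Build the YYYY/YYYY-MM-DD destination path from the file's mtime."""
    d = datetime.fromtimestamp(src.stat().st_mtime)
    return root / f"{d.year:04d}" / f"{d:%Y-%m-%d}" / src.name

def copy_sorted(src: Path, laptop_root: Path, server_root: Path) -> None:
    """Copy a .CR2 into the laptop tree, then duplicate it to the server volume."""
    for root in (laptop_root, server_root):
        dest = dated_dest(src, root)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)  # copy2 preserves the capture-time mtime
```

Using the file's modification time works because the camera sets it to the capture time; a more robust version would read the EXIF DateTimeOriginal instead.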
I still run DPP locally for print-quality output, but for web quality I extract the JPEG embedded in the CR2 file, reduce it to 50% size, and use that both as the cached JPEG and as the source for the 120x120 thumbnail. The EXIF data is pulled by exiftool and consolidated into HTML alongside the JPG image on a single page.
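As a sketch of that cache step, here is how the command lines might be assembled. The dcraw flags (-c -e writes the embedded JPEG to stdout), ImageMagick's convert options, and exiftool's -h (HTML output) are real; the file naming and the JPEG quality setting are my assumptions, and this only builds the commands rather than running the tools:

```python
from pathlib import Path

def build_cache_commands(cr2: Path, cache_dir: Path):
    """Return the command lines that would produce the web-quality cached
    JPEG and the 120x120 thumbnail from a CR2's embedded preview image."""
    cached = cache_dir / (cr2.stem + ".jpg")
    thumb = cache_dir / (cr2.stem + "_thumb.jpg")
    return [
        # dcraw -c -e writes the CR2's embedded JPEG to stdout...
        ["dcraw", "-c", "-e", str(cr2)],
        # ...which convert reads from stdin and resizes to 50% for the cache.
        ["convert", "-", "-resize", "50%", "-quality", "85", str(cached)],
        # The 120x120 thumbnail is generated from the cached JPEG.
        ["convert", str(cached), "-thumbnail", "120x120", str(thumb)],
        # exiftool -h dumps the EXIF data as an HTML table for the page.
        ["exiftool", "-h", str(cr2)],
    ]
```

In the real pipeline the first two commands are joined with a shell pipe, so no intermediate full-size JPEG ever touches disk.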
The best part is that I'm still RAW-only (I don't have any JPG files anywhere other than the "cache" folders), and it's all generated on the fly on my web server. Fun!
The pieces that make it work:
my batch files
exiftool (dumps EXIF in HTML and also used for orientation detection)
activeGallery (heavily modified)
dcraw (used to extract internal thumbnail image from CR2 file)
ImageMagick (resize and quality reduction)
CyberKiko FTPSync (sync my home server via FTP to website)
jpegtran (for automatic rotation of thumbnails)
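For the jpegtran step, the usual mapping from the EXIF Orientation tag (which exiftool reads) to a lossless rotation looks like this; the helper name is mine, but the tag values 3/6/8 and the jpegtran flags are the standard ones:

```python
def jpegtran_args(orientation: int):
    """Map an EXIF Orientation tag value to a jpegtran lossless-rotate command.
    Returns None when no rotation is needed (orientation 1, or unknown)."""
    rotations = {3: "180", 6: "90", 8: "270"}  # degrees clockwise
    angle = rotations.get(orientation)
    if angle is None:
        return None
    # -copy all keeps the EXIF block; -perfect refuses any lossy edge trim.
    return ["jpegtran", "-copy", "all", "-perfect", "-rotate", angle]
```

Because jpegtran transforms the JPEG's DCT blocks directly, the rotated thumbnails never go through a decode/re-encode cycle.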