My background in computer science inclines me to encourage people to read up on how the JPEG compression algorithm works, and even to try programming their own. However, that is the point where most photographers' eyes glaze over and I get weird looks. If you are actually geeky enough to enjoy poking around with the math, then heading over to https://www.youtube.com/user/Computerphile and looking up their videos on JPEG and image compression may be an entertaining way to spend some time.
A more realistic approach is to simply acknowledge that different levels of compression affect different photos in different ways, and that there is no one-size-fits-all setting that works in every case, even if the Lightroom default level covers the vast majority of them.
My advice is: process and inspect. If you're not familiar with what is going on, export at a few different settings and compare the results. Pick the option that actually meets your needs and roll with it. Experiment and explore.
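If you'd rather automate that "export at a few settings" step outside of Lightroom, here's a minimal sketch using Python with the Pillow library (my choice of tool, not something from the thread). The gradient image is just a stand-in for a real photo, and the specific quality values are arbitrary examples:

```python
# Sketch: encode one image at several JPEG quality settings and compare
# file sizes. Pillow's built-in linear_gradient is a stand-in for a photo.
from io import BytesIO
from PIL import Image

img = Image.linear_gradient("L").convert("RGB")  # 256x256 smooth gradient

sizes = {}
for quality in (40, 60, 80, 95):
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    sizes[quality] = len(buf.getvalue())
    print(f"quality {quality}: {sizes[quality]} bytes")
```

File size alone won't tell you about visible artifacts, of course; you still have to open the exports and look, which is the whole point.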
The JPEG compression algorithm was designed around the idea that content varies fairly rapidly within a photo, but not along hard, solid lines of contrast. This is why highly textured elements, such as the Reed Window Shades in the link Tim posted above, hold up so well, while images with sharp edges, such as text, tend to break down and look weird, especially if you re-encode through JPEG multiple times.
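That "re-encode multiple times" degradation (generational loss) is easy to demonstrate yourself. A quick sketch, again using Pillow as my tool of choice; the gradient image and the quality-60 setting are my own arbitrary picks for illustration:

```python
# Sketch of generational loss: re-encode the same image through JPEG
# repeatedly and measure how far pixel values drift from the original.
from io import BytesIO
from PIL import Image, ImageChops

original = Image.linear_gradient("L").convert("RGB")  # stand-in image
current = original

for generation in range(10):
    buf = BytesIO()
    current.save(buf, format="JPEG", quality=60)
    buf.seek(0)
    current = Image.open(buf).convert("RGB")

# Per-channel (min, max) of the absolute difference image.
diff = ImageChops.difference(original, current)
drift = max(hi for lo, hi in diff.getextrema())
print("max channel drift after 10 generations:", drift)
```

With photographic content the drift each generation is small, but with hard edges like text it compounds quickly into visible smearing.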
It does not, however, deal all that well with very fine and smooth gradients over large areas. The Sunset and Bird image on that link shows very noticeable banding on my MacBook at the 39-46 slider settings, which fades to a faint sort of splotchiness for the 70-84 options, and a faint but still visible shift when switching between the last two.
As for which to use? I agree with nathancarter's comment that it depends on what you are going to use it for, but in many cases I disagree with downsampling too far on export before uploading to the web. If you hand over a larger image to begin with, the processing side has more graceful options for working with it. For example, uploading with a 2048px long side lets the site's automatic processing resample more gracefully when it generates its various image sizes server-side, especially for arbitrary scaling ratios that would otherwise result in 'splitting a pixel'.
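For anyone doing that long-side resize themselves before upload, here's a small sketch of the idea using Pillow (my tool choice; the function name and the 6000x4000 stand-in photo are mine, and 2048 is just the target from my example above):

```python
# Sketch: resize an image to a 2048px long side before upload, keeping
# the aspect ratio and using Lanczos resampling for the downscale.
from PIL import Image

def resize_long_side(img, target=2048):
    w, h = img.size
    scale = target / max(w, h)
    if scale >= 1:  # never upsample; leave smaller images alone
        return img
    return img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)

photo = Image.new("RGB", (6000, 4000))  # stand-in for a full-size export
out = resize_long_side(photo)
print(out.size)  # (2048, 1365)
```

Lanczos is a reasonable default filter for downscaling; a simpler filter like bilinear tends to soften fine detail more.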
I also encourage anyone dealing with digital imaging to read Alvy Ray Smith's short paper "A Pixel Is Not a Little Square". A very interesting read for anyone who geeks in that direction, but not a paper for everyone.
Canon EOS 7D | EF 28 f/1.8 | EF 85 f/1.8 | EF 70-200 f/4L | EF-S 17-55 | Sigma 150-500
Flickr: Real-Luckless