When creating JPEG files from Adobe software you are given one of two interface scales for the quality setting: 0 to 12, or 0 to 100. On both scales there are only thirteen different levels of compression actually applied. The 0 to 12 scale is easy to read, since each value corresponds to a distinct compression level. For the 0 to 100 scale you need to divide 101 by 13 to get the step value, which is roughly 7.77. If you then divide the scale value by the step value and keep only the digits in front of the decimal point, you get the conversion to the 0 to 12 scale. So Q80 and Q85 both give the same Level 10 compression.
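If you want to play with the conversion yourself, here is a minimal sketch of that arithmetic in Python. The step value 101/13 is the one described above; the bucketing itself is my assumption about how Adobe maps the 0-100 slider, not anything taken from Adobe documentation.

```python
def quality_to_level(q):
    """Map a 0-100 quality value to the 0-12 scale, as described above."""
    step = 101 / 13            # ~7.77 per level
    return int(q // step)      # keep only the digits before the decimal point

for q in (78, 80, 85, 100):
    print(q, "->", quality_to_level(q))
# 78 -> 10, 80 -> 10, 85 -> 10, 100 -> 12
```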
I have run extensive comparisons of images exported at all 13 levels of JPEG compression against an 8-bit TIFF. All the images were exported from Lr, with the only differences being the level of compression for each of the 13 JPEG files, and of course the change to TIFF for the master image. All of the files were loaded into one PSD file as layers, and the blend modes were all set to Difference. This lets you see the actual difference in each colour channel of every pixel. With the higher quality settings the result looks almost pure black, so to make the differences visible I also had to add a Levels layer, stretching pixel values from 0 to 8 over the whole 0-255 range.
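The same difference-blend-plus-Levels check can be done outside Photoshop. Here is a rough sketch using Pillow and NumPy; the file names are placeholders, and the x32 stretch mirrors the 0-8 to 0-255 Levels adjustment described above.

```python
import numpy as np
from PIL import Image

# int16 so the subtraction can't wrap around below zero
tiff = np.asarray(Image.open("master.tif").convert("RGB"), dtype=np.int16)
jpeg = np.asarray(Image.open("level10.jpg").convert("RGB"), dtype=np.int16)

diff = np.abs(tiff - jpeg).astype(np.uint8)   # per-channel, per-pixel difference
print("max difference:", diff.max())

# Equivalent of the Levels layer: spread 0-8 over the full 0-255 range
boosted = np.clip(diff.astype(np.float32) * (255 / 8), 0, 255).astype(np.uint8)
Image.fromarray(boosted).save("difference_boosted.png")
```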
What soon becomes obvious is that when working at Level 10 (78-85 on the 0-100 scale) and above, the differences between the various JPEG levels and the uncompressed TIFF file are all of the same magnitude. Oddly, the difference between the Level 10 and Level 12 JPEG files was greater than the difference between either of them and the uncompressed TIFF.
It is important to know that images saved in the JPEG format are not stored as RGB triplets, as they are in a TIFF file, or when working with them in an image editor. Instead they are converted to a monochromatic brightness channel plus two chrominance channels for the colour. The pixels are subdivided into 8×8 blocks, and each block then has some clever maths applied to it, a Discrete Cosine Transform, which produces a representation from which some data can be removed without unduly affecting the perceived image quality. Even if you skip the compression stage, though, the conversion itself introduces some very small measurable, but invisible, artifacts.
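To give a feel for why discarding DCT data is so cheap visually, here is a toy illustration in Python: transform one 8×8 block, zero out the smallest coefficients (a crude stand-in for real quantization, not how any actual encoder does it), transform back, and check how little the pixel values move.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(90, 120, size=(8, 8)).astype(np.float64)  # a fairly flat block

coeffs = dctn(block, norm="ortho")      # forward 2-D DCT of the 8x8 block
coeffs[np.abs(coeffs) < 2.0] = 0        # throw away the near-zero coefficients
restored = idctn(coeffs, norm="ortho")  # inverse DCT back to pixels

print("coefficients kept:", np.count_nonzero(coeffs), "of 64")
print("max pixel error:", np.abs(block - restored).max())
```

On a flat block like this, a large share of the coefficients can be dropped while the worst pixel error stays down in the single digits, which is exactly the kind of difference that only shows up once you amplify it with a Levels layer.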
Essentially, compared to an image that hasn't been converted to JPEG, there is no real measurable difference in image quality between Levels 10 and 12. There is, however, a very large difference in file size on disk: compared to the original uncompressed data, a Level 10 JPEG file is between 40% and 60% smaller. Since a maximum quality JPEG file is going to be about the same size on disk as an 8-bit TIFF, and a TIFF's size can be reduced with lossless LZW compression, I would pick the TIFF over the JPEG. If JPEG is required, I see absolutely no need to ever use higher than Level 10 quality.
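You can check the size trade-off on your own files with a few lines of Pillow. Note that Pillow's quality scale is not Adobe's, so quality=80 here is only a rough stand-in for Level 10, and the file names are placeholders.

```python
import os
from PIL import Image

img = Image.open("master.tif").convert("RGB")

img.save("max_quality.jpg", quality=95)            # near-maximum quality JPEG
img.save("level10_ish.jpg", quality=80)            # rough Level 10 equivalent
img.save("lossless.tif", compression="tiff_lzw")   # 8-bit TIFF with LZW

for name in ("max_quality.jpg", "level10_ish.jpg", "lossless.tif"):
    print(name, os.path.getsize(name), "bytes")
```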
Finally, when posting images online I now export at a maximum of 1280 px on the long edge, Level 10 JPEG, sRGB. That gives a good size on screen, but at least nobody can make a good large-sized print from one.
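For anyone scripting that export recipe, here is a sketch with Pillow. Plain RGB JPEGs are generally treated as sRGB by browsers, and again quality=80 is only my approximation of Adobe's Level 10.

```python
from PIL import Image

img = Image.open("master.tif").convert("RGB")
scale = 1280 / max(img.size)                       # long edge down to 1280 px
img = img.resize((round(img.width * scale), round(img.height * scale)),
                 Image.LANCZOS)
img.save("web_version.jpg", quality=80)
```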
Alan