"When you open an in-camera-produced JPEG or a raw image in a post-processing program, say ACR/LR, each of these images has a general range of DR... whether the monitor can show it or not. Each has a range of information that the user can manipulate to produce the output they want."
The easy qualitative answer to your question is that there is a lot more information in the Raw file than in the OOC JPEG - a lot.
For instance, if you underexposed and needed to increase brightness in PP, the 8-bit JPEG would start showing visible posterization and other artifacts very quickly, while the Raw file would probably continue to look pleasing while you increased brightness a few more stops. That's the easy answer, and it rests on qualitative words like 'visible' and 'pleasing'.
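To make that concrete, here is a quick NumPy sketch. All the numbers in it - the 14-bit raw depth, the gamma-2.2 JPEG encoding, the 3-stop push - are my own assumptions for illustration, not taken from any particular camera or converter. It just counts how many distinct tonal levels survive a shadow push in each encoding:

```python
import numpy as np

# Hypothetical smooth scene: a dark gradient occupying the bottom 3% of full scale.
scene = np.linspace(0.0, 0.03, 100_000)

# "Raw": 14-bit linear quantization (a common depth, but an assumption here).
raw = np.round(scene * (2**14 - 1)) / (2**14 - 1)

# "JPEG": simple gamma 1/2.2 encoding to 8 bits (real JPEG pipelines differ).
jpeg = np.round((scene ** (1 / 2.2)) * 255) / 255

# Push exposure +3 stops in "post": multiply the linear values by 2**3.
push = 2 ** 3
raw_pushed = np.clip(raw * push, 0, 1)
jpeg_pushed = np.clip((jpeg ** 2.2) * push, 0, 1)  # decode to linear first

# Distinct tonal levels surviving in the pushed shadows:
print("raw levels: ", len(np.unique(raw_pushed)))   # ~490 steps
print("jpeg levels:", len(np.unique(jpeg_pushed)))  # ~50 steps -> visible banding
```

Roughly an order of magnitude more distinct shadow levels in the raw case - which is exactly why the pushed JPEG bands first. But notice the example only counts levels; whether the banding is 'visible' still depends on the image and the viewer.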
But there is no easy quantitative answer. That's why I asked the question myself in a different form a couple of pages back, and again a couple of posts up - it's so hard that nobody here seems to be up to the task.
It depends on the nature of the information and on the observer, on noise relative to the size of an ADU, on sample size, etc. There are too many variables involved, and it needs to be addressed by someone better versed in information science than I am. Even if someone were able to calculate the capacity of these two specific channels, we probably would not know what to do with that bit of information in practice.
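For flavor, here is what the crudest version of such a calculation looks like - a per-pixel Shannon capacity with purely Gaussian noise, with every sensor number invented for illustration. It ignores photon shot noise, spatial correlation, the observer, and everything else that makes the real problem hard:

```python
import numpy as np

# Invented sensor numbers, purely for illustration:
full_well  = 60_000   # electrons at saturation
read_noise = 3.0      # electrons RMS

# Engineering dynamic range, in stops:
dr_stops = np.log2(full_well / read_noise)
print(f"engineering DR: {dr_stops:.1f} stops")       # ~14.3

# Shannon capacity of one pixel treated as an AWGN channel
# (C = 0.5 * log2(1 + SNR_power) per sample). Ignoring shot noise,
# which grows with signal, is one of the simplifications that makes
# this a toy rather than a real answer:
snr_power = (full_well / read_noise) ** 2
capacity = 0.5 * np.log2(1 + snr_power)
print(f"AWGN capacity:  {capacity:.1f} bits/pixel")  # ~14.3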
I can answer only a portion of your question, to help you understand why there is no easy answer:
"When you open an in-camera-produced JPEG or a raw image in a post-processing program, say ACR/LR, each of these images has a general range of DR..."
The fact is, there is no range of DR inherent in a file of a specific bit depth, whether the data is encoded linearly or not. With a large enough sample and appropriately sized noise (not too big, not too small, just the size of Montreal), the sky is the limit - remember the 1-bit newspaper image?
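In case anyone wants to replay the newspaper trick, here is a toy sketch of it (the image size and the 16x16 pooling window are arbitrary choices of mine). A smooth gradient is dithered and quantized to a single bit per pixel, yet spatial averaging - roughly what the eye does at arm's length - recovers dozens of levels from a 1-bit file:

```python
import numpy as np

rng = np.random.default_rng(42)

# Smooth horizontal gradient, values in [0, 1]:
gradient = np.tile(np.linspace(0, 1, 512), (512, 1))

# Add dither noise of "the right size" (one quantization step),
# then quantize to a single bit per pixel:
dither = rng.uniform(-0.5, 0.5, gradient.shape)
one_bit = (gradient + dither > 0.5).astype(float)

# Pool over 16x16 blocks, as the eye does at viewing distance:
pooled = one_bit.reshape(32, 16, 32, 16).mean(axis=(1, 3))

print("levels in the 1-bit file:    ", len(np.unique(one_bit)))  # 2
print("levels after spatial pooling:", len(np.unique(pooled)))   # dozens
```

The file holds exactly two levels per pixel, but with the noise sized correctly and enough pixels to average over, the reproduced gradient has far more - which is why bit depth alone puts no ceiling on the DR of the image.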