I am having a little trouble understanding how 2/3rds of the information would be made up digitally in post. Aren't we confusing exposure with brightness, like when an underexposed image's brightness is increased through exposure compensation in post? The difference can be found in the noise - even just in the luminance channel.
No, we are not confusing exposure with brightness; at least I am not.
Let's zoom in on one single pixel, and for the sake of simplicity let's assume it records values from 0 to 255 for each channel, and let's disregard gamma. From an RGB Foveon type of sensor we may get a Raw recorded data reading of [128,128,128], because all 3 channels are sampled at that position. From a Bayer CFA filtered sensor we may get [128,0,0], or [0,128,0], or [0,0,128], depending on the color of the filter. So for the Bayer CFA we have 2/3rds of the channels missing, but the one sampled channel does record something.
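To make those readings concrete, here is a minimal Python sketch of the two sensor types described above. The names and the helper function are mine, purely illustrative; values run 0-255 per channel and gamma is ignored, as in the discussion.

```python
GRAY = 128  # a uniform mid-gray patch

# Foveon-style: all three channels sampled at one pixel position
foveon_reading = [GRAY, GRAY, GRAY]

def bayer_reading(filter_color):
    """One Bayer sensel records only the channel its color filter passes."""
    reading = [0, 0, 0]
    reading["RGB".index(filter_color)] = GRAY
    return reading

print(bayer_reading("R"))   # [128, 0, 0]
print(bayer_reading("G"))   # [0, 128, 0]
print(bayer_reading("B"))   # [0, 0, 128]
```

Each Bayer sensel thus carries real recorded data in exactly one channel; the other two are simply not sampled at that position.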
Now we do a Bayer CFA demosaicing, which has nothing to do with amplification! The Bayer CFA demosaicing (a very clever interpolation) uses the surrounding sensel positions to estimate the most likely value for the missing channels. When we happen to be looking at a uniform patch of gray, the surrounding Green-filtered sensels all read 128 around our pixel, so the interpolation decides that a missing Green channel should probably also be 128. Thus, after the Green interpolation, we get either [128,128,0], or [0,128,0], or [0,128,128], depending on which channel was actually sampled. Likewise the interpolation from neighboring Red-filtered sensels suggests that the Red channels that were not sampled are probably 128, which gives us [128,128,0], or [128,128,0], or [128,128,128]. And after interpolating Blue from the surrounding Blue-filtered sensels, we get [128,128,128] in all three cases, whatever the filter color was.
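The whole round trip can be sketched in a few lines of Python. This is only a toy model, assuming an RGGB layout and the simplest bilinear interpolation (averaging the sampled neighbors in a 3x3 window); real demosaicing algorithms are far cleverer, but for a uniform gray patch even this naive version recovers the full [128,128,128].

```python
import numpy as np

def bayer_mosaic(gray, h, w):
    """Sample a uniform gray scene through an RGGB Bayer mosaic:
    each sensel records exactly one channel."""
    raw = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            if y % 2 == 0 and x % 2 == 0:
                raw[y, x, 0] = gray          # Red-filtered sensel
            elif y % 2 == 1 and x % 2 == 1:
                raw[y, x, 2] = gray          # Blue-filtered sensel
            else:
                raw[y, x, 1] = gray          # Green-filtered sensel
    return raw

def bilinear_demosaic(raw):
    """Fill each missing channel with the mean of the sampled
    neighbors in a 3x3 window (0 marks 'not sampled' in this toy)."""
    h, w, _ = raw.shape
    out = raw.copy()
    for y in range(h):
        for x in range(w):
            for c in range(3):
                if out[y, x, c] == 0:
                    vals = [raw[ny, nx, c]
                            for ny in range(max(0, y - 1), min(h, y + 2))
                            for nx in range(max(0, x - 1), min(w, x + 2))
                            if raw[ny, nx, c] > 0]
                    out[y, x, c] = np.mean(vals)
    return out

raw = bayer_mosaic(128, 6, 6)
rgb = bilinear_demosaic(raw)
print(raw[2, 2])   # a Red-filtered sensel: only one channel recorded
print(rgb[2, 2])   # after demosaicing: all three channels are 128
```

Note that no amplification happens anywhere: the missing channels are estimated from neighbors that genuinely recorded light, which is the whole point.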
As you can see, the interpolation guessed right regardless of the filter color (because a uniform patch is simple to interpolate), and regardless of how much of the light was absorbed by that filter. The missing channel data, and with it the brightness level, were reconstructed by interpolation from the surrounding sensels.
So despite really sampling only 1/3rd of the light at each pixel position, the reconstruction by interpolation gives the same RGB output brightness for both types of sensor.
P.S. When you zoom in on the earlier synthesized CFA images in the middle row, you'll see exactly what I described (single-channel colors: either R, G, or B), only with more detail. The original brightness of the images in the first row has been reconstructed by interpolation.