If you can't name the software, then you should probably refrain from using it as an argument, and refrain from generalizing on top of it... Besides, what does your comparison demonstrate exactly? Some unknown conversion (parameter-wise) using unknown software? There was one PhD here who compared iPhone (?) JPEGs with the results of a raw conversion from an MFDB, trying to prove some point...
It's Iridient Developer, and I was told by its developer that it uses Lab as its color engine. This look is not confined to just this raw converter; it shows up in Lab edits generally, as I indicated above.
Now, I'll go on to tell you why I'm right about editing in Lab.
The crux of the matter with Lab has been mentioned several times here at LuLa: its saturation edits supposedly behave "better" than the same edit applied in an RGB space. That behavior stems from the fact that Lab applies saturation by degrees according to how far a color sits from absolute gray (R=G=B, i.e. a*=0, b*=0). That's a problem with regard to the adaptive nature of human perception and how we view a scene, given what a saturation boost is actually supposed to do.
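That chroma-proportional behavior can be sketched in a few lines of Python. This is a simplified illustration of the principle, not any specific converter's algorithm; the boost factor k and the sample Lab values are arbitrary:

```python
import math

def boost_lab_saturation(L, a, b, k=1.5):
    """Naive Lab-style saturation boost: scale a*/b* away from gray (0, 0).

    Chroma C* = sqrt(a*^2 + b*^2) grows in proportion to how far the
    color already is from gray, so near-neutral pixels barely move.
    """
    return L, a * k, b * k

def chroma(a, b):
    return math.hypot(a, b)

# Near-gray shadow value: the boost barely moves it.
_, a1, b1 = boost_lab_saturation(20.0, 1.0, -1.0)
# Strongly colored patch: the same boost produces a large absolute shift.
_, a2, b2 = boost_lab_saturation(50.0, 40.0, 30.0)

print(round(chroma(1.0, -1.0), 2), "->", round(chroma(a1, b1), 2))   # 1.41 -> 2.12
print(round(chroma(40.0, 30.0), 2), "->", round(chroma(a2, b2), 2))  # 50.0 -> 75.0
```

The near-gray pixel gains under one unit of chroma while the colored one gains 25, purely because of where each sits relative to gray, regardless of how either reads to an adapted eye.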
What is happening when saturation is being increased?
Science doesn't define it as a "make my picture look pretty" slider. Its function is to mimic the spectral-reflectance character of full-spectrum light on any object lit by it, the quality we "FEEL" (from memory) is missing in the image. Shadows, as defined in Lab and measured by a spectro, technically exhibit less spectral reflectance and sit closer to gray, and thus get less saturation.
But we humans don't look just at shadows when we view an entire scene. I have looked at shadows in isolation: some are quite neutral, some are bluish, some are greenish, etc. But their color changes when I view the entire scene, because the surrounding spectral-reflectance-driven colors induce the adaptive effect.
Also, saturation levels, especially when bumped up high, induce the adaptive effect so that we see less saturation as we scan individual areas of an image. Lab does not calculate for this effect under the hood. It was built defining color one color at a time, just as a machine (a spectro) defines color, which is the only way a machine can understand color: BY THE NUMBERS. A digital sensor likewise defines color by measuring the voltage variation of charged pixel cells to derive a gray luminance value for each RGGB combination.
Lab only cares about the numbers when increasing saturation, boosting most where the data is farthest from gray, without taking into account how a human sees the entire scene, which is greatly influenced by adaptation to both the saturation levels and the overall color in the scene.
As I indicated with the ColorChecker chart, when a human zeros in on just one color patch, the adaptive effect kicks in and changes that color's appearance compared to viewing the entire chart as a whole object, all the patches at once.
Shadows can be many colors, some R=G=B and some not. Some may look neutral but read R>G>B, or the reverse. Lab doesn't care what you see. If it's gray by the numbers, Lab doesn't increase its saturation; if it's colored by the numbers, Lab increases its saturation whether it needs it or not. See the example below and the manipulation of the appearance of the shadows in relation to their numbers.
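To put numbers on the "gray by the numbers" point, here is a sketch using the standard sRGB-to-CIELAB formulas (D65 white point). The two shadow swatch values are my own picks for illustration: one is exactly R=G=B, the other could pass for neutral in context but reads R>G>B:

```python
import math

def srgb_to_lab(r8, g8, b8):
    """Convert 8-bit sRGB (D65) to CIELAB using the standard formulas."""
    def lin(c):
        # Undo the sRGB transfer curve.
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r8), lin(g8), lin(b8)
    # Linear sRGB -> XYZ (D65 matrix).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def chroma(a, b):
    return math.hypot(a, b)

La, aa, ba = srgb_to_lab(40, 40, 40)   # shadow, neutral by the numbers
Lb, ab, bb = srgb_to_lab(44, 40, 36)   # shadow, slightly warm by the numbers

k = 1.5  # chroma-proportional saturation boost
print(f"R=G=B shadow: C* {chroma(aa, ba):.2f} -> {k * chroma(aa, ba):.2f}")
print(f"R>G>B shadow: C* {chroma(ab, bb):.2f} -> {k * chroma(ab, bb):.2f}")
```

The R=G=B swatch lands at essentially zero chroma, so the boost leaves it untouched; the R>G>B swatch, which may look just as neutral to an adapted viewer, gets pushed further from gray.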
Which one looks neutral? Which one will increase in saturation evenly in relation to the rest of the image if the boost is applied in Lab versus in an RGB space?