I suppose one scenario consistent with both our observations would be the following: ACR loads the RAW file into LSB's, consistent with John's observation, and then does the bayer interpolation. Then sets white and black points and rescales the data to span 2^16 levels (though filling only 2^12 or 2^14 of the possible values). If the conversion from camera color data to XYZ color space (or LAB, Prophoto, etc) is now performed, it will be done at 16-bit precision and since ACR does this via matrix multiplication, the resulting linear combinations of the three color channels will take on all 2^16 possible values, consistent with my observation. So some early calculations would be done at the bit depth of the camera, and later ones at 16-bit depth.


Emil, I find it hard to believe that the demosaicing (Bayer interpolation) is done in the native 12- or 14-bit range in ACR.

I can't speak for ACR, but I can explain how DCRAW works. It is not floating point, but it is clearly better than a 12-bit/14-bit approach:

1. DCRAW converts 12/14-bit samples into 16-bit (just an integer multiplication: x16 for 12-bit, x4 for 14-bit).

2. Next it calculates the black offset (when needed; some cameras don't have one) and subtracts it.

3. Next, DCRAW knows the point at which every camera clips its RGB channels (Panopeeper can tell us a lot about this) and scales them so that the clip point on each channel reaches the maximum (65535). In the same operation (after all, it is all multiplications) it applies the white balance (an individual multiplier per channel).

All these steps are done in a very important function of DCRAW's code called **scale_colors()** (see below), which by the way is the only one I have looked at.

4. Next comes the rest of the development process: Bayer demosaicing, highlight recovery if enabled, and colour profiling if enabled.

So unfortunately DCRAW is not floating point, but it is 16-bit all the way through.

This is a linear TIFF produced by DCRAW (BTW, plotted using the program Bill refers to; find a tutorial here: [a href=\"http://www.guillermoluijk.com/tutorial/histogrammar/index_en.htm\"]HISTOGRAMMAR TUTORIAL[/a]):

**1. THE RAW FILE** This is a real native RAW histogram, prior to demosaicing. All values are in the 0..4095 (i.e. 12-bit) range:

We can see that the whole 0..4095 range is not actually used. This is because of a DC black offset that all cameras have (usually around 250 levels in my 350D). Some brands subtract that value before saving the final RAW data, or simply don't produce it; I don't know which is the true explanation.

**2. THE DEMOSAICING PROCESS (RAW DEVELOPMENT)** For demosaicing, the previous values are scaled by a factor of 2^(16-12)=16, plus an additional WB scaling. From these scaled values, the interpolated values (already in the 16-bit range) are calculated:

[span style=\'font-size:8pt;line-height:100%\'](**NOTE**: for simplicity this is the histogram of the blue channel only. Two blue tones were used to distinguish levels according to their origin.)[/span]

**3. GETTING INTO PHOTOSHOP** Our image is now in a true 16-bit range. Now look what happens to the histogram when we load the image into PS (ACR output shows the same effect) and save it back in 16-bit TIFF format:

[span style=\'font-size:7pt;line-height:100%\']**Original**[/span]

[span style=\'font-size:7pt;line-height:100%\']**After just Open and Save in PS**[/span]

PS loses 1 bit of precision. The original image produced by DCRAW is linear, which is why it has no gamma gaps; those will appear later in PS, when converting to a non-linear colour space.

Regards.

PS: DCRAW's scale_colors() function:

[code]
void CLASS scale_colors()
{
  unsigned bottom, right, row, col, x, y, c, sum[8];
  int val, dblack;
  double dsum[8], dmin, dmax;
  float scale_mul[4];

  if (user_mul[0])
    memcpy (pre_mul, user_mul, sizeof pre_mul);
  if (use_auto_wb || (use_camera_wb && cam_mul[0] == -1)) {
    memset (dsum, 0, sizeof dsum);
    bottom = MIN (greybox[1]+greybox[3], height);
    right  = MIN (greybox[0]+greybox[2], width);
    for (row=greybox[1]; row < bottom; row += 8)
      for (col=greybox[0]; col < right; col += 8) {
        memset (sum, 0, sizeof sum);
        for (y=row; y < row+8 && y < bottom; y++)
          for (x=col; x < col+8 && x < right; x++)
            FORC4 {
              if (filters) {
                c = FC(y,x);
                val = BAYER(y,x);
              } else
                val = image[y*width+x][c];
              if (val > maximum-25) goto skip_block;
              if ((val -= black) < 0) val = 0;
              sum[c] += val;
              sum[c+4]++;
              if (filters) break;
            }
        for (c=0; c < 8; c++) dsum[c] += sum[c];
skip_block:
        continue;
      }
    FORC4 if (dsum[c]) pre_mul[c] = dsum[c+4] / dsum[c];
  }
  if (use_camera_wb && cam_mul[0] != -1) {
    memset (sum, 0, sizeof sum);
    for (row=0; row < 8; row++)
      for (col=0; col < 8; col++) {
        c = FC(row,col);
        if ((val = white[row][col] - black) > 0)
          sum[c] += val;
        sum[c+4]++;
      }
    if (sum[0] && sum[1] && sum[2] && sum[3])
      FORC4 pre_mul[c] = (float) sum[c+4] / sum[c];
    else if (cam_mul[0] && cam_mul[2])
      memcpy (pre_mul, cam_mul, sizeof pre_mul);
    else
      fprintf (stderr,_("%s: Cannot use camera white balance.\n"), ifname);
  }
  if (pre_mul[3] == 0) pre_mul[3] = colors < 4 ? pre_mul[1] : 1;
  dblack = black;
  if (threshold) wavelet_denoise();
  maximum -= black;
  for (dmin=DBL_MAX, dmax=c=0; c < 4; c++) {
    if (dmin > pre_mul[c])
        dmin = pre_mul[c];
    if (dmax < pre_mul[c])
        dmax = pre_mul[c];
  }
  if (!highlight) dmax = dmin;
  /* The key line: normalise the multipliers and scale to 65535 */
  FORC4 scale_mul[c] = (pre_mul[c] /= dmax) * 65535.0 / maximum;
  if (verbose) {
    fprintf (stderr,_("Scaling with black %d, multipliers"), dblack);
    FORC4 fprintf (stderr, " %f", pre_mul[c]);
    fputc ('\n', stderr);
  }
  for (row=0; row < iheight; row++)
    for (col=0; col < iwidth; col++)
      FORC4 {
        val = image[row*iwidth+col][c];
        if (!val) continue;
        val -= black;
        val *= scale_mul[c];
        image[row*iwidth+col][c] = CLIP(val);
      }
}
[/code]