In my experience using a scanner (mine is an Epson Perfection 3200) to measure densities when calibrating my digital negatives for alternative printing processes, I have encountered a similar spatial dependency in the data. For example, the density at the border frequently seems to be greater than that of a patch (of the same exposure) in the middle. I chalked it up to some chemical/kinetic phenomenon during the develop/fix steps. Based on your data, it could well be a scanning artifact.
I'd bet it's a similar effect to what I'm seeing. When I first looked at the way prints were illuminated by the scanner, my expectation was that there would be luminance variations across horizontal scans. LEDs are notoriously variable: matching them to within 10%, let alone 1% or 2%, is difficult. But much to my surprise, there was almost no horizontal variation. It was under 1% except at the scan edges. Almost certainly the scanners are individually calibrated, with horizontal positional gain written to non-volatile memory during manufacture. I was happy to see this.
It would seem to me that the scanner is not a means to do precision measurement work like a spectrophotometer. For example, I have noticed that where you put the print on the glass can also affect the readings. If I place the print and measure, then rotate it and measure again, there can be as much as 5-10% variation in the readings. So for comparison purposes I try to place the print the same way each time.
:Niranjan.
A spectro is very precise and provides highly repeatable measurements, especially the ones that use white LEDs as the illuminant, which are more stable than tungsten lamps with their thermal drift over time. Even so, the latter are consistent to within 1% or so, and the former to perhaps 0.1%.
The problem with spectros is that they can't measure very small areas; they effectively average the reflected light over about 5 mm^2. A scanner, like a camera, can measure luminance over areas of a few hundred um^2 to better than 1% quite well. But it can't measure spectral info. It does do a reasonable job of measuring colorimetric info, though, especially when measuring CMYK printed images or RGB ones from a display (for a camera). This is because the spectral info can be modeled by a linear combination of the CMY inks or RGB primaries. Problems arise when scanning or photographing things with more spectral diversity, such as ColorCheckers, especially the ColorChecker SG. There, the critical factor is how well the scanner/camera meets the Luther-Ives (L/I) criterion.
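To illustrate why that linear-combination property matters: if every patch spectrum on the page really is a linear mix of a few ink spectra, then a single 3x3 matrix fit by least squares maps scanner RGB to XYZ essentially exactly. A toy sketch (the matrix and patch data below are synthetic, purely for illustration):

```python
import numpy as np

# Synthetic "ground truth": a 3x3 transform standing in for the combined
# ink-spectra / scanner-sensitivity relationship. Not a real scanner profile.
rng = np.random.default_rng(0)
true_M = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.1, 0.9]])

rgb = rng.uniform(0.0, 1.0, size=(50, 3))  # scanner readings of 50 patches
xyz = rgb @ true_M.T                       # XYZ values, exactly linear in RGB

# Fit a single RGB -> XYZ matrix by least squares; with exactly linear
# data the fit recovers the relationship to numerical precision.
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
```

Once real patches contain spectra outside the span of the inks (the ColorChecker SG case), no single matrix fits all of them, and the residual error is governed by how close the device is to the L/I criterion.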
While I was initially concerned about how much light is reflected by the glass or other structural components unrelated to the paper's actual reflectance, it turns out this is pretty small: no more than about 0.2%. It's so low that if I scan with nothing on the glass, the lid open, and the room lights off, I can see residuals on the glass like fingerprints, or even ink patterns left behind by prior prints. To mitigate this I've started baking my prints at 160F for half an hour after printing to reduce/eliminate the small amount of glycol that is mixed with water in the inks. Letting them dry at room temperature, even overnight, was not sufficient. Amazing effect, actually.
In looking at the actual reflected-light ratios, a white sheet of Costco paper measures 1.20 times higher than a 3mm-square white patch inside an otherwise black-inked sheet. For the Baryta paper, the ratio is a bit higher at 1.22. This is consistent with the Baryta spectral measurements, which indicate it reflects 10% more light than the Costco paper: 1.22 = 1 + (0.2 x 1.10). Note that the "0.2" in that equation is the portion of additional light reflected by the white paper's surface back onto the frosted LED covers, then back to the paper.
One validation of the hypothesis is doing the same experiment with the "white" inked down so its reflectance is half that of the uninked paper. That experiment yields only a 1.10 ratio on the Costco paper. This is consistent with halving the light re-reflected from the LED covers while the full direct light from the LEDs still hits the scanned paper surface.
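The two experiments fit a simple one-parameter flare model. If a fraction k of the page-reflected light bounces off the frosted LED covers and back onto the page, then a patch surrounded by paper of reflectance R measures (1 + kR) times its signal when surrounded by black. A quick sanity check of that model (k = 0.20 is the value implied by the Costco white measurement):

```python
# Toy model of scanner veiling flare from inter-reflection: light reflected
# by the page bounces off the frosted LED covers and back, adding k * R
# (R = surround reflectance) on top of the direct illumination.
K = 0.20  # fraction implied by the full-white Costco measurement (1.20)

def observed_ratio(surround_reflectance: float, k: float = K) -> float:
    """Signal ratio: patch with the given surround vs. same patch in black."""
    return 1.0 + k * surround_reflectance

# Full white Costco surround: 1 + 0.20 * 1.00 = 1.20
# Baryta (10% more reflective): 1 + 0.20 * 1.10 = 1.22
# Half-reflectance surround:    1 + 0.20 * 0.50 = 1.10
```

All three measured ratios drop out of the single k = 0.20 parameter, which is what makes the half-reflectance experiment a genuine test rather than a curve fit.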
Fortunately, this turns out to be correctable in two phases. The first is to correct each scanned patch by subtracting the contributions of the patches around it. This basically involves multiplying the spectral data of each patch by its positional reflectance contribution (from about 1.2% down to 0.05%, depending on position) and doing this for the 24 surrounding patches. This goes out to about 20mm around the 6mm patches generated for the iSis and, while there is some impact beyond this, it's minimal; this approach captures over 90% of the scanner's additive reflected-light error. I've got the numbers for this and am incorporating it into the process that creates the scan data for Argyll.
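The phase-1 patch correction amounts to a weighted neighbor subtraction over the chart grid. Here's a rough sketch; the 5x5 weight grid below is a hypothetical placeholder (just bracketing the ~1.2% to 0.05% range mentioned above), not my measured falloff:

```python
import numpy as np

def correct_patches(measured: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Subtract each patch's neighbors' reflected-light contributions.

    measured: 2-D grid of patch values (e.g. an iSis chart layout).
    kernel:   weight grid covering the surrounding patches; the center
              entry is zero so a patch does not correct against itself.
    """
    rows, cols = measured.shape
    kr, kc = kernel.shape[0] // 2, kernel.shape[1] // 2
    # Zero padding: patches past the chart edge contribute no flare.
    padded = np.pad(measured, ((kr, kr), (kc, kc)))
    corrected = measured.copy()
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 2 * kr + 1, c:c + 2 * kc + 1]
            corrected[r, c] -= np.sum(window * kernel)
    return corrected

# Hypothetical weights: ~1.2% from the 8 nearest patches, ~0.05% from the
# outer ring of the 5x5 window; center excluded.
kernel = np.full((5, 5), 0.0005)
kernel[1:4, 1:4] = 0.012
kernel[2, 2] = 0.0
```

The same loop works per spectral band by running it once per wavelength plane; the real weights would come from the measured positional-reflectance data.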
The second phase involves reading the RGB values of a scan's pixels, calculating the XYZ values, and doing a similar subtraction. While this rapidly expands into some really large calculations, it can be simplified by downsampling the pixels used for the adjustment by 10:1 or more.
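A rough sketch of that phase-2 idea: block-average the scan 10:1, estimate a smooth re-reflected-light field from the low-res image, upsample it, and subtract. The 20% gain and the box-blur stand-in for the positional kernel are placeholders here, not my measured values:

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Simple box blur over the first two axes (edge-padded). Stand-in
    for convolving with the measured positional-reflectance kernel."""
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)),
                 mode="edge")
    out = np.zeros_like(img)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * radius + 1) ** 2

def subtract_flare(scan_xyz: np.ndarray, factor: int = 10,
                   flare_gain: float = 0.2) -> np.ndarray:
    """scan_xyz: HxWx3 array of XYZ values derived from scanner RGB."""
    h, w, _ = scan_xyz.shape
    hs, ws = h // factor * factor, w // factor * factor
    # Block-average downsample (factor:1 in each dimension).
    small = scan_xyz[:hs, :ws].reshape(
        hs // factor, factor, ws // factor, factor, 3).mean(axis=(1, 3))
    # Smooth flare field estimated at low resolution, then upsampled.
    flare_small = flare_gain * box_blur(small, radius=2)
    flare_full = np.repeat(np.repeat(flare_small, factor, axis=0),
                           factor, axis=1)
    corrected = scan_xyz.astype(float).copy()
    corrected[:hs, :ws] -= flare_full
    return corrected
```

The win is that the expensive spatial weighting runs on the 100x-smaller image, while the subtraction itself stays per-pixel at full resolution.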