We frequently see comparisons of gamut, by volume or area, using various tools such as ColorThink Pro, Gamutvision, ColorSync, etc.
Can someone explain whether there's a standard tolerance (e.g. ±1.2 ΔE) that defines where the gamut boundaries are?
Is this enshrined in the ICC standards?
I'll take a shot at answering this. Regarding the accuracy of 3D gamut projections: there is a bit of wiggle room in how gamut boundaries are displayed, and there is really no standardized way to do it. A simple method for drawing the boundary is to round-trip color data (Lab > RGB > Lab) and then plot the most saturated colors at different luminance steps. Depending on the rendering intent used and the quality of the profile, the accuracy of this projected gamut can certainly vary.

There is also going to be some tolerance between the ICC profile's numerical representation of the printer's gamut and what the actual printed result will be. Conversion "noise" when round-tripping an in-gamut color can easily be about 2 ΔE, and you can expect another few ΔE between the expected results and the actual results. So the 3D boundary may be a fairly precise map of the ICC gamut, but it should be considered an approximation of the printer's gamut. The ICC profile is essentially a translation table: it's good at answering questions of the form "given A, what is B?" Even something as simple as determining whether a single color is out of gamut is not an exact process.
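The round-trip idea above can be sketched in a few lines. This is a minimal illustration, not any particular tool's implementation: it round-trips Lab through a clipped linear sRGB space (D65 white, no tone curve, since an encode/decode pair would cancel in a round trip) and reports the ΔE*76 error. Real ICC pipelines work in D50 with chromatic adaptation and use the profile's actual tables, so treat the numbers as illustrative only.

```python
import math

# D65 reference white (sRGB's native white point). Real ICC profiles use
# D50 plus chromatic adaptation; D65 keeps this sketch short.
XN, YN, ZN = 0.95047, 1.00000, 1.08883

def lab_to_xyz(L, a, b):
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    f_inv = lambda t: t**3 if t > 6/29 else 3 * (6/29)**2 * (t - 4/29)
    return XN * f_inv(fx), YN * f_inv(fy), ZN * f_inv(fz)

def xyz_to_lab(X, Y, Z):
    f = lambda t: t**(1/3) if t > (6/29)**3 else t / (3 * (6/29)**2) + 4/29
    fx, fy, fz = f(X / XN), f(Y / YN), f(Z / ZN)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_srgb_clipped(L, a, b):
    X, Y, Z = lab_to_xyz(L, a, b)
    r  = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g  = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    # Clipping to [0, 1] is what makes out-of-gamut colors "move".
    return tuple(min(1.0, max(0.0, c)) for c in (r, g, bl))

def srgb_to_lab(r, g, b):
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return xyz_to_lab(X, Y, Z)

def round_trip_delta_e(L, a, b):
    """Delta E*76 between a Lab color and its Lab -> RGB -> Lab round trip."""
    L2, a2, b2 = srgb_to_lab(*lab_to_srgb_clipped(L, a, b))
    return math.sqrt((L - L2)**2 + (a - a2)**2 + (b - b2)**2)
```

A neutral gray like Lab (50, 0, 0) round-trips with a tiny ΔE, while a saturated color such as Lab (50, 100, −100) clips and comes back with a large ΔE, which is how these plots decide a color is outside the boundary.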
So yes, when you are comparing two profiles, especially a printer profile and a monitor profile, you can expect a bit of wiggle room regarding their actual boundaries. Coming up with a general percentage for how much this varies would be difficult, since the linearity of the printer and the quality and precision of the profile vary widely. It also varies by location within the color gamut. And comparing gamut volumes is a poor tool for comparing two profiles: the difference in volume might be concentrated in one area or spread throughout the profile, and two different software packages can produce profiles with different gamut volumes from the same set of measurements.
To respond to a few other things:
There is not a direct correlation between the number of sample patches and profile accuracy or precision. When building a profile, most advanced software packages let you specify the internal bit depth of the profile and the size of the grid used in the LUT. You could measure a target with 3K patches and make a profile with fewer grid points than a profile made from 50 patches. Nor is there a direct correlation between the measurement patches and the points in the LUT.
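The decoupling of patch count from LUT size can be shown with a toy one-dimensional example. Real profilers fit a smoothing model in three or four dimensions; plain linear interpolation stands in for that here, and the function name and gamma-2.2 test data are invented for the illustration.

```python
def build_lut(measurements, grid_points):
    """Resample (input, output) measurement pairs onto a fixed-size 1D LUT.

    The LUT size is set by `grid_points`, not by how many patches were
    measured -- the measurements only feed the model the LUT is sampled from.
    """
    pts = sorted(measurements)
    lut = []
    for i in range(grid_points):
        x = i / (grid_points - 1)
        # Find the bracketing measurement pair and interpolate linearly.
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
                lut.append(y0 + t * (y1 - y0))
                break
    return lut

# Simulated tone measurements of a gamma-2.2 device:
few  = [(i / 4,  (i / 4)  ** 2.2) for i in range(5)]    # 5 patches
many = [(i / 49, (i / 49) ** 2.2) for i in range(50)]   # 50 patches
```

Whether you feed it 5 patches or 50, `build_lut(..., 17)` produces a 17-entry table; more patches just make each entry a better estimate.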
Typically printer profiles are LUT-based, and monitor profiles are matrix-based.
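A matrix-based display profile is small enough to sketch whole: three per-channel tone response curves followed by a 3×3 matrix into XYZ. The numbers below are the standard sRGB primaries and a plain power-law TRC standing in for a real monitor's measured values; an actual profile stores its own measured matrix and curves.

```python
# sRGB-primaries matrix (RGB -> XYZ, D65 white) used as stand-in data.
M = [(0.4124, 0.3576, 0.1805),
     (0.2126, 0.7152, 0.0722),
     (0.0193, 0.1192, 0.9505)]

def display_rgb_to_xyz(r, g, b, gamma=2.2):
    """Matrix/TRC profile in miniature: linearize via the TRC, then matrix."""
    lin = [c ** gamma for c in (r, g, b)]   # TRC: simple power curve here
    return tuple(row[0] * lin[0] + row[1] * lin[1] + row[2] * lin[2]
                 for row in M)
```

Display white (1, 1, 1) lands on the D65 white point, roughly XYZ (0.9505, 1.0, 1.089). A printer can't be modeled this way because ink mixing isn't a linear combination of channel values, which is why printer profiles carry multidimensional LUTs instead.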
The ICC spec is built around the D50 illuminant and the 2° observer.
There is no standard for the number of patches needed to build a profile; most software gives you a choice. With RGB profiles I've found diminishing returns beyond 729 patches on most modern printers. When building CMYK profiles I generally use the 1617-patch IT8.7/4 target.
Determining the outer boundaries of a printer with reasonable accuracy does not take that many patches. If a device is fairly linear, we can simply draw a line between two measured colors to extrapolate the gamut of the printer (which is what all 3D plots are doing).
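That "draw a line between two colors" step is just linear interpolation in Lab. The patch values below are hypothetical, and the straight-line assumption is exactly where non-linear devices bite: the true gamut surface can bow inside or outside the line, which is why such devices need more patches.

```python
def lerp_lab(c0, c1, t):
    """Linearly interpolate between two Lab colors -- the straight-line
    assumption a 3D gamut plot makes between measured boundary patches."""
    return tuple(a + t * (b - a) for a, b in zip(c0, c1))

# Two hypothetical measured patches on the gamut surface:
yellow_patch = (90.0, -5.0, 85.0)
red_patch    = (48.0, 70.0, 55.0)

# The plot assumes the boundary passes through this point:
midpoint = lerp_lab(yellow_patch, red_patch, 0.5)
```

On a well-behaved inkjet that assumption holds closely; on a poorly linearized press the real boundary between those two patches could sit several ΔE away from `midpoint`.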
For very non-linear devices, more patches can help. For a very linear device, more patches can actually cause problems, especially if the profile is used for photographic purposes. When HP built the profiling software for its Z-series printers, it was able to use very few patches and still get fairly good results because certain characteristics of the printer were hardwired into the profiling model, leaving only the different media types to account for. General profiling packages have to accommodate radically different devices, from offset presses and laser printers to inkjet and dye-sub printers.
There is little correlation between profile accuracy and measurement-device accuracy. A decent device like the i1 is calibrated to a standard well under 1 ΔE. Using the same software, I would not expect measurements from a Munki vs. an i1 to produce profiles with any important differences. When certifying proofs or measuring against color standards where your tolerances are under 1 ΔE, though, instrument differences such as measurement interval (10 vs. 20 nm) or light source (xenon vs. LED) can be important.