I agree. So the question is, should the video be aimed at beginners?
If you're up for it, sure. That makes it a lot harder and forces the scope to be carefully managed, but explaining gamuts and color space conversions to beginners is a worthy objective. I'm not sure the "myths" approach is the best way to do that, since it perforce requires you to explain why each myth exists; but if it works, go for it, and select the myths for their pedagogical value, not their prevalence.
When I worked at IBM Almaden Research Center in the 90s, there was a tradition that some project leaders -- even if a project had only one worker -- would give a one-hour lecture to the Research staff on their research interest. These talks were open to the public, and were often attended not only by IBMers, but by people from Stanford, Berkeley, hp, and PARC. Presenters worked hard on their lectures, and the expectation bar was thus quite high. About two years after I decamped Rolm ahead of the invasion from Siemens and set up shop at ARC, my boss (or the closest thing I had to a boss -- when you’re an IBM Fellow it gets hazy) asked me to do a pitch on my color work. I spent a lot of time using
InDesign [Edit: Micrografx Designer. I now see that there's a Corel program that will read the files, so maybe I can get the graphics up on the web someday] and preparing foils (yes, actual physical overhead projector sheets – IBM wasn’t modern about all things). I figured that no one in my audience would know much about color, but that my life would be made easier because they’d all be very smart, and, on average, have math skills that put mine to shame. That turned out to be wrong. Not the smart and mathematically sophisticated part, but the color-newbie part. Just before the lights went down Efi Arazi and four or five of his staff walked in and sat down in the back row. It was then I knew that I wouldn’t be able to get away with a thing.
Here’s an outline of my presentation. I know it’s not appropriate for the audience you’re talking about, but it does go from assuming nothing about color to a reasonable understanding of what color management (what we called in those days “device-independent color”) is all about. It may give you some ideas for your own presentation. I still have the Designer files, but I can’t figure out how to print them or turn them into, say, Adobe Illustrator files, so you’ll have to use your imagination. [see above addition]
First a statement of the problem.
I showed a diagram of an image capture and reproduction system, with a natural scene, a camera, storage, an emissive display showing the scene, and a print of the scene. Each of the three images has a viewer, and each viewer is exposed to a set of -- thus far undefined -- viewing conditions. The objective is for all three images to “look the same”.
I showed a similar diagram with a synthetic image, and displays in various locations. The objective is for all scenes to “look the same”.
Then I showed a block diagram of the then-prevalent method of managing color, where the colorants for the output device are determined at time of capture, and contrasted that to the now-prevalent model, where data is converted upon capture to device-independent form, and each output device is associated with software that converts the colors in the file to colorants appropriate to the device and the viewing conditions.
I showed an illustration of a natural scene, and made the point that the spectra observed by a viewer or a camera are the wavelength-by-wavelength product of the illuminant and the reflectivity of the object in the scene.
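In today's terms that product is a one-liner; here's a minimal Python sketch of the wavelength-by-wavelength multiplication, with made-up sample data (nothing below is from the original foils):

```python
import numpy as np

# Visible range sampled every 10 nm -- an illustrative grid, not measured data.
wavelengths = np.arange(400, 701, 10)

# Hypothetical illuminant spectral power distribution and object reflectance,
# sampled on the same grid; real work would use measured spectra or standard
# illuminant tables.
illuminant_spd = np.ones_like(wavelengths, dtype=float)       # flat, equal-energy light
object_reflectance = np.linspace(0.2, 0.8, len(wavelengths))  # reflects more at long wavelengths

# The stimulus reaching the eye or camera is the per-wavelength product.
stimulus_spd = illuminant_spd * object_reflectance
```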
Then I showed a diagram of the eye, with the main elements identified. I explained the basic properties.
Another eye diagram, this one with the four types of light-sensitive retinal elements identified, giving me a chance to talk about how their densities vary with retinal location and to remark in passing on the relative deficiency in the number of blue cone cells.
Then I showed the response curves of all three types of cone cells versus spectral excitation.
I’m not sure why I did this, except that it has always fascinated me as a great adaptation, but I showed a diagram indicating the longitudinal chromatic aberration (LCA) of the single-element lens in the eye, and showing how having the spectral responses of the rho and gamma cone cells (although I didn’t use those names) so nearly alike minimizes the issues associated with the LCA. I even showed text in various colors against colored backgrounds, showing that our visual acuity varies with color. I wouldn’t do this today.
I explained the Trichromatic Principle, and credited Le Blon and Maxwell.
Then I jumped right into a diagram of the color matching experiment, and explained its history and how it worked, introducing the terminology “tristimulus values” and impressing the audience with the fact that, presented with a sample color, 93% of the men and almost all the women set the knobs in about the same place. I showed how projecting colors on the sample side of the screen allowed all colors to be matched.
I showed a graph of the normalized color matching functions versus the wavelength of spectral stimuli, and made the point that color matching was virtually linear, a point that was not lost on this audience. That means that additivity applies, and color matching functions can be used as weighting functions to determine knob settings to match any color whose spectrum is known. I showed a graph of the unnormalized color matching functions versus the wavelength of spectral stimuli, and showed how they added to the photopic luminous spectral efficiency curve.
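That weighting idea is easy to show in code; here's a minimal sketch assuming the color matching functions and the stimulus are sampled on the same uniform wavelength grid (the function and variable names are mine, not from the talk):

```python
import numpy as np

def knob_settings(stimulus_spd, cmf_r, cmf_g, cmf_b, delta_lambda=10.0):
    """Because matching is (very nearly) linear, the primary amounts that match a
    stimulus are weighted sums of its spectrum, with the color matching functions
    as the weights -- a Riemann-sum approximation of the integral over wavelength."""
    R = np.sum(stimulus_spd * cmf_r) * delta_lambda
    G = np.sum(stimulus_spd * cmf_g) * delta_lambda
    B = np.sum(stimulus_spd * cmf_b) * delta_lambda
    return R, G, B
```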
Because color matching is linear, any linear transformation of the color matching functions carries the same information as the functions themselves. I showed a graph of one interesting linear transform, CIE XYZ, versus wavelength of a spectral stimulus.
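One way to see the "carries the same information" point: transform the tristimulus values with any invertible 3x3 matrix and you can always get back where you started. The matrix below is purely illustrative -- it is not the actual CIE RGB-to-XYZ transform:

```python
import numpy as np

# An arbitrary invertible matrix, for illustration only.
M = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.0, 0.1, 0.9]])

rgb = np.array([0.25, 0.50, 0.75])          # hypothetical knob settings for some stimulus
transformed = M @ rgb                        # the same color expressed in the transformed space
recovered = np.linalg.inv(M) @ transformed   # and back again, with nothing lost
```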
Then I showed a two-dimensional projection of XYZ, xy, together with the conversion equations and the spectral locus.
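The projection itself is just a normalization; as a sketch:

```python
def xy_from_xyz(X, Y, Z):
    """CIE 1931 chromaticity coordinates: divide out the overall magnitude,
    keeping only the quality of the color, not its amount."""
    s = X + Y + Z
    return X / s, Y / s
```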
I showed another xy horseshoe with lines and points showing the “center of gravity” rule for calculating the chromaticity of mixed colors.
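For additive mixtures the rule falls right out of the definitions; here's a sketch, where sum1 and sum2 (names of my choosing) are each light's X+Y+Z total:

```python
def mix_chromaticity(x1, y1, sum1, x2, y2, sum2):
    """Center-of-gravity rule: the chromaticity of an additive mixture of two
    lights lies on the line joining their chromaticities, positioned by weights
    proportional to each light's X+Y+Z sum."""
    total = sum1 + sum2
    return (x1 * sum1 + x2 * sum2) / total, (y1 * sum1 + y2 * sum2) / total
```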
I plotted the primaries of an arbitrary CRT on the xy diagram, and showed how repeated applications of the center of gravity rule allowed any chromaticity in the triangle defined by the primaries to be created.
I expanded the primaries as far as they’d go inside the horseshoe, and showed that you couldn’t create all visible chromaticities with positive amounts of any set of such primaries, although you could if you allowed negative amounts, as was done in the color matching experiment.
Then I showed MacAdam’s ellipses plotted in xy, and remarked on how xy emphasized greenish differences and deemphasized bluish ones when compared to the human eye.
I introduced u’v’, gave the equations used for conversion from xy, and plotted the ellipses there.
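For reference, those conversion equations in code form (the standard CIE 1976 definitions):

```python
def uv_prime_from_xy(x, y):
    """CIE 1976 u'v' chromaticity from CIE 1931 xy; in u'v' the MacAdam
    ellipses come much closer to being circles of similar size."""
    denom = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / denom, 9.0 * y / denom
```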
Moving back into three dimensions, I defined Brightness, Hue, and Colorfulness, showing how they worked on the Munsell Tree, which unfortunately wasn’t visible to everyone due to the size of the room.
Since everyone was already up to speed with u’v’, it wasn’t much of a stretch to move to CIEL*u*v*. I showed them in visual form how that worked. I showed them how to calculate color differences, hue angle, and chroma.
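A sketch of that arithmetic, assuming a known reference white (Xn, Yn, Zn); the function and variable names are mine:

```python
import math

def luv_from_xyz(X, Y, Z, Xn, Yn, Zn):
    """CIE 1976 L*u*v* from XYZ, relative to the reference white (Xn, Yn, Zn)."""
    def uv_prime(X, Y, Z):
        d = X + 15.0 * Y + 3.0 * Z
        return 4.0 * X / d, 9.0 * Y / d

    yr = Y / Yn
    L = 116.0 * yr ** (1.0 / 3.0) - 16.0 if yr > (6.0 / 29.0) ** 3 else (29.0 / 3.0) ** 3 * yr
    up, vp = uv_prime(X, Y, Z)
    upn, vpn = uv_prime(Xn, Yn, Zn)
    return L, 13.0 * L * (up - upn), 13.0 * L * (vp - vpn)

def difference_chroma_hue(L1, u1, v1, L2, u2, v2):
    """Euclidean color difference between two colors, plus the chroma and
    hue angle (in degrees) of the first one."""
    dE = math.sqrt((L1 - L2) ** 2 + (u1 - u2) ** 2 + (v1 - v2) ** 2)
    return dE, math.hypot(u1, v1), math.degrees(math.atan2(v1, u1)) % 360.0
```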
I showed them the math to get from XYZ to CIEL*a*b*, and apologized for the heuristic nature of that space. I pointed out that Luv had one ad hoc moment in its derivation: the transition from xy to u’v’. I showed them how to calculate color differences, hue angle, and chroma.
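For completeness, a sketch of the standard XYZ-to-L*a*b* math; color difference, chroma, and hue angle then work exactly as in the Luv sketch above, just with a* and b* in place of u* and v*:

```python
def lab_from_xyz(X, Y, Z, Xn, Yn, Zn):
    """CIE 1976 L*a*b* from XYZ, relative to the reference white (Xn, Yn, Zn)."""
    def f(t):
        # Cube root above the CIE cutoff, a linear ramp below it.
        return t ** (1.0 / 3.0) if t > (6.0 / 29.0) ** 3 else t / (3.0 * (6.0 / 29.0) ** 2) + 4.0 / 29.0

    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)
```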
I compared Lab and Luv. I think I was successful in not exhibiting my preference for Luv.
I showed a list of other important color spaces.
I talked about computational issues in color space choice. These issues seem quaint today, since they were predicated on limited computational resources and low bit depth.
In a foil titled “We’re not done yet!”, I showed an optical illusion you’ve probably seen: a low-chroma, middle-grey, yellowish X against a saturated yellow background and the same X against a darkish grey background, with the two X’s connected to prove that they are indeed the same color.
That gave me the opening to talk about viewing conditions, which I mostly ducked. I mentioned viewer adaptation to surround and white point, the perception of self-luminosity, image size, and absolute brightness.
Leaving the hard stuff for (much, much) later, I returned to the color reproduction system diagram and talked about the interaction of the spectra in the camera’s field of view with the camera’s primary sensitivities. I introduced the concepts of illuminant metamerism and capture metamerism. I also introduced the idea of device gamut (for output devices only; the gamut of input devices is a topic that’s way too hard to get your head around), and made the point that gamut mapping is not optional: if you don’t do it, the device will do it for you.
I gave a survey of common gamut mapping algorithms. This was pretty easy to do since the audience understood Luminance/Hue/Chroma color spaces by now. I discussed smart clipping versus compression for out-of-gamut (OOG) colors. I introduced the idea of neighborhood gamut mapping, which I later turned into a useful algorithm.
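As a toy illustration of the clipping-versus-compression distinction (working on chroma alone at constant lightness and hue, with a single scalar gamut_limit standing in for the real gamut boundary; the knee value is purely an assumption of mine, not anything from the talk):

```python
def clip_chroma(chroma, gamut_limit):
    """Hard clipping: in-gamut colors are untouched; out-of-gamut colors are
    pulled straight in to the boundary, so gradations among OOG colors flatten."""
    return min(chroma, gamut_limit)

def compress_chroma(chroma, gamut_limit, knee=0.8):
    """Soft compression: chroma below knee * gamut_limit passes through unchanged;
    everything above is squeezed smoothly toward the boundary, preserving some
    gradation among OOG colors at the cost of desaturating some in-gamut ones."""
    threshold = knee * gamut_limit
    if chroma <= threshold:
        return chroma
    excess = chroma - threshold
    span = gamut_limit - threshold
    return threshold + span * excess / (excess + span)
```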
A lot to cover in an hour, and I elided the whole viewing conditions discussion, but from the questions, I seem to have gotten most of the ideas across.
Your problem is harder than mine was, Andrew, but some of this might help. The progression from the color matching experiment to Lab or Luv (it’s more elegant going to Luv, but you’ll want to take them to Lab) could be a useful introduction to 3D gamut maps.