Luminous Landscape Forum

Raw & Post Processing, Printing => Digital Image Processing => Topic started by: Ishmael. on February 21, 2010, 07:42:28 pm

Title: Correcting images in LAB vs RGB
Post by: Ishmael. on February 21, 2010, 07:42:28 pm
I've recently discovered the power of the LAB color space and it almost seems too good to be true. I'm sure certain adjustments should be done in RGB, but for the most part I am getting significantly more powerful images out of LAB than RGB. Given that I'm relatively new to Photoshop, I would really like to know from you veterans out there why someone would correct an image in RGB instead of LAB.

Thanks

Ish


Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 21, 2010, 09:07:49 pm
Considering that the ideal workflow in terms of quality, flexibility and non-destructive editing is to do as much tone and color “correction” work as possible at the raw rendering stage (a stage that neither supports nor needs Lab), one needs to examine just how useful Lab workflows are.

You know the old sayings: if it seems too good to be true, it probably is, and if all you know is a hammer, everything looks like a nail. Considering there are no Lab capture or output devices, maybe you could describe your workflow from capture to output in a bit more detail.
Title: Correcting images in LAB vs RGB
Post by: Ishmael. on February 21, 2010, 09:31:02 pm
Quote from: digitaldog
Considering that the ideal workflow in terms of quality, flexibility and non-destructive editing is to do as much tone and color “correction” work as possible at the raw rendering stage (a stage that neither supports nor needs Lab), one needs to examine just how useful Lab workflows are.

You know the old sayings: if it seems too good to be true, it probably is, and if all you know is a hammer, everything looks like a nail. Considering there are no Lab capture or output devices, maybe you could describe your workflow from capture to output in a bit more detail.


I'm capturing in sRGB with a Canon 350D, and processing photos for web output on the photography website I am currently developing. From the reading I've done on LL, I thought that you wanted to work as little as possible with the image in Camera Raw... is that incorrect in your experience?
Title: Correcting images in LAB vs RGB
Post by: Schewe on February 21, 2010, 10:06:46 pm
Quote from: Ishmael.
From the reading I've done on LL, I thought that you wanted to work as little as possible with the image in Camera Raw... is that incorrect in your experience?

Uh, if you are getting that from reading on LuLa, then you need to check your reading comprehension skills...and the product name is "Camera Raw"...

Who has EVER said do little or nothing in Camera Raw and do everything in Photoshop afterwards...oh, yeah, Dan M. (who also advocates doing as much as possible in Lab cause well, Camera Raw isn't designed for "professional users" and you need to learn esoteric imaging skills to work in Lab–which Dan is happy to teach you).

Seriously, if you are capturing in sRGB maybe you would be better off ignoring Camera Raw–heck you might as well shoot in JPEG cause, well who would EVER want to work in an "ultra-wide theoretical color space such as Pro Photo RGB"?

If you want help, ask...but really, you shouldn't presume to tell us what "we" (here on LuLa) have been advocating unless you actually know what you are talking about...
Title: Correcting images in LAB vs RGB
Post by: Ishmael. on February 21, 2010, 10:44:24 pm
Quote from: Schewe
Uh, if you are getting that from reading on LuLa, then you need to check your reading comprehension skills...and the product name is "Camera Raw"...

Who has EVER said do little or nothing in Camera Raw and do everything in Photoshop afterwards...oh, yeah, Dan M. (who also advocates doing as much as possible in Lab cause well, Camera Raw isn't designed for "professional users" and you need to learn esoteric imaging skills to work in Lab–which Dan is happy to teach you).

Seriously, if you are capturing in sRGB maybe you would be better off ignoring Camera Raw–heck you might as well shoot in JPEG cause, well who would EVER want to work in an "ultra-wide theoretical color space such as Pro Photo RGB"?

If you want help, ask...but really, you shouldn't presume to tell us what "we" (here on LuLa) have been advocating unless you actually know what you are talking about...



Perhaps you should check your reading comprehension skills and double-check my original post: "Given that I'm relatively new to Photoshop..." I am clearly asking for help and admitting that I don't know a lot about this subject, but apparently you took that as an invitation to demonstrate your vast arrogance.

On another note, if anyone who is slightly more down to earth and helpful would like to clear up this issue for me, I would seriously appreciate it.
Title: Correcting images in LAB vs RGB
Post by: Panopeeper on February 21, 2010, 10:54:24 pm
Quote from: Ishmael.
I've recently discovered the power of the LAB color space and it almost seems too good to be true. I'm sure certain adjustments should be done in RGB, but for the most part I am getting significantly more powerful images out of LAB than RGB. Given that I'm relatively new to Photoshop, I would really like to know from you veterans out there why someone would correct an image in RGB instead of LAB.
1. If you are recording raw data, then the "native" shot is most probably for orientation; normally its quality is not of paramount importance (sometimes it has to be horrendous). Given that the vast majority of monitors work in sRGB, and some in AdobeRGB, it appears (to me) nonsensical to make the shot in any other color space than the one which can be seen directly on your monitor.

2. If you are aiming at ETTR, you need a special setup resulting in such raw data, which makes the embedded JPEG pretty much useless anyway, except for judging the exposure.

3. I wonder if anyone can demonstrate that it is better to develop the image in LAB than in RGB; I don't know of any consideration of basic importance. (If you are working with some special printer requiring the LAB color space, then why are you shooting raw?) There are some operations which are or may be better in LAB, but that is, IMO, mostly not enough to justify converting the image to LAB and back. Keep in mind that the vast majority of monitors and printers support RGB but not LAB.
Title: Correcting images in LAB vs RGB
Post by: Schewe on February 22, 2010, 12:00:32 am
Quote from: Ishmael.
Perhaps you should check your reading comprehension skills and double check my original post: "Given that I'm relatively new to photoshop..."


Uh huh, read that...but I'm pretty darn sure your post indicated that, according to YOU, the preponderance of the postings on LuLa seem to indicate a preference for editing in Lab rather than Camera Raw?

If that is what you MEANT to say, then no, you are full of it...
(and there's nothing wrong with my reading comprehension).

Maybe you should spend a little more time around these parts and learn the players...I really and seriously won't apologize for my behavior (here or elsewhere) but I kinda do know what the f$%K I'm talking about...since I kinda wrote the book on the subject (see: Real World Camera Raw (http://www.adobepress.com/bookstore/product.asp?isbn=0321580133)).

So, if you want to learn, ask your questions without editorial comments...that's what kinda works better here.

Title: Correcting images in LAB vs RGB
Post by: ErikKaffehr on February 22, 2010, 12:29:10 am
Hi,

Well, I guess it's about the opposite. What do you mean by sRGB? That setting on the camera is just ignored (AFAIK) if you shoot "RAW" and use ACR or, even better, LR (Lightroom). Now if you shoot in-camera JPEGs, most of the processing is done in camera and 94% of the information is thrown away (if your camera has a 12-bit ADC). If you are concerned about processing your images yourself you should shoot raw.

Regarding color space, digital cameras can capture a very large color space; using a small color space like sRGB will lose all colors registered by the camera that won't fit in the small color space. That's not necessarily bad, if you don't have colors falling outside sRGB in the image. For instance, 23 out of 24 of the color patches on the famous X-Rite ColorChecker card would fall within sRGB.

Printers can print colors falling outside sRGB and sRGB also contains colors that cannot be printed.

The best way of manipulating colors is probably in Lightroom, because that gives you a parametric workflow. There are other products offering parametric workflows, such as Apple's Aperture and Bibble Labs' Bibble Pro 5. With a parametric workflow you don't manipulate the image itself; you just create a "recipe" for how it should be handled. The image will be interpreted according to the recipe as needed.

To keep all colors and tonal info in a PS-based workflow you need 16-bit TIFFs. DNG (Digital Negative) images from my 24 MP DSLR are about 25 MByte; a 16-bit TIFF would be 6 bytes × 24 MP, around 144 MByte. The small file contains all the information in the big one. Sony's own file format "ARW" is less efficient than DNG, about 37.5 MByte/image. Some raw converters don't support DNGs, however.
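The two figures above are straightforward arithmetic, worth making explicit since they look surprising at first glance. A quick sketch, level counting and byte counting only (counting quantization codes is not the same as counting information in the Shannon sense, so the 94% is a loose way of putting it):

```python
# "94% of the information thrown away": a 12-bit ADC produces 4096 codes
# per channel, while an 8-bit JPEG channel keeps only 256 of them.
adc_codes = 2 ** 12        # 4096
jpeg_codes = 2 ** 8        # 256
discarded = 1 - jpeg_codes / adc_codes
print(f"codes discarded: {discarded:.2%}")    # 93.75%

# "around 144 MByte": 24 MP x 3 channels x 2 bytes per 16-bit sample.
tiff_mbytes = 24 * 3 * 2
print(f"uncompressed 16-bit TIFF: ~{tiff_mbytes} MByte")
```

The in-camera tone curve remaps rather than simply truncates those codes, so 93.75% is best read as an upper bound on the levels given up, not a measure of visible quality lost.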

Nothing wrong with working in Lab. One advantage of LAB is that it separates tonality (L) from color (ab). A good raw processor would also do that.


Finally, if you want to use Photoshop and Lab as your color space, don't forget to use 16 bits and to tag with the correct profile. If you convert from Lab to an RGB color space you are going to lose quite a bit of information, and having those 16 bits/color helps you lose as little as possible.

Best regards
Erik


Quote from: Ishmael.
I'm capturing in sRGB with a Canon 350D, and processing photos for web output on the photography website I am currently developing. From the reading I've done on LL, I thought that you wanted to work as little as possible with the image in Camera Raw... is that incorrect in your experience?
Title: Correcting images in LAB vs RGB
Post by: curvemeister on February 22, 2010, 02:17:50 am
Quote from: Ishmael.
I've recently discovered the power of the LAB color space and it almost seems too good to be true.I'm sure certain adjustments should be done in RGB, but for the most part I am getting significant more powerful images out of LAB than RGB. Given that I'm relatively new to photoshop I would really like to know from you veterans out there why someone would correct an image in RGB instead of LAB.

Hi Ish,

I should mention, for the benefit of (the large number of) people who do not know me, that I'm firmly on the Dan Margulis side of the fence. He's my hero, mainly because every single point he makes is illustrated with specific images that benefit from whatever technique he is discussing. I think color correction by the numbers is the cat's meow.

That said, there are advantages to each color space, depending on the original image, the techniques and blending modes you will be using, and where you want to go with it. RGB is a great color space for channel blending, Apply Image, and for removing color casts from images with mixed lighting.

Lab is dynamite, and a large number of the people who take a class in Lab will bear hug it for a year or so while discovering its possibilities.  Lab is not subtle, though, so - time permitting - it's generally good to finish up in RGB after making the big moves in Lab.

There is a universe of possibilities, in Lab and with other techniques involving blending layers, and I hope you will ignore the noise that, sadly, is inevitable in these forums, and experiment and look at as wide a variety of techniques as possible.

Mike Russell
Title: Correcting images in LAB vs RGB
Post by: feppe on February 22, 2010, 04:01:11 am
I don't understand why you (and many other) newbies get the cold shoulder or even attacks at them for having misconceptions - we should do a better job of embracing new members and talent rather than going on rants against them.

I think the confusion starts from the capture comments. If you "capture in sRGB", that implies you capture JPEGs, not RAW. But if, as I suspect, you shoot RAW with the 350D, it doesn't matter what color space you have set in camera: RAW images have no colorspace AFAIK. The colorspace is only applied in Camera Raw (or Lightroom or Bibble or whatever RAW converter you use).

To reiterate: the color space you set in your camera only applies to the JPEGs it saves, and is ignored for RAWs.

So when you move to PS you'll be working in sRGB, aRGB or ProPhoto. sRGB is fine for the vast majority of cases, but most here advocate using ProPhoto just to be safe and converting to sRGB as the last output step. I'm one of them, although I'm pretty sure ProPhoto offers only academic improvement in 99% of the cases. It's just that there's not much hassle in using ProPhoto vs sRGB, and it's a good way to future-proof my files for better output devices (monitors and printers).

Not familiar with LAB, although it has its advocates.
Title: Correcting images in LAB vs RGB
Post by: jbrembat on February 22, 2010, 06:46:48 am
Quote
Now if you shoot in-camera JPEGs, most of the processing is done in camera and 94% of the information is thrown away (if your camera has a 12-bit ADC).
Wonderful math  
Title: Correcting images in LAB vs RGB
Post by: crames on February 22, 2010, 09:09:36 am
Quote from: ErikKaffehr
...One advantage of LAB is that it separates tonality (L) from color (ab)...
Lightness and color are not fully separated in Lab. One of the biggest disadvantages of Lab as an editing space is that changes in the L channel affect color saturation. Increasing L reduces saturation and decreasing L increases saturation, in a way that looks unnatural to me.

Cliff
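Cliff's point can be checked numerically: holding a and b fixed while raising L leaves Lab chroma untouched but lowers the HSV-style saturation of the converted RGB color. A small pure-Python sketch; for simplicity it assumes a D65 white point for both spaces (ICC Lab is actually D50), and `lab_to_srgb` is an illustrative helper, not anything from Photoshop:

```python
import colorsys

# Convert Lab (D65 white, for simplicity) to gamma-encoded sRGB.
def lab_to_srgb(L, a, b):
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def f_inv(t):  # inverse of the CIE L* cube-root compression
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    X, Y, Z = 0.95047 * f_inv(fx), 1.0 * f_inv(fy), 1.08883 * f_inv(fz)
    # XYZ -> linear sRGB (standard D65 matrix)
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    def encode(u):  # linear -> gamma-encoded, clipped to [0, 1]
        u = min(max(u, 0.0), 1.0)
        return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055
    return encode(r), encode(g), encode(bl)

# Same a/b (identical Lab chroma), two different L values.
dark = lab_to_srgb(40, 25, 10)
light = lab_to_srgb(80, 25, 10)
sat_dark = colorsys.rgb_to_hsv(*dark)[1]
sat_light = colorsys.rgb_to_hsv(*light)[1]
print(f"saturation at L=40: {sat_dark:.2f}, at L=80: {sat_light:.2f}")
```

With a and b pinned, the lighter version comes out visibly less saturated in RGB terms, which is exactly the effect an L-channel curve produces.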
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 22, 2010, 09:20:37 am
Quote from: Ishmael.
I'm capturing in sRGB with a canon 350d, and processing photos for web output on the photography website I am currently developing. From the reading I've done on LL, I thought that you wanted to work as little as possible with the image in camera RAW....is that incorrect in your experience?

I’d say the opposite. You’d want to render the best possible color and tone, certainly globally, before moving farther. It’s going to ultimately be faster and less destructive. It’s like scanning in the old days. Instead of just setting some default scan setting and “fixing” the results in Photoshop, you’d want to use the best possible scanner driver and produce the best possible quality pixels before moving on. I’d recommend you consider leaving sRGB JPEG 8-bit workflows and examining a raw workflow. This article, while long, is a superb primer on the reasons why:
http://wwwimages.adobe.com/www.adobe.com/p...renderprint.pdf (http://wwwimages.adobe.com/www.adobe.com/products/photoshop/family/prophotographer/pdfs/pscs3_renderprint.pdf)
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 22, 2010, 09:29:27 am
Quote from: ErikKaffehr
What do you mean by sRGB? That setting on the camera is just ignored (AFAIK) if you shoot "RAW" and use ACR or, even better, LR (Lightroom).


Ish didn’t indicate he’s shooting raw. I suspect he’s shooting JPEGs in sRGB which is OK, but not ideal as many here know.

Ish, one big issue with what may be your 8-bit JPEG sRGB workflow into Lab is the data loss in moving from such a bit depth and color space into Lab and back. With a file that has 256 levels, a conversion from RGB to Lab (in this case from Adobe RGB (1998)) will discard 22 of those levels in the conversion alone. At least when doing so from high bit (more than 8 bits per color) you start off with more data, and the net data loss isn’t an issue. 22 levels on a JPEG (which is already reducing your original data every time you save it) doesn’t sound like a lot, but that’s just the color mode conversion; you haven’t taken into account any additional editing that would alter the numeric values. IOW, it’s a good way to toss data to the point that banding shows in output. Now, you seem to be saying your output is to the web. That’s a pretty low quality delivery and you may be fine with your workflow. But if you ever intend to print those images on a quality device, you’ve discarded a lot of data (both in bit depth and color gamut) you can never get back.
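The exact count depends on the working space and converter, but the level-counting effect behind that figure is easy to reproduce on a neutral gray ramp. A pure-Python sketch, assuming the standard sRGB transfer curve and a Photoshop-style 8-bit Lab encoding (L* of 0..100 mapped onto 0..255):

```python
# Count how many of the 256 gray levels in an 8-bit sRGB ramp survive
# encoding into 8-bit Lab L* codes.
def srgb_to_linear(v8):
    c = v8 / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def lightness(Y):  # CIE L* from relative luminance
    return 116 * Y ** (1 / 3) - 16 if Y > (6 / 29) ** 3 else (29 / 3) ** 3 * Y

l_codes = {round(lightness(srgb_to_linear(v)) * 255 / 100) for v in range(256)}
print(f"unique 8-bit L* codes: {len(l_codes)} of 256")
```

Fewer distinct codes come out than gray levels went in, before any actual editing has touched the pixels; starting from 16-bit data makes the same collisions numerically irrelevant.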
Title: Correcting images in LAB vs RGB
Post by: Ishmael. on February 22, 2010, 09:32:21 am
Thanks to those of you that have provided useful responses. My earlier postings were a bit vague, so let me clear up where I'm coming from. I thought that Photoshop offered more room for correction than Camera Raw because it allows you to use multiple adjustment layers and smart objects, and seems to have quite a few more ways to sharpen, clarify, and NR an image. Now, I realize that I might be wrong, so I thought I would make this posting and ask people who do know...

Secondly, I shoot all my images in RAW and I understand that in-camera settings like sharpness, white balance, color space, etc. are not applied unless you're shooting JPEG...but all these can be adjusted once you're in ACR. Now, because at this stage I'm processing for web output, I thought the best workflow for most images was this:

--convert to sRGB because most monitors cannot handle the full gamut of Adobe or Pro Photo

--make minimal adjustments, mainly exposure and fill light in ACR

--open image and convert to LAB color, where I fix up the color cast, contrast, and saturation using multiple curves adjustment layers. Add clarity and noise reduction by converting the background layer to a smart object and applying high pass, luminance NR, and median (for color noise) filters.

--convert back to RGB, resize and resharpen (using Smart Sharpen) for web, and save as JPEG


As a sidenote: I'd like to add that Deke McClelland has stated that converting between RGB and LAB is very marginally destructive.



OK That is my workflow and any suggestions or criticism on it is highly appreciated.
Title: Correcting images in LAB vs RGB
Post by: KeithR on February 22, 2010, 09:57:48 am
Quote from: Ishmael.
Thanks to those of you that have provided useful responses. My earlier postings were a bit vague, so let me clear up where I'm coming from. I thought that Photoshop offered more room for correction than Camera Raw because it allows you to use multiple adjustment layers and smart objects, and seems to have quite a few more ways to sharpen, clarify, and NR an image. Now, I realize that I might be wrong, so I thought I would make this posting and ask people who do know...

Secondly, I shoot all my images in RAW and I understand that in-camera settings like sharpness, white balance, color space, etc. are not applied unless you're shooting JPEG...but all these can be adjusted once you're in ACR. Now, because at this stage I'm processing for web output, I thought the best workflow for most images was this:

--convert to sRGB because most monitors cannot handle the full gamut of Adobe or Pro Photo

--make minimal adjustments, mainly exposure and fill light in ACR

--open image and convert to LAB color, where I fix up the color cast, contrast, and saturation using multiple curves adjustment layers. Add clarity and noise reduction by converting the background layer to a smart object and applying high pass, luminance NR, and median (for color noise) filters.

--convert back to RGB, resize and resharpen (using Smart Sharpen) for web, and save as JPEG


As a sidenote: I'd like to add that Deke McClelland has stated that converting between RGB and LAB is very marginally destructive.



OK That is my workflow and any suggestions or criticism on it is highly appreciated.

I would suggest that you invest in Mr. Schewe's book "Real World Adobe Camera Raw" to learn just how powerful a raw workflow can be. Your statement "where I fix up the color cast, contrast, and saturation....." describes work that can be done in ACR (along with capture sharpening and all sorts of color corrections) and be TOTALLY NON-DESTRUCTIVE, before you render it into a colorspace. Once you come into PS, you are now working on pixels (in ACR you're not) and any edits you do are destructive. If I have to output to an sRGB space, I do as much as I can in ACR (which is a lot) and then output through the Image Processor, which I believe is the only place you can specify sRGB (plain RGB is not the same). As for going into LAB and then back again: whether the damage is marginal (a subjective term) or not, it is still destructive.
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 22, 2010, 10:01:36 am
Quote from: Ishmael.
--convert to sRGB because most monitors cannot handle the full gamut of Adobe or Pro Photo
There are displays that can fully handle the Adobe RGB (1998) gamut. But that’s somewhat moot because we are always working with color spaces that have gamut disconnects between what we edit and what we output. And if you think Adobe RGB (1998) is a wide gamut space on your sRGB-like display, consider the gamut of Lab, which is HUGE. Assuming you are working from wide gamut capture to wide gamut output devices, displayed on an sRGB display, sure, there are colors you can’t see but you can print! Would you rather throw away colors you can see on the final output device just because extreme colors at the edge of the working space gamut cannot be seen on an intermediate device (the display)? There are a few options here. Personally, the print is my final. I’d rather see the colors there and archive them so that in the future, as technology improves, maybe I’ll see them on a display. There are all kinds of other display versus output disconnects, like the huge differences in dynamic range. The display compared to the final print is simply an imperfect device. We have to live with that.
Quote
--make minimal adjustments, mainly exposure and fill light in ACR
You haven’t mentioned Dan M, but others have, so I’ll simply say that his suggestion to zero out all ACR settings, render the raw data, and then fix the resulting turd in Photoshop is blatantly stupid.
Quote
--open image and convert to LAB color, where I fix up the color cast, contrast, and saturation using multiple curves adjustments layers. Add clarity and noise reduction by converting the background layer to a smart object and applying high pass, luminance NR, and median (for color noise) filters.
You could do this in any color model (CMYK, RGB etc). The point is, why fix something that doesn’t have to be broken in the first place? It’s like a photographer being totally sloppy on film exposure and then having the lab push his film 2 stops. It will work. Someone could probably make a print from it. But is it ideal and good working practice? No. As photographers, we would look down on an instructor who suggests we be sloppy with exposure and fix the issue later in the lab. This is Dan’s take on image processing. Sure, if you start with a turd (or in your case, an image with a color, contrast and saturation issue), you can make it look better after applying some of Dan’s techniques. Just like you can fix the exposure in the lab. But you could render (not fix, but actually create) idealized pixels at the raw-to-render stage in the first place. It’s faster. It’s fully non-destructive. It provides a history that lives with the original raw data forever. It doesn’t make your files balloon to huge sizes, because it’s simply metadata instructions (tiny text files). Before raw capture and good raw processors, the ideas Dan proposes were the only option (or, as I said above, make a good scan, not a crap scan you fix in Photoshop). Dan’s got a workflow to sell, and if you are caught with crappy, rendered data and no original (raw, or film for a scan), his techniques are very, very useful. But short of that, they are simply idiotic. It’s like the lab tech who will teach you the intricacies of push processing film because that’s all he knows. Proper exposure simply isn’t on his radar. Look at the god awful originals Dan shows in the before examples and ask yourself, “Do I capture this kind of rubbish?” If so, stick with his techniques. If not, if you believe that GIGO (Garbage In, Garbage Out) is something to avoid, move on.
Quote
As a sidenote: I'd like to add that Deke McClelland has stated that converting betwee RGB and LAB is very marginally destructive.
Given just that sentence, I could say Deke is wrong. But since he hasn’t defined anything like the original color space, bit depth and the problem that needs to be fixed (and why), I’ll cut him some slack unless you can find the exact quote.
ALL image processing in Photoshop which alters numeric values is destructive. That’s why we work in high bit and use adjustment layers (which will still introduce numeric rounding errors at some point). A good workflow is one that gets you to the desired goals as quickly as possible with the best quality data. I simply don’t see why anyone with the intelligence of Dan, or those who think he’s a bloody genius, would want to start any image processing workflow with anything but ideal data.
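The rounding point is easy to demonstrate with two curve moves that should cancel exactly. In this sketch a gamma curve and its inverse stand in for any pair of opposing edits, applied first as chained 8-bit operations (quantizing after each step, as pixel editing does) and then composed in floating point and quantized once, which is roughly what parametric raw settings amount to:

```python
GAMMA = 1.2
ramp = list(range(256))

# Two destructive 8-bit steps: quantize after each edit.
step1 = [round(255 * (v / 255) ** GAMMA) for v in ramp]
step2 = [round(255 * (v / 255) ** (1 / GAMMA)) for v in step1]
lost = sum(1 for a, b in zip(ramp, step2) if a != b)

# Parametric: compose the curves in float, quantize once at the end.
composed = [round(255 * ((v / 255) ** GAMMA) ** (1 / GAMMA)) for v in ramp]
drift = sum(1 for a, b in zip(ramp, composed) if a != b)

print(f"values damaged by chained 8-bit edits: {lost}")
print(f"values damaged by the composed edit:  {drift}")
```

The chained version fails to round-trip a number of levels; the composed version returns every value intact, because nothing was quantized between the two opposing moves.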
Title: Correcting images in LAB vs RGB
Post by: John R Smith on February 22, 2010, 10:48:19 am
Quote from: KeithR
Once you come into PS, you are now working on pixels(in ACR you're not) and any edits you do are destructive.

If you are not working on pixels in RAW, what are you working on? Surely the sensor in the camera must output a file which is composed of pixels (one for each sensel), each one representing a location and a colour/lightness value? Or if not, what are they?

John
Title: Correcting images in LAB vs RGB
Post by: joofa on February 22, 2010, 11:26:16 am
Quote from: crames
Lightness and color are not fully separated in Lab. One of the biggest disadvantages of Lab as an editing space is that changes in the L channel affect color saturation. Increasing L reduces saturation and decreasing L increases saturation, in a way that looks unnatural to me.

I think if they had taken an approach such as Gram-Schmidt orthogonalization (http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process) instead of simple vector differences, then the components of the color space would have been better separated. In image/video compression the DCT works along similar lines, as it is very close to the orthogonal principal components of natural image data.
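For readers unfamiliar with the process being referenced, here is a toy sketch of Gram-Schmidt with made-up numbers; it shows only the generic linear-algebra procedure, not a claim about how Lab was or should be constructed:

```python
# Gram-Schmidt on three correlated "channel" vectors: the outputs are
# mutually orthogonal, so a change along one axis has no projection
# onto the others.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            coef = dot(w, b) / dot(b, b)   # subtract projection onto b
            w = [wi - coef * bi for wi, bi in zip(w, b)]
        basis.append(w)
    return basis

# Highly correlated R, G, B samples, as in natural images.
channels = [[0.9, 0.5, 0.2], [0.8, 0.6, 0.3], [0.7, 0.4, 0.4]]
e1, e2, e3 = gram_schmidt(channels)
print(dot(e1, e2), dot(e1, e3), dot(e2, e3))  # all ~0
```

The first basis vector plays the role of an overall-brightness axis, and the remaining two carry only what is left over, which is the kind of separation the post is alluding to.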
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 22, 2010, 12:07:35 pm
As many who knew him would agree, Bruce Fraser was one of the best writers on the subject of imaging, and this post, dating back well over a decade, is a good read on the subject of using Lab:
Quote
Let me make it clear that I'm not adamantly opposed to Lab workflows. If
they work for you, that's great, and you should continue to use them.

My concern is that Lab has been oversold, and that naive users attribute to
it an objective correctness that it does not deserve.


Even if we discount the issue of quantization errors going from device space
to Lab and vice versa, which could be solved by capturing some larger number
of bits than we commonly do now, (though probably more than 48 bits would be
required), it's important to realise that CIE colorimetry in general, and
Lab in particular, have significant limitations as tools for managing color
appearance, particularly in complex situations like photographic imagery.

CIE colorimetry is a reliable tool for predicting whether two given solid
colors will match when viewed in very precisely defined conditions. It is
not, and was never intended to be, a tool for predicting how those two
colors will actually appear to the observer. Rather, the express design goal
for CIELab was to provide a color space for the specification of color
differences. Anyone who has really compared color appearances under
controlled viewing conditions with delta-e values will tell you that it
works better in some areas of hue space than others.

When we deal with imagery, rather than matching plastics or paint swatches,
a whole host of perceptual phenomena come into play that Lab simply ignores.

Simultaneous contrast, for example, is a cluster of phenomena that cause the
same color under the same illuminant to appear differently depending on the
background color against which it is viewed. When we're working with
color-critical imagery like fashion or cosmetics, we have to address this
phenomenon if we want the image to produce the desired result -- a sale --
and Lab can't help us with that.

Lab assumes that hue and luminance can be treated separately -- it assumes
that hue can be specified by a wavelength of monochromatic light -- but
numerous experimental results indicate that this is not the case.
For
example, Purdy's 1931 experiments indicate that to match the hue of 650nm
monochromatic light at a given luminance would require a 620nm light at
one-tenth of that luminance. Lab can't help us with that. (This phenomenon
is known as the Bezold-Brucke effect.)

Lab assumes that hue and chroma can be treated separately, but again,
numerous experimental results indicate that our perception of hue varies
with color purity.
Mixing white light with a monochromatic light does not
produce a constant hue, but Lab assumes it does -- this is particularly
noticeable in Lab modelling of blues, and is the source of the blue-purple
shift.

There are a whole slew of other perceptual effects that Lab ignores, but
that those of us who work with imagery have to grapple with every day if our
work is to produce the desired results.

So while Lab is useful for predicting the degree to which two sets of
tristimulus values will match under very precisely defined conditions that
never occur in natural images, it is not anywhere close to being an adequate
model of human color perception. It works reasonably well as a reference
space for colorimetrically defining device spaces, but as a space for image
editing, it has some important shortcomings.

One of the properties of LCH that you tout as an advantage -- that it avoids
hue shifts when changing lightness -- is actually at odds with the way our
eyes really work. Hues shift with both lightness and chroma in our
perception, but not in LCH**.


None of this is to say that working in Lab or editing in LCH is inherently
bad. But given the many shortcomings of Lab, and given the limited bit depth
we generally have available, Lab is no better than, and in many cases can be
worse than, a colorimetrically-specified device space, or a colorimetrically
defined abstract space based on real or imaginary primaries.

For archival work, you will always want to preserve the original capture
data, along with the best definition you can muster of the space of the
device that did the capturing. Saving the data as Lab will inevitably
degrade it with any capture device that is currently available. For some
applications, the advantages of working in Lab, with or without an LCH
interface, will outweigh the disadvantages, but for a great many
applications, they will not. Any time you attempt to render image data on a
device, you need to perform a conversion, whether you're displaying Lab on
an RGB monitor, printing Lab to a CMYK press, displaying scanner RGB on an
RGB monitor, displaying CMYK on an RGB monitor, printing scanner RGB to a
CMYK press, etc.

Generally speaking, you'll need to do at least one conversion, from input
space to output space. If you use Lab, you need to do at least two
conversions, one from input space to Lab, one from Lab to output space. In
practice, we often end up doing two conversions anyway, because device
spaces have their own shortcomings as editing spaces since they're generally
non-linear.

The only real advantage Lab offers over tagged RGB is that you don't need to
send a profile with the image. (You do, however, need to know whether it's
D50 or D65 or some other illuminant, and you need to realise that Lab (LH)
isn't the same thing as Lab.) In some workflows, that may be a key
advantage. In many, though, it's a wash.
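A concrete sketch of that illuminant dependence (the conversion constants below are the standard CIE ones; the sample XYZ triple is arbitrary, not from this thread): the same measured color produces different Lab numbers under D50 and D65 reference whites.

```python
# The same XYZ color yields different Lab values depending on the
# reference white, which is why an untagged Lab file still needs its
# illuminant specified.
import numpy as np

def xyz_to_lab(xyz, white):
    """CIE XYZ -> CIELAB relative to the given reference white."""
    def f(t):
        eps, kappa = 216 / 24389, 24389 / 27  # CIE standard constants
        return np.where(t > eps, np.cbrt(t), (kappa * t + 16) / 116)
    fx, fy, fz = f(np.asarray(xyz, float) / np.asarray(white, float))
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

D50 = [0.9642, 1.0000, 0.8249]
D65 = [0.9504, 1.0000, 1.0888]

xyz = [0.30, 0.40, 0.50]        # an arbitrary sample color
print(xyz_to_lab(xyz, D50))     # Lab under D50
print(xyz_to_lab(xyz, D65))     # noticeably different a*/b* under D65
```

L* comes out the same (it depends only on Y relative to the white), but a* and b* shift with the illuminant.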

One thing is certain. When you work in tagged high-bit RGB, you know that
you're working with all the data your capture device could produce. When you
work in Lab, you know that you've already discarded some of that data.

Bruce

** This is one of the reasons Dan claims that RGB workflows in raw converters, or the so-called “Master Curve” (another of his made-up terms), are off base. Adobe could easily have made the curves work as Dan demands of them, but much smarter people like Thomas Knoll have explained that most users find this effect counter to what they expect and desire, and now you know why. One can easily counteract this increase in saturation using the appropriate layer blend in Photoshop. Also, there is an article on this site by Mark Segal that goes into this debate in some detail: http://www.luminous-landscape.com/essays/Curves.shtml (http://www.luminous-landscape.com/essays/Curves.shtml). You’ll note that, as usual, Dan will not venture out of his private list to address this piece. Peer review isn’t something he wishes to engage in.
Title: Correcting images in LAB vs RGB
Post by: joofa on February 22, 2010, 12:53:24 pm
Quote from: digitaldog
As many who knew him would agree, Bruce Fraser was one of the best writers on the subject of imaging, and this post, dating back well over a decade, is a good read on the subject of using Lab:

It would appear to me that while some of the criticism of the Lab space is justified, such as that regarding its origins and intended use, in the quoted note Bruce has heaped upon Lab some issues that have been addressed in more modern color appearance models (CAMs) used in conjunction with the Lab space. The distance formulae in Lab space are also becoming increasingly sophisticated (CIE 94, DE 2000, etc.) and are intended to remove some of the shortcomings mentioned in the note above.
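To make the distance-formula point concrete, here is a sketch of two of those formulae (the sample colors and weighting constants are just the standard graphic-arts defaults, nothing from this thread): CIE76 is plain Euclidean distance in Lab, while CIE94 adds chroma-dependent weights precisely to patch Lab's non-uniformity for saturated colors. CIEDE2000 goes further still, with hue-dependent terms.

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76: straight Euclidean distance in CIELAB."""
    return math.dist(lab1, lab2)

def delta_e_94(lab1, lab2, kL=1.0, K1=0.045, K2=0.015):
    """CIE94 (graphic-arts weights): chroma and hue differences are
    down-weighted as chroma grows, compensating for Lab exaggerating
    differences between saturated colors."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    dL, dC = L1 - L2, C1 - C2
    dH2 = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
    SC, SH = 1 + K1 * C1, 1 + K2 * C1
    return math.sqrt((dL / kL) ** 2 + (dC / SC) ** 2 + dH2 / SH ** 2)

# Two pairs the same Euclidean distance apart score very differently:
print(delta_e_76((50, 0, 0), (50, 5, 0)))    # 5.0 (near-neutral pair)
print(delta_e_94((50, 0, 0), (50, 5, 0)))    # 5.0 (no correction near gray)
print(delta_e_94((50, 60, 0), (50, 65, 0)))  # ~1.35 (high-chroma pair)
```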


Title: Correcting images in LAB vs RGB
Post by: Michael H. Cothran on February 22, 2010, 01:07:47 pm
Quote from: John R Smith
If you are not working on pixels in RAW, what are you working on? Surely the sensor in the camera must output a file which is composed of pixels (one for each sensel), each one representing a location and a colour/lightness value? Or if not, what are they?
John

John - To use an analogy, and I hope I'm correct in saying it this way -

There ARE no pixels in a RAW file. Only data. A "blueprint," if you will, for how the file should be constructed. Since, in RAW format, nothing has yet been built, it is more advantageous to adjust the blueprint (which is what you do in a RAW converter) than to alter or modify the house once built (which is what you do once the image is pixelized in PS).

Michael
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 22, 2010, 01:15:32 pm
Quote from: joofa
It would appear to me that while some of the criticism on the Lab space is justified, such as regarding the origins and intended use of Lab space, but in the quoted note by Bruce Fraser, Bruce has heaped upon Lab some issues that have been incorporated in more modern color appearance models (CAMs) and using that in conjunction with Lab space. The distance formulae in Lab space are also becoming increasingly complex (CIE 94, DE 2000, etc.) and are intended to remove some of the shortcomings mentioned in the note above.

Agreed, but just what applications are you referring to that have incorporated such modern CAMs, in the context of this discussion of Lab in Photoshop?
I think Bruce was suggesting modern CAMs would address the issues he points out, and yet do we have access to them, nearly a decade after his post?
Title: Correcting images in LAB vs RGB
Post by: joofa on February 22, 2010, 01:20:08 pm
Quote from: digitaldog
Agreed but just what applications are you referring to that have incorporated such modern CAMs in the context of this discussion of Lab in Photoshop?
I think Bruce is suggesting modern CAMs address the issues he points out and yet, do we have access to them, nearly a decade after this post?

I think you are right that Photoshop may not have incorporated more modern models. However, my reply was a general one and not intended specifically for Photoshop, as the quoted text from Bruce Fraser was also more general and perhaps not intended specifically for Photoshop.
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 22, 2010, 01:25:22 pm
Quote from: joofa
I think you are right that Photoshop may not have incorporated more modern models.

As far as I know, it doesn’t. And editing in Lab in Photoshop has all the issues Bruce points out above, and more.
Title: Correcting images in LAB vs RGB
Post by: Schewe on February 22, 2010, 02:22:28 pm
Quote from: Ishmael.
--open image and convert to LAB color, where I fix up the color cast, contrast, and saturation using multiple curves adjustments layers. Add clarity and noise reduction by converting the background layer to a smart object and applying high pass, luminance NR, and median (for color noise) filters.


All of the above (except the conversion to Lab) would best be done in Camera Raw on a raw file. The advantages are a far more efficient workflow and optimal final output.

I won't get into the sRGB vs other color space (nor 8 vs 16 bit) but there is very, very little one can't do in Camera Raw 5.x that has to be done in Photoshop unless you do substantial retouching and/or image assembly. Heck, you can even process out TIFF, PSD or JPEG files from Camera Raw without ever having to open Photoshop (open Camera Raw hosted in Bridge).

Considering Camera Raw was originally written by the same guy who was the primary author of Photoshop (Thomas Knoll) you might think he may have made some advances in image processing this time around...he did.
Title: Correcting images in LAB vs RGB
Post by: Ishmael. on February 22, 2010, 02:48:20 pm
Quote from: Schewe
All of the above would best be done in Camera Raw on a raw file. The advantages would be to offer a far more efficient workflow and optimal final output results.

I won't get into the sRGB vs other color space (nor 8 vs 16 bit) but there is very, very little one can't do in Camera Raw 5.x that has to be done in Photoshop unless you do substantial retouching and/or image assembly. Heck, you can even process out TIFF, PSD or JPEG files from Camera Raw without ever having to open Photoshop (open Camera Raw hosted in Bridge).

Considering Camera Raw was originally written by the same guy who was the primary author of Photoshop (Thomas Knoll) you might think he may have made some advances in image processing this time around...he did.


Thanks Schewe. Does the same go for earlier versions of Camera Raw? I'm using 4.0 on CS3.
Title: Correcting images in LAB vs RGB
Post by: Schewe on February 22, 2010, 03:19:12 pm
Quote from: Ishmael.
Thanks Schewe. Does the same go for earlier versions of Camera Raw? I'm using 4.0 on CS3.

Camera Raw 4.6 (the last version for CS3) doesn't have the benefit of local corrections using gradients or brushes that Camera Raw 5.x offers in Photoshop CS4. But for global image adjustments, yes, ACR 4.6 is good for making important image adjustments.
Title: Correcting images in LAB vs RGB
Post by: HickersonJasonC on February 22, 2010, 08:31:31 pm
Quote from: KeithR
I would suggest that you invest in Mr. Schewe's book "Real World Adobe Camera Raw"

Hilarious advice considering Jeff's typical attack on OP's "reading comprehension." Ironic? LOL in any case!
Title: Correcting images in LAB vs RGB
Post by: Ishmael. on February 22, 2010, 08:51:50 pm
I don't mean to beat this horse to death but I am still not clear on one issue: is it wise to edit in Camera Raw using Pro Photo/16bit  and then convert to sRGB/8bit when I'm saving JPEGs for the web? Or is the conversion back to sRGB just going to undo whatever advantages Pro Photo gave me?
Title: Correcting images in LAB vs RGB
Post by: curvemeister on February 22, 2010, 09:00:26 pm
Quote from: Ishmael.
I don't mean to beat this horse to death but I am still not clear on one issue: is it wise to edit in Camera Raw using Pro Photo/16bit  and then convert to sRGB/8bit when I'm saving JPEGs for the web? Or is the conversion back to sRGB just going to undo whatever advantages Pro Photo gave me?

Camera Raw uses a slightly modified version of ProPhoto internally in any case.  Converting to sRGB can be done at any stage, with no difference in quality.  From a procedural point of view, it saves a step to convert to sRGB coming out of ACR.
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 22, 2010, 09:18:44 pm
Quote from: Ishmael.
I don't mean to beat this horse to death but I am still not clear on one issue: is it wise to edit in Camera Raw using Pro Photo/16bit  and then convert to sRGB/8bit when I'm saving JPEGs for the web? Or is the conversion back to sRGB just going to undo whatever advantages Pro Photo gave me?

Not at all a problem.
Title: Correcting images in LAB vs RGB
Post by: Ishmael. on February 22, 2010, 10:39:52 pm
All your help is much appreciated, guys. It's a huge help as I'm learning the different aspects of post-processing.

 
Thanks

Ish.
Title: Correcting images in LAB vs RGB
Post by: Dale Allyn on February 22, 2010, 10:49:36 pm
Quote from: Ishmael.
I don't mean to beat this horse to death but I am still not clear on one issue: is it wise to edit in Camera Raw using Pro Photo/16bit  and then convert to sRGB/8bit when I'm saving JPEGs for the web? Or is the conversion back to sRGB just going to undo whatever advantages Pro Photo gave me?

Ishmael,

If you expect to do more than minor adjustments in PS (after moving from ACR), and if you expect to use that file for more than one type of output (such as for web and for printing to a quality printer with a wider gamut), you may prefer to convert to sRGB after doing your work on the file in PS. In other words, do whatever is needed in ACR; if moving to PS, work in layers there, then save your final masterpiece as a "master version". From this point you can convert to sRGB for web, and if your monitor is one which allows you to see a difference (i.e. a wide-gamut display), you can then make whatever tweaks your sRGB file needs, convert to 8-bit, sharpen as appropriate, and save for web as your needs require.

This way, you can go back to your "master version" in ProPhoto RGB and adjust it for printing to your favorite paper, etc., without needing to redo your work.

Also, I would agree that Real World Camera Raw is a good book to have in your library, along with the video here at L.L., "From Camera to Print".
Title: Correcting images in LAB vs RGB
Post by: Tim Lookingbill on February 23, 2010, 12:58:28 am
If you're concerned about ACR 4.6's limitations, I can attest that after about a year and a half of working with this version, you can still do quite a bit to grab every detail your camera can deliver, even when you think an image is hopeless and headed for the trash.

Below is a Raw shot from my Pentax K100D, a $477 DSLR. The image on the left is what the jpeg would've given me if I'd shot only in that format. As you can see it's a wreck, but the version on the right was edited entirely in ACR 4.6 on the Raw file in 16-bit ProPhoto RGB, using primarily the HSL and curve tools and an extreme color temp adjustment, without touching one pixel. All the edits were saved as one XMP rendering-instruction file.

If I'd done it in Photoshop (which I couldn't, since the jpeg is shot to hell) I'd have had to go into separate tool dialog boxes and save individually named custom settings, making for a very cluttered settings directory to keep track of on the computer for each image.

I'm not saying you'll be able to do this kind of extreme correction at the get-go, but it will make you pause the next time you decide to throw away an image because it looks like the one on the left. I started in Photoshop back in 1998, teaching myself photo restoration with Photoshop 4 and 5. I applied that understanding while playing around with the ACR tools to teach myself to get those kinds of results.

I suggest you get to know those tools by doing the same. Then the thought of editing in Lab will soon be a distant memory.

Raw editing RULES!
 
[attachment=20432:JpegVsRa...angeFlwr.jpg]
Title: Correcting images in LAB vs RGB
Post by: Hening Bettermann on February 26, 2010, 05:08:47 am
Hi!

The thread has made it very clear that Lab is not a good option as an EDITING color space. But I wonder about 2 other potential uses:

1- Since it is a huge space, would it make sense as a space for archiving images?
2- ColorEyes advocates the use of Lab for monitor profiling. What is the experts' view on this?

Kind regards - Hening.
Title: Correcting images in LAB vs RGB
Post by: Tim Lookingbill on February 26, 2010, 01:34:26 pm
#1. No

#2. Not sure why ColorEyes would be telling you that. Besides, Lab has nothing to do with calibration except to act as a color reference space for comparing one device's color response to a known mathematical definition of color, which is all that computers understand. You're confusing Lab as an editing space with Lab as a color reference space. This is why images saved in Lab don't need an embedded profile to work in a color-managed workflow: Lab is the reference space for everything, and it is also the reason you don't get a "Profile mismatch or missing profile" dialog box when opening a Lab image in Photoshop. The problem with saving images in that space is that not all applications can read and/or display it properly.

Too much of a PITA to deal with. Just stick to RGB. This stuff is complicated enough as it is.
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 26, 2010, 01:38:15 pm
Quote from: Hening Bettermann
1- Since it is a huge space, would it make sense as a space for archiving images?
2- ColorEyes advocates the use of Lab for monitor profiling. What is the experts' view on this?

As to #1, see what Bruce wrote above:
Quote
For archival work, you will always want to preserve the original capture
data, along with the best definition you can muster of the space of the
device that did the capturing. Saving the data as Lab will inevitably
degrade it with any capture device that is currently available.

Generally speaking, you'll need to do at least one conversion, from input
space to output space. If you use Lab, you need to do at least two
conversions, one from input space to Lab, one from Lab to output space.

As to #2: that, as Tim points out, makes no sense. Are you sure you're not thinking of an L* tone response curve (which is full of controversy too and, based on my understanding, isn't anywhere near as useful or necessary as some would suggest)?
Title: Correcting images in LAB vs RGB
Post by: joofa on February 26, 2010, 02:18:01 pm
Quote from: Hening Bettermann
The thread has made it very clear that Lab is not a good option as an EDITING color space.

Unfortunately, many authors, and it would appear to me Bruce Fraser included, do not make a clear distinction between the theoretical limitations of a space and what a current implementation of that space/standard/spec offers. Conflating the two leads people to make judgments on theory and algorithmic correctness based upon what a particular implementation (a particular piece of software) is doing. For example, even in this forum, issues regarding the correctness of things such as "optimal" sharpening are being judged based upon what one particular program, Photoshop, does. Of course, I fully realize that a particular program is what people have in their hands, so they have to go by that software. However, it is the responsibility of a technical author to clearly delineate which shortcomings in a certain workflow come from a particular implementation and which are actual theoretical bounds.

I think I have a few books by Bruce Fraser at home and I shall go and recheck them, but from what is quoted of his writings here on this forum, it appears that Bruce is heaping criticism on the Lab space, much of which, in more modern specs, has been assimilated into a theoretical model alongside Lab (think color appearance models, CAMs, in conjunction with the Lab space). In theory, it does not matter if Photoshop does not have them, and an author should point out that it is Photoshop's responsibility to modernize; it is not necessarily the fault of a particular space.

Quote from: Hening Bettermann
1- Since it is a huge space, would it make sense as a space for archiving images?

This is another implementation issue. You see comments to the effect that a particular space is limited in "gamut", etc. However, in practice, many of those "shortcomings" happen because of certain decisions early in the processing chain (such as clipping negative color values and values greater than a normalized 1). The primaries of a color space span the space, and if such clippings are not done early on, and the values are kept in the file all the way to the end of the processing chain, with gamut mapping/clipping performed only when one is about to output, then many of the restrictions of the "small" gamut of a certain space may be resolved.

In essence, notions such as "huge space" arise because a space is "huge" within positive numbers below a normalized 1 (though in Lab space, a and b do go negative). After all, negative numbers did not stop the CIE from conducting its spectral tristimulus determination experiments in RGB; they moved on to the all-positive XYZ space because of concerns at the time about negative numbers, which need not affect us these days working with computers.
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 26, 2010, 02:35:07 pm
Quote from: joofa
In theory, it does not matter if Photoshoop does not have them, and an author should point that out that its Photoshop responsibility to modernize and not necessarily the fault of a particular space.

Having color appearance models and having color appearance models implemented in imaging software are two very different things.

What products have such color appearance models?
Title: Correcting images in LAB vs RGB
Post by: joofa on February 26, 2010, 02:46:20 pm
Quote from: digitaldog
Having color appearance models and having color appearance models implemented in imaging software are two very different things.

What products have such color appearance models?

Hi DigitalDog,

I think I was trying to point out that if products are not out there with certain capabilities of a certain space, then it is the product's responsibility to modernize, and not necessarily the fault of the color space.

On a different note, I shall have a look at the Photoshop API again to see if some of the above-mentioned stuff may be implemented as plugins, which any 3rd party out there can write. It has been a while since I looked at the Photoshop SDK, as I do most of my image manipulation in Adobe After Effects; I am more used to it, its API/SDK is very clean, and I can do most of what I need to do to images in After Effects.

Title: Correcting images in LAB vs RGB
Post by: Hening Bettermann on February 26, 2010, 03:58:13 pm
Thank you for your answers. Sorry that I overlooked that Bruce had already answered my first question.

> Are you sure you’re not thinking about an L* tone response curve (which is full of controversy too and based on my understanding isn’t anywhere as useful or necessary as some would suggest)?

It is in fact the L* tone response curve which I have encountered when profiling the monitor.

Kind regards - Hening.
Title: Correcting images in LAB vs RGB
Post by: ejmartin on February 26, 2010, 07:15:39 pm
Quote from: crames
Lightness and color are not fully separated in Lab. One of the biggest disadvantages of Lab as an editing space is that changes in the L channel affect color saturation. Increasing L reduces saturation and decreasing L increases saturation, in a way that looks unnatural to me.

Cliff

Is there a good representation in which they are fully separated?
Title: Correcting images in LAB vs RGB
Post by: Schewe on February 26, 2010, 09:30:31 pm
Quote from: joofa
I think I have a few books by Bruce Fraser at home and I shall go and recheck them, but from what is quoted of his writings here on this forum, it appears that Bruce is heaping criticism on the Lab space, much of which, in more modern specs, has been assimilated into a theoretical model alongside Lab (think color appearance models, CAMs, in conjunction with the Lab space). In theory, it does not matter if Photoshop does not have them, and an author should point out that it is Photoshop's responsibility to modernize; it is not necessarily the fault of a particular space.

As a friend and colleague of Bruce, let me weigh in here...

First off, quoting Bruce is by definition a glimpse in history since Bruce has not had the opportunity to revise his opinions since 2006 when he passed away...

On the other hand, nothing Bruce wrote has changed regarding the 800 lb gorilla in the room, Photoshop. I don't care if "technically" things have changed in specifications or new concepts; the fact is, Photoshop's Lab hasn't changed since Photoshop first implemented it–please, correct me if I'm wrong...

Lab has its uses...less so when dealing with reasonable digital capture (much to the chagrin of Dan Margulis, who STILL advocates processing digital captures through Camera Raw with zero image adjustments because, well, Camera Raw isn't useful for professionals).

The fact that Adobe and the Photoshop engineers (who are a pretty smart group) haven't seen a reason or benefit to radically change Photoshop's implementation speaks volumes...

Use Lab for what it's good for–but honestly I have never seen a digital capture that COULDN'T be corrected in RGB, either in ACR or Photoshop–and really, it's not very useful to wave your hands and claim some sort of mystical capabilities for Lab.
Title: Correcting images in LAB vs RGB
Post by: ErikKaffehr on February 27, 2010, 02:06:16 am
Hi,

Outputting to sRGB removes the colors falling outside the sRGB gamut; the advantage of a larger working space is that you keep as much information as possible until that final conversion. Also, whatever you do in ACR or Lightroom (which shares the same processing engine) is guaranteed to be done in the right order, at least according to the views held by Adobe.

Actually, there are a few other issues. ACR/LR does not use ProPhoto RGB; it uses "ProPhoto primaries in a linear space". So it has the same gamut as ProPhoto RGB but no gamma encoding (a gamma equal to one). There are quite a few arguments in favor of editing color as long as possible in linear gamma.

In most cases the differences will be subtle. Dan Margulis, a well-known authority on image processing, holds the view that 16-bit processing is not needed. Most other image processing experts say that using more bits is beneficial. The way I see it, it's a good approach to keep as much information as possible. Of course, having a parametric workflow based on raw images essentially means that nothing is lost, except in the final stage, when a picture is processed for its intended use.

Best regards
Erik


Quote from: Ishmael.
I don't mean to beat this horse to death but I am still not clear on one issue: is it wise to edit in Camera Raw using Pro Photo/16bit  and then convert to sRGB/8bit when I'm saving JPEGs for the web? Or is the conversion back to sRGB just going to undo whatever advantages Pro Photo gave me?
Title: Correcting images in LAB vs RGB
Post by: ejmartin on February 27, 2010, 02:33:36 am
Quote from: ErikKaffehr
There are quite a few arguments in favor of editing color as long as possible in linear gamma.

And those are...?
Title: Correcting images in LAB vs RGB
Post by: ErikKaffehr on February 27, 2010, 06:27:32 am
Hi,

I need to check on this. I'm pretty sure I have seen that argument made, but I'm not really sure where.

Best regards
Erik


Quote from: ejmartin
And those are...?
Title: Correcting images in LAB vs RGB
Post by: crames on February 27, 2010, 10:02:11 am
Quote from: ejmartin
Is there a good representation in which they are fully separated?
The Chromaticity representation: xyY. This is where color is converted (using the color profile info) to tristimulus values X,Y,Z, and x = X/(X+Y+Z), y = Y/(X+Y+Z). Since little x and y are ratios, if you change Y (Luminance) then X and Z will change proportionately.

A way to separate them in CIELAB might be to divide the a and b coordinates by L, then multiply a and b by the new L after L is changed.

There is also a relative of CIELAB:  CIELUV, which has a true "saturation" correlate, but is probably not a good editing space.

Saturation is the key: article by R.W.G. Hunt (http://www.imaging.org/ist/publications/reporter/issues/Reporter16_6.pdf)
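Cliff's xyY point above can be sketched in a few lines (the XYZ triple is arbitrary): halving Y and rebuilding XYZ from the fixed (x, y) pair leaves the chromaticity untouched.

```python
# In xyY, luminance (Y) is fully separated from chromaticity (x, y).
def xyz_to_xyy(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s, Y            # chromaticity (x, y) plus luminance Y

def xyy_to_xyz(x, y, Y):
    return x * Y / y, Y, (1 - x - y) * Y / y

X, Y, Z = 0.30, 0.40, 0.50
x, y, _ = xyz_to_xyy(X, Y, Z)

X2, Y2, Z2 = xyy_to_xyz(x, y, Y / 2)  # halve the luminance
x2, y2, _ = xyz_to_xyy(X2, Y2, Z2)    # re-derive the chromaticity

print(round(x, 6) == round(x2, 6), round(y, 6) == round(y2, 6))
```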
Title: Correcting images in LAB vs RGB
Post by: joofa on February 27, 2010, 10:10:47 am
Quote from: Schewe
First off, quoting Bruce is by definition a glimpse in history since Bruce has not had the opportunity to revise his opinions since 2006 when he passed away...

I did go back and check a few books by Bruce Fraser that I have. It has been a while since I looked at them, and I must say that I admire his writing. It would appear to me that Bruce's viewpoint was certainly more comprehensive, and not as narrowly focused as it appears from some of the clippings of his writings quoted in this thread. However, it seems he was more interested in practical issues, such as making profiles, than in going into the details of color science. But that is fine considering the audience he was targeting.

Quote from: Schewe
The fact that Adobe and the Photoshop engineers (who are a pretty smart group) haven't seen a reason or benefit from radically changing Photoshop's implementation of something is telling volumes...

I don't know why you think that if Photoshop is not doing something, it means they thought it was technically worthless. There are a few other reasons why industry doesn't do many things that seem technically "correct", and they could be applicable here: (1) if the public is happy with a product, why unnecessarily change it, and (2) Photoshop is a product that many (most?) people use to make visually pleasing images, not necessarily technically or scientifically correct images.

Consider an example: the NTSC coefficients for converting to grayscale, i.e., 0.299*R + 0.587*G + 0.114*B, are well known. However, the R, G, and B typically used are non-linear (gamma-corrected), while the coefficients {0.299, 0.587, 0.114} were actually derived for linear R, G, and B. NTSC did a certain amount of research into why the same coefficients are used even in the "technically incorrect" setting of non-linear RGB. IIRC, some SMPTE publications have used similar sets of coefficients that were not "technically matched" to the primaries, perhaps either for historical reasons or because they resulted in visually pleasing images.
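A small illustration of that mismatch (using a simplified pure power-law gamma of 2.2 rather than the exact sRGB or Rec. 601 transfer curves, and an arbitrary sample color): the same weighted sum gives different answers depending on whether it is applied to linear RGB or to gamma-encoded R'G'B'.

```python
def luma(r, g, b):
    """Rec. 601 weighted sum; the weights sum to exactly 1.0."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def encode(v, gamma=2.2):
    """Simplified pure power-law display encoding (not exact sRGB)."""
    return v ** (1 / gamma)

r, g, b = 0.9, 0.2, 0.1              # a saturated color in linear light

y_linear = luma(r, g, b)             # luminance: weights applied to linear RGB
y_prime = luma(encode(r), encode(g), encode(b))  # video luma: applied to R'G'B'
y_decoded = y_prime ** 2.2           # decode for an apples-to-apples comparison

print(y_linear, y_decoded)           # the two estimates disagree
```

For neutral grays (R = G = B) the two agree exactly, because the weights sum to 1; the error appears only for saturated colors.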
Title: Correcting images in LAB vs RGB
Post by: crames on February 27, 2010, 10:25:45 am
Quote from: ejmartin
And those are...?
Linear gamma demo by Helmut Dersch (Panorama Tools) (http://www.all-in-one.ee/%7Edersch/gamma/gamma.html)
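Roughly what that demo illustrates, sketched here with a simplified power-law gamma of 2.2: averaging (or resampling) pixels in gamma-encoded values underestimates the true average light intensity.

```python
def encode(v, gamma=2.2):
    return v ** (1 / gamma)   # linear light -> gamma-encoded value

def decode(v, gamma=2.2):
    return v ** gamma         # gamma-encoded value -> linear light

black, white = 0.0, 1.0       # linear-light intensities

correct = (black + white) / 2                        # 0.5 in linear light
naive = decode((encode(black) + encode(white)) / 2)  # averaged after encoding

print(correct, naive)         # 0.5 vs ~0.218: the naive blend is too dark
```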
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 27, 2010, 10:44:05 am
Quote from: Schewe
First off, quoting Bruce is by definition a glimpse in history since Bruce has not had the opportunity to revise his opinions since 2006 when he passed away...

And yet, at least in terms of what he wrote about Lab above, I don’t believe anything at all has changed; his points are as valid today as the day he wrote them.

There may be better color appearance models today than in 2006, although I’m not privy to whether that is a fact. And Bruce was fully aware of this development; I have posts he made about how such models would be better than using Lab.
Title: Correcting images in LAB vs RGB
Post by: joofa on February 27, 2010, 12:11:30 pm
Quote from: digitaldog
There may be better color appearance models today than in 2006 although I’m not privy to this being a fact. And Bruce was fully aware of this development, as I have posts he made about how such models would be better than using Lab.

The issue is not necessarily that the Lab space has problems; many of the problems are well known. I think the CIE's original intention in promoting the Lab space was a relatively uniform color space, meant for the specification of color differences in some controlled situations, and not a color appearance space. The point I am trying to make is that while authors are aware of the problems with the Lab space, they might not have discussed how some of them can be resolved, i.e., the efforts that are underway to construct predictors of color appearance attributes. Bruce Fraser produced a large body of work and I have not read all of it; my analysis is based upon the part I have seen, and it is possible that he addressed these issues elsewhere in writing I have not had access to yet. For example, the blue/purple discrimination issue is being addressed in the various color difference formulae, which, though still not perfect, represent an effort at assimilation. Similarly, the issue of the "incorrect" normalization of XYZ in Lab may be addressed with a full matrix instead of a diagonal one. Etc.

There are some aspects of color appearance that Lab is incapable of handling. However, the Lab space can serve as a simple model, a benchmark against which to measure the improvements of more sophisticated models.
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 27, 2010, 12:21:58 pm
Quote from: joofa
The issue is not necessarily that Lab space has problems.
But it does.
Quote
Many of the problems are well-known.
Yes, they are. But your two sentences above seem to contradict each other.
Quote
I think CIE's original intention in promoting Lab space was a relatively uniform color space, meant for the specification of color differences in some controlled situations, and not a color appearance space.
Agreed. That’s exactly what Bruce wrote. And even then, there are some issues which he points out too.
Quote
The point I am trying to make is that while authors, such as Bruce Fraser, are aware of problems with Lab space, they do not discuss how to resolve some of them, i.e, some of the efforts that are underway to construct some predictors of color appearance attributes.
The problems with Lab can’t be resolved. Having more robust color appearance models could resolve them, but such models either don’t exist or don’t exist in any products we can use. So it’s like saying we need anti-gravity machines. That’d be cool, but until such technology exists, what we need and want is kind of moot. And in the context of this series of discussions around Lab editing, nothing has changed.
Quote
There are some aspects of color appearance that Lab is incapable of handling. However, the Lab space should be used as a simple model that may be utilized as a benchmark to measure the improvements of more sophisticated models.
Indeed. And the problem is, Lab isn’t a color appearance model, or at least not a very good one, which is the point Bruce makes.
Title: Correcting images in LAB vs RGB
Post by: joofa on February 27, 2010, 12:28:33 pm
Quote from: digitaldog
The problems with Lab can’t be resolved.

In my previous message I gave specific examples of a few problems that may be resolved to some extent.

Quote from: digitaldog
Having more robust color appearance models could resolve them, but such models either don’t exist or don’t exist in any products we can use. So it’s like saying we need anti-gravity machines. That’d be cool, but until such technology exists, what we need and want is kind of moot. And in the context of this series of discussions around Lab editing, nothing has changed.

DigitalDog, this is where we are running in circles. My point has consistently been not to base arguments on what is offered by current technology or products. I have been saying we should distinguish between theory and the implementation of some portion of that theory in available products. In one of my messages above I mentioned that products such as Photoshop may not necessarily need to modernize, since they are typically used for making visually pleasing images and not necessarily scientifically or technically correct images.
Title: Correcting images in LAB vs RGB
Post by: crames on February 27, 2010, 12:47:19 pm
Quote from: digitaldog
The problems with Lab can't be resolved. Having more robust color appearance models could help, but they either don't exist or don't exist in any products we can use. So it's like saying we need anti-gravity machines. That would be cool. Until such technology exists, what we need and want is kind of moot. And in the context of this series of discussions around Lab editing, nothing is changed.
Here is your anti-gravity machine: CIECAM02 Plugin (http://sites.google.com/site/clifframes/ciecam02plugin)  

But sorry, only for Windows....
Title: Correcting images in LAB vs RGB
Post by: joofa on February 27, 2010, 01:49:40 pm
Quote from: crames
Here is your anti-gravity machine: CIECAM02 Plugin (http://sites.google.com/site/clifframes/ciecam02plugin)

Wow, Cliff, you have done some very interesting work!

Title: Correcting images in LAB vs RGB
Post by: bjanes on February 27, 2010, 09:08:09 pm
Quote from: digitaldog
The problems with Lab can’t be resolved. Having more robust color appearance models could help, but they either don’t exist or don’t exist in any products we can use. So it’s like saying we need anti-gravity machines. That would be cool. Until such technology exists, what we need and want is kind of moot. And in the context of this series of discussions around Lab editing, nothing is changed.

Indeed. And the problem is, Lab isn’t a color appearance model, or at least not a very good one, which is the point Bruce makes.
Of course, unmentioned in this discussion is the fact that ProPhotoRGB is not a color appearance model either. As this old reference to CIECAM97 (http://scien.stanford.edu/class/psych221/projects/98/ciecam/Description.html) points out, a color appearance model starts with tristimulus values and takes viewing conditions, background, and other factors into account in order to predict the appearance of the color of the object under the specified viewing conditions. The source tristimulus values could be expressed in either L*a*b or ProPhotoRGB. Bruce Lindbloom has a calculator that can be used to convert between XYZ, L*a*b and various RGB spaces. No one is saying that ProPhotoRGB is not suitable for editing images.

That CIE L*a*b was developed merely to quantify differences in color is somewhat disingenuous. It is true that it was developed as a perceptually uniform space, where a given numerical distance between colors corresponds to a uniform perceptual difference, but it was derived from the CIE 1931 XYZ color space and inherits the attributes of that space. Under the specified viewing conditions, L*a*b coordinates will accurately predict the appearance of a color just the same as with the original 1931 scheme. Problems arise when these conditions are not met.
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 27, 2010, 09:51:53 pm
Quote from: bjanes
Of course, unmentioned in this discussion, is the fact that ProPhotoRGB is not a color appearance model either.
No, it’s not; it’s an RGB working space (which apparently isn’t obvious).
Quote
That CIE L*a*b was developed merely to quantify differences in color is somewhat disingenuous. It is true that it was developed as a perceptually uniform space where a given distance between colors would be perceptually uniform, but it was derived from the the CIE 1931 XYZ color space and inherits the attributes of that space.

Except it isn’t fully perceptually uniform. But yes, its design goal was to predict (report) color differences with a numeric value, as Bruce quotes above, not to serve as an editing space (again, as Bruce mentioned). At the time, Photoshop and such image processing were the realm of science fiction (let alone a task anyone at the time even contemplated).
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 27, 2010, 10:04:18 pm
Quote from: bjanes
Under the specified viewing conditions, L*a*b coordinates will accurately predict the appearance of a color just the same as with the original 1931 scheme. Problems arise when the these conditions are not met.

Keep in mind that Lab was just an attempt to create a perceptually uniform color space where equal steps correlate to equal color differences as perceived by a viewer. The CIE didn't claim it was perfect (because it's not). Most color scientists will point out that Lab exaggerates the distances in the yellows and underestimates the distances in the blues. Lab assumes that hue and chroma can be treated separately. There's an issue where hue lines bend with an increase in saturation, perceived by viewers as an increase in both saturation and a change in hue when that's really not supposed to be occurring. Further, according to Karl Lang, there is a bug in the definition of the Lab color space. If you are dealing with a very saturated blue that's outside the gamut of, say, a printer, when one uses a perceptual rendering intent, the CMM preserves the hue angle and reduces the saturation in an attempt to make a less saturated blue within this gamut. The result is mathematically the same hue as the original, but it ends up appearing purple to the viewer. This is unfortunately accentuated with blues, causing a well-recognized shift towards magenta. And as I alluded to above, it's important to keep in mind that the Lab color model was defined way back in 1976, long before anyone had thoughts about digital color management.
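To make the hue-angle-preserving compression concrete, here is a minimal sketch (the Lab numbers are illustrative, not from any real device, and this is not any actual CMM's code): reducing chroma at a fixed hue angle in LCh leaves the hue number untouched, which is exactly why the purple appearance of the result is a perceptual problem rather than a mathematical one.

```python
import math

def lab_to_lch(L, a, b):
    """Lab -> LCh: lightness, chroma, hue angle in degrees."""
    return L, math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360

def lch_to_lab(L, C, h):
    hr = math.radians(h)
    return L, C * math.cos(hr), C * math.sin(hr)

# A very saturated blue (illustrative Lab values).
L0, C0, h0 = lab_to_lch(30.0, 68.0, -112.0)

# Hue-preserving gamut compression: scale chroma down, keep h fixed,
# which is the operation described above for a perceptual rendering intent.
L1, a1, b1 = lch_to_lab(L0, 0.6 * C0, h0)
_, C1, h1 = lab_to_lch(L1, a1, b1)
print(f"before: C={C0:.1f} h={h0:.1f}   after: C={C1:.1f} h={h1:.1f}")
```

The hue angle is numerically identical before and after, yet viewers report the compressed blue as purple: numeric hue constancy is not perceptual hue constancy.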
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 27, 2010, 10:09:59 pm
Quote from: bjanes
No one is saying that ProPhotoRGB is not suitable for editing images.

It most certainly can be (it's not perfect and there are caveats). For one, you can define “colors” with numeric values that we can’t see (hence, they are not colors).
Title: Correcting images in LAB vs RGB
Post by: joofa on February 27, 2010, 11:54:43 pm
Quote from: digitaldog
Most color scientists will point out that Lab exaggerates the distances in the yellows and underestimates the distances in the blues. Lab assumes that hue and chroma can be treated separately. There's an issue where hue lines bend with an increase in saturation, perceived by viewers as an increase in both saturation and a change in hue when that's really not supposed to be occurring.  ..... The result is mathematically the same hue as the original, but it ends up appearing purple to the viewer. This is unfortunately accentuated with blues, causing a well-recognized shift towards magenta.

Andrew, there has been progress in resolving some of these issues, e.g. by warping Lab space to correct for non-linearity in hue. Please see the following reference, for example:

G.J. Braun, F. Ebner, and M.D. Fairchild, "Color Gamut Mapping in a Hue-Linearized CIELAB Color Space," Proc. of IS&T/SID 6th Color Imaging Conference, Scottsdale, pp. 163-168 (1998).
Title: Correcting images in LAB vs RGB
Post by: Peter_DL on February 28, 2010, 07:02:56 am
Lab became less interesting for me (years ago)
when I discovered the RGB-based HSL blend modes in Photoshop
i.e. Luminosity and Saturation.

But then came Simon Tindemans (http://21stcenturyshoebox.com/essays/color_reproduction.html) and his HS-L*/Y curves.
It’s not a Color Appearance Model but still one step ahead compared to current technology – IMO.

Peter



Title: Correcting images in LAB vs RGB
Post by: ErikKaffehr on February 28, 2010, 07:47:14 am
Quote from: ErikKaffehr
Hi,

Outputting to sRGB removes the colors falling outside the sRGB gamut. The advantage is that you keep as much information as possible. Also, whatever you do in ACR or Lightroom (which share the same processing engine) is guaranteed to be done in the right order, at least according to the views held by Adobe.

Actually, there are a few other issues. ACR/LR does not use ProPhoto RGB; it uses "ProPhoto primaries in a linear space". So it has the same gamut as ProPhoto RGB but no gamma (or, rather, a gamma equal to one).

In most cases the differences will be subtle. Dan Margulis, a well-known authority on image processing, holds the view that 16-bit processing is not needed. Most other image processing experts say that using more bits is beneficial. The way I see it, it's a good approach to keep as much information as possible. Of course, having a parametric workflow based on raw images essentially means that nothing is lost, except in the final stage, when a picture is processed for its intended use.

Best regards
Erik
Title: Correcting images in LAB vs RGB
Post by: ErikKaffehr on February 28, 2010, 07:48:58 am
Hi,

I didn't find the info I was thinking about, so I modified my original posting.


Quote from: ejmartin
And those are...?
Title: Correcting images in LAB vs RGB
Post by: EsbenHR on February 28, 2010, 08:26:13 am
Well, we know L*a*b*, in its original form, sucks as a color appearance model.
We use it anyway in all kinds of settings it was not designed for, mostly due to lack of something better.

I doubt any color space (existing or future) designed to retain a simple relationship between the number of just-noticeable differences between any two colors and a Euclidean distance can simultaneously work well as an appearance model for editing.
Also, I see no reason one should exist.

It is a bit like making a flat map of the earth: you can preserve lengths, angles or areas but you can not get it all simultaneously.

What we lack, in my opinion, is a dataset that can be used to create a color-space suitable for a given application.
Unfortunately, that is a huge and expensive job to do right. It would likely take hundreds of test subjects (including "enough" people with the common types of color blindness) and a huge amount of time to exhaustively test the entire visual range for sensitivity and color matching under various relevant viewing conditions.

Until we have such a (public) data set, I think the choice of color-space remains a subjective choice where any choice is arguably right.
Title: Correcting images in LAB vs RGB
Post by: BernardLanguillier on February 28, 2010, 10:04:16 am
Quote from: Ishmael.
I've recently discovered the power of the LAB color space and it almost seems too good to be true.I'm sure certain adjustments should be done in RGB, but for the most part I am getting significant more powerful images out of LAB than RGB. Given that I'm relatively new to photoshop I would really like to know from you veterans out there why someone would correct an image in RGB instead of LAB.

There is little doubt that a conversion from RGB to LAB and back is an entropic process that results in less color information (some pixels that initially carried different color values end up carrying the same value). The only question of importance, though, is whether this is really an issue on actual photographs.
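The round-trip loss is easy to check numerically. A rough sketch, assuming the common sRGB/D65 conversion constants and a Photoshop-style 8-bit Lab encoding (L scaled to 0-255, a and b offset by 128); it counts how many distinct sRGB values survive as distinct 8-bit Lab codes on one slice of the RGB cube:

```python
def srgb_to_lab(r8, g8, b8):
    """8-bit sRGB -> CIELAB (D65 white), via linear RGB and XYZ."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r8), lin(g8), lin(b8)
    # Standard sRGB (D65) to XYZ matrix.
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(X / 0.95047), f(Y / 1.0), f(Z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_bytes(L, a, b):
    """Quantize Lab to one byte per channel, as an 8-bit Lab file must."""
    return round(L * 255 / 100), round(a) + 128, round(b) + 128

seen = set()
total = 0
for r in range(256):          # one slice of the RGB cube (blue held at 128)
    for g in range(256):
        seen.add(lab_bytes(*srgb_to_lab(r, g, 128)))
        total += 1
print(total, len(seen))  # distinct sRGB inputs vs distinct 8-bit Lab codes
```

In the shadows especially, neighboring sRGB values collapse onto the same 8-bit Lab code, which is the entropy Bernard describes; a 16-bit Lab encoding makes the effect far smaller.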

There are some things that the LAB space is very useful for, one of which is improving the color separation in mostly monochromatic images with dominantly reddish or greenish tints. Some will argue that the same can be done in RGB, but I have yet to find a method as fast and straightforward as a symmetric steepening of the curve in the a or b channels in LAB color space. This being said, it should be used mostly on 16-bit images and is better kept as one of the last steps in a workflow before printing.
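A minimal sketch of the symmetric a/b steepening described above, assuming pixels already converted to Lab. A plain linear scale about neutral stands in for the curve, and the factor 1.3 is an arbitrary choice:

```python
def steepen_ab(L, a, b, k=1.3):
    """Symmetrically steepen the a and b curves about neutral (a = b = 0).

    A linear scale is the simplest symmetric steepening: neutrals stay
    neutral (so no color cast appears), while values on either side of
    zero are pushed apart, increasing color separation.
    """
    clamp = lambda v: max(-128.0, min(127.0, v))
    return L, clamp(k * a), clamp(k * b)

# Two pixels of a mostly monochromatic greenish area: their small a*
# difference (3 units) grows to about 3.9, separating them further.
p1 = steepen_ab(50.0, -6.0, 4.0)
p2 = steepen_ab(50.0, -9.0, 5.0)
print(p1, p2)
```

In Photoshop the equivalent move is a curve on the a or b channel rotated symmetrically about the center point; doing it on 16-bit data, as Bernard suggests, avoids visible posterization.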

For some reason that still eludes me, it seems that some people think it is a photographer's duty to choose a camp for or against LAB... as if we didn't have enough of that with Canon and Nikon discussions.

Cheers,
Bernard
Title: Correcting images in LAB vs RGB
Post by: bjanes on February 28, 2010, 10:36:41 am
Quote from: digitaldog
It most certainly can be (it's not perfect and there are caveats). For one, you can define “colors” with numeric values that we can’t see (hence, they are not colors).
Of course, the apices of the ProPhotoRGB color triangle have to be well outside the CIE horseshoe so that the visible colors can be encoded. This leads to encoding inefficiency, but memory and disc space these days are cheap, and this inefficiency is not a significant disadvantage. In L*a*b, the a and b values also extend beyond the visible range. In both systems, injudicious editing can result in imaginary colors.

A more fundamental problem with an RGB space is that it does not take into account how information is processed in the retina and the brain. The retina has three types of cone photoreceptors roughly corresponding to red, green and blue (ignoring tetrachromats for the time being), and this is the basis for the Young-Helmholtz theory of color vision. Opponent processing occurs in the neural network of the retina and in the brain, as described by Hering's theory of color vision: red opposes green and yellow opposes blue. This opponency is taken into account in the L*a*b model. How this relates to color processing in editing is not immediately apparent, and perhaps others can comment.

It was formerly thought that opponency was hard wired in the retina, but recent studies (see Scientific American, February 2010) have shown that opponency can be overcome in certain situations and red-green and blue-yellow can be perceived as new colors. How would these colors be expressed in current models?

Hue twists with changes in luminance in L*a*b are well known and must be taken into account in any color appearance model. Bruce Lindbloom has published a uniform perceptual Lab space using lookup tables. RGB spaces use a power curve (gamma) for perceptual uniformity of luminance. The L* tone curve of L*a*b is designed to be perceptually uniform, and an exponent of 2.2 for a power curve most closely approaches the L* curve; the exponent of 1.8 used for ProPhotoRGB is suboptimal (see Bruce Lindbloom's companding calculator). Computing power is now sufficient that one can edit in one color space and have the results on the screen and in the info palette simulate another space. For example, in Lightroom the working space is linear ProPhoto RGB, but the RGB values and screen preview use ProPhoto primaries with an sRGB tone curve. The actual working space may become less relevant, and its imperfections can be corrected in the color appearance model. So far as I know, most CAMs use L*a*b or a similar CIE space as the reference space (not ProPhotoRGB).
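The claim that a 2.2 power curve tracks L* better than 1.8 is easy to verify numerically. A quick sketch (the 0.008856 threshold and 903.3 slope are the standard CIE linear-segment constants):

```python
def Lstar(y):
    """CIE lightness L* (0..100) for relative luminance y in [0, 1]."""
    return 116 * y ** (1 / 3) - 16 if y > 0.008856 else 903.3 * y

def max_dev(gamma, n=1000):
    """Worst-case gap between L*/100 and y**(1/gamma) over a luminance ramp."""
    return max(abs(Lstar(i / n) / 100 - (i / n) ** (1 / gamma))
               for i in range(n + 1))

for g in (1.8, 2.2):
    print(f"gamma {g}: max deviation from L* = {max_dev(g):.4f}")
```

Sampling the ramp shows the 2.2 curve sits noticeably closer to L* than the 1.8 curve does, which is the point of Lindbloom's companding comparison.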
Title: Correcting images in LAB vs RGB
Post by: bjanes on February 28, 2010, 10:41:16 am
Quote from: bjanes
Of course, unmentioned in this discussion, is the fact that ProPhotoRGB is not a color appearance model either.

Quote from: digitaldog
No, its not, its an RGB working space (which apparently isn’t obvious).
And L*a*b is a reference color space, not a CAM, which should be equally obvious, but is ignored by you.
Title: Correcting images in LAB vs RGB
Post by: digitaldog on February 28, 2010, 12:50:42 pm
Quote from: bjanes
And L*a*b is a reference color space, not a CAM, which should be equally obvious, but is ignored by you.

I don’t recall implying that Lab was a CAM...
Title: Correcting images in LAB vs RGB
Post by: BernardLanguillier on February 28, 2010, 05:12:21 pm
Quote from: DPL
Lab became less interesting for me (years ago)
when I discovered the RGB-based HSL blend modes in Photoshop
i.e. Luminosity and Saturation.

But then came Simon Tindemans (http://21stcenturyshoebox.com/essays/color_reproduction.html) and his HS-L*/Y curves.
It’s not a Color Appearance Model but still one step ahead compared to current technology – IMO.

Thanks for the link.

Cheers,
Bernard
Title: Correcting images in LAB vs RGB
Post by: Hening Bettermann on March 01, 2010, 09:16:19 am
Hi Cliff

This CIECAM02 plug-in sounds very interesting to me. On your site, I read

"Another motivation to create the plug-in was to explore the use of CIECAM02 as a perceptually-uniform image editing space. For example, it is often desirable to be able to increase the colorfulness of an image, uniformly for all hues (see Evans), and without causing hue shifts."

Does this mean that you have implemented Bruce Lindbloom's Perceptually Uniform Lab space?

Kind regards - Hening.
Title: Correcting images in LAB vs RGB
Post by: crames on March 01, 2010, 06:23:48 pm
Quote from: Hening Bettermann
Does this mean that you have implemented Bruce Lindbloom's Perceptually Uniform Lab space?
No, the Lindbloom space has been implemented by him as a color profile (for a fixed viewing condition).

CIECAM02 is something else: a comprehensive color appearance model, which predicts how colors look under different viewing conditions. (Link to the CIE) (http://www.colour.org/tc8-01/) It is much more uniform than CIELAB, probably about as uniform as Lindbloom's profile - blues do not turn purple, etc.

Unfortunately it can be 10 times more complicated than regular color management, which is not a good thing. But it has the potential to solve some sticky problems for photographers, like screen-to-print matching.
Title: Correcting images in LAB vs RGB
Post by: papa v2.0 on March 03, 2010, 01:08:22 pm
CIELAB was developed as a color space to be used for the specification of color differences on reflective media. It was not exactly perceptually uniform, and this non-uniformity was addressed in subsequent colour difference formulae.
I don't think it was designed with image editing in mind. Hence the problem.
CIELAB is, however, used in the ICC profile format as one of the Profile Connection Spaces.

CIECAM02 is a more perceptually uniform colour space and is used for gamut mapping. It is quite complicated to use, requiring several input and output parameters. I have been using it in a RAW image pipeline with reasonable results. My approach is to use a camera-sensor RGB to XYZ matrix (optimised by error reduction in CIECAM02 space) to achieve accurate scene-referred colorimetry before passing to CIECAM02.
I have not, however, used it as an editing space, although it wouldn't be too hard to implement or to design an interface for it. Cliff Rames has produced a good CIECAM02 plug-in for Photoshop, as mentioned earlier in this thread.
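The matrixing step described above amounts to a single 3x3 multiply per pixel. A sketch, with hypothetical placeholder matrix values (a real matrix comes from characterizing the sensor, optimised as described):

```python
# Hypothetical camera-RGB -> XYZ matrix; real coefficients come from
# sensor characterization, not these placeholder values.
M = [(0.41, 0.36, 0.18),
     (0.21, 0.72, 0.07),
     (0.02, 0.12, 0.95)]

def camera_rgb_to_xyz(rgb):
    """Apply the 3x3 matrix to one linear, scene-referred RGB triple."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in M)

X, Y, Z = camera_rgb_to_xyz((0.2, 0.5, 0.1))
print(X, Y, Z)
```

The resulting XYZ values, together with the scene viewing conditions, are what CIECAM02 takes as input.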

This can also now be done using ACR (see Creating scene-referred images using Photoshop CS3 (http://www.color.org/scene-referred.xalter)) and then passing the result to the CIECAM02 plugin. The scene viewing conditions would then need to be entered.

CIECAM02 has several colour spaces, for example:

JMh: Lightness, Colourfulness and Hue
JCh: Lightness, Chroma and Hue
QMh: Brightness, Colourfulness and Hue
QCh: Brightness, Chroma and Hue
Jab: Lightness, redness-greenness and blueness-yellowness

It also has correlates for saturation.

CIECAM02 is used as a gamut mapping space and might yet become an ICC PCS.
Title: Correcting images in LAB vs RGB
Post by: Hening Bettermann on March 03, 2010, 04:51:23 pm
Hi papa v2.0!

Thank you very much for your post and this highly interesting link. I am very eager to explore this, substituting Raw Developer for ACR. If I find the learning curve of CIECAM02 too steep, I may just use the profiles and Simon Tindemans' Tonability plug-in to begin with.
http://21stcenturyshoebox.com/tools/tonability.html (http://21stcenturyshoebox.com/tools/tonability.html)

Good light! - Hening.