I've only scratched the surface so far, but I'm favorably impressed.
The sky and subject masks—the new "A.I." features (i.e., based on machine learning)—offer a quick head start for making selections. Sometimes they nail the selection automatically; other times you need to refine it manually, for example with the brush tool, but that's also true in Photoshop.
An even greater benefit of the new masking design, I think, is that it lets you build a sophisticated stack of non-destructive local adjustments that can be quite complex yet remain easy to work with. You create these sequentially, and prior selections can become the basis for modified or intersecting subsequent ones. Describing it makes it sound more complicated than it is in practice.
(When I say the local adjustments are sequential, I'm referring to the dependency relationship among them, not to how the edits are actually applied when you export an image. For example, you can select the sky, then apply a gradient to part of the sky, then make adjustments only to the part of the sky affected by the gradient. So the edits are additive from your perspective; presumably Lightroom applies them according to its own internal logic when you export or print the image.)
Another feature I really like is the ability to attach your own label to each selection in the stack. That facilitates moving from one selection to another in order to tweak each one as you work toward the final appearance you want.
I've attached an example of an image that I edited using only local adjustments. The first attachment shows the appearance of the imported raw file in Lightroom's Develop module. The stack of selections is visible in the upper right corner of the second attachment. You read it from bottom to top:
- First, I used Select Subject to select only the heron. Lightroom made a perfect selection automatically. I then inverted it, so the selection included everything except the heron, and labeled it "background."
- Next, I used a Luminance Range to select just the brightest parts of the waterfall (labeling the selection "waterfall"), then edited that selection to bring out the detail in that part of the water.
- The next step was to create a selection so I could pull up the shadows in the rocks. I used another Luminance Range for that.
- Next, I used Select Subject again, but didn't invert it, so I could make some minor adjustments to the heron.
- Finally, I wasn't satisfied with the detail in the water below the falls at the lower left of the image, so I used a brush to select it and made some adjustments to that area.
After I had made initial edits for each of the selections, I went back to tweak several of them a bit. It only dawned on me after the fact—I was playing with this file to figure out how the new masking feature worked—that I never made any global adjustments to the image. Also, despite the use of machine learning to select and invert the subject, all the adjustments were non-destructive and could be modified at any time.
The entire process took about five minutes. When I originally edited this image back in 2012, I probably spent several hours working on it in Photoshop.