When Adobe decided to improve its revenues by converting purchasers into renters earlier this month, I found myself wondering what I would do if I had the opportunity to create an alternative to Photoshop.
The question has been percolating in my head for a while now, and when I discovered this thread, I decided to try sharing some of my thoughts, despite the conventional wisdom that it is pointless to post into a thread that is already several pages long.
By way of background, I’ve been using Photoshop for a very long time. As much as I like Lightroom, I use Photoshop at some point in the production of almost all my exhibition images. My technical expertise is in software development and digital signal processing. I was pretty interested in Live Picture when it was introduced, until I saw the price.
If I could, I’d want to hire about six particular people from Adobe for the project (naturally, that would include Mr. Knoll, and a few other key people). That would mean total clean-room development, but starting with a fresh slate might not be such a bad thing. There’s a lot of well-thought-out stuff in Photoshop, so building something better would be far from trivial – but doable with the right people and resources, IMHO.
When I edit images, I think in terms of regions of interest, and transformations on them.
Photoshop has much more precise ways to select a region of interest than Lightroom does, which is where much of its appeal lies for me. In Photoshop, selections can be either vector-based or mask-based. Some tasks lend themselves to one kind of selection, and some to the other. Either way, the partial-selection (variable-transparency) concept is very nice.
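To make the partial-selection idea concrete, here’s a minimal sketch (the function names are mine, not anything from Photoshop): a mask holds a per-pixel selection strength between 0.0 and 1.0, and any operation applied through it is blended with the original in proportion to that strength.

```python
import numpy as np

def feathered_circle(h, w, cy, cx, r, feather):
    """Build a partial-selection mask: 1.0 inside the circle, 0.0 outside,
    with a smooth ramp of width `feather` pixels at the boundary."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - cy, xs - cx)
    return np.clip((r + feather - dist) / feather, 0.0, 1.0)

def apply_selected(image, mask, op):
    """Blend op(image) with the original, weighted per-pixel by the mask."""
    out = op(image)
    return mask[..., None] * out + (1.0 - mask[..., None]) * image
```

A fully selected pixel gets the operation at full strength, an unselected pixel is untouched, and the feathered edge gets a proportional mix – which is all “50% selected” really means.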
Smart filters in Photoshop and adjustment brushes in Lightroom have the desirable ability to be modified at any point, which is something I’d like to generalize to all image operations in a Photoshop replacement.
The layer paradigm seems natural to me, perhaps because I think of layers in terms of the very efficient bitblt operations performed between them. I’ve noticed, though, that a surprising number of people don’t find them intuitive.
Whether layers are intuitive or not, one of the things that almost everyone does in Photoshop from time to time is “copy merged to new layer”, which breaks the ability to treat the layers beneath the new layer as being editable. I’d like to make that unnecessary.
I’ve been thinking of an alternative approach, in which regions of interest don’t have to fit into the layer stack approach. Perhaps you’re editing a photo of someone, and you are adding local contrast to her left iris. You might want to do a few things to the same selection.
I’d like to be able to define “left iris” as a named selection, including a bit of feathering around the edges, and then be able to perform one or more transformations on that selection. The selection and its transformations would be one “thing” in a collection of such things. Perhaps they would look like a layer stack, as they do in Photoshop, but they wouldn’t have to. They could be items in a list, nodes in a node tree, pages in a book, or something entirely new. Whatever they looked like, you could presumably click on something, immediately see what part of the image it affected, and be able to modify any of the transformations that had been made on it. Lightroom sort of takes this approach, but it doesn’t have a useful way to organize the adjustments to an image, so it bogs down after a while: there are just too many pins on the image to tell what’s what.
I’d like the selection and transformations to survive through other operations that happened to include the same region of interest. For example, if I were to slightly enlarge the entire eye, I’d like the transformations on the iris to remain editable. If I had removed distracting reflections in a few windows in a photo of a building, and then applied a transformation on the whole building to straighten its vertical lines, I’d still like to be able to modify the transformations on the individual windows.
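The way I imagine this surviving later edits is that a geometric operation like “enlarge the eye” is itself stored parametrically, and its mapping is pushed down onto the stored geometry of earlier selections instead of rasterizing them. A toy sketch of the eye-enlargement case, with made-up coordinates:

```python
def scale_about(point, pivot, s):
    """Map a point under a uniform scale of factor s about a pivot."""
    px, py = pivot
    x, y = point
    return (px + s * (x - px), py + s * (y - py))

# The "left iris" selection stores its geometry, not a bitmap:
iris_center = (120.0, 80.0)

# "Enlarge the whole eye by 5%" becomes another parametric node; the
# iris selection is re-mapped through it and stays fully editable:
eye_pivot = (118.0, 79.0)
iris_center_after = scale_about(iris_center, eye_pivot, 1.05)
```

Because the iris selection was never flattened into pixels, its contrast adjustment can still be reopened and changed after the enlargement, and the straightened-building example works the same way with a perspective mapping in place of the scale.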
I don’t see any reason that operations we’ve come to think of as being pixel-oriented have to be stored as bitmaps. Filters such as Liquify and content-aware healing can still be thought of as being essentially parametric.
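For instance, a heal can be recorded as nothing but its parameters (source, destination, radius) and replayed against whatever the current pixels happen to be. A minimal sketch, with a hypothetical parameter format:

```python
def replay_heal(image, heal):
    """Re-apply a stored heal: copy a square patch from src to dst.
    `heal` is pure parameters -- no baked-in pixels."""
    (sy, sx), (dy, dx), r = heal["src"], heal["dst"], heal["radius"]
    out = [row[:] for row in image]
    for y in range(-r, r + 1):
        for x in range(-r, r + 1):
            out[dy + y][dx + x] = image[sy + y][sx + x]
    return out

grid = [[0] * 5 for _ in range(5)]
grid[1][1] = 9  # a "clean" area to clone from
healed = replay_heal(grid, {"src": (1, 1), "dst": (3, 3), "radius": 0})
```

A real heal blends rather than copies, of course, but the point stands: because only parameters are stored, the operation can be re-run (or re-parameterized) after any upstream edit.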
Naturally, I think of “snapshots” and alternative branches as being intrinsic functionality (think of what a version control system does).
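The version-control analogy is quite literal: each snapshot points at its parent, and a branch is just a named pointer to a snapshot. A bare-bones sketch (class and method names are mine):

```python
class History:
    """Snapshots and branches, VCS-style: states form a parent-linked
    tree, and branches are merely named pointers into it."""
    def __init__(self):
        self.states = {0: {"parent": None, "edit": None}}
        self.branches = {"main": 0}
        self._next_id = 1

    def commit(self, branch, edit):
        """Record an edit as a new state on the given branch."""
        parent = self.branches[branch]
        sid = self._next_id
        self._next_id += 1
        self.states[sid] = {"parent": parent, "edit": edit}
        self.branches[branch] = sid
        return sid

    def fork(self, new_branch, from_branch):
        """A new branch is just another pointer to an existing state."""
        self.branches[new_branch] = self.branches[from_branch]
```

So trying a black-and-white treatment on a fork costs one dictionary entry, not a duplicated document, and the color version on “main” is untouched.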
For the sake of efficiency, it might very well be that the software would have to create a series of bitmaps as part of its image-processing pipeline (I think that it would) – but I see that as an efficiency optimization, rather than as a necessary part of the visual paradigm. So even if the internal representation were something like bitmaps and bitblt operations between them, you’d never have a need to make an explicit new bitmap of a particular state via “Merge Visible” and its variants.
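That division of labor – parametric edits as the model, cached bitmaps as a hidden optimization – can be sketched in a few lines. This is a toy linear pipeline (it assumes a fixed source image between invalidations); editing one stage silently discards the cached bitmaps downstream of it, which is exactly the bookkeeping that “Merge Visible” forces the user to do by hand:

```python
class Pipeline:
    """Parametric edits in order; per-stage bitmaps are cached
    internally, purely as an optimization the user never sees."""
    def __init__(self, ops):
        self.ops = list(ops)
        self.cache = [None] * len(ops)

    def render(self, source):
        img = source
        for i, op in enumerate(self.ops):
            if self.cache[i] is None:      # recompute only stale stages
                self.cache[i] = op(img)
            img = self.cache[i]
        return img

    def edit(self, i, new_op):
        """Re-parameterize one stage; everything downstream goes stale."""
        self.ops[i] = new_op
        for j in range(i, len(self.ops)):
            self.cache[j] = None
```

From the user’s point of view there is only the editable list of operations; the bitmaps come and go behind the scenes.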