
Author Topic: If Thomas designed a new Photoshop for photographers now...  (Read 186663 times)

walter.sk

  • Sr. Member
  • ****
  • Offline
  • Posts: 1433
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #200 on: May 13, 2013, 10:33:11 am »

This is a joke, right? You are going to define what a PHOTOGRAPHER is for the rest of us? Who appointed you guru? Ever seen a straight print of Ansel Adams' "Moonrise"? One of the most famous photographs in history - by your definition he's not a photographer, because he manipulated the crap out of it. Ever heard of Jerry Uelsmann, a very important figure in the history of PHOTOGRAPHY?
+1

hjulenissen

  • Sr. Member
  • ****
  • Offline
  • Posts: 2051
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #201 on: May 13, 2013, 10:56:58 am »

Jeff, let me work through an example to make sure that what I'm saying is clear. More than occasionally, but not often, I find that it's not practical to do what I want to do in Ps. For that image processing, I write Matlab code. I start out with images in 16-bit TIFF files, so when they come into Matlab, they are 16-bit gamma-compressed unsigned integers. Because I'm lazy and I want to get the most out of my time spent programming, I immediately convert them to 64-bit linear floating point representation. That way I don't have to worry about overflow or underflow, or the loss in precision that can occur when, for example, subtracting one large number from another to yield a small number.

I use objects, sometimes one with several methods, sometimes many. I think of the objects as analogous to layers: they take an image, group of images, or part of an image in, do something to it, and leave something for the next object. The methods are parameterized, so I don't have to start all over to tweak the algorithms, but that can't be dispositive, because with smart objects I can tweak layer settings in Ps. The order of operations is rigidly defined by the flow of the programming.

I think of what I'm doing as pixel processing.

Am I wrong?

Jim
MATLAB is a perfect example of an expressive scripting language capable of expressing any imaginable image processing operation. Since the native datatype in MATLAB is double-precision floats, that is the obvious choice for processing in MATLAB (other datatypes are possible, but less neat).

So in principle, both Lightroom and Photoshop could (I guess) be reduced to fancy, snappy, interactive GUI front-ends (something that MATLAB blows at) that have as output a set of MATLAB (or MATLAB-like) instructions that can be interpreted by MATLAB (or the open-source Octave) to transform an image. Chances are good that (many of) the Adobe R&D image processing people use MATLAB to prototype new algorithms. I guess that most algorithms are fundamentally discrete approximations to continuous ideal behaviour, although table lookups and the like can also be done.
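To make the idea concrete, here is a minimal sketch of what such an exported "recipe" might look like, using only base MATLAB/Octave; the file names, gamma value, and adjustment parameters are all invented for illustration:

% Hypothetical "recipe" a GUI front-end might emit: an ordered list of
% parameterized operations, each a pure function from image to image.
img = double(imread('input16.tif')) / 65535;   % 16-bit TIFF -> doubles in [0,1]
img = img .^ 2.2;                              % undo gamma; work in linear light

recipe = { ...
    @(x) 1.10 * x, ...                         % exposure bump
    @(x) 0.5 + 1.20 * (x - 0.5), ...           % contrast about middle gray
    @(x) x(end:-1:1, :, :) ...                 % vertical flip (a geometric op)
};

for k = 1:numel(recipe)                        % the "interpreter"
    img = recipe{k}(img);
end

img = min(max(img, 0), 1) .^ (1 / 2.2);        % clip, re-encode gamma
imwrite(uint16(round(img * 65535)), 'output16.tif');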

So why is this a bad idea for a product? It would probably be painfully slow: doing generic vector/matrix calls to a double-precision library is never going to be as fast as (potentially) hand-coded SSE/AVX vectorized intrinsics/assembler/Intel libraries for integer 8/16-bit datatypes, where the implementer knows just how much error can be tolerated.



-h

keithrsmith

  • Full Member
  • ***
  • Offline
  • Posts: 118
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #202 on: May 13, 2013, 11:16:23 am »

One question that needs to be thought about is: "What would Adobe do if a new 'Photoshop' appeared?"

I think they would quickly rush out an "Enhanced Elements" with the important missing things added (see this thread for suggestions) -
16-bit, all adjustment layers, all colour spaces, etc. - and sell it at an attractive price, which would effectively kill off the competition.

I believe that the main mistake Adobe has made in this whole Cloud issue is not having a standalone Photoshop. This is the one app out of the whole suite that seems to be causing the most issues - the main one that I can see being the fear of not being able to revisit PSD files created by the latest, greatest version once your subscription has lapsed.
It is also the app that many part-time and amateur users have, and for which there is no easy alternative. For almost all of the other apps - video, audio, ... - there are viable alternatives, and the market is predominantly professional; plus it is IMO much less likely that old projects will be revisited in the way that old PSDs may be.

Let's hope Adobe sees sense and reinstates a perpetual-licence Photoshop - an enhanced Elements would do.

Keith

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #203 on: May 13, 2013, 11:19:05 am »

So why is this a bad idea for a product? It would probably be painfully slow, doing generic vector/matrix calls to a double-precision library is never going to be as fast as (potentially) hand-coded SSE/AVX vectorized intrinsics/assembler/Intel libraries for integer 8/16-bit datatypes where the implementer knows just how much error can be tolerated.

I was using the example as a way to crystallize the discussion, not as a concrete product proposal, but thanks for bringing practicality into the picture.

I don't think doing intermediate calcs in FP (maybe not DP FP, but FP) is necessarily impractical. We are seeing a proliferation of DSP-derived processors on graphics adapters. Many of those processors support FP, and there is a trend to make the results of calculations available to programs running in the main processors. Indeed, you can buy add-in cards that do DSP-like processing that have no connection to a display; they're expensive and power hogs, but that should change. Image processing is relatively easily parallelized.
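As a rough illustration (not a claim about how any shipping product does it): MATLAB already lets you push floating-point pixel math onto a graphics card and pull the result back to the host, assuming the Parallel Computing Toolbox and a supported GPU:

% Do single-precision FP math on the GPU, then bring the result back
% to host memory for the rest of the pipeline (PCT + supported GPU assumed).
img  = single(rand(4000, 6000, 3));   % stand-in for a ~24 MP linear image
g    = gpuArray(img);                 % host -> GPU
g    = sqrt(g);                       % element-wise FP op executes on the GPU
host = gather(g);                     % GPU -> host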

Doubling or quadrupling the precision of representation will cause image processing programs to want more memory, but that's getting cheaper all the time. (I somewhat sheepishly admit to buying a machine with 256 GB of RAM for Matlab image processing.)

Another thing that could ameliorate the inherently slower speed of this intermediate image processing compared to custom-tweaked code: a lot can be done in the background. In order for a program to feel crisp to the user, all that's necessary is to update the screen fast. The number of pixels on the screen is in general smaller than the number in the file, so there's less processing to keep the screen up to date than to render the whole file. Just in case the user decides to zoom in, the complete image should be computed in the background. This also avoids an explicit rendering step, which could be an annoyance for the user.
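A sketch of that screen-first split, using naive decimation for the proxy (details invented; a real implementation would prefilter before decimating and would run the full-resolution pass asynchronously, e.g. via parfeval):

% Run the edit chain on a screen-sized proxy first; recompute the
% full-resolution result later, ideally on a background worker.
pipeline = @(x) min(max(0.5 + 1.3 * (x - 0.5), 0), 1) .^ (1/2.2);  % placeholder edits

full  = double(imread('input16.tif')) / 65535;
step  = ceil(size(full, 1) / 1080);        % decimate to roughly screen height
proxy = full(1:step:end, 1:step:end, :);   % cheap proxy, no prefilter (sketch only)

preview = pipeline(proxy);                 % fast: draw this immediately
final   = pipeline(full);                  % slow: do this in the background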

All this background/foreground stuff makes life harder for the programmers. On the other hand, think of the time they'll save not tweaking code.

Blue sky, right?

Jim

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #204 on: May 13, 2013, 11:24:48 am »

It would probably be painfully slow, doing generic vector/matrix calls to a double-precision library is never going to be as fast as (potentially) hand-coded SSE/AVX vectorized intrinsics/assembler/Intel libraries for integer 8/16-bit datatypes where the implementer knows just how much error can be tolerated.

The thing that Jeff said that started me on this way of thinking was -- paraphrasing -- you don't want to do in a pixel processor what you can do in a parametric processor, partly because the limited precision of the pixel processor can damage the image. That implies that the implementer doesn't always know just how much error can be tolerated.

Jim
« Last Edit: May 13, 2013, 11:34:00 am by Jim Kasson »

Ronald Nyein Zaw Tan

  • Guest
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #205 on: May 13, 2013, 12:29:24 pm »

As a portraitist specializing in men's fashion and beauty photography, I need Liquify and Puppet Warp. I use Calculations and Apply Image to manipulate luminosity masks and use them creatively to address tonalities and shape in my photographs of men. I need Gaussian Blur and the High Pass filter. I could live without the Custom filter and "deconvolution sharpening." Come to think of it, I use the tools and commands in Photoshop depending on what kind of image I am working on. The Content-Aware tools in CS6 have saved me time on texture and background repairs on a few occasions.

Get rid of the video, 3D, and 8-bit filters (I cannot use them anyway).

It is OK if this version of Photoshop does not come bundled with ACR. I don't use ACR. For raw processing, I am using PhaseONE CaptureONE PRO 7.1.1.
« Last Edit: May 13, 2013, 12:32:35 pm by Ronald Nyein Zaw Tan »

32BT

  • Sr. Member
  • ****
  • Offline
  • Posts: 3095
    • Pictures
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #206 on: May 13, 2013, 12:34:41 pm »

The problem is not so much the precision of the rendering pipeline; the problem is stacking.

Especially if one of the steps in the stack involves blur in one way or another (think USM, local contrast enhancement, etc.). In an interpreted pipeline this would not only increase sampling requirements disproportionately (even exponentially), but it would also disrupt the parallelism of the graphics card's internals. Additionally, some of the newer sharpening techniques rely on iteration. If you want to implement those types of functions, it becomes progressively problematic if the entire pipeline is interpreted.
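For concreteness, iteration of the kind mentioned here might look like repeated unsharp masking, where each pass depends on a blur of the previous pass; that dependency is exactly why the steps can't be collapsed into a single lookup. A sketch for a single-channel image, with kernel size, amount, and pass count invented:

% Repeated USM: every pass needs the blurred output of the previous one.
x  = -3:3;
gk = exp(-(x .^ 2) / (2 * 1.5^2));
gk = gk' * gk;  gk = gk / sum(gk(:));          % 7x7 Gaussian kernel, sigma 1.5

img = double(imread('gray16.tif')) / 65535;    % grayscale input assumed
for k = 1:3                                    % three dependent passes
    blurred = conv2(img, gk, 'same');
    img     = min(max(img + 0.6 * (img - blurred), 0), 1);
end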

Secondly, you also want to determine what effect the user expects to see when they change some previous step.
If they use a parametric brush on some particular location in the image, and they then decide to turn on lens corrections, or apply a perspective correction, what should happen to the position (and shape) of the brush stroke? And what if they stack images for panorama stitching and then do the same?

Note how simple misunderstandings can occur:
If I ask you to "blend" image A and B, do you interpret that as:
1. start with A and blend B on top (not commutative),

or do you interpret that as:
2. create a mix of A and B (commutative).

And what if A and/or B have masks?
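In pixel terms the two readings differ exactly as below (a sketch; alpha stands for A's opacity or mask):

over = @(A, B, alpha) alpha .* A + (1 - alpha) .* B;   % 1: "A over B" - order matters
mix  = @(A, B) 0.5 * (A + B);                          % 2: symmetric mix - order doesn't

A = rand(4); B = rand(4); alpha = 0.7;
isequal(over(A, B, alpha), over(B, A, alpha))          % false in general
isequal(mix(A, B), mix(B, A))                          % true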


And finally, the size requirements for a Photoshop image are usually significantly different from what our graphics cards are currently designed for. Even if hardware improves and becomes cheaper, you should still expect a 10-year time frame, if it happens at all, because graphics cards are built around certain output requirements for gaming, video, and medical imaging. Well, I suppose password hacking could be added, but I'm not sure how that will affect the imaging capabilities of graphics cards.

But any workflow that allows one to go back to previous steps could be called "parametric", and as such, as long as the expectations of the user are reasonable when deciding to redo a previous step, the application could be entirely "parametric". A final result could then be rendered by recomputing the entire chain.





Regards,
~ O ~
If you can stomach it: pictures

Ralph Eisenberg

  • Jr. Member
  • **
  • Offline
  • Posts: 83
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #207 on: May 13, 2013, 01:32:40 pm »

Until the current version of ACR, my primary Raw converter had been Capture One (although I have owned all versions of Lightroom, which I sometimes use for printing). With the release of ACR 7, this has changed, so that I now generally make my conversions via Bridge, with the image opening as a smart object in PS CS6. (As an aside, I followed the upgrade cycles without skipping.) I then have done secondary editing in PS, appreciating the ability to return to ACR to tweak my image when necessary.

I would follow most of the suggestions made above for an image editor, but as is clear, I would hope for some kind of capability which did not rely on Lightroom, unless it would be possible to view images without the need to import them into a Lightroom catalogue. I make use of adjustment layers (and some blending modes) and the ability to make local corrections painting on masks.

I'm very pleased with the sharpening and noise reduction tools in ACR, and with ACR in general, although I do miss some features of Capture One Pro for viewing and selecting Raw images. I certainly appreciate the fact that the Curves tool in ACR works just as it does in PS. The Healing Brush tool, Spot Healing Brush, and content-aware capabilities are very useful to me. For portrait retouching the Liquify filter has been a help. Of course, having the printing and soft-proofing capabilities of Lightroom in this image editor would be a plus, but I have most often gotten by with doing this in PS.
Thanks to Jeff Schewe for starting this thread, and naturally to Michael Reichmann (whose health I hope is improving) for the web site and much more that make this possible.
« Last Edit: May 14, 2013, 02:50:09 am by Ralph Eisenberg »
Ralph

Robert55

  • Jr. Member
  • **
  • Offline
  • Posts: 80
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #208 on: May 13, 2013, 01:39:23 pm »

I don't know if I add much to the discussion by stating this, but I'd add another vote for just adding a few things to LR - mainly compositing (panorama*, HDR, focus stacking, and maybe element removal à la "Statistics").
These are the only reasons I fired up PS in the past year, I think. I personally don't use much pixel editing, partly because I do it worse than parametric editing, partly due to the file size and time penalty involved.

* for panorama stitching, please at least add a module to interactively choose perspective and projection before actual stitching to the Photomerge routines! A tool to add control points, as in more full-featured stitchers such as Hugin or PTGui, would be nice, but is less necessary.

For me, these are the only things I go to PS for nowadays. I'd also like something I'll call "color stacking", for situations where part of your image has a warm and another a cool color temp [like a mountain valley partially in shadow].

rasterdogs

  • Jr. Member
  • **
  • Offline
  • Posts: 92
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #209 on: May 13, 2013, 01:49:08 pm »

I was using the example as a way to crystallize the discussion, not as a concrete product proposal [...] Blue sky, right?

Jim

Does this mean I'd need more powerful hardware?

D Fosse

  • Guest
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #210 on: May 13, 2013, 01:52:13 pm »

Just chiming in to say I'd buy this thing unseen within 30 minutes of announcement.

I agree with everything said so far... ;D

kirkt

  • Sr. Member
  • ****
  • Offline
  • Posts: 604
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #211 on: May 13, 2013, 01:56:03 pm »

I would like to see a "new" version of "photoshop" substantially change the workflow paradigm, whatever the resulting toolset is.  Specifically, I think we, as image processing folks, tend to work on an image sequentially - whatever that sequence is.  Open raw image > make adjustments > send to Photoshop > apply adjustment layers with masks > reduce image size > output sharpen, etc.

Whatever.  The idea is, there is a sequence to the workflow and, often, portions of that sequence require revisiting, revision, branching into a new variation, etc.

I think a node-based workflow, where one can piece together these operations in a logical flow, and revisit, rearrange, preview and create variations, with a real-time preview of any and all node outputs, would be a nice paradigm shift. I would have no problem working on a "smart preview" version of an image, from raw conversion, all the way to output sharpening at final resolution, with the ability to render portions of it all along the node chain to see a 100% res sample to check my work. Once my node chain is set up and I like the preview of the resulting changes, I could render a full-res version. This is pretty standard for many render/modeling applications and video/compositing. There is no reason why 2D image workflow has to be any different.

I think that 2D image workflow could benefit from this approach as well because it would promote variation - just create a branch off of the workflow and develop it separately. It would ease automation - you can visualize your process and simply add an input node as a directory of images in front of your established chain of nodes to batch process images. It could leverage the nascent "Smart Preview" raw technology that appears to be developing for the Cloud sync and smart device editing workflow. This node-based workflow fully preserves the "non-destructive" aspect of editing - the node-based edits are "parametric" until you finally commit to rendering them as full-res output; the original image is untouched, even if you choose to make pixel-based changes - this could be a node where a rendered proxy is part of the workflow, etc. You could add output nodes along the way to render draft images of the stages of the edits, instead of having to save sequential PSDs to potentially have to revisit and revise. The entire creative process is archived and editable - you could have template node structures for commonly used tasks, or commonly shot lighting conditions, looks, etc. You could even save that entire node chain as... you guessed it, a node, for use in other more complex chains - this would be like an action, but more flexible.

Of course, I would hope people could write their own nodes and third-party developers could write all sorts of "plug-ins" (nodes) or adapt currently existing products into a node-based form.  I see Lightroom as a node in this paradigm.
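To make the node idea concrete, a toy chain might be represented like this (a minimal sketch; the node names, operations, and file name are invented, and a real dataflow editor would allow arbitrary graphs, not just a linear chain):

% Each node: a name, an operation, and the index of its upstream node.
% Evaluating the graph = walking nodes in dependency order and caching
% every output, so any intermediate result can be previewed or branched.
nodes(1) = struct('name', 'load',     'op', @(~) double(imread('in.tif')) / 65535, 'src', 0);
nodes(2) = struct('name', 'exposure', 'op', @(x) min(1.2 * x, 1),                  'src', 1);
nodes(3) = struct('name', 'flip',     'op', @(x) x(:, end:-1:1, :),                'src', 2);

out = cell(1, numel(nodes));                 % per-node output cache
for k = 1:numel(nodes)
    if nodes(k).src == 0
        out{k} = nodes(k).op([]);            % source node: no upstream input
    else
        out{k} = nodes(k).op(out{nodes(k).src});
    end
end
final = out{end};                            % out{2}, out{3}, ... are previewable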

I apologize if this has already been mentioned in this thread; I know I am not inventing anything new here. However, if there is to be a new Photoshop, or yet-to-be-named image editor, I think a new workflow approach is in order and would save huge amounts of time and effort in the image processing workflow.

best - thanks Jeff for starting this thread - I appreciate the chance to participate.

kirk
« Last Edit: May 13, 2013, 02:26:12 pm by kirkt »

gerryrobinson

  • Newbie
  • *
  • Offline
  • Posts: 18
    • Gerry Robinson Photography
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #212 on: May 13, 2013, 02:02:02 pm »

Jeff
Great thread!
For me I round trip from LR to Photoshop for the following:
compositing
merge to pano
focus stacking
cloning /healing (content aware)
actions
adjustments via layer masks
sculpting
progressive sharpening

Would love to see stuff like this worked into LR's workflow as seamlessly as possible.
A lot of the cameras out there (especially the ones newer than my 20D) shoot video.
I think video support like CS6 has would be welcome.
If I could just open up LR, work on an image, and not even notice I'd round-tripped anywhere,
that would be my ideal workflow.
Gerry

s4e

  • Newbie
  • *
  • Offline
  • Posts: 37
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #213 on: May 13, 2013, 03:10:13 pm »

I would like to see a "new" version of "photoshop" substantially change the workflow paradigm, whatever the resulting toolset is. [...] I think a node-based workflow, where one can piece together these operations in a logical flow, and revisit, rearrange, preview and create variations, with a real-time preview of any and all node outputs, would be a nice paradigm shift. [...]

kirk
Very interesting ideas, Kirk!

I too very much support the idea of keeping the parametric model and combining it with the use of "smart previews" to make performance acceptable.

MarkM

  • Sr. Member
  • ****
  • Offline
  • Posts: 428
    • Alaska Photographer Mark Meyer
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #214 on: May 13, 2013, 03:23:16 pm »

I think a node-based workflow, where one can piece together these operations in a logical flow, and revisit, rearrange, preview and create variations, with a real-time preview of any and all node outputs, would be a nice paradigm shift.

Yes, me too! It would be really interesting to see what would happen if somebody like The Foundry (http://www.thefoundry.co.uk) decided to compete in this space. It is one of the few companies that could enter with a product that everyone (including Adobe) would have to take seriously. Considering the node-based workflow in high-end products like Nuke that they have already developed, I would think an image editor would be a pretty natural fit. Having said that, I could imagine that their management may not be particularly interested in selling a product to the mid and lower end of the industry where customer support becomes death by a million cuts and prices have to be considerably lower.

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #215 on: May 13, 2013, 04:13:00 pm »

Does this mean I'd need more powerful hardware?

One of the delightful givens of the entire history of electronic computation has been exponential growth of absolute processing power, and exponential growth of processing power per inflation-adjusted dollar. Trees don't grow to the sky, and I suppose that this can't continue forever. In fact, there has been some mild slowing over time. The doubling period initially cited by Gordon Moore in his Electronics article was a year, then amended to 18 months, and now thought by some to be two years.

However, although clock rates have stopped increasing because of power dissipation considerations (I remember a conference presenter in 1968 saying, "Contrary to popular opinion, the computer of the future will not be the size of a room; it will be the size of a light bulb -- and it will glow just as brightly." He was assuming advances in materials science that haven't come to pass yet.), transistor counts just keep right on climbing as the number of processors on a chip multiplies.

The VP of Manufacturing at Convergent Technologies, a 1980s company that drowned in the wake of the introduction of the IBM PC (they were selling an incompatible 8086-based computer at the time), used to have a motto on his wall: "Believe in miracles? We count on them." I feel the same way about increasing processing power.

Jim
« Last Edit: May 13, 2013, 04:19:08 pm by Jim Kasson »

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #216 on: May 13, 2013, 04:18:06 pm »

I think a node-based workflow, where one can piece together these operations in a logical flow, and revisit, rearrange, preview and create variations, with a real-time preview of any and all node outputs, would be a nice paradigm shift.  

Nicely said, Kirk.

Jim

LKaven

  • Sr. Member
  • ****
  • Offline
  • Posts: 1060
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #217 on: May 13, 2013, 04:34:19 pm »

I would like to see a "new" version of "photoshop" substantially change the workflow paradigm, whatever the resulting toolset is. [...] I think a node-based workflow, where one can piece together these operations in a logical flow, and revisit, rearrange, preview and create variations, with a real-time preview of any and all node outputs, would be a nice paradigm shift. [...]

kirk

Yes, though I wrote this earlier in the thread, it's nice to see someone else pick up on this and elaborate.  It would be the key advance in architecture and workflow that this tool needs.  Using a dataflow architecture, you can implement most any request made here.  Not only that, but you can also provide different top-level user interfaces to suit the needs of different users.  It'd be a win all around.  Eric Chan, reminder to PM me.

MHMG

  • Sr. Member
  • ****
  • Offline
  • Posts: 1285
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #218 on: May 13, 2013, 05:22:53 pm »

+1 to Jeff Schewe's remarks about Live Picture (LP). Way ahead of its time (including soft proofing), and with features that still haven't been duplicated by other image editing programs - for example, a brush tool behavior that PS and LR still don't have as an option (as far as I have been able to figure out). Brush size was fixed relative to screen/window size, so that when you zoomed in on an image, the brush size stayed the same... i.e., just like a real brush in hand, not a virtual brush stuck to a set number of image pixels that change size on screen as image magnification is changed. Also, there was no visual on-screen distinction between viewing an image at odd magnifications versus evenly divisible units of pixel count (i.e. 25%, 50%, 100%, etc.), as there still is today in both PS and LR4. I could inspect for image output sharpness at any desired magnification. This feature undoubtedly had to do with the interpolation processing of the pyramid file structure (remember the HP/Kodak FlashPix format initiative based on the IVUE pyramid file format used in LP?) as well as a superb anti-aliasing screen-draw algorithm in the LP software.
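The pyramid structure mentioned above is easy to sketch: the file stores the image at successive half resolutions, so a screen redraw at any zoom reads from the level nearest screen size instead of the full-resolution pixels. A naive 2x2 box reduction below; the real IVUE/FlashPix formats are tiled and considerably more sophisticated:

% Build a resolution pyramid: each level is a 2x2 box-averaged, 2x
% downsampled copy of the previous one ('big.tif' is a placeholder).
img = double(imread('big.tif')) / 65535;
pyr = {img};
while size(pyr{end}, 1) > 256 && size(pyr{end}, 2) > 256
    p = pyr{end};
    p = p(1:2:end-1, :, :) + p(2:2:end, :, :);   % sum row pairs
    p = p(:, 1:2:end-1, :) + p(:, 2:2:end, :);   % sum column pairs
    pyr{end+1} = p / 4;                          % 2x2 box average
end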
 
My personal recollection of that era is perhaps a bit foggy now, but LP's exclusive use of the "layers" metaphor (like cartoon animation cel overlays) and non-destructive editing gave it a photographer's "darkroom dodging and burning" feel that PS simply didn't have, in large part because personal computer hardware wasn't capable enough at the time to give PS any real-time fluidity when working with big image files. So times have changed for better and for worse, but I still personally view LP as the cleanest and most elegant software I've ever used on a computer, bar none. Any programming team wanting to produce a new image editor would do well to grab some old Mac hardware and take some time to play with the final version, LP 2.6. Pity that LP got managed by bean counters and marketing "experts" into an untimely death.

As others have already stated, the current versions of LR and PS are very mature, and whether by corporate marketing decree or by simple software evolutionary cycles, I still need PS for layers and mask sophistication that LR doesn't possess at this time. I can't do everything I need to do in LR. Part of this has to do with my active interest in fine art printmaking. A wedding or sports photographer, for example, who needs to deliver high quality files, and lots of them, to the client is going to be thrilled with LR. But someone wanting to sculpt a single image to the very highest print standard (that's my goal) still needs PS to accomplish this very personal and somewhat obsessive/compulsive level of finesse!

Speaking of blue sky stuff, I can't think of an easier and thus better metaphor than "layers and masks". Why do we need to throw out this concept simply because it has been around for a long time in the imaging industry? It is brilliant, so IMHO any truly competent image editor needs to have it. I'm aware of onOne Perfect Layers, but LR without layers and mask sophistication on a par with PS remains incomplete and insufficient for my needs. Its absence in LR is the only reason I have to keep returning to PS.

While on the subject of "tried and true" image editing features like layers and masks, I can think of a parallel debate going on currently among computer OS software designers. It has to do with "files and folders". Many OS designers now say the files-and-folders concept is an antiquated metaphor and confusing to the young generation of smartphone users. New mobile OSes for smartphones and tablets are increasingly being designed by software teams that feel we should dispense with this time-honored analogy to paper filing cabinets for records management. Seriously? The files-and-folders paradigm works, and it ported very well to digital records management. Why throw it away and hide where our files are kept so that each individual application has to outsmart us to find our files? Stupid, stupid, stupid. This movement to do away with the files-and-folders concept will cause all sorts of file migration (and migraine) headaches for digital librarians and archivists in the near future. Hence, a personal plea to all the software engineers following this thread: KEEP both the "boringly conventional" files/folders and layers/masks concepts solidly in place in whatever new image editing software you choose to give us.

Lastly, I'd like to see two small but refined updates to both PS and LR. I'd like a more robust info tool palette that allows us to see LAB and LCH values not only for the source file being edited but for the destination as well, plus delta E and delta AB values between the source and destination. PS still can't do that. It can give CMYK "proof" values, but not LAB or LCH for the destination profile - only LAB values for the source image data. And we need much, much better metadata viewers and editors: a floating palette that can be customized to show/hide metadata fields of our own choosing, with metadata editing right on that palette. PS and LR - indeed, just about all software on the market today - are simply awful at metadata organization and viewing. Plenty of room for improvement there.
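For reference, the simplest form of the delta E readout requested above is just Euclidean distance in L*a*b* (this is CIE76; later formulas such as CIEDE2000 are more involved; the two readings below are invented):

% CIE76 color difference between two Lab triples; a delta E of ~2.3
% is often cited as a just-noticeable difference.
lab_src = [62.5, 10.2, -14.8];                        % hypothetical source value
lab_dst = [61.9,  9.7, -13.5];                        % hypothetical destination value
dE76 = sqrt(sum((lab_src - lab_dst) .^ 2));
dAB  = sqrt(sum((lab_src(2:3) - lab_dst(2:3)) .^ 2)); % a*b*-plane ("delta AB") only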

cheers,
Mark
http://www.aardenburg-imaging.com
« Last Edit: May 13, 2013, 05:53:12 pm by MHMG »

LKaven

  • Sr. Member
  • ****
  • Offline
  • Posts: 1060
Re: If Thomas designed a new Photoshop for photographers now...
« Reply #219 on: May 13, 2013, 06:03:29 pm »

Speaking of blue sky stuff, I can't think of an easier and thus better metaphor for "layers and masks".  Why do we need to throw out this concept simply because it has been around for a long time in the imaging industry? It is brilliant, so IMHO, any truly competent image editor needs to have it.

It's not a question of throwing out this metaphor. In my mind, it's a question of having this metaphor built as one possibility among many on top of a generalized architecture (dataflow). Photoshop layers should not be the ground abstraction; they should be an upper-level abstraction. I think you'll see in the long run that there are many better ways to go that are also "layerlike" but don't follow slavishly from the original Photoshop implementation. The original implementation is an ongoing hack. ["Apply Image"? "Groups"? "Smart objects"? Ad hoc blend modes? It's pretty far from brilliant in today's software design curriculum.] You can do much better without throwing out the things you like about it. You can even have a "compatibility module" for those who want to preserve their historical files.