
Author Topic: If Thomas designed a new Photoshop for photographers now...

jrsforums

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #220 on: May 13, 2013, 06:53:58 pm »

While on the subject of "tried and true" image editing features like layers and masks, I can think of a parallel debate going on currently among computer OS software designers. It has to do with "files and folders". Many OS designers now say the files-and-folders concept is an antiquated metaphor and confusing to the young generation of smartphone users. New mobile OSes for smartphones and tablets are increasingly being designed by software teams that feel we should dispense with this time-honored analogy to paper filing cabinets for records management. Seriously? The files-and-folders paradigm works, and it ported very well to digital records management. Why throw it away and hide where our files are kept so that each individual application has to outsmart us to find our files? Stupid, stupid, stupid. This movement to do away with the files-and-folders concept will cause all sorts of file migration (and migraine) headaches for digital librarians and archivists in the near future. Hence, a personal plea to all the software engineers following this thread: KEEP both the "boringly conventional" files/folders and layers/masks concepts solidly in place in whatever new image editing software you choose to give us.


cheers,
Mark
http://www.aardenburg-imaging.com

If I remember correctly, the original Lightroom beta (alpha?) did not use the physical folder/file structure we have now. I don't remember exactly what it was, but many complained about it to the Adobe team, which, smartly, changed it.

John

Torbjörn Tapani

Re: Sv: If Thomas designed a new Photoshop for photographers now...
« Reply #221 on: May 13, 2013, 08:45:16 pm »

Good lens correction. Moustache-type distortion, deconvolution of motion blur, CA, coma, sharpness maps to even out edge sharpness or field curvature, or correcting nervous bokeh, oval highlights, flare/veiling removal, etc. Stuff characteristic of lenses that can be anticipated.

Selecting sets of images and creating stitches and/or stacks that are still editable as RAW, like smart objects: removing objects, random noise, dark-frame subtraction and the like. Stitches could be spherical panos or whatever. A combination of lens corrections, stitching and stacking could maybe produce a Brenizer bokeh pano still editable as RAW (even spherical, hah, that would be awesome).

While we're at it, could a focus-bracketed stack maybe have live DoF control for tilt/shift effects? Lytro lite. Focus peaking to see where we place fake focus, or to apply the correct amount of sharpening.

Frequency-based retouching tools, like a live Apply Image with a slider for radius and a quick way of viewing the high/low pass, but done on the fly. Like a pair of channels with a bias between them. Then one retouching brush with options for content-aware, texture, tone and clone, not eleventy different ones. Were you to sample a swatch, it's just a brush.
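
Purely to illustrate the frequency-split idea above, here is a minimal sketch, assuming numpy/scipy and a single-channel float image; the radius argument stands in for the hypothetical slider, and bias plays the role of the "pair of channels with a bias between them":

Code:
from scipy.ndimage import gaussian_filter

def frequency_split(image, radius):
    # Low pass is a blurred copy; high pass is the zero-centred residual.
    low = gaussian_filter(image, sigma=radius)
    high = image - low
    return low, high

def recombine(low, high, bias=1.0):
    # bias = 1.0 reproduces the original; > 1.0 boosts the detail layer.
    return low + bias * high

# Usage sketch: low, high = frequency_split(img, radius=4.0)
#               out = recombine(low, high, bias=1.2)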

Gulag

Re: Sv: If Thomas designed a new Photoshop for photographers now...
« Reply #222 on: May 13, 2013, 09:35:51 pm »

Good lens correction. Moustache-type distortion, deconvolution of motion blur, CA, coma, sharpness maps to even out edge sharpness or field curvature, or correcting nervous bokeh, oval highlights, flare/veiling removal, etc. Stuff characteristic of lenses that can be anticipated.

Selecting sets of images and creating stitches and/or stacks that are still editable as RAW, like smart objects: removing objects, random noise, dark-frame subtraction and the like. Stitches could be spherical panos or whatever. A combination of lens corrections, stitching and stacking could maybe produce a Brenizer bokeh pano still editable as RAW (even spherical, hah, that would be awesome).

While we're at it, could a focus-bracketed stack maybe have live DoF control for tilt/shift effects? Lytro lite. Focus peaking to see where we place fake focus, or to apply the correct amount of sharpening.

Frequency-based retouching tools, like a live Apply Image with a slider for radius and a quick way of viewing the high/low pass, but done on the fly. Like a pair of channels with a bias between them. Then one retouching brush with options for content-aware, texture, tone and clone, not eleventy different ones. Were you to sample a swatch, it's just a brush.

My uneducated curiosity is whether or not your prints will command seven or eight figures a pop if all those items on your wishlist come true in the next release. In the meantime...

"Photography is our exorcism. Primitive society had its masks, bourgeois society its mirrors. We have our images."

— Jean Baudrillard

plugsnpixels

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #223 on: May 14, 2013, 02:16:55 am »

This is a noble effort, and I've been through exercises like this before (with another graphics/imaging/vector app). What I learned was that every person desires a different feature set, because everyone's work is a bit different. We ended up thinking that a modular approach might work best, where you have the core app and install advanced modules of interest as time goes on (not plug-ins; those would still be gravy). The modules could come from the same developer or from third parties.

Another thing that came to mind when reading this thread was, aren't any other existing apps sufficient for photographers specifically? Or are they more geared toward creative post-processing than utility work?

It seems to me that adding functionality to Lightroom is the quickest way forward, assuming it will remain subscription-free and its developers are given latitude to make it the best it can be for this specific purpose.
Digital imaging blog, software discounts:
www.plugsandpixels.com/blog

hjulenissen

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #224 on: May 14, 2013, 02:17:28 am »

Problem is not so much the precision of the rendering pipeline, problem is stacking.

Especially if one of the steps in the stack involves blur in one way or another (think USM, local contrast enhancement, etc.). In an interpreted pipeline this would not only increase sampling requirements disproportionately, it would also disrupt the parallelism of the graphics card's internals. Additionally, some of the newer sharpening techniques rely on iteration. If you want to implement those types of functions, it becomes progressively more problematic if the entire pipeline is interpreted.
There are GPU implementations of stuff like blurring that seems to exploit the hardware quite well.

If you have a pipeline of N jobs, each divisible into M (partially overlapping input or output) threads, this might or might not map well onto a given GPU's hardware. I don't think that the presence of sharpening means that the GPU is out of the question.

Doing stuff on the GPU seems to be difficult and error-prone, and a significant percentage of applications seem not to map well onto current GPUs (meaning that the power consumption and price of a GPU do not justify using it).
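
As a side note on the iteration point in the quote above: deconvolution-style sharpening re-reads and rewrites the whole image every pass, which is exactly what makes it awkward in a one-shot interpreted pipeline. A minimal sketch of the classic Richardson-Lucy scheme, assuming a known blur kernel psf and a non-negative float image:

Code:
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=20):
    # Iterative deconvolution: each pass convolves the full image twice,
    # so the work cannot be folded into a single fixed rendering pass.
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + 1e-8)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
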
Quote
Secondly, what you want to determine also is the effect the user expects to see when they change some previous step.
If they use a parametric brush on some particular location in the image, and then decide to turn on lens corrections, or apply a perspective correction, what should the position (and form) of the brush do? And what if you stack images for panorama stitching and the user does the same?

Note how simple misunderstandings can occur:
If I ask you to "blend" image A and B, do you interpret that as:
1. start with A and blend B on top (not commutative),

or do you interpret that as:
2. create a mix of A and B (commutative).

What about if A and/or B have masks?
A good point.

I guess that Lightroom solves this the "easy" way by having a fixed pipeline.

A Photoshop substitute might have to be more flexible. I still think that it is possible to have a "default" pipeline (Lightroom-esque), and to be able to move components around in the pipeline. Perhaps a "linear" mode would be possible (in which edits are applied in the same order that they are tweaked by the user).
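
To make the quoted "blend" ambiguity concrete, a small sketch of the two readings (the helper names are made up; float images, alpha masks in [0, 1]):

Code:
import numpy as np

def blend_over(a, a_alpha, b, b_alpha):
    # Reading 1: composite B on top of A (order matters; not commutative).
    out_alpha = b_alpha + a_alpha * (1.0 - b_alpha)
    colour = b * b_alpha + a * a_alpha * (1.0 - b_alpha)
    return colour / np.maximum(out_alpha, 1e-8), out_alpha

def blend_mix(a, b, t=0.5):
    # Reading 2: a plain mix of A and B (commutative when t = 0.5).
    return (1.0 - t) * a + t * b
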
Quote
And finally, the size requirements for a Photoshop image are usually significantly different from what our graphics cards are currently designed for. Even if hardware improves and becomes cheaper, you should still expect a 10-year time frame, if it happens at all, because graphics cards are built around certain output requirements for gaming, video, and medical imaging. Well, I suppose password hacking could be added, but I'm not sure how that will affect the imaging capabilities of graphics cards.
Current graphics cards have >1 GB of memory; I don't think that buffer storage is the issue. Rather, the (in)flexibility of the processing hardware, the state of the implementation languages, the debuggability and the testing matrix caused by significantly different hardware on the market seem like the real obstacles.
Quote
But, any workflow that allows one to go back to previous steps could be called "parametric", and as such, as long as the expectations of the user are reasonable when deciding to redo a previous step, the application could be entirely "parametric". And a final result could be rendered based on recomputing the entire chain.
The problem you hinted at would still be a problem? If I did sharpening as step #2, then chose to "redo" sharpening as step #274, where in the re-rendered pipeline should the sharpening be applied?
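
A toy illustration of that question, assuming the parametric history is nothing more than an ordered list of (operation, parameters) steps re-rendered from the source (the render/ops helpers are hypothetical): whether a re-done step keeps its original slot or moves to the end is a policy choice, and the two policies produce different pixels.

Code:
def render(source, pipeline, ops):
    # Re-render the whole parametric chain from the source data;
    # ops maps step names to the functions that implement them.
    image = source
    for name, params in pipeline:
        image = ops[name](image, **params)
    return image

# Policy A: "redoing" sharpening edits step #2 in place.
pipeline_a = [("denoise", {"strength": 0.3}), ("sharpen", {"amount": 0.8})]
# Policy B: "redoing" sharpening appends a new step at the end of the chain.
pipeline_b = [("denoise", {"strength": 0.3}), ("sharpen", {"amount": 0.5}),
              ("sharpen", {"amount": 0.8})]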

-h
« Last Edit: May 14, 2013, 02:36:37 am by hjulenissen »

hjulenissen

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #225 on: May 14, 2013, 02:20:59 am »

The thing that Jeff said that started me on this way of thinking was -- paraphrasing -- you don't want to do in a pixel processor what you can do in a parametric processor partially because of limited precision in the pixel processor causing potential damage to the image.  That implies that the implementer doesn't always know just how much error can be tolerated.

Jim
It is probably simpler to calculate visibility thresholds in a fixed pipeline than in a non-fixed pipeline.

Or perhaps Photoshop is simply limited by prior architecture decisions and the need for speed?

-h

Schewe

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #226 on: May 14, 2013, 02:30:36 am »

Another thing that came to mind when reading this thread was, aren't any other existing apps sufficient for photographers specifically? Or are they more geared toward creative post-processing than utility work?

Not really... Adobe and Photoshop have had a really long run of being best of breed, which has pretty much dried up any competition. I downloaded GIMP and Pixelmator to test them out... yep, both will do some interesting things; nope, neither is a replacement for Photoshop. Seriously, Photoshop's position in the industry has minimized third-party development. Could the Photoshop CC decision change things? Yep... but don't count on substantial changes really quickly.


hjulenissen

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #227 on: May 14, 2013, 02:30:56 am »

I was using the example as a way to crystallize the discussion, not as a concrete product proposal, but thanks for bringing practicality into the picture.
I guess that I saw that.
Quote
I don't think doing intermediate calcs in FP (maybe not DP FP, but FP) is necessarily impractical. We are seeing a proliferation of DSP-derived processors on graphics adapters. Many of those processors support FP, and there is a trend to make the results of calculations available to programs running in the main processors. Indeed, you can buy add-in cards that do DSP-like processing that have no connection to a display; they're expensive and power hogs, but that should change. Image processing is relatively easily parallelized.
Doing single-precision FP on the CPU is somewhat simpler than doing fixed-point on the CPU, but slower. If the vector "pump" is 128 bits (SSE) or 256 bits (AVX), a fair guess would be that 32-bit SP float would be 1/4 the speed of 16-bit integer. Now, this is not entirely true, because not all vector arithmetic is done in a single cycle, and one floating-point operation may map to more than one fixed-point operation. But I think that it is accurate enough for this discussion.

So would 0.25x speed be worth it for having 32-bit float instead of 16-bit integer?
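
The register-width part of that guess is simple arithmetic (an assumption about lane counts, not a benchmark); the rest of the gap down to 1/4 comes from the per-instruction caveats above.

Code:
vector_bits = 256                    # AVX register width (128 for SSE)
float32_lanes = vector_bits // 32    # 8 single-precision floats per instruction
int16_lanes = vector_bits // 16      # 16 sixteen-bit integers per instruction
print(float32_lanes / int16_lanes)   # 0.5: register width alone costs a factor of two
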
Quote
Another thing about the intermediate image processing that could ameliorate its inherently slower speed than custom-tweaked code: a lot can be done in the background. In order for a program to feel crisp to the user, all that's necessary is to update the screen fast. The number of pixels on the screen is in general fewer than the number in the file, so there's less processing to keep the screen up to date than to render the whole file. Just in case the user decides to zoom in, the complete image should be computed in the background. This also avoids an explicit rendering step, which could be an annoyance for the user.

All this background/foreground stuff makes life harder for the programmers. On the other hand, think of the time they'll save not tweaking code.
I (and you?) tend to focus on the processing pipeline, the image-processing mathematics and such. This tends to make up a surprisingly small percentage of the people, resources and lines of code in a commercially successful application. There are umpteen factors that affect people's happiness with a product.

There is well-defined theory that makes (many) image-processing tasks into satisfying "riddles", which appeals to me, while the results are still (usually) ultimately judged by our vision. I don't think there is anything like that in user interaction, marketing, QA and all of the other activities that go into a product like Photoshop (I don't claim to know)?

-h

Wayland

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #228 on: May 14, 2013, 02:49:59 am »

Not really... Adobe and Photoshop have had a really long run of being best of breed, which has pretty much dried up any competition. I downloaded GIMP and Pixelmator to test them out... yep, both will do some interesting things; nope, neither is a replacement for Photoshop. Seriously, Photoshop's position in the industry has minimized third-party development. Could the Photoshop CC decision change things? Yep... but don't count on substantial changes really quickly.

PhotoLine deserves a good look, though. Given a couple more years of development along the lines it is on now, I think it could be a very viable replacement.
Wayland.

thoricourt

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #229 on: May 14, 2013, 03:25:07 am »

Since we are in blue sky country, as a companion application to LR, I would like to have/do the following:

must haves
-----------
ACR integrated & updated regularly
Bridge keyword/metadata setup
adjustment layers (current list is OK + WB)
smart objects for ALL filters, including HDR tone-mapping
ALL filters for 16bit
16 & 32 bit
Sharpen on steroids (as a minimum unsharp mask, smart sharpen, camera shake)
source/creative/output sharpening (à la PK Sharpener)
Blur
Noise reduction
Layers: all existing layer functions, styles
ACR adjustments as layers
channels
mask tweaking (refine edge style)
crop, crop overlays
lens correction: wide angle adaptation, CA, etc.
blending: panorama, hdr, stacking for DOF & NR
gradients (linear, radial)
type
brush with current functionality
tone adjustment: levels, curves
color adjustment: hue, saturation, color picker, color match, color conversion
cloning, spotting and healing tools + content-aware, content aware move
eraser
filters: gaussian blur, liquify, dust & scratches, median, high pass, warp, apply image
all existing selection tools + quick mask, luminosity (à la Lobster)
pen/path
color management, soft proofing
printing
save as to formats ensuring files can be read in 50 years time (e.g. layered or flattened tiff)
actions, batch processing
info, histogram
third party plug-ins compatibility
history
user defined number of undos
preferences save
user defined actions, brushes, etc saved in one location and importable in application updates
tablet integration
mini-bridge or some sort of browser
64bit & multicore
perpetual license
equivalent price as in the US

nice to haves but not required
------------------------------
Bridge
Filter gallery
read video only to extract image
Puppet warp
editable keyboard shortcuts
all brush parameters as currently implemented
choice of user interface color scheme

shouldn't have
----------------
3D
video
face recognition

It is "funny" that we have LR and PS, and we are still defining a third application. It seems to me a waste of time and energy, since we already have two tools that more or less pleased everyone... before CC.
But hey since Jeff asked, I am more than grateful & happy to have my say.

Good day to you all!

plugsnpixels

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #230 on: May 14, 2013, 03:40:27 am »

Yes, PhotoLine would be one of the top contenders. But for years users have been trying to impress upon the developers the need for a decent GUI, standardized tool and menu labeling, and a website overhaul with better tutorials. Without these, few take it seriously enough to even try it.
Digital imaging blog, software discounts:
www.plugsandpixels.com/blog

32BT

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #231 on: May 14, 2013, 03:43:47 am »

There are GPU implementations of stuff like blurring that seems to exploit the hardware quite well.

"seems" being the operative word. There are implementations based on the lower resolution versions in a mipmap, but that doesn't scale and align properly. A good example of such bad blurring can be found in Apple's Core Image.

Perhaps a "linear" mode would be possible (in which edits are applied in the same order that they are tweaked by the user).

Yes, linear or nodal are both possible, as long as there is a reasonable expectation as to what happens when returning to earlier edits. Some form of caching is going to be required.

Current graphics cards have >1 GB of memory; I don't think that buffer storage is the issue. Rather, the (in)flexibility of the processing hardware, the state of the implementation languages, the debuggability and the testing matrix caused by significantly different hardware on the market seem like the real obstacles.

Yes, to all of your points, but as for storage: if you turn an 80 Mpx MFDB file into a 32-bit floating-point rep with 4 components, what do you get? And then you also need to store all kinds of crap for the processing pipeline, intermediate caching of results, etc…
And then you want to create a panorama stitch with HDR from maybe 16 of those files…
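
The arithmetic on that example, just as a back-of-the-envelope check (nothing here is specific to any particular application's buffers):

Code:
pixels = 80e6                              # one 80 Mpx MFDB frame
bytes_per_pixel = 4 * 4                    # 4 components x 32-bit float
one_copy_gib = pixels * bytes_per_pixel / 2**30
print(round(one_copy_gib, 2))              # ~1.19 GiB per full-resolution copy
print(round(16 * one_copy_gib, 1))         # ~19.1 GiB for 16 frames, before any caches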

The problem you hinted at would still be a problem? If I did sharpening as step #2, then chose to "redo" sharpening as step #274, where in the re-rendered pipeline should the sharpening be applied?

Yes, but there can be a clear difference between "re-editing" an existing adjustment, and "adding" a new adjustment. However, I am not much of a proponent of flexibility for the sake of flexibility. It is not particularly useful to keep stacking sharpening upon sharpening, and then have the user complain about the result.

Implement a clear set of processing steps to guarantee optimal results,
and implement reasonable flexibility to guarantee creativity.

I personally believe that LR doesn't currently find the right balance between the two, and adding pixel editing could force this to be re-designed, without entirely having to start from scratch.
Regards,
~ O ~
If you can stomach it: pictures

hjulenissen

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #232 on: May 14, 2013, 03:53:21 am »

I personally believe that LR doesn't currently find the right balance between the two, and adding pixel editing could force this to be re-designed, without entirely having to start from scratch.
I would really like a (if need be, pixel-level) plugin that really plugs into the LR processing chain, as opposed to exporting/re-importing. I.e., when I adjust the exposure compensation slider in Lightroom, the raw file would be re-processed on the fly with LR blocks, then the external plugin, then the final LR blocks. Ideally, the external editor would only generate a set of LR-compatible scripts (MATLAB, Python, OpenCL, whatever) that would run equally well on anyone else's LR installation (or LR version 11).

Of course, this could lead to me having to redo plugin settings (such as applying spatial warping in front of a pixel-operation), but that would be on an as-needed basis.
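
Nothing like this exists in Lightroom's SDK today, so purely as a hypothetical sketch of the contract such a pipeline stage could have: a pure function over the partially processed float image plus the plug-in's saved parameters, which the host would re-run whenever an upstream slider changes.

Code:
import numpy as np

def process(image: np.ndarray, params: dict) -> np.ndarray:
    # Hypothetical plug-in stage. image is a float32 array handed over by the
    # host mid-pipeline (linear, scene-referred is an assumption); params are
    # the plug-in's own settings, stored with the catalog so the edit stays
    # parametric and reproducible on another installation.
    gain = params.get("gain", 1.0)
    return np.clip(image * gain, 0.0, None)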

-h

plugsnpixels

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #233 on: May 14, 2013, 03:56:57 am »

Oscar, I just realized that was you! I have a couple of your old plug-ins listed on my site. Glad to see you're back! Let's list the new apps. You too Schewe!
Digital imaging blog, software discounts:
www.plugsandpixels.com/blog

hjulenissen

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #234 on: May 14, 2013, 04:00:18 am »

"seems" being the operative word. There are implementations based on the lower resolution versions in a mipmap, but that doesn't scale and align properly. A good example of such bad blurring can be found in Apple's Core Image.
Are you saying that there are no high-quality, reasonably efficient (overlapping I/O) image-processing algorithms running on GPUs? I looked into this a few years back, and expected there to have been some progress.
Quote
Yes, to all of your points, but as for storage: if you turn an 80 Mpx MFDB file into a 32-bit floating-point rep with 4 components, what do you get? And then you also need to store all kinds of crap for the processing pipeline, intermediate caching of results, etc…
And then you want to create a panorama stitch with HDR from maybe 16 of those files…
*It seems that 5-6 GB is available right now.
*If you are working on massive projects, could not the software work on sensible tiles, dumping intermediate results to system memory?
*Perhaps users stitching multiple 80 MP MFDB images would be willing to purchase several GPUs?

http://www.nvidia.com/object/personal-supercomputing.html

GPUs are no doubt being hyped, and many customers have unreasonable expectations ("why doesn't Adobe rewrite Photoshop in CUDA, then it would be 100x faster"). But I am hoping that something good will come out of it. Perhaps the unification of CPU and GPU (first physically, then memory access, then instruction sets) will make it easier to re-use hardware resources made for games and such in graphics and other DSP applications.

-h

32BT

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #235 on: May 14, 2013, 04:15:17 am »

Oscar, I just realized that was you!

Yes, it is me  8)

Thanks.
Regards,
~ O ~
If you can stomach it: pictures

LKaven

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #236 on: May 14, 2013, 04:27:42 am »

GPUs are no doubt being hyped, and many customers have unreasonable expectations

My computer does a thermal shutdown when I try to play SimCity 5 on anything but the lowest quality setting.   :(

32BT

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #237 on: May 14, 2013, 04:42:14 am »

Are you saying that there are no high-quality, reasonably efficient (overlapping I/O) image-processing algorithms running on GPUs? I looked into this a few years back, and expected there to have been some progress.

I wouldn't be qualified to answer.

The few examples I have seen are either doing blur incorrectly or caching results (or both).

Clearly, progress is fast, video resolutions are getting higher, and quality demands and expectations in our industry are getting lower, so eventually it will all merge. But if the card is doing caching logic, the application might as well control or copy that behaviour in some meaningful, user-centric way.
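
On the "doing blur incorrectly" point: a rough sketch, assuming numpy/scipy, of how a mipmap-style blur (downsample, then upsample) differs from an actual Gaussian of the requested radius; the residual varies with scale and alignment, which is the artefact being described.

Code:
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def mipmap_blur(img, levels=3):
    # Walk down the pyramid with bilinear resampling, then back up.
    small = zoom(img, 0.5 ** levels, order=1)
    factors = [o / s for o, s in zip(img.shape, small.shape)]
    return zoom(small, factors, order=1)

def gaussian_blur(img, sigma=4.0):
    # The reference: a true Gaussian of the radius the user asked for.
    return gaussian_filter(img, sigma)

# For a single-channel float test image, np.abs(mipmap_blur(img) - gaussian_blur(img))
# exposes the scale- and alignment-dependent error.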

Regards,
~ O ~
If you can stomach it: pictures

32BT

Re: If Thomas designed a new Photoshop for photographers now...
« Reply #238 on: May 14, 2013, 04:43:54 am »

My computer does a thermal shutdown when I try to play SimCity 5 on anything but the lowest quality setting.   :(

Generating bitcoins without your consent...
Regards,
~ O ~
If you can stomach it: pictures

NikoJorj

Re: Sv: If Thomas designed a new Photoshop for photographers now...
« Reply #239 on: May 14, 2013, 05:14:14 am »

But it has never been Adobe's way.

They always wanted to keep Photoshop essential for some tasks.
You mean there are neither localized corrections nor soft proofing for real in LR? That's all made up by Kubrick in a Hollywood studio?  ;D


I think a node-based workflow, where one can piece together these operations in a logical flow, and revisit, rearrange, preview and create variations, with a real-time preview of any and all node outputs, would be a nice paradigm shift.  I would have no problem working on a "smart preview" version of an image, from raw conversion, all the way to output sharpening at final resolution, with the ability to render portions of it all along the node chain to see a 100% res sample to check my work.
Bringing some parametric goodness to pixel editing: that would rock!


Good lens correction. Moustache-type distortion, deconvolution of motion blur, CA, coma, sharpness maps to even out edge sharpness or field curvature, or correcting nervous bokeh, oval highlights, flare/veiling removal, etc. Stuff characteristic of lenses that can be anticipated.
Focus-related issues might be tougher to implement (at least without bracketing), but yes, the deconvolution of lens/capture defects (diffraction seems quite easy, and motion blur is on its way) seems a really interesting development too.
« Last Edit: May 14, 2013, 05:27:48 am by NikoJorj »
Nicolas from Grenoble
A small gallery