1) "Compositing is just a common task for which to compare two architectures."
Agreed, and Photoshop today can certainly be considered a compositing tool.
However, in the graphics industry we used to make a distinction between pixel graphics and vector graphics, because images would require about 300 pixels per inch for quality printed output, while vector graphics, or text, would generally require around 2400 pixels per inch.
So, compositing images and text in a single composition for output would immediately run into this particular split.
A split that PostScript and PDF both have encountered in one way or another.
PDF can be considered a compositing language. I can store commands to start with a clean white sheet of a particular size, paste an image on top, and add some text on top of the image.
Great, but on output it has to make a decision about the resolution requirements of these elements. Not to mention that the output device resolution is the determining factor. But then PDF introduced transparency, and now you suddenly have to decide how elements blend with each other prior to output (e.g. blending decisions between image and vector content).
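To make that concrete, here is roughly what the "clean sheet, image on top, text on top of the image" sequence looks like when generated from Python with the reportlab library (a minimal sketch; the file names, coordinates and alpha value are purely illustrative):

```python
# Minimal PDF compositing sketch (reportlab assumed available; names and sizes illustrative).
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

c = canvas.Canvas("composite.pdf", pagesize=A4)            # clean white sheet of a given size
c.drawImage("photo.jpg", 50, 400, width=400, height=300)   # pixel image pasted on top
c.setFont("Helvetica", 36)
c.setFillColorRGB(1, 1, 1)
c.setFillAlpha(0.6)                                         # transparency: a blend the renderer must resolve
c.drawString(70, 430, "Caption over the image")             # vector text on top of the image
c.showPage()
c.save()
```

Note that the image stays a pixel object and the text stays vectors/fonts inside the file; nothing is resolved to a common resolution until a renderer has to rasterise the page, and the semi-transparent text is exactly the kind of blend decision I mean.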
I'm sure we are making great strides with GPU-based processing, but will it muster 2400 ppi rendition in real time? Not that something like that is always necessary during preview, but it does serve as an example of production-centric requirements.
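Just to put numbers on that question, a quick back-of-the-envelope sketch (the A4 page size and 4 bytes per pixel are my assumptions):

```python
# Raster size of an A4 page at image resolution (300 ppi) vs vector/text resolution (2400 ppi).
def raster_stats(ppi, width_in=8.27, height_in=11.69, bytes_per_pixel=4):
    w, h = round(width_in * ppi), round(height_in * ppi)
    return w, h, w * h * bytes_per_pixel / 2**30           # GiB for one RGBA buffer

for ppi in (300, 2400):
    w, h, gib = raster_stats(ppi)
    print(f"{ppi:>5} ppi: {w} x {h} px, ~{gib:.2f} GiB per RGBA frame")
```

Going from 300 to 2400 ppi is an 8x increase per axis, so 64x the pixels: roughly 0.03 GiB versus about 2 GiB for a single full-page RGBA buffer, which gives an idea of why real-time rendition at production resolution is a tall order.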
3) "N-dimensional dataflow is efficient for tasks that aren't computing-intensive."
That would immediately disqualify it as viable for image processing.
"It also exploits inherent parallelism for efficient use of multiple cores, threads, or networked processors."
On the contrary, the strongest argument in favour of nodal editing, imo, is the fact that you can easily create aliases.
Make a composition with aliases, and as soon as you edit the original, all the aliases will automatically update to match.
But the entire point of that (for photographers) is stacking of aliases to add creative effects. For example:
1. Open an image,
2. Duplicate it as an alias on top of this image,
3. Apply a large-radius blur to the alias,
4. Apply a mask to the alias.
So, now you have a simplified soft-focus effect (sketched in code below).
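Baked down to pixels, that stack amounts to roughly the following (a minimal Python/Pillow/numpy sketch; the file name, blur radius and the radial mask are illustrative choices):

```python
import numpy as np
from PIL import Image, ImageFilter

original = Image.open("portrait.jpg").convert("RGB")             # 1. open an image
blurred = original.filter(ImageFilter.GaussianBlur(radius=25))   # 2+3. the alias, with a large-radius blur

w, h = original.size                                             # 4. mask: keep the centre sharp
yy, xx = np.mgrid[0:h, 0:w]
mask = np.clip(np.hypot(xx - w / 2, yy - h / 2) / (0.6 * min(w, h)), 0, 1)[..., None]

soft = (1 - mask) * np.asarray(original, float) + mask * np.asarray(blurred, float)
Image.fromarray(soft.astype(np.uint8)).save("soft_focus.jpg")
```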
If you then add a small color correction to the original image, it will automatically transfer to the alias representation.
Exactly what you would want. The entire stack, including the color correction, remains a script that can be saved for future use, etc., so those are certainly desirable traits.
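For the geek-minded, here is a toy sketch of that idea (pure illustration, not any real product's API): a node graph in which an "alias" is simply a second reference to the same node, so one edit to the original propagates everywhere, while the cached results downstream have to be invalidated and recomputed in order:

```python
class Node:
    """Toy dataflow node: fn(inputs..., **params), with a naive result cache."""
    def __init__(self, fn, *inputs, **params):
        self.fn, self.inputs, self.params = fn, inputs, params
        self._cache, self._dependants = None, []
        for n in inputs:
            n._dependants.append(self)

    def set(self, **params):            # edit a parameter...
        self.params.update(params)
        self._invalidate()              # ...and dirty everything downstream of it

    def _invalidate(self):
        self._cache = None
        for d in self._dependants:
            d._invalidate()

    def value(self):
        if self._cache is None:         # read/write caching: handy on a CPU, awkward on a GPU
            self._cache = self.fn(*(n.value() for n in self.inputs), **self.params)
        return self._cache

# Strings stand in for pixel buffers; the operations are hypothetical placeholders.
load  = Node(lambda path="img.raw": f"pixels({path})")
grade = Node(lambda src, gain=1.0: f"grade({src}, gain={gain})", load)        # the "original"
blur  = Node(lambda src, radius=25: f"blur({src}, r={radius})", grade)        # the alias branch
comp  = Node(lambda sharp, soft: f"mask_blend({sharp}, {soft})", grade, blur)

print(comp.value())
grade.set(gain=1.1)        # one color correction on the original...
print(comp.value())        # ...and both the sharp branch and the blurred alias pick it up
```

Notice that grade, blur and the mask blend form a strictly sequential chain: the blur cannot start until the corrected original exists, which is exactly the serial dependency I mean below.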
But… the application of a color correction to the original, whose result then has to feed the alias layer that needs the blur, is a serial sequence that kills parallelism and the kind of processing GPUs excel at. GPUs are primarily fast under very specific circumstances, one of which is "resident, read-only" source data. So as soon as you want read/write caching, the advantages of GPU processing start to crumble very quickly. (Look up any of the pitfalls of "concurrent" or multi-threaded processing.)
Okay, enough of the geekspeak already. This is all solvable by a bunch of bright programmers, but I thought it might be illustrative of both the useful capabilities and the complexities involved.
4) "Pixel editing can be done using a choice of methods. You can do it just as before. But N-dimensional dataflow allows for journaling for infinite undo, or baking in just as in Photoshop."
Certainly, and I believe this is what most people really mean when they mention "parametric editing". They simply want to be able to revisit earlier edits and re-adjust. They understand the disadvantage of stacking several geometry corrections versus re-editing a single geometry correction. (The latter only requires a single re-sampling operation.)
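To make that last point concrete, a small numpy/scipy sketch (the angles and image are arbitrary): stacking two geometry corrections means resampling twice, whereas composing them parametrically into one matrix means the pixels are resampled exactly once.

```python
import numpy as np
from scipy import ndimage

def rotation(deg):                       # homogeneous 2D rotation matrix
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

img = np.random.rand(512, 512)

# Stacked corrections: two resampling passes, each softening the image a little more.
twice = ndimage.affine_transform(ndimage.affine_transform(img, rotation(3)), rotation(2))

# Parametric re-edit: compose the corrections into one matrix, then resample once.
once = ndimage.affine_transform(img, rotation(3) @ rotation(2))
```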