Okay, attached is version 1.1.4. This time I've only made some cosmetic script changes, like implementing a Default response input that can be accepted by just hitting the <Enter> key. Unfortunately the prompts don't display what those defaults are (not without making the input text a lot more verbose), but the same defaults are always used. The defaults I've chosen are: 800x800 as the 'fit within' output size maxima, the optimized Down-sampling algorithm, and a sharpening Amount of 100.
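For anyone curious how the default-on-<Enter> behaviour works, the usual Batch idiom (just a sketch of the technique, not necessarily the exact code in my script, and the variable names are made up) is to pre-set the variable before prompting, so that hitting <Enter> simply leaves the preset value in place:

  rem Pre-set the defaults; SET /P leaves them untouched on a bare <Enter>
  set MaxSize=800x800
  set Amount=100
  set /p MaxSize=Output size to fit within [%MaxSize%]:
  set /p Amount=Sharpening amount [%Amount%]: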
These default choices are based on a common requirement for most photographers: producing web publishing output from large input image files. If one has predominantly different requirements, the defaults are relatively simple to change in the Batch script file, and of course different keyboard input can always be given to override them for the image at hand.
I plan on working some more on Profile conversions (if applicable) and on output compression quality (the script currently produces output with a minimum amount of compression, uncompromised in quality but large, in case one needs to work some more on the image and re-compress it with as little loss as possible). But these are more icing on the cake, and I have some other things that I want to address as well (like really preventing clipping when extreme sharpening is required, and optimizing the up-sampling quality); I am already working on those too.
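To illustrate the direction I have in mind (this is not what the script does yet, and the profile filename and quality value are just placeholders), an ImageMagick command could convert to sRGB and set an explicit JPEG quality like this, assuming the input image carries an embedded source profile and that sRGB.icc is available on disk:

  convert input.tif -profile sRGB.icc -quality 92 output.jpg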
An interesting tidbit: one can also use a negative sharpening Amount with the 'optimized for down-sampling' algorithm, which obviously does the reverse of sharpening; it blurs the finest detail/edges.
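The reason that works is the unsharp-masking style arithmetic behind such sharpening; roughly speaking (leaving aside thresholds and the exact scaling used) it computes something like

  result = original + (Amount/100) x (original - blurred)

so a negative Amount pulls the result towards the blurred version instead of away from it.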
Actually, this could also be a consideration for upsampling, where we want to use a sharpness-preserving filter method but would like to reduce some edge halo overshoots with a targeted blur. The difficulty is that, in contrast with down-sampling to less than 50% of the original size, the upsampling (blur) radius to use is variable. Also, the blurring will not be restricted to edges (unless an edge mask is used), but will also affect smoother regions with detail of the same size as the filter. Those pesky trade-offs again ...
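Just to make the edge-mask idea concrete (untested, with filenames and parameters picked out of thin air), an edge-restricted blur could be done in ImageMagick with a masked composite, where the third image in the list acts as the mask that limits the blurred clone to the edge regions:

  convert upsized.tif ^
    ( -clone 0 -blur 0x1 ) ^
    ( -clone 0 -edge 2 -blur 0x1 -normalize ) ^
    -composite upsized_tamed.tif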
Another thing that tends to give a high level of control with upsampling is to enlarge to a larger-than-required size, and then sharpen upon down-sampling to the final size. Of course that will require more memory (larger convolution kernels) and thus takes longer to execute, in addition to requiring parameter tweaking depending on size (might be a simple function of size, don't know, just thinking aloud).
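In ImageMagick terms that could look something like the following (the sizes and unsharp parameters are only placeholders), assuming the original is smaller than the final 800x800 target, so the first resize is the actual enlargement and the second is the sharpening down-sample:

  convert input.jpg -filter Lanczos -resize 1600x1600 ^
    -resize 800x800 -unsharp 0x1+1.0+0.02 output.jpg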
Some operations, in particular convolutions on large image sizes, could be sped up by performing them as FFT operations in the frequency domain instead of the spatial domain. However, that quickly makes working in floating point precision a requirement rather than an option (and it claims much more application/working memory, also because image sizes are internally padded up to integer powers of 2). A Q16 version of ImageMagick may lose too much precision, although I haven't tested how severe the loss is for our type of operations. Too much stuff to do, can't do it all.
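For reference, ImageMagick builds with FFT support (it delegates to the FFTW library) expose the frequency domain through the -fft/-ift operators: a forward transform into a magnitude/phase image pair, and the inverse transform back. A simple round trip, which really wants an HDRI (floating point) build to avoid the Q16 quantization losses I mentioned, would be:

  convert input.png -fft -ift roundtrip.png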
Cheers,
Bart