
Author Topic: New Lens Correction Software  (Read 33446 times)

deejjjaaaa

  • Sr. Member
  • ****
  • Offline
  • Posts: 1170
New Lens Correction Software
« Reply #20 on: November 25, 2009, 04:19:13 pm »

Quote from: bproctor
I've got a simple solution:  Don't use Jonathan's program.  The rest of us will remain interested and supportive of his efforts.
This is not about Jonathan's program, if you did not understand - this is about the DNG converter and the claims that nothing is lost... you just do not know what else was lost by people who did not save their original raw files, is being lost now by the same people, or will be lost in the future. That's it.
« Last Edit: November 25, 2009, 04:20:08 pm by deja »

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
New Lens Correction Software
« Reply #21 on: November 25, 2009, 06:53:03 pm »

Quote from: deja
No, I am not, and the sky is in fact falling as usual - please read what I am referring to:

...

Do you object? Does the DNG converter irreversibly strip the data during conversion or not? Very simple question.

Citing bugs specific to converting the RAWs from a few particular camera models to DNGs does not mean the entire DNG concept is a bad idea or that the "no RAW data is lost when converting to DNG" principle is generally false. Given the hundreds of undocumented, proprietary RAW formats Adobe has had to reverse-engineer to get DNG to where it is now, what's surprising is that such glitches aren't far more common than they are.

I am a single individual, and do not have the time or inclination to learn how to properly read hundreds of different undocumented and proprietary RAW formats. DNG allows me to shift my focus as a programmer from dealing with RAW format hell to the actual core functionality of the program--correcting lens aberrations and distortions. If you have any realistic suggestions for alternative input file formats that will allow me to continue focusing on the actual program instead of properly parsing hundreds of different input file formats (which would probably put YOUR camera on the "not supported" list), I'm all ears. But if not, STFU and quit wasting my time and LL's bandwidth. DNG may not be a perfect solution, but IMO it's telling that the DNG denigrators have yet to offer a realistic alternative input file format...

And BTW, converting to DNG doesn't mean you need to erase or alter the original RAW. So if a particular version of DNG Converter doesn't convert said file properly, and the bug is fixed in a later version, why yes, the improper conversion IS reversible--simply re-convert the RAW with the new version of ACR or DNG converter. Very simple answer.
« Last Edit: November 25, 2009, 06:56:12 pm by Jonathan Wienke »

deejjjaaaa

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1170
New Lens Correction Software
« Reply #22 on: November 26, 2009, 11:37:10 am »

Quote from: Jonathan Wienke
Citing bugs specific to converting the RAWs from a few particular camera models to DNGs

How do you know that it is only a few? Did you test the rest?

Quote from: Jonathan Wienke
does not mean the entire DNG concept is a bad idea

communism is a nice idea too... theoretically.

Quote from: Jonathan Wienke
or that the "no RAW data is lost when converting to DNG" principle is generally false.

Well, the problem is that the implementation was always flawed before, is flawed now, and still Adobe, along with some DNG fans, claims that the conversion does not miss anything... while in real life DNG conversions are losing data, and you just do not know what else is lost, as it is closed source.


Quote from: Jonathan Wienke
Given the hundreds of undocumented, proprietary RAW formats Adobe has had to reverse-engineer to get DNG to where it is now, what's surprising is that such glitches aren't far more common than they are.

Well, that is one reason why people should stay away from buggy software like the DNG converter.

Quote from: Jonathan Wienke
I am a single individual, and do not have the time or inclination to learn how to properly read hundreds of different undocumented and proprietary RAW formats. DNG allows me to shift my focus as a programmer from dealing with RAW format hell to the actual core functionality of the program--correcting lens aberrations and distortions. If you have any realistic suggestions for alternative input file formats that will allow me to continue focusing on the actual program instead of properly parsing hundreds of different input file formats (which would probably put YOUR camera on the "not supported" list), I'm all ears. But if not, STFU and quit wasting my time and LL's bandwidth. DNG may not be a perfect solution, but IMO it's telling that the DNG denigrators have yet to offer a realistic alternative input file format...

The source code of dcraw is open; it is mirrored and enhanced by http://www.libraw.org


Quote from: Jonathan Wienke
And BTW, converting to DNG doesn't mean you need to erase or alter the original RAW. So if a particular version of DNG Converter doesn't convert said file properly, and the bug is fixed in a later version, why yes, the improper conversion IS reversible--simply re-convert the RAW with the new version of ACR or DNG converter. Very simple answer.

Well, you in fact just do not know whether the DNG converter converts it properly or not, so you should never erase the original raw... not yesterday, not today, not tomorrow... which simply means that DNG is unsuitable for archiving unless you are archiving the original raw file as well.


Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
New Lens Correction Software
« Reply #23 on: November 26, 2009, 07:29:39 pm »

Quote from: deja
How do you know that it is only a few? Did you test the rest?

It works perfectly for every camera I've tried: 4 Canon models, 4 or 5 Nikon models, a Hasselblad MFDB, and a couple of digicams.

Quote
problem is that the implementation was always flawed before, is flawed now, and still Adobe, along with some DNG fans, claims that the conversion does not miss anything... while in real life DNG conversions are losing data, and you just do not know what else is lost, as it is closed source.

You are full of crap. On the cameras I've tested, there is zero difference between converting the original RAW and converting a DNG; comparing converted images gives a pixel-for-pixel match. If RAW data was being lost, there would be a detectable difference somewhere. And DNG is not closed source; you can download all of the specifications, as well as the source code needed to read and write DNG files, for free from Adobe. If you have questions about what is happening to the data, you have the ability to look at the code and see exactly what it is doing to your images.
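For anyone who wants to run the same check: render the original RAW and the DNG to TIFF with identical converter settings, then diff the pixels. A minimal sketch in Python, assuming numpy and Pillow are installed; the file names are placeholders, and this is an illustration, not the tool anyone in the thread actually used:

[code]
import numpy as np
from PIL import Image

# Both files must first be rendered through the same converter with
# identical settings (e.g. to 16-bit TIFF). Names are placeholders.
a = np.asarray(Image.open("from_raw.tif")).astype(np.int64)
b = np.asarray(Image.open("from_dng.tif")).astype(np.int64)

if a.shape != b.shape:
    print("Dimensions differ:", a.shape, b.shape)
else:
    max_diff = np.abs(a - b).max()
    print("Pixel-for-pixel match" if max_diff == 0
          else f"Max per-pixel difference: {max_diff}")
[/code]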

Quote
Well, that is one reason why people should stay away from buggy software like the DNG converter.

It works just fine for most of the cameras out there, or people wouldn't be using Adobe software.

Quote
The source code of dcraw is open; it is mirrored and enhanced by http://www.libraw.org

Thanks, but no thanks. It's not as easy to integrate into external projects, and it would require me to update my software every time a new camera is released. With the freely downloadable DNG SDK, I only need to update the file parsing code when a new version of the DNG spec is released, which is far less frequent than the release of new cameras.

Quote
Well, you in fact just do not know whether the DNG converter converts it properly or not, so you should never erase the original raw... not yesterday, not today, not tomorrow... which simply means that DNG is unsuitable for archiving unless you are archiving the original raw file as well.

Only for the few cameras that don't get converted properly. Whenever you do a file conversion or any sort of copying, you should always verify the copied/converted files are good before deleting the originals. It's not that hard to do. If you're really paranoid, you can step through the operation of the DNG SDK source code and verify with whatever level of detail you desire how correctly your RAWs are being converted. The fact is, I've tested numerous cameras from several different manufacturers, and had zero problems with DNG. BTW, the Library of Congress disagrees with you, and recommends DNG for long-term image archiving.
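For straight copies, a checksum comparison is the quickest way to do the verification described above (a converted file needs a rendered-pixel comparison like the one sketched earlier in the thread). A minimal sketch, assuming Python 3.8+; the paths are hypothetical:

[code]
import hashlib
from pathlib import Path

def sha256(path, chunk=1 << 20):
    """Hash a file in 1 MB chunks so large RAWs need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Verify a copied RAW against the original before deleting anything.
src, dst = Path("card/IMG_0001.CR2"), Path("archive/IMG_0001.CR2")
assert sha256(src) == sha256(dst), "copy does not match the original"
[/code]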

I'm done discussing this subject with you. Your arguments are based on fearmongering and falsehoods, and you don't have any constructive alternative input file format suggestion to offer. You are hereby added to my ignore list.

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
New Lens Correction Software
« Reply #24 on: December 06, 2009, 02:41:43 pm »

Getting back to the original focus of the thread, I've been working on the interpolation algorithms used to correct barrel/pincushion distortion and chromatic aberrations. I posted a demonstration program that allows you to open a JPEG, TIF, or BMP file and view it rescaled from 6.25-6400% of its original size. It's just a tech demo, so it doesn't have any of the following features:

  • File save capabilities
  • Color management
  • Batch processing
  • Deconvolution or blur removal of any kind (yet!)
  • Instant solution for world hunger

The program DOES do the following:
  • Opens an 8-bit-per-channel JPEG, BMP, or TIF file
  • Applies random distortion adjustment parameters to the image
  • Allows a zoom setting from 6.25-6400%
  • Displays the image at the selected zoom factor with a simple bilinearly-interpolated version for side-by-side comparison
  • Looks really cool

Here is a screen shot:

[screen shot of the PixelClarity demo]
You can download the ZIPped setup folder here:
http://visual-vacations.com/media/PixelClarity.zip

You'll need the latest .NET Framework on your machine for this to work.

Known issues:
Error on startup due to a missing database file. Click continue and all should be well. The missing database will eventually be used to store PSF data.

Some very minor aliasing is sometimes visible at magnifications around 50%.

I'm looking for feedback on the quality of the interpolation. I've designed things to maximize sharpness and minimize aliasing, jaggies, and other artifacts. Please let me know how well you think I've achieved these goals, and why or why not. Thanks in advance!

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
New Lens Correction Software
« Reply #25 on: December 07, 2009, 08:17:40 pm »

I've fixed the missing database file error, and a few small bugs in the interpolation code. I also added a check box so the barrel/pincushion adjustments are optional.

The download link is the same, simply download the updated ZIP file, extract, and then run the new installer.

stewarthemley

  • Guest
New Lens Correction Software
« Reply #26 on: December 08, 2009, 03:48:16 am »

Continue to ignore the negative comments, Jonathan. If you get it to work it will be a worthwhile program. Is it Mac or PC?

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
New Lens Correction Software
« Reply #27 on: December 08, 2009, 09:25:41 am »

Quote from: stewarthemley
Continue to ignore the negative comments, Jonathan. If you get it to work it will be a worthwhile program. Is it Mac or PC?

PC for now. Have you downloaded the interpolation demo?

Jeremy Payne

  • Guest
New Lens Correction Software
« Reply #28 on: December 08, 2009, 09:35:24 am »

Quote from: Jonathan Wienke
I've fixed the missing database file error, and a few small bugs in the interpolation code. I also added a check box so the barrel/pincushion adjustments are optional.

The download link is the same, simply download the updated ZIP file, extract, and then run the new installer.

It worked for me on both Vista 64/Business and W7/RC ... I didn't get to rigorously compare to other interpolations, but will compare against CS4 on my nice monitor tonight.

stewarthemley

  • Guest
New Lens Correction Software
« Reply #29 on: December 08, 2009, 10:45:31 am »

Quote from: Jonathan Wienke
PC for now. Have you downloaded the interpolation demo?

No, I'm Mac. Guess I'll have to be patient.

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
New Lens Correction Software
« Reply #30 on: December 08, 2009, 12:00:11 pm »

Quote from: stewarthemley
No, I'm Mac. Guess I'll have to be patient.

Or you could try one of the Windows-on-a-Mac options...

I uploaded a new version that decreases memory requirements and gracefully handles out-of-memory errors that may occur if you open a very large image file. The link is the same: http://www.visual-vacations.com/media/PixelClarity.zip

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
New Lens Correction Software
« Reply #31 on: December 08, 2009, 10:26:55 pm »

I'm shifting focus to the actual deconvolution stuff now, so it will probably be a while before I post any more updates. But in the meantime, if anyone could post feedback on the interpolation, I'd appreciate it.

minnend

  • Newbie
  • *
  • Offline Offline
  • Posts: 2
New Lens Correction Software
« Reply #32 on: December 09, 2009, 02:27:49 am »

Quote from: Jonathan Wienke
I'm shifting focus to the actual deconvolution stuff now, so it will probably be a while before I post any more updates. But in the meantime, if anyone could post feedback on the interpolation, I'd appreciate it.

Hi Jon.  Good luck with your program!  It's quite ambitious but would be a very useful tool.  I'd love to see you open-source the project and/or collaborate with other projects that have already addressed some of these challenges (hugin comes to mind).

I grabbed your program and ran it on my XP box. It appears to work properly and the interpolation looks good. I compared to results from ImageMagick using the Mitchell and Lanczos filters. Honestly, it's hard to tell a difference. I'm generally of the opinion that box and bilinear filters are bad, but once you get past that level of complexity, you immediately enter the realm of (rapidly) diminishing returns, especially for a general-purpose filter.

The one major distinguishing factor is that your program was very slow. I'm running a Core 2 2.1GHz machine, so not the fastest, but hardly a slouch. I tested with an 800x533 image and it took several seconds to interpolate. ImageMagick is *not* known for its speed, but it was significantly quicker on the same image. I'm assuming you're focusing on quality rather than speed/optimization at this point, but I thought I would be remiss not to mention it.

I'm interested in which approach to deconvolution you're planning to implement. There's been some *very* interesting research on single-image blind deconvolution lately. I know you're not going that route from the previous discussion, but you may want to search for those papers if only because they're fascinating. Are you aiming for spatially-variant PSF estimation? You did mention a target with multiple point sources, but I wasn't sure if that was to help get a more robust single PSF estimate or if you wanted multiple estimates across the image. If the latter, are you thinking of region-based deconvolution, or will you interpolate the PSF for fully continuous variation? Finally, there's the algorithm itself... RL (Richardson-Lucy)? It seems to add too much ringing unless your PSF is *perfect*. Again, there's been some nice research in recent years (SIGGRAPH, CVPR, etc.) on using natural priors, edge-preserving filters, and multiscale methods to improve the results, sometimes dramatically.
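For reference, the classic Richardson-Lucy iteration mentioned above looks like this. A minimal single-channel sketch, assuming numpy/scipy, a float image, and a PSF kernel normalized to sum to 1; the ringing shows up when the assumed PSF deviates from the real one:

[code]
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Classic Richardson-Lucy deconvolution, single channel.

    observed: 2-D float array; psf: small 2-D kernel summing to 1.
    """
    estimate = np.full_like(observed, observed.mean())  # flat starting guess
    psf_mirror = psf[::-1, ::-1]                        # adjoint of the blur
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)              # relative error image
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
[/code]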

Jeremy Payne

  • Guest
New Lens Correction Software
« Reply #33 on: December 09, 2009, 03:24:02 pm »

Quote from: Jeremy Payne
It worked for me on both Vista 64/Business and W7/RC ... I didn't get to rigorously compare to other interpolations, but will compare against CS4 on my nice monitor tonight.
I compared it to the default bicubic interpolation in the main CS4 window.

At 50%, it was hard to have a preference ... the lack of color management made it harder, but I might give CS4 a small edge.

At 200%, again - hard to pick, but I would say CS4 had more of a clear edge here.

At 400%, they are VERY different.  I think I prefer yours - it looks a bit more natural and certainly less "pixely".

Hope that helps ... I could devise a more rigorous set of tests ... but if you have a test script in mind, I'd do some more ...

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
New Lens Correction Software
« Reply #34 on: December 09, 2009, 09:07:32 pm »

Yes, it is slow; the design is biased more toward quality than performance. That said, I've been working on ways to speed things up without compromising output quality. For upsizing, I'm using a cubic-spline-based algorithm, but it uses splines running vertically, horizontally, and crisscrossed diagonally to reduce the appearance of pixelation and jaggies. Part of the reason for this approach is to build something that should work well for Bayer interpolation, so that I have an alternative to the interpolation done by ACR. For downsizing, I'm using a weighted-averaging scheme tuned to maximize detail without crossing the line into aliasing. I have run across some instances where the jaggie suppression isn't working right on heavily sharpened images or text that isn't anti-aliased, so I'll probably chase that down and beat it into submission before shifting gears.

Deconvolution is based on an array of PSFs labeled by camera, lens, focal length, and distance from what I'm calling the "logical center" of the image. When using a non-shift lens, the logical center and the physical center of the image are the same. But when using a shift lens, the logical center moves away from the physical center of the image in the direction and amount of the shift. Each PSF is a set of splines. Each spline is tagged with an angle (deviation from the logical center), and points on the spline are tagged with a distance from the "master pixel" and a percentage of signal from the "master pixel" that is expected to spill over into a "blur pixel" at that angle/distance. PSF data is generated by analyzing an image of a target consisting of small white dots (or possibly small light sources) on a black background, arranged in a rectangular grid so that the distortion, CA, and vignetting characteristics of the lens can be analyzed as well as blur.
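To make that structure concrete, here is one way the records could be transliterated into Python; the names and types are illustrative guesses, not the program's actual database schema:

[code]
from dataclasses import dataclass

@dataclass
class SplinePoint:
    distance: float   # pixels from the master pixel along this spline's angle
    spill: float      # fraction of the master pixel's signal expected to land here

@dataclass
class PsfSpline:
    angle: float                 # degrees of deviation from the logical center
    points: list[SplinePoint]    # spill-vs-distance knots

@dataclass
class PsfRecord:
    camera: str
    lens: str
    focal_length_mm: float
    center_distance: float       # distance from the image's "logical center"
    splines: list[PsfSpline]
[/code]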

Deconvolution is a two-step process:
  • Estimating the portion of a pixel's value that is true signal rather than blur from elsewhere. This is done by scanning all the neighboring pixels within the PSF radius of the "master pixel", and using the PSFs to calculate a probable maximum signal value for the "master pixel". For example, if a master pixel has a value of 10% of maximum, a large number of nearby pixels within the PSF radius have a value of 0, and the PSF values are non-zero, then it can safely be assumed that the master pixel's true value is zero, because if it had a non-zero signal value, some of that signal would have had to spill over to the neighboring pixels, giving them non-zero values. By comparing the neighboring pixel values to the corresponding PSF values, a maximum limit for the signal value of the master pixel can be calculated (see the sketch after this list).
  • Transferring signal from the "blur pixels" back to the "master pixel". Once the estimated signal value is established, the PSF can be used to calculate the amount of signal that spilled from the master pixel to each neighboring pixel within the PSF radius.
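A sketch of the first step, assuming the interpolated PSF has already been flattened into neighbor offsets and spill fractions; the representation is mine, for illustration, not the program's:

[code]
import numpy as np

def signal_upper_bound(image, y, x, offsets, spill, eps=1e-9):
    """Bound the true signal at the master pixel (y, x).

    offsets: (dy, dx) neighbor positions within the PSF radius
    spill:   matching fractions of the master pixel's signal that the
             PSF says must land on each of those neighbors

    If the master pixel truly held signal S, each neighbor would hold
    at least spill * S of it, so S cannot exceed neighbor_value / spill
    for any neighbor. A zero-valued neighbor with non-zero spill forces
    the bound to zero, as in the 10%-of-maximum example above.
    """
    bound = np.inf
    for (dy, dx), frac in zip(offsets, spill):
        if frac > eps:
            bound = min(bound, image[y + dy, x + dx] / frac)
    return bound
[/code]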

During deconvolution, each master pixel has a custom PSF interpolated from the nearest PSFs stored in the database. Not necessarily EVERY pixel, but interpolation is done often enough to avoid any noticeable borders or changes in the image where deconvolution switches from one PSF to another.
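One plausible reading of that per-pixel interpolation is a distance-weighted blend of the two nearest stored PSFs; again my representation for illustration, not the actual code:

[code]
def blend_psfs(psf_near, psf_far, t):
    """Linearly blend two flattened PSFs (dicts of offset -> spill).

    t is 0.0 at psf_near's distance from the logical center and 1.0 at
    psf_far's; the result is the custom PSF for a pixel in between.
    """
    offsets = set(psf_near) | set(psf_far)
    return {o: (1.0 - t) * psf_near.get(o, 0.0) + t * psf_far.get(o, 0.0)
            for o in offsets}
[/code]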

I suppose there's already a name for this general algorithm, but right now I don't know what it is.
« Last Edit: December 09, 2009, 09:20:11 pm by Jonathan Wienke »

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
New Lens Correction Software
« Reply #35 on: December 09, 2009, 09:32:12 pm »

Quote from: Jeremy Payne
I compared it to the default bicubic interpolation in the main CS4 window.

At 50%, it was hard to have a preference ... the lack of color management made it harder, but I might give CS4 a small edge.

At 200%, again - hard to pick, but I would say CS4 had more of a clear edge here.

At 400%, they are VERY different.  I think I prefer yours - it looks a bit more natural and certainly less "pixely".

Hope that helps ... I could devise a more rigorous set of tests ... but if you have a test script in mind, I'd do some more ...

That is helpful. When downsizing, I'm trying to retain as much detail as possible without aliasing, and when upsizing, I'm trying to maintain maximum sharpness, detail, and contrast without causing halos or clipping, and to give heavily upsized areas a smooth, natural-looking, "out of focus" appearance without any obvious pixel-based artifacts. The goal is to be able to go all the way to 6400% without getting any "digital looking" artifacts. It's not quite there yet, but fairly close.

The interpolation has to be able to handle upsizing and downsizing simultaneously and seamlessly. With barrel distortion, pixels that are halfway between the center and corners need to be moved toward the center, so when correcting this, the middle of the image is being downsized (pixels packed more closely together) and the edge of the image is being upsized (pixels stretched farther apart).
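For a concrete picture of that simultaneous up/down sizing, consider the common one-parameter radial model; this is a textbook model, not necessarily the one PixelClarity uses:

[code]
import numpy as np

def radial_remap(h, w, k):
    """Source-coordinate lookup for r_src = r_dst * (1 + k * r_dst^2).

    Sign conventions for k vary between references; the point is that a
    single smooth radial map compresses some zones of the frame (local
    downsizing) while stretching others (local upsizing), so the
    interpolator must handle both in one pass.
    """
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = (y - cy) / cy, (x - cx) / cx      # offsets normalized to [-1, 1]
    scale = 1.0 + k * (dx * dx + dy * dy)
    return cy + dy * scale * cy, cx + dx * scale * cx   # (src_y, src_x) to sample
[/code]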

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
New Lens Correction Software
« Reply #36 on: December 14, 2009, 08:48:43 am »

Quote from: Jonathan Wienke
I'm shifting focus to the actual deconvolution stuff now, so it will probably be a while before I post any more updates. But in the meantime, if anyone could post feedback on the interpolation, I'd appreciate it.


You might be interested in Bart van der Wolf's investigations of downsampling methods if you weren't already aware:

http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/example1.htm
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm

Also there is some good discussion in the IM webpages:

http://www.imagemagick.org/Usage/resize/
emil

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
New Lens Correction Software
« Reply #37 on: December 14, 2009, 10:19:15 am »

Interesting stuff on IM's web pages. What I'm doing for downsizing is a heavily modified box filter: if a pixel falls completely within the "box" it contributes fully to the box value, but if it intersects the edge of the box, the pixel's value is split between the adjacent boxes. I'm doing a bit of weighting so that if a pixel is not perfectly centered on the edge of the box (which would evenly split the pixel value between boxes), the split gets exaggerated somewhat, so that a 60/40 split might get increased to ~70/30. By tuning the "exaggeration factor", you can significantly increase sharpness without causing too much aliasing, eliminating the need for a separate sharpening step.
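In one dimension, that split looks something like the sketch below; the exaggeration factor of 2.0 merely reproduces the ~70/30 example and is not the program's actual tuning value:

[code]
def edge_split(frac_left, exaggeration=2.0):
    """Divide a boundary pixel's value between two adjacent boxes.

    frac_left is the geometric fraction of the pixel lying in the left
    box; a plain box filter would use it unchanged. Pushing the split
    away from 50/50 is what adds the sharpening.
    """
    pushed = 0.5 + (frac_left - 0.5) * exaggeration
    left = min(max(pushed, 0.0), 1.0)   # clamp to a valid split
    return left, 1.0 - left

# A plain 60/40 split becomes roughly 70/30 with exaggeration = 2.0:
print(edge_split(0.6))
[/code]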

For upsizing, I'm using a modified natural cubic spline function. Each pixel is a "knot" for splines running vertically, horizontally, and diagonally. To interpolate a pixel, I'm doing something similar to bilinear interpolation, except that I'm blending the spline values from the 4 surrounding pixels instead of the pixel values themselves, and I'm blending the diagonal spline values as well as the horizontal/vertical ones. I'm still fine-tuning the blending function to give the most natural "out-of-focus" look under heavy enlargement with the fewest jaggies and other obvious pixel-based artifacts.
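For reference, the plain separable baseline (row splines, then column splines) looks like the sketch below, assuming scipy; the diagonal-spline blending described above is exactly the part this omits:

[code]
import numpy as np
from scipy.interpolate import CubicSpline

def spline_upsample(img, factor):
    """Natural cubic splines along rows, then along columns (one channel)."""
    h, w = img.shape
    xq = np.linspace(0, w - 1, w * factor)   # new sample positions
    yq = np.linspace(0, h - 1, h * factor)
    rows = np.stack([CubicSpline(np.arange(w), img[r], bc_type="natural")(xq)
                     for r in range(h)])
    return np.stack([CubicSpline(np.arange(h), rows[:, c], bc_type="natural")(yq)
                     for c in range(rows.shape[1])], axis=1)
[/code]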

It's nice to see that I'm not really "reinventing the wheel" all that much...

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
New Lens Correction Software
« Reply #38 on: December 20, 2009, 07:50:40 pm »

I uploaded a new version with minor tweaks to interpolation and major changes to the under-the-hood design to significantly reduce memory use and speed things up a bit. It's still not super-fast, but will handle much larger image files before running out of memory.
« Last Edit: December 20, 2009, 07:52:40 pm by Jonathan Wienke »

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
New Lens Correction Software
« Reply #39 on: December 26, 2009, 08:29:44 am »

Quote from: Jonathan Wienke
I uploaded a new version with minor tweaks to interpolation and major changes to the under-the-hood design to significantly reduce memory use and speed things up a bit. It's still not super-fast, but will handle much larger image files before running out of memory.

Hi Jonathan,

First of all, thanks for the initiative and for making the first trial available. I wanted to give your software a try with my torture test (http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm). Unfortunately, I ran into a problem with your 0_0_1_8 version: errors referring to .NET at startup, even though I have the latest versions installed (.NET Framework 1.1 and 3.5 SP1) and I have no complaints from other software (including Visual Studio). I can't find a reference to version 2 being installed any more; is that what your application depends on?

If you want to try and clear the issue, feel free to send me a PM so we don't clutter this thread.

Cheers,
Bart
« Last Edit: December 26, 2009, 08:30:53 am by BartvanderWolf »
== If you do what you did, you'll get what you got. ==