My understanding is that 72-100 megalux-hours in test roughly equates to 40+ WIR years on display, using the conversion table in Aardenburg's reports.
Mark, I am definitely missing something, but WIR published that Exhibition Fiber passes at 45 years in bare-bulb display, yet a mere 32 years for Platine. Exhibition Fiber goes on to a rather amazing 150 years under UV protection. (I'm referring to the tests using Epson's UltraChrome K3 inks.) How does WIR decide when a print has reached the unacceptable point for light fade? I see in the WIR published reports something vague about "noticeable" fading. Your finding that Baryta Photographique does a bit worse than Platine agrees with WIR's own results, but the discrepancy with Exhibition Fiber is quite large. I am surprised by these published values.
You did correctly interpret how to translate megalux-hours of exposure into WIR "years on display" (i.e., divide megalux-hours by 2 to extrapolate to "display years" based on the WIR illumination assumption of 450 lux for 12 hours per day; see the quick arithmetic sketch at the end of this reply). However, even when making the same illumination assumptions for prints on display, there is still another huge distinction between the two laboratories' predicted display times owing to the different "failure criteria" that are also required to calculate these ratings. Aardenburg relies on the I* metric for color and tonal accuracy retention to calculate its Conservation Display Rating (CDR), whereas WIR uses a legacy densitometric criteria set with 17 different endpoints to determine its WIR display rating.

The WIR densitometric failure criteria were developed during the silver halide era of color photofinishing and are thus reasonably suited to that era of color photography. They are not as well adapted to modern multi-color inkjet systems, which partly explains why media like Exhibition Fiber or ink sets like Canon ChromaLife 100 can get seriously misranked in the WIR tests.

My friend and colleague, Henry Wilhelm, fully understands the ramifications of the legacy WIR densitometric test. We co-developed much of the I* metric technology together, but I can only assume he and his clients are waiting on a new international standard of some sort before switching to a different testing protocol. I had no such constraints when founding Aardenburg Imaging, so I chose the more robust I* metric as the evaluation method. It's an open-source metric (Henry and I both believed it needed to be), so the various ISO and ASTM committees currently working on digital lightfastness standards are more than welcome to adopt it if they want. That said, committee politics being what they are, I doubt any superior light fade testing standard will ever be published.
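For anyone who wants to double-check that divide-by-2 conversion, here is a minimal Python sketch of the arithmetic. The 450 lux and 12 hours-per-day figures are the WIR assumptions stated above; the function name and rounding are only my own illustration.

```python
# Convert a cumulative test exposure in megalux-hours to predicted "display years"
# under the WIR assumption of 450 lux for 12 hours per day.
def display_years(megalux_hours, lux=450, hours_per_day=12):
    mlux_hours_per_year = lux * hours_per_day * 365 / 1_000_000  # ~1.97 Mlux-hrs/year
    return megalux_hours / mlux_hours_per_year

# 72-100 megalux-hours works out to roughly 36-51 years on display,
# consistent with the "40+ years" figure in the question above.
print(round(display_years(72), 1))   # ~36.5
print(round(display_years(100), 1))  # ~50.7
```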
Verbal descriptions of visual changes are always challenging, but the AaI&A CDR is probably best described as relating to any measurable change in the image that produces only "little or no noticeable fading" (i.e., the print remains in excellent condition), whereas the WIR criteria set is better described as predicting "easily noticeable and often objectionable fading" (i.e., the print will be in only satisfactory to poor condition). It is regrettable that the WIR reports use the unfortunate phrasing "before noticeable fading occurs". That's not what the WIR criteria set actually spells out. Rather, noticeable fading will definitely occur sooner in test, while more easily noticeable fading will be reached by the product's rated endpoint, assuming the 450 lux/12 hours per day illumination level holds true.

With some systems exhibiting non-linear fading behavior, at least some noticeable change can occur early in the test. That change would trigger the AaI&A criterion but not necessarily any of the WIR densitometric failure criteria. Thus, the two laboratories' choices for "allowable" change as rated by the testing also contribute to the ratings differences between WIR and Aardenburg. The fact that greater fade is allowed in the WIR test and less fade is allowed in the AaI&A test is neither good nor bad, since no single judgement of visual appearance can fully describe the fading behavior of a printer/ink/media combination. The two laboratories simply have different audiences in mind. WIR's testing has always been dedicated to typical photo consumer expectations, whereas AaI&A was founded with the more discerning expectations of museum curators, fine art printmakers, and serious print collectors in mind.
Lastly, I'd recommend that folks ignore the WIR "bare bulb" data and compare only the "framed under glass" WIR predictions to the AaI&A Conservation Display Ratings, keeping in mind that the AaI&A fading tolerances are more conservative, as explained above. For technical reasons I don't want to go into here, I don't think the WIR bare-bulb findings entirely make sense. However, WIR's UV-excluded numbers are also worth considering if you want to check a system's sensitivity to UVA radiation, which typically changes fade rates by a factor of 2-3x. Just bear in mind that media like EEF which contain loads of OBAs aren't going to look good under UV-blocking museum conservation glazing, and even if you do improve fade resistance by filtering the UV component, sunlight striking a print directly (i.e., the primary source of that extra UV radiation) is still going to kill a UV-protected print quickly.

Many experts have confused UV-induced damage with the far greater damage caused by the total light intensity of sunlight, which coexists with that extra UV component. The total intensity of direct sunlight entering a window and striking a print on the wall is orders of magnitude higher than typical room illumination levels. Hence, it's not the 2-3x UV factor that is causing so darn much damage. It's the 100-1000x increase in total light level associated with that sunlight, even when UV gets blocked, that is so destructive to artwork and other furnishings in the home or office.
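To make those relative magnitudes concrete, here is a small back-of-the-envelope Python sketch. The specific lux values for room light and direct sunlight are my own illustrative assumptions, not measured figures from either laboratory.

```python
# Rough comparison of the ~2-3x UV factor versus the jump in total light intensity
# when direct sunlight hits a print. The lux values below are illustrative assumptions.
ROOM_LUX = 450          # typical controlled display lighting
SUNLIGHT_LUX = 50_000   # direct sun through a window can reach well above this
UV_FACTOR = 2.5         # typical 2-3x change in fade rate attributed to UVA

print(f"Benefit of blocking UV at normal room levels: ~{UV_FACTOR}x")
print(f"Extra dose from direct sunlight vs. room light: ~{SUNLIGHT_LUX / ROOM_LUX:.0f}x")
# Even with all UV filtered, the ~100x (and often far greater) increase in total
# intensity dwarfs the 2-3x UV effect, which is the point made above.
```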