Well, maybe Joe Photographer is busy ...
... They care, but if I tell them one ink/paper combination might last 30% longer than another ink/paper combination, but then I tell them that's the difference between 80 years and 120 years ... you're right ... they don't care.
And I don't think this is "dumbing down" that data. I think it's about presenting the data in a simple and comprehensible manner. What's wrong with saying you have x number of standard points to measure and those points somewhat correlate to "x" years under certain types of conditions? We're all smart enough to know those conditions vary wildly ... the important piece of information which is difficult to extract is how well paper A compares to paper B and paper C. That's what everyone wants to know.
Wayne, please reread these two excerpts from what you wrote. This is precisely why AaI&A is now challenging the industry not to translate megalux hours into "years" on our behalf, and why I fault the industry for having done so. Both "light exposure dose" ratings (i.e., megalux hours) and "years" ratings give anyone the opportunity to quickly assess whether product A is 30% more lightfast than product B, to use your example. In fact, if I simply split the difference between Kodak's recommended light-level assumption of 120 lux for 12 hours per day and Wilhelm's recommended assumption of 450 lux for 12 hours per day, I could easily justify using 228 lux for 12 hours per day. And when the testing laboratory arbitrarily assumes this illumination level on your behalf, guess what?
Megalux hour ratings now translate exactly into "years on display". 
For example, an AaI&A lower CDR rating (the lower CDR reflects the weakest part of the system) equal to 10 megalux hours then means the print buyer will observe little or no noticeable light-induced fading for 10 years or more, provided the light level stays at or below 228 lux for 12 hours or less per day. It's that final caveat about the assumed light level which is the problematic assumption because, again, as you noted, "those conditions vary wildly" in the real world.
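For anyone who wants to see the arithmetic spelled out, here is a small Python sketch of that translation. It is purely illustrative; the 228 lux and 12-hours-per-day figures are simply the assumed values from my example above, not an industry standard:

    # Illustrative only: turning an exposure-dose rating (in megalux hours)
    # into "years on display" once a light level has been assumed. The 228 lux
    # and 12 hours/day defaults are just the example values above, not a standard.
    def years_on_display(rating_megalux_hours, lux=228, hours_per_day=12):
        annual_dose_megalux_hours = lux * hours_per_day * 365 / 1_000_000
        return rating_megalux_hours / annual_dose_megalux_hours

    # A lower CDR rating of 10 megalux hours under that assumed light level:
    print(round(years_on_display(10), 1))   # -> 10.0 (about ten years on display)

At 228 lux for 12 hours a day the annual dose works out to almost exactly one megalux hour, which is why the megalux-hour rating and the "years" number line up so neatly under that assumption.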
To summarize this point, WIR "years" ratings and AaI&A conservation display ratings reported in megalux hours both give you what you say you want: an easy way to compare the relative lightfastness of one product versus another, e.g., that "one ink/paper combination might last 30% longer than another ink/paper combination," as you suggested.

The only other big difference between WIR ratings and AaI&A ratings is the assumptions the two labs make about "allowable fade". As I've said many times in various forum threads, WIR uses a consumer-oriented criterion for "easily noticeable fade" (nothing wrong with this consumer-oriented approach at all, IMHO), while AaI&A uses a museum/fine-art criterion for "little or no noticeable fade" (which is more appropriate for prints having artistic and/or historic value), so our ratings will be systematically different based on the differences in our visual criteria for allowable/acceptable fade. Pick whichever criterion better suits your needs. The WIR and AaI&A assumptions about "how much fade are we talking about" are both valid choices for the stated reasons, but people do need to realize that any testing lab's chosen criteria have important ramifications for their relevance to end-user applications.
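To make the "allowable fade" point concrete, here is a toy sketch of how the very same measured fade data can yield systematically different ratings depending on the visual criterion a lab adopts. The fade numbers and thresholds below are invented for illustration only; they are not actual WIR or AaI&A data or criteria:

    # Invented data: (exposure dose in megalux hours, measured fade metric).
    fade_curve = [(5, 1.0), (10, 2.4), (20, 5.1), (40, 9.8), (60, 15.6)]

    def rating(max_allowable_fade):
        # Highest tested dose at which the fade is still within the criterion.
        passing = [dose for dose, fade in fade_curve if fade <= max_allowable_fade]
        return max(passing) if passing else 0

    print(rating(3.0))    # strict "little or no noticeable fade" style criterion -> 10
    print(rating(10.0))   # looser "easily noticeable fade" style criterion       -> 40

Same print, same test data, different numbers, purely because the allowable-fade criterion differs.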
So why do I strongly believe the industry made a fundamental mistake in translating test results into "years" on your behalf? It comes down to this basic issue: when you use a relative "years of life" rating rather than an absolute "exposure dose" rating to imply to your customer that he doesn't have to worry about print fading for x number of years, then that standardized prediction begins to mislead the public into believing there is an absolute number of years of safe display time for all prints made with that particular product, regardless of how the prints get displayed. The reality that, again to use your words, "We're all smart enough to know those conditions vary wildly" quickly gets overlooked. Thus the "years of life" score becomes grossly misleading, whereas a numerical score based solely on exposure dose, which avoids any light-intensity assumption, remains valid no matter how high the light level is in the display area.
Once again, I am very grateful to all who have been participating in this thread. You've given me more ideas to consider. More than one of you has asked for a simple categorical ranking system. It wouldn't be too difficult to derive one from the AaI&A test results. It's not hard to envision a digital print lightfastness merit rating or award, something like the Olympic medals "gold", "silver", and "bronze", plus maybe one or two more categories like "fugitive" or "not recommended", for some easy-to-digest guidance on the printer/ink/media combinations that have been tested. Of course, to do this fairly and with future-proofing for tomorrow's new technologies, the highest award has to be reserved for the extremely lightfast prints, and thus in today's market many "archival pigmented prints" won't get much past a bronze medal. So an AaI&A category ranking scheme may become a case of "careful what you wish for".

Also, I generally don't like categorical rating systems like A, B, C, or five stars, four stars, etc., because of the problem of "binning", i.e., sorting all the products into just three or four bins or barrels. Simply put, the "expert" creating the categories must make further assumptions about what truly constitutes good, better, and best, and there must inevitably be rigid pass/fail boundaries between the categories. Hence, two products that are nearly identical in performance can, by luck of the draw, land just above and just below one of the category boundaries. Then, for example, the "A" rated product looks much better than the "B" rated product when in reality the two products aren't that different in performance. Think of a school exam where one student gets a 69 out of 100 and another gets a 70 out of 100. The numerical 0-100 point scale tells a more revealing story, but when that scale is reduced to just a few categories, the student with the 70 gets the C (satisfactory) while the student with the 69 gets the D (poor), and the D-rated student then has more explaining to do to his parents.

So, to summarize, categorical rating schemes do offer basic guidance and are meant to be a quick-look summary of the issue, but they can introduce their own set of biases as well. Such is life. I do realize we humans benefit from these types of categorical rating systems when there isn't time to delve further into the subject matter.
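And for what it's worth, here is the binning problem in miniature. The scores and category boundaries are made up purely to show the boundary effect:

    # Made-up category boundaries to illustrate the boundary effect described above.
    def medal(score):
        if score >= 70:
            return "gold"
        if score >= 50:
            return "silver"
        if score >= 30:
            return "bronze"
        return "not recommended"

    print(medal(70))   # gold
    print(medal(69))   # silver -- a one-point difference reads like a whole tier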
best,
Mark