The thread has made it very clear that Lab is not a good choice as an EDITING color space.
Unfortunately, many authors, Bruce Fraser included it would appear, do not draw a clear distinction between the theoretical limitations of a color space and the limitations of a current implementation of that space/standard/spec. Conflating the two leads people to make judgments about theory and algorithmic correctness based on what a particular implementation (a particular piece of software) is doing. Even in this forum, for example, questions about the correctness of things such as "optimal" sharpening are being settled by what one particular application, Photoshop, does. Of course, I fully realize that a particular application is what people have in their hands, so they have to work with it. But it is the responsibility of a technical author to delineate clearly which shortcomings of a workflow come from a particular implementation and which are genuine theoretical bounds.
I have a few of Bruce Fraser's books at home and I shall go back and recheck them, but from what is quoted of his writings here on this forum, it appears that Bruce heaps criticism on the Lab space, much of which has, in more modern specifications, been addressed within a theoretical model built around Lab (think color appearance models, CAMs, used in conjunction with Lab). In theory, it does not matter that Photoshop does not implement them; an author should point out that it is Photoshop's responsibility to modernize, and that this is not necessarily the fault of the space itself.
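To make concrete that Lab is a mathematical model rather than any one application's feature, here is a minimal sketch of the standard CIE XYZ-to-L*a*b* conversion. The formulae are the CIE definition; the choice of D50 reference white is my assumption (it matches the ICC profile connection space, but any adopted white could be used):

```python
# XYZ -> CIELAB, per the CIE definition.
# Assumed D50 reference white (as in the ICC profile connection space).
Xn, Yn, Zn = 0.9642, 1.0, 0.8249

def _f(t):
    # The CIE cube-root compression, with the linear toe near black.
    d = 6.0 / 29.0
    return t ** (1.0 / 3.0) if t > d ** 3 else t / (3.0 * d * d) + 4.0 / 29.0

def xyz_to_lab(x, y, z):
    fx, fy, fz = _f(x / Xn), _f(y / Yn), _f(z / Zn)
    return (116.0 * fy - 16.0,        # L*
            500.0 * (fx - fy),        # a*
            200.0 * (fy - fz))        # b*

# Sanity check: the reference white maps to L* = 100, a* = b* = 0.
L, a, b = xyz_to_lab(Xn, Yn, Zn)
```

Nothing here depends on any host application; a CAM such as CIECAM02 builds further stages (chromatic adaptation, viewing-condition terms) on top of the same kind of model.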
"1- Since it is a huge space, would it make sense as a space for archiving images?"
This is another implementation issue. You see comments that a particular space is limited in "gamut", and so on. In practice, however, many of those "shortcomings" arise because of decisions made early in the processing chain, such as clipping negative color values and values greater than 1 (normalized). The primaries of a color space span the space, and if such clipping is not done early on, and the out-of-range values are instead kept in the file all the way to the end of the processing chain, with gamut mapping/clipping applied only when one is about to output, then many of the restrictions attributed to the "small" gamut of a certain space may be resolved.
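A toy sketch of the point above, with made-up numbers (no particular application's pipeline is implied): a component encoded outside [0, 1] is still a valid color value in floating point, and whether it survives to the output stage depends entirely on where the clipping happens.

```python
# Early clipping vs. late gamut mapping -- illustrative values only.
def clip_early(rgb):
    # What a pipeline does when it clamps at ingest.
    return [min(max(c, 0.0), 1.0) for c in rgb]

def map_at_output(c):
    # Stand-in for output-time gamut mapping: clamp only at the very end.
    return round(min(max(c, 0.0), 1.0), 4)

# A saturated value outside the [0, 1] encoding of a small working space.
unencodable = [1.3, -0.2, 0.5]

# Later in the chain, some edit pulls exposure down by 0.7.
# Pipeline A: clipped at ingest, so the highlight detail is already gone.
a = [map_at_output(c * 0.7) for c in clip_early(unencodable)]
# Pipeline B: unclipped floats carried through, mapped only at output.
b = [map_at_output(c * 0.7) for c in unencodable]

print(a)  # [0.7, 0.0, 0.35]
print(b)  # [0.91, 0.0, 0.35] -- the out-of-range red survived to output
```

Pipeline B ends up with a usable in-gamut value where pipeline A has a flat 0.7: the "gamut limit" in A was the clip, not the space.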
In essence, a notion such as "huge space" arises because a space is "huge" only when measured in positive numbers below a normalized 1 (though in Lab space, a and b do go negative). Otherwise, negative numbers did not stop the CIE from conducting its spectral tristimulus determination experiments in RGB; it moved on to the all-positive XYZ space because of concerns at that time about negative numbers, concerns that need not affect us today working with computers.
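A small sketch of that last point. The matrix is the well-known 1931 CIE RGB to XYZ transform; the RGB tristimulus sample is only an approximate, illustrative value for a spectral green near 500 nm, where the CIE RGB color-matching functions go negative. The negative R component is no obstacle to the arithmetic:

```python
# CIE 1931 RGB -> XYZ (coefficients from the 1931 derivation).
M = [
    [0.49000, 0.31000, 0.20000],
    [0.17697, 0.81240, 0.01063],
    [0.00000, 0.01000, 0.99000],
]
SCALE = 1.0 / 0.17697  # normalization chosen so Y carries luminance

def cie_rgb_to_xyz(rgb):
    return [SCALE * sum(m * c for m, c in zip(row, rgb)) for row in M]

# Approximate CIE RGB tristimulus values for a spectral color near 500 nm:
# the red component is negative, as the 1931 matching experiments found.
rgb_500nm = [-0.072, 0.085, 0.048]
xyz = cie_rgb_to_xyz(rgb_500nm)
print(xyz)  # all three XYZ components come out positive
```

The transform simply relocates the same color to all-positive coordinates; nothing about the negative RGB value was ill-defined, it was merely inconvenient for the hand computation of the era.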