I suspect the cause is something deeper, probably related to the kernel and memory usage.
I don't run Lightroom on MS-Windows, but its performance on OS X seems to be significantly affected by physical memory availability. I've noticed that even killing idle processes (i.e., other programs that are no longer runnable and thus not competing for processor time) usually improves Lightroom's responsiveness, presumably because killing a process frees its resident set of physical memory. (For those of you unfamiliar with OS X, programs typically don't die when you "shut them down" from the graphical windowing system; they continue to run in the background so they appear to launch almost instantly the next time you invoke them.)
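If you're curious which of those idle processes are holding the most physical memory before you kill anything, plain ps from Terminal will show you. This is just a sketch, and the PID in the kill line is obviously made up:

    # list processes sorted by memory usage; RSS is resident memory in KB
    ps axm -o pid,rss,comm | head -15

    # politely ask an idle memory hog to exit (example PID)
    kill -TERM 1234

Quitting from the Dock or Activity Monitor accomplishes the same thing; kill is just faster when you already know the PID.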
I routinely kill such idle background processes when LR seems sluggish, and it always helps. I also coax the operating system into reclaiming physical memory more aggressively by running the OS X purge command, and this also always helps. (It's a very blunt instrument and I don't recommend it; then again, I don't follow my own advice.) For quite some time, a number of very knowledgeable outsiders have believed something is broken in OS X's memory management. Apple has never acknowledged this and may in fact have fixed it in recent releases (I'm still running 10.6), but there's reason to believe memory management hasn't been working as well on OS X as it does on most other variants of UNIX, or even as well as on most Linux distributions, for that matter.
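For the curious, the ritual looks roughly like this; vm_stat gives you the before-and-after picture. (On 10.6 purge comes with the developer tools, and on later releases it may want sudo, so take this as a sketch rather than gospel.)

    # Mach VM counters: pages free, active, inactive, wired, plus paging activity
    vm_stat

    # blunt instrument: flush the disk cache and release inactive memory
    purge

    # "Pages free" should jump noticeably
    vm_stat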
Finally, a couple of comments on the issue of CPU utilization. I used to do a fair amount of performance tuning of UNIX on fairly large machines. Snapshot performance-monitoring utilities often show a modern UNIX kernel using all available CPU even when plenty of processing power actually remains, because the scheduler offers time slices to lower-priority (and eminently delayable) processes whenever higher-priority ones don't need a processor, e.g., when they block waiting for an input or output operation involving memory (including the kernel's demand pager), access to a disk, or data from a peripheral device such as a keyboard. It's a little more complicated with multithreaded applications, because threads have more complex scheduling constraints than heavyweight processes. But I think it's safe to say that unless a snapshot tool shows all your CPUs/cores continuously pegged for a very long period (say, several seconds, since most tools only collect samples once a second), it's highly unlikely you're running out of processor. I see no reason why this, at least, wouldn't be as true on Windows as on UNIX.
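If you want to see this for yourself, watching the idle percentage across consecutive samples is far more telling than any single snapshot. Two easy ways on OS X:

    # one sample per second; the last three columns are user/system/idle CPU
    iostat -w 1

    # or: top sorted by CPU, sampling once a second
    top -o cpu -s 1

If the idle column pins at zero for dozens of consecutive samples, you really are CPU-bound; brief dips to zero just mean the scheduler handed the spare slices to low-priority work.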
OS X offers an excellent dynamic tracing tool called DTrace, which could be used to study Lightroom's performance in considerable detail while it performs specific functions, even without access to source code, in the event someone out there with the right background wants to gather some empirical evidence (I've sketched a couple of starting points below). I'm not willing to invest the time because (1) I can't fix memory-management problems in OS X and (2) I can't improve LR. I assume Eric Chan or someone else at Adobe is doing this type of performance tuning, which is why so many of us are seeing a distinct improvement in the 4.2 release over the first customer ship. But every operating system fights the application developer to some extent, and there's often a limit to how much better performance you can wring out of a program without sacrificing features. (And I, for one, can't think of a lot of LR features I'd be willing to sacrifice.) At a certain point you either need to improve the efficiency of the operating system or upgrade the hardware. The former is slow and difficult, but the latter is fast and easy: just add money. And to the extent that memory is constrained, you usually don't need to add a lot of money.
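For anyone tempted to take up the DTrace suggestion, here are two classic starter one-liners. I haven't verified Lightroom's exact process name, so find it first and substitute it for "Lightroom" below:

    # find Lightroom's exact process name as the kernel sees it
    ps ax | grep -i lightroom

    # count the system calls LR makes, by call; Ctrl-C prints the totals
    sudo dtrace -n 'syscall:::entry /execname == "Lightroom"/ { @[probefunc] = count(); }'

    # sample LR's user stacks ~997 times/sec to see where its CPU time goes
    sudo dtrace -n 'profile-997 /execname == "Lightroom"/ { @[ustack()] = count(); }'

The syscall counts will hint at whether LR is hammering the pager or the disk; the stack samples will tell you where it's actually burning CPU.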