I assume that the Mac has a utility like Windows' Task Manager that can show, in real time, the amount of free and used RAM. Keep it open while working and you'll get an idea of whether your apps and workflow are using all or most of your RAM.
The OS X utility is called Activity Monitor. However, unlike the Windows Resource Monitor, the current version of OS X does not display the kernel's counters for paging activity. It has a "Swap Used" metric, but that's a very blunt instrument. (OS X does include a sophisticated tracing framework called DTrace, which lets the user probe the kernel counters, but it has a moderately steep learning curve.)
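For occasional spot checks there is a middle ground between Activity Monitor and a full DTrace script: the command-line tool vm_stat, which prints the kernel's paging counters directly. Here is a minimal sketch that runs it and pulls out the pagein/pageout totals; the parsing format is an assumption based on vm_stat's usual "Name: value." output lines, so treat it as illustrative rather than definitive:

```python
import re
import shutil
import subprocess

def parse_vm_stat(text):
    """Turn vm_stat's 'Name:  12345.' lines into a dict of counter -> pages."""
    counters = {}
    for line in text.splitlines():
        # Some counter names are quoted (e.g. "Translation faults"), hence the "?
        m = re.match(r'"?([A-Za-z -]+)"?:\s+(\d+)', line.strip())
        if m:
            counters[m.group(1).strip()] = int(m.group(2))
    return counters

if shutil.which("vm_stat"):  # vm_stat exists only on OS X / macOS
    out = subprocess.run(["vm_stat"], capture_output=True, text=True).stdout
    stats = parse_vm_stat(out)
    print("Pageins:", stats.get("Pageins"))
    print("Pageouts:", stats.get("Pageouts"))
```

Running `vm_stat 1` in a terminal instead gives a new line of counters every second, which is closer to the live view that Resource Monitor provides on Windows.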
I personally don't put much stock in either operating system's low-resolution monitoring utilities, but at least the Microsoft one offers information that is directly relevant to determining whether more memory would be useful. The problem is that it's difficult to interpret what the display means unless you have so much physical memory installed that some of it is always free. Assuming you are
using all the physical memory in the machine, in a Microsoft environment it is possible to monitor the memory consumption of the process whose performance you want to optimize. The "commit charge" shows how much virtual memory (physical memory plus backing store) the process is using; in other words, the aggregate memory the process has requested. The "shareable working set" shows how much of the process's memory can be paged out if another process demands physical memory from the operating system. The "hard faults/sec" metric shows how often the monitored process forces a block to be read from the swap device into physical memory.
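If you'd rather script this than watch Resource Monitor, roughly the same per-process numbers are available from the Win32 API via GetProcessMemoryInfo. A hedged sketch follows: it inspects the current process rather than Lightroom, and note that the API's PageFaultCount lumps hard and soft faults together, unlike Resource Monitor's hard-faults column:

```python
import ctypes
import sys

# Win32 PROCESS_MEMORY_COUNTERS structure (DWORD fields are 32-bit).
class PROCESS_MEMORY_COUNTERS(ctypes.Structure):
    _fields_ = [
        ("cb", ctypes.c_uint32),
        ("PageFaultCount", ctypes.c_uint32),
        ("PeakWorkingSetSize", ctypes.c_size_t),
        ("WorkingSetSize", ctypes.c_size_t),
        ("QuotaPeakPagedPoolUsage", ctypes.c_size_t),
        ("QuotaPagedPoolUsage", ctypes.c_size_t),
        ("QuotaPeakNonPagedPoolUsage", ctypes.c_size_t),
        ("QuotaNonPagedPoolUsage", ctypes.c_size_t),
        ("PagefileUsage", ctypes.c_size_t),
        ("PeakPagefileUsage", ctypes.c_size_t),
    ]

def current_process_memory():
    """Query the Win32 API for this process's memory counters (Windows only)."""
    psapi = ctypes.WinDLL("psapi")
    kernel32 = ctypes.WinDLL("kernel32")
    pmc = PROCESS_MEMORY_COUNTERS()
    pmc.cb = ctypes.sizeof(pmc)
    psapi.GetProcessMemoryInfo(kernel32.GetCurrentProcess(),
                               ctypes.byref(pmc), pmc.cb)
    return pmc

if sys.platform == "win32":
    pmc = current_process_memory()
    print("working set (bytes):", pmc.WorkingSetSize)
    print("pagefile usage, i.e. commit (bytes):", pmc.PagefileUsage)
    print("page faults since start (hard + soft):", pmc.PageFaultCount)
```

PagefileUsage corresponds to the commit charge described above; WorkingSetSize is the total working set, without Resource Monitor's shareable/private split.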
But the hard part, whether you use the Microsoft Resource Monitor or probe the OS X kernel with DTrace, is figuring out how much demand-paging is too much for a particular process. Most modern application programs spawn multiple concurrent threads of control (in effect, mini-processes), some of which are born and die rather quickly and some of which persist for a long time. They all share the process's virtual address space by default (although the programmer can take steps to keep one thread from trampling another's data). Some threads have more influence than others on whether an application seems slow. Unless you are thoroughly familiar with the application's internals, I'm not sure any technical performance monitoring will be a more useful guide than experimenting with the things you can directly control, namely how many applications are running at the same time.
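For what it's worth, on the Unix side (which includes OS X) a process can at least ask the kernel about its own demand-paging history: getrusage reports cumulative major faults (those that had to wait on the disk) separately from minor ones. A small sketch; since it measures the Python interpreter itself, it only illustrates the counters, not Lightroom:

```python
import resource

usage = resource.getrusage(resource.RUSAGE_SELF)
# ru_majflt: page faults that required I/O from the paging device ("hard" faults)
# ru_minflt: page faults satisfied without I/O ("soft" faults)
print("hard faults:", usage.ru_majflt)
print("soft faults:", usage.ru_minflt)
```

Sampling these before and after the operation you care about gives a crude per-process paging rate without touching DTrace at all.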
Jeff Schewe tells us that Lightroom should be happy as long as it has access to 16 GB of physical memory (minus operating system overhead). So my feeling is that the best way to determine whether Lightroom would perform better with more RAM would be something along these lines:
- If you always have some free memory when Lightroom is running, you don't need more.
- Otherwise, if LR seems sluggish, try terminating other applications that may be competing with it for physical memory.
- If LR still seems sluggish after terminating inessential applications, add more memory and hope that the improvement will be sufficient.
I've done a fair amount of performance-tuning over the years, but that was mostly on large multiuser computers that were required to execute several (or many) application programs simultaneously and where I had tools at my disposal to modify the behavior of both the system and the application programs. For a single-user desktop or laptop, I just don't have any confidence that a technical approach will yield a better result than the simple algorithm above.