Probably not - because you haven't mentioned creating a device link or ICC profile with a gamut mapping tailored specifically to your source space or images. I'd guess you are using the generic gamut mapping created by whatever software created the ICC profiles you are using.
*I have profiled my camera using a ColorChecker Passport and X-Rite software, with a couple of different illuminants.
*I semi-periodically profile/calibrate my displays using the X-Rite i1Display Pro and i1Profiler software.
*I download printer profiles for my 3880 from my paper suppliers.
*I print from Adobe Lightroom after soft-proofing.
All of these assume that the color characteristics of my devices are essentially static (measure once, or a few times, and reuse afterwards).
It's a requirement that follows from the definition of (real) gamut mapping - mapping one gamut into another. That implies two gamuts, a source and a destination, if the mapping is to be accurate. As I was alluding to, while you can get away with assuming your images completely fill a smaller gamut space like sRGB or AdobeRGB, making the same assumption with a large gamut space (like ProPhoto, scRGB, L*a*b*, etc.) will produce extreme compression, and the result probably won't look very good when mapped to your printer space.
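A toy sketch of why the assumed source gamut matters. A real CMM compresses a full 3-D gamut boundary, not a single chroma axis, and the chroma limits below are made-up illustrative numbers, not measured gamuts - but the proportions show the effect:

```python
# Toy illustration: perceptual-style compression scales source chroma so
# that the assumed source-gamut limit lands on the destination limit.
# All limits below are hypothetical, chosen only to illustrate the point.

def compress_chroma(chroma, assumed_source_max, dest_max):
    """Linearly compress chroma so assumed_source_max maps to dest_max."""
    return chroma * dest_max / assumed_source_max

PRINTER_MAX = 90.0    # hypothetical printer limit along some hue
SRGB_MAX = 100.0      # hypothetical sRGB limit along the same hue
PROPHOTO_MAX = 180.0  # hypothetical ProPhoto limit along the same hue

pixel_chroma = 80.0   # an in-gamut sRGB color

# Source gamut assumed to be sRGB: mild compression.
print(compress_chroma(pixel_chroma, SRGB_MAX, PRINTER_MAX))      # 72.0

# Source gamut assumed to be ProPhoto: the same pixel is crushed to 40,
# even though the image never used the extra ProPhoto headroom.
print(compress_chroma(pixel_chroma, PROPHOTO_MAX, PRINTER_MAX))  # 40.0
```

The same in-gamut color loses half its chroma simply because the mapping was told to make room for ProPhoto colors that aren't in the image.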
So if what I am doing (which seems to be a pretty common way of doing things) is not (real) gamut mapping, then what am I doing, and what am I missing? If a patch of an image is numerically represented as RGB [12,42,255], the camera profile describes how the camera maps (perceptual models of) color to 3-channel readings, and the display/printer profile describes how the (post-calibration) display/printer maps 3-channel input to (perceptual models of) color, then what is left?
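The "source profile" half of that chain can be sketched numerically. The snippet below uses the published sRGB matrix as a stand-in for a device profile (a real camera profile has its own matrix or LUT, and the PCS would usually be Lab rather than raw XYZ), converting device RGB to the profile connection space:

```python
# Sketch of the device -> PCS half of an ICC transform, using the sRGB
# specification's linearization and D65 matrix as a stand-in profile.

def srgb_to_linear(c8):
    """Undo the sRGB tone curve for one 8-bit channel value."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(rgb):
    """Map 8-bit sRGB to CIE XYZ (D65) via the sRGB spec matrix."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

x, y, z = srgb_to_xyz((12, 42, 255))
print(f"XYZ = {x:.3f}, {y:.3f}, {z:.3f}")  # roughly 0.190, 0.090, 0.953
```

The destination profile then maps that PCS value back to printer/display channel values - and the gamut mapping question is precisely what happens in between when the PCS value falls outside the destination gamut.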
Makes sense to me. But that's not how things have evolved. Under the pressure of people demanding that "it just work", everyone in the chain has tried to make things look "really nice" without needing to judge or adjust or choose anything.
I always thought that sRGB was the "always just works" method, and that color management was an attempt to make things "right" at the expense of endless hair-pulling.
I generally roll my eyes when I come across yet another technical article about how to identify and then automatically tweak "key memory colors" like sky, grass, skin, etc. in images to make the result "more pleasing". But then this sort of tweaking has gone on forever - none of the photographic processes reproduced color faithfully; they all tweaked it (using chemical "algorithms") to enhance saturation, contrast and so on, so that people liked what they saw and it corresponded better to what they remembered. Much of the digital workflow has emulated all that.
I want my pipeline to be able to produce perceptually transparent reproductions of reality (or as close to that as 2-D static technology and my time and budget allow). Then I take the liberty of bluntly ignoring the "scientifically correct" answer whenever I feel like it, based on my conscious choice, not some faceless product developer's.
-h