No personal experience making lens profiles, or hacking profiles for one body+lens combo to work on some other combo. But I recall a long and detailed thread from the past. Can't remember where or when that was. I just remember that Eric Chan was participating.
The lesson was that lens profiles (for Adobe software) are definitely resolution dependent. The lens profile basically tells the software to move a pixel from coordinates X,Y to new coordinates X1,Y1. The distance the pixel moves depends on BOTH where it falls in the image AND the size of the image in total pixels.
A pixel in the center of the image moves very little or not at all. A pixel closer to the edge of the image moves a lot. This is easy to understand. We can see it in an un-adjusted wide-angle image. Distortion is greater toward the edges, and greatest toward the corners.
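To make that concrete, here's a toy radial model in Python. The cubic term and the coefficient are made up for illustration and are not Adobe's actual profile math; the point is just that displacement grows quickly with distance from the image center:

```python
# Illustrative barrel-distortion model (invented coefficient, not a real profile):
# a point at normalized radius r from the image center gets displaced to
# r * (1 + K1 * r**2), i.e. the displacement is K1 * r**3.
K1 = 0.08  # made-up distortion coefficient for demonstration only

def displacement(r):
    """Radial displacement (in normalized units) for a point at radius r."""
    return r * (1 + K1 * r**2) - r

center = displacement(0.0)  # point at the image center: no movement
edge   = displacement(0.7)  # point partway toward the edge: some movement
corner = displacement(1.0)  # point in the extreme corner: the most movement

print(center, edge, corner)
```

So the center doesn't move at all, and the corners move the most, which matches what we see in an uncorrected wide-angle shot.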
But consider two different sensors with identical physical sizes but significantly different pixel counts. For example, assume one is 1,000 pixels wide and the other is 2,000 pixels wide. On the lower pixel density sensor, a pixel at coordinates X,Y has to move 10 pixels up and 10 pixels to the right to arrive at the new coordinates X1,Y1.
On the sensor with greater pixel density, the point at the same physical spot would have to move twice as far, 20 pixels up and 20 to the right, to land at the corresponding relative position in the frame.
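A quick sketch of that arithmetic (the 1% figure is invented just to match the 10-pixel example above):

```python
# A correction stored as a fraction of image width turns into twice as many
# pixels on a sensor with twice the pixel count (assumed values for illustration).
correction_fraction = 0.01  # move this point 1% of the image width

for width_px in (1000, 2000):
    shift_px = correction_fraction * width_px
    print(f"{width_px}-px-wide sensor: shift of {shift_px:.0f} pixels")
```

Same lens, same physical point on the sensor, but the profile has to push the pixel twice as many pixel units on the denser body.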
I remember that, in that thread, someone (not Eric) pointed out that the Adobe lens profiling software simply isn't smart enough to map the absolute X,Y lens coordinates to the relative sensor pixel coordinates. That's why Adobe lens profiles are always restricted to a lens+body pair. That author was praising some other software (DXO, C1, ??) that could do such mapping, so that a single profile for a given lens could be used with any camera body.
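If I understood that point correctly, the resolution-independent approach would look something like this sketch: store the profile in normalized coordinates (center at 0,0, corner near 1) and only convert to pixels per body. The coefficient and function here are invented for illustration, not anybody's actual implementation:

```python
import math

K1 = 0.05  # made-up distortion coefficient for one hypothetical lens

def correct_point(x_px, y_px, width, height):
    """Map a distorted pixel to its corrected pixel on any sensor size."""
    cx, cy = width / 2, height / 2
    scale = math.hypot(cx, cy)  # half-diagonal, used to normalize coordinates
    xn, yn = (x_px - cx) / scale, (y_px - cy) / scale
    r2 = xn * xn + yn * yn
    factor = 1 + K1 * r2  # the same profile math for every body
    return cx + xn * factor * scale, cy + yn * factor * scale

# The same relative point on two sensors of the same physical size:
print(correct_point(900, 500, 1000, 667))
print(correct_point(1800, 1000, 2000, 1334))
```

Because the profile math only ever sees normalized coordinates, the same K1 serves a 1,000-pixel-wide body and a 2,000-pixel-wide one alike; only the final conversion back to pixels differs.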