I’ve spent most of my career involved with costing and pricing low-volume products (1-100,000 units/yr). It is fraught with pitfalls. If pricing McDonald’s french-fries (sorry - chips), there are a host of things you can do to accurately price (and cost) the product. I was recently told the light bulb aisle at Home Depot is a $4M aisle per store per year. That’s roughly a customer per minute per store. All sorts of fun stuff you can learn about people’s utility for various things and the price elasticity of not only a product, but its individual features in a case like that.
When selling a few thousand of something per year it is very different. To some extent, the MF manufacturers use all three types of price discrimination. Although not technically a two-part tariff, the fact that you can trade in an old back for a reasonable credit (especially with P1) creates a similar economic effect, even if irrational: It’s expensive to get on the train, and now that I’m on I don’t want to get off! There is also the obvious feature-set difference between the IQ350 and 150. Combine all this with quantity discounts to high-volume institutional users and they have all the price discrimination boxes checked. The problem is still that volumes are low, so it is very difficult for HB or P1 to determine the real price elasticity of a product (if they lower the price, will the marginal revenue go up or down?); there are just too many other factors going on with too little volume to perform a meaningful test. This is especially true for young products.
As a result, pricing often defaults to a cost-plus model, or at least a hybrid that includes some sort of desired gross margin and (un)educated guesses about market dynamics.
You would think that at least costing would be an exact science, but it often isn’t. All sorts of overhead costs can be added as a percentage of material cost, and those costs are a jumbled mix of fixed and variable that change depending on the time horizon and the question being asked.
For example, P1 may assign warranty cost as a percentage of material cost (I have no idea if they do, but it is certainly done often). As one might guess, the warranty and other support costs for more complicated (higher-end) products are often higher than those of established, less “cutting-edge” products. I have no idea what sensors cost, but it doesn’t really matter for illustrating how costs can escalate when multipliers are applied to cover warranty support and other costs allocated on a material-cost basis:
IQ180 Cost (sensor only): $3,000 x 1.10 = $3,300
IQ150/350/50c Cost (sensor only): $4,500 x 1.12 = $5,040
IQ3100/100c Cost (sensor only): $6,000 x 1.15 = $6,900
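Just to make the overhead-loading step concrete, here is a quick sketch in Python. The costs and multipliers are the made-up numbers from my example above, not actual P1 figures:

```python
# Hypothetical sensor material costs and overhead multipliers (warranty, support,
# etc., allocated as a percentage of material cost). Illustrative numbers only.
sensor_tiers = {
    "IQ180": (3000, 1.10),
    "IQ150/350/50c": (4500, 1.12),
    "IQ3100/100c": (6000, 1.15),
}

def loaded_cost(material_cost, overhead_multiplier):
    """Material cost with overhead added as a percentage of material cost."""
    return material_cost * overhead_multiplier

for tier, (cost, mult) in sensor_tiers.items():
    print(f"{tier}: ${loaded_cost(cost, mult):,.0f}")
```

Notice how the absolute dollar gap between tiers widens: the multiplier applies a bigger overhead charge to a bigger material cost, so the high-end product gets hit twice.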
Now pile on top of that the completely rational thought that newer products with less-established competitive offerings should generate higher gross margins. Again, difficult to test and verify. But any change in margin results in a big difference in price, because gross margin is figured on price, not cost (price = cost / (1 - GM)). Let’s say 40% for established products, 45% for the middle tier, and 50% for new products:
IQ180 Price (sensor only): $3,300 @ 40% GM = $5,500
IQ150/350/50c Price (sensor only): $5,040 @ 45% GM = $9,164
IQ3100/100c Price (sensor only): $6,900 @ 50% GM = $13,800
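The margin step above can be sketched the same way. The key point is that a target gross margin divides the cost by (1 - GM) rather than multiplying by (1 + GM), which is why a few points of margin move the price so much (again, illustrative numbers, not real figures):

```python
# Cost-plus pricing to a target gross margin on price: price = cost / (1 - GM).
# A 50% GM doubles cost; a 50% markup-on-cost would only multiply it by 1.5.
def price_at_margin(loaded_cost, gross_margin):
    """Price that yields the target gross margin as a fraction of price."""
    return loaded_cost / (1 - gross_margin)

# Loaded costs and margins from the example above (hypothetical).
for cost, gm in [(3300, 0.40), (5040, 0.45), (6900, 0.50)]:
    print(f"${cost:,} @ {gm:.0%} GM -> ${round(price_at_margin(cost, gm)):,}")
```

Run it and you get the $5,500 / $9,164 / $13,800 figures above; moving from 40% to 50% GM alone stretches the divisor from 0.60 to 0.50.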
So, a sensor that costs only $3,000 more ends up with a price adder of more than $8,000 (again, sensor only). And that doesn’t count the added memory, faster processing required, heat management, allocated sunk costs for developing feature sets, etc.
How much of the price difference is associated with cost vs. market pricing? Only HB and P1 know, and even they may not really know!
Dave