Simulation modeling for cost estimation

BY RICHARD WATERMAN
Department of Statistics, Wharton School, University of Pennsylvania,
DONALD RUBIN
Department of Statistics, Harvard University,
NEAL THOMAS
Datametrics Research Inc.
ANDREW GELMAN
Department of Statistics, Columbia University.

Draft of June 9th 1999







Prepared for the Seventh Conference on Postal and Delivery Economics: Current Directions in Postal Reform. June 23-26, 1999, Sintra, Portugal.

The authors acknowledge the contribution of many individuals to both the design and construction of the model. Paul Kleindorfer and Michael Crew were instrumental to the design of the underlying econometric models. Without the inputs from the U.S. Postal Service, provided by Ross Bailey, John Reynolds and their staff, there would be no data on which to run the model. The members of the LINX DQS team turned the Postal data into formats amenable for input to the economic models. Many discussions with both the Postal Rate Commission and the General Accounting Office helped steer the modeling in relevant directions.


1.1 Introduction

What does a group of statisticians have to contribute to the subject of good costing practices, and in particular to the United States Postal Service (USPS) costing methodology? Credible cost estimates require the collection of high-quality information as component inputs. Deciding what information to collect lies in the province of the economist, but exactly how to collect that information, when to collect it, how much of it to collect, as well as a significant part of the overall evaluation of the quality of the information itself, lies in the province of the statistician.

Our involvement with the Postal Service costing dialog began as members of the LINX team on a large scale Data Quality Study (DQS) of Postal Service data inputs to the Postal Service rate making process. This study took place from June 1997 to April 1999 and is fully described in the Summary Report and four supporting Technical Reports.1 Though the study was broad in its approach, encompassing economic and statistical analyses as well as an Industry Survey, one component involved the construction of a simulation model to investigate a variety of questions, including the overall quality of specific marginal cost estimates and issues and concerns raised by intervenors during various Postal Service rate hearings. This paper describes the rationale for the simulation model, explains the key ideas on which it is founded, illustrates its use, and expands on some of the insights provided by the model.

In particular, one benefit of the simulation model approach is that it forces users to think hard about their assumptions and to focus on exactly what needs to be measured. It thereby provides a means of exploring conjectures and their consequences from different, even opposing, viewpoints.

1.2 The role of a simulation model within cost accounting systems

Accurate costing of products is an essential activity within any large company with a diverse product mix. It is a key requirement for identifying the organization's ultimate profitability. A diverse and complex product mix is likely to require an involved process to reveal individual product costs. In these circumstances it can be a major achievement simply to arrive at a product-level cost estimate. However, there is a second and even more demanding dimension to the cost estimation process: to ask how reliably (described in terms of precision and accuracy) those costs have been estimated. If we agree that it is important to estimate costs, then it is clearly equally important to quantify the quality of those cost estimates. Cooper and Kaplan (1991) discuss possible reasons for, and the impact of, measurement errors in cost management systems.

The simulation model is one way to approach this second-level question - the question that asks not simply ``how should we estimate costs?'' but adds the caveats ``how well have these costs been estimated?'' and ``what are the likely consequences of potential errors in the cost estimation process?''


1.3 The multi-product multi-driver firm

Because costs arise from a variety of sources, it is necessary to construct a cost formulation that incorporates a range of cost drivers. In the case of the Postal Service, there are nineteen separate cost segments (for example, Purchased Transportation, and Supervisors and Technical Personnel), which are further subdivided into 59 cost components.

In addition, the annual Cost and Revenue Analysis (CRA) presents attributable costs for numerous categories of mail and services. Though the CRA costs are based on accounting records, the accounts do not differentiate costs by class and subclass of mail. To provide this breakdown by mail class and subclass, additional sources of information must be utilized. These sources include large-scale multi-stage sample surveys, operating data systems, and special-purpose econometric studies. Data from these sources most often appear in (i) the distribution keys used to distribute the attributable cost, and (ii) the elasticities of accrued component cost with respect to the cost driver. Because of the diversity of the inputs to the cost calculations, it is extremely difficult to identify analytically the quality of the resulting cost estimates, even though that is a very legitimate question to ask. Further, one of the cost measures of interest to the Postal Service, the marginal cost estimate (Unit Volume Variable Cost in Postal Service parlance), is calculated by combining four multiplicative factors:

1. The accrued cost.
2. The elasticity of the accrued cost with respect to the cost driver.
3. The elasticity of the driver with respect to mail volumes (Distribution Key Share - DKS).
4. The mail volumes.

Bradley et al. (1993) and Panzer2 provide a detailed description of this product costing procedure.
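The propagation of uncertainty through this four-factor product can be illustrated with a small Monte Carlo sketch. All distributions and numbers below are illustrative assumptions, not actual Postal Service figures; the per-piece unit volume variable cost is formed here by multiplying the first three factors and dividing by volume.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of simulation draws

# Hypothetical inputs: each factor is drawn from a distribution
# reflecting its estimation uncertainty (illustrative figures only).
accrued_cost = rng.normal(1_000_000, 50_000, n)    # accrued component cost ($)
elasticity   = rng.normal(0.80, 0.05, n)           # cost w.r.t. driver
dks          = rng.normal(0.25, 0.02, n)           # distribution key share
volume       = rng.normal(10_000_000, 200_000, n)  # pieces of mail

# Per-piece unit volume variable cost for each simulated draw.
uvvc = accrued_cost * elasticity * dks / volume

print(f"mean UVVC:    {uvvc.mean():.4f} $/piece")
print(f"95% interval: [{np.percentile(uvvc, 2.5):.4f}, "
      f"{np.percentile(uvvc, 97.5):.4f}]")
```

The interval width, rather than the point estimate alone, is the object of interest: it summarizes how the component uncertainties combine in the final marginal cost estimate.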

An estimate of each of the four components must be derived, and it is not at all obvious how the uncertainty in each component relates to the uncertainty in the overall marginal cost estimate. One immediate practical application that results from measuring this uncertainty is that it enables the analyst to begin addressing the following question: ``if there were one million dollars to spend on improved information collection, where should those dollars be spent - on better elasticity estimates, distribution key shares, or volume estimates?'' Furthermore, the simulation model helps direct the analyst to the specific cost components (for example Delivery, Transportation or Mail Processing) where better component estimates would provide significantly better overall cost estimates.
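One way such a resource-allocation question might be addressed is sketched below: freeze each factor in turn at its mean and observe how much of the output variance disappears. The means and standard errors are invented for illustration; the factor whose removal eliminates the most variance is the natural candidate for improved data collection.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative (not actual) means and standard errors for the four factors.
factors = {
    "accrued cost": (1_000_000, 50_000),
    "elasticity":   (0.80, 0.05),
    "dks":          (0.25, 0.02),
    "volume":       (10_000_000, 200_000),
}

def simulate(frozen=None):
    """Draw the four factors, holding `frozen` at its mean,
    and return the simulated per-piece cost draws."""
    draws = {}
    for name, (mean, sd) in factors.items():
        draws[name] = mean if name == frozen else rng.normal(mean, sd, n)
    return (draws["accrued cost"] * draws["elasticity"] * draws["dks"]
            / draws["volume"])

base_var = simulate().var()
shares = {}
for name in factors:
    shares[name] = 1 - simulate(frozen=name).var() / base_var
    print(f"freezing {name:12s} removes {shares[name]:5.1%} of the variance")
```

With these invented inputs the distribution key share dominates, because its relative standard error is the largest; with real inputs the ranking is an empirical question, which is precisely what the simulation model is built to answer.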

1.4 The Data Quality Study

Information Technology is often cited as the key driver of current productivity increases. A sometimes overlooked component is the raw material of the IT system itself, that is, the data and information these systems work with. If the IT system is ideally a machine that constructs knowledge, then what the DQS examined was the sometimes less-than-glamorous, but clearly essential, raw material inputs to that machine. Simply put, without quality inputs there are unlikely to be quality outputs.

1.5 A neutral analytical tool

Value from an endeavor often arises indirectly, even serendipitously, and that appears to have been the case during the implementation of the simulation model. The reason value from the model may be gained indirectly is that the construction of an acceptable cost estimation model requires dialog concerning:

As such, the model may play the role of a rule book in a sporting contest. The players should agree on the rules a priori and accept the outcome. This is not to make the claim that the rules should be immutable; clearly, over time adaptation of the rules is a necessary consideration, but for any particular game they should be fixed.

Two examples follow that illustrate the manner in which the model led to potentially useful insights. The project had initially focused entirely on marginal cost estimates for the mail subclasses of interest. After considering results from the model, parties to the project began to focus attention also on relative marginal costs, which represented a major change in the main outcome measure of interest. Further, concern had been expressed over the consequences of a recent decrease in data collection resources in the core statistical sampling systems. The simulation model suggested that this was not of primary concern, because other components in the cost estimation process contributed more to the overall uncertainty of the cost estimates. This example illustrates how the simulation model offered the potential to focus attention on those parts of the process that were most influential with respect to the outcome of interest.
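Why relative marginal costs can be better determined than the marginal costs themselves is easy to see in a small sketch. When two subclasses share noisy inputs (here, an accrued cost pool and an elasticity estimate, with purely illustrative numbers), the shared uncertainty cancels in the ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two subclasses draw on the same accrued cost pool and the same
# elasticity estimate; only their distribution key shares differ.
# All figures are illustrative assumptions.
accrued = rng.normal(1_000_000, 100_000, n)  # shared, noisy
elastic = rng.normal(0.80, 0.08, n)          # shared, noisy
dks_a   = rng.normal(0.30, 0.01, n)
dks_b   = rng.normal(0.15, 0.01, n)

mc_a = accrued * elastic * dks_a  # volume-variable cost, subclass A
mc_b = accrued * elastic * dks_b  # volume-variable cost, subclass B

cv = lambda x: x.std() / x.mean()  # coefficient of variation

print(f"CV of marginal cost A: {cv(mc_a):.3f}")
print(f"CV of ratio A/B:       {cv(mc_a / mc_b):.3f}")
```

The ratio's coefficient of variation is driven only by the two distribution key shares, so it is considerably smaller than that of either individual cost estimate; this is the mechanism behind the shift in focus toward relative marginal costs noted above.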


References

M.D. Bradley, J.L. Colvin, and M.A. Smith. (1993). Measuring product costs for ratemaking: The United States Postal Service. In M.A. Crew and P.R. Kleindorfer, editors, Regulation and the Nature of Postal and Delivery Services. Kluwer Academic Publishers, Boston, MA.

R. Cooper and R.S. Kaplan. (1991). The Design of Cost Management Systems. Prentice Hall, Englewood Cliffs, NJ.


Footnotes

1 Available from http://www.usps.gov/clr/dqs.htm. A full description of the simulation model, its results and conclusions constitute Technical Report #3.
2 Testimony in Docket No. R97-1
