
The Accounting Review

VOL. XLV OCTOBER 1970 No. 4

The Use of Models in Information Evaluation

Gerald A. Feltham and Joel S. Demski

MUCH, if not most, accounting research is aimed at some facet of the general problem of determining what information should be supplied to a particular decision maker in a particular decision context. Broadly viewed, this is a choice (or decision) situation, and the research in question is concerned with ultimately discovering the "optimum" set of information for the particular decision setting.

The purpose of this paper is to present and discuss a model of the information choice situation. The first section presents a general framework, or model, for information evaluation. The second examines the nature of the operational model that would be constructed in order to apply the general framework in a specific setting. The third section then examines the nature of the model used to predict the decision maker's action selections and its relationship to the information evaluation model developed in the second section. Finally, the fourth section examines a number of recent research projects in terms of the information evaluation model.

I. A GENERAL FRAMEWORK FOR INFORMATION DECISIONS

The general framework presented here is founded in decision theory [50, 55]; and it particularly draws on the insights gained from the Theory of Teams as developed by Marschak and Radner [45] and the use of that material in earlier work by Feltham [29, 30] and Demski [23].

The information evaluation process is viewed in cost-benefit terms; and the cost-benefit, or value, calculation is developed from the point of view of an individual who decides what information to supply. We refer to this individual as the "information evaluator." He may not be the same person who receives the information and subsequently makes decisions based on that information; we refer to this latter person as the "decision maker."

Gerald A. Feltham and Joel S. Demski are Assistant Professor of Accounting and Associate Professor of Business Administration, respectively, at Stanford University.

An important point to recognize at the outset is that the information evaluator's model of his choice situation is dependent on his current levels of experience and information. That is, given his present levels of experience and information, he specifies what he perceives to be the relevant action alternatives, events, conditional returns, and probabilities in his specific choice situation. The important specification requirements are discussed below.

Decisions. As perceived by the information evaluator, the decision maker (or decision makers) in question must make a set of resource commitments, or decisions. We term such resource commitments action selections. Let a denote the action selected and A the set of (mutually exclusive) actions which the information evaluator perceives might be selected by the decision maker.¹ Note that a represents the action selected and not its implementation. The decision and the ultimate action will be identical only if there is perfect implementation.

Events. Let x denote the events which occur during the period affected by the action specified and let X denote the set of all possible events during that period. A current act cannot affect the past; therefore, these events occur entirely in the future. The events occurring prior to the action are denoted by x̄ and the set of all possible prior events is denoted by X̄. Both the selected action and the ultimate action are included in x.

Payoff. The information evaluator is assumed to have some preference over the events which may occur. The measure of that preference, denoted by u, is dependent on the events that actually occur and is represented by

(1) u = w(x).

We assume an expected value formulation here. That is, the measure is such that if the evaluator has two potential conditional probability distributions over the elements of X, he will always prefer the situation which produces the conditional distribution with the highest expected payoff.

Prediction of Future Events. The information evaluator is assumed to have a conditional probability distribution over the possible future events for each action that may be specified by the decision maker. Let φ(x/a) denote the probability that event x will occur given that action a has been selected by the decision maker. Clearly, then, the expected payoff from a particular act at this point is

(2) E(u/a) = Σ_{x∈X} w(x)φ(x/a)
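To make equation (2) concrete, here is a minimal Python sketch for an invented two-event, two-action setting; the event labels, payoffs, and probabilities are hypothetical and simply stand in for w(x) and φ(x/a).

```python
# Equation (2): E(u/a) = sum over x in X of w(x) * phi(x/a).
# All names and numbers below are hypothetical.

w = {"low_demand": 100.0, "high_demand": 400.0}                  # payoff w(x)

phi_x_given_a = {                                                # phi(x/a)
    "produce_small": {"low_demand": 0.7, "high_demand": 0.3},
    "produce_large": {"low_demand": 0.4, "high_demand": 0.6},
}

def expected_payoff(a):
    return sum(w[x] * p for x, p in phi_x_given_a[a].items())

for a in phi_x_given_a:
    print(a, expected_payoff(a))      # 190.0 and 280.0
```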

Information. While the decision maker could make a decision before receiving any additional information, we are interested in predicting the value of alternative information systems which will supply him with additional information before the terminal decision is made. Let η denote a particular information system and y the signals emitted by the information system before the decision. The set of all such signals is denoted by Y.

Different signals may result in different decisions, and, therefore, the information evaluator must develop a conditional probability distribution over the future events for each possible signal. Let φ(x/y, η, a) denote the conditional probability that event x will occur given that signal y is generated by information system η and action a is selected. The conditional probability that the signal generated by information system η will be y is denoted by φ(y/η).

Calculation of the Probabilities. The various probability distributions must be "consistent." Raiffa and Schlaifer [50] state that there are three basic methods for generating these probabilities: (1) direct assignment of the joint measure to X × Y for a given information system; (2) direct assignment of a marginal measure to X and a conditional measure to Y for every x in X; and (3) direct assignment of a marginal measure to Y and a conditional measure to X for every y in Y. "The decision maker will wish to assess the required measures in whatever way allows him to make the most effective use of his previous experience." [50, p. 5]

1 The term "action" is used in the singular here, but it may represent a large number of activities. In fact, in Section II a represents a vector of activity levels.

The approach adopted in this section is similar to the third approach.² The information evaluator is assumed to have three basic probability distributions: (1) a prior distribution over the past events, φ(x̄); (2) a conditional distribution over the future events given the past events and the action specified, φ(x/x̄, a); and (3) a conditional distribution over the signals that will be received given the past events and the information system used, φ(y/x̄, η). (Observe that if the information system is precise, this will imply a function, y = η(x̄).) The required distributions are calculated as follows:

(3) φ(x/a) = Σ_{x̄∈X̄} φ(x/x̄, a)φ(x̄)

(4) φ(y/η) = Σ_{x̄∈X̄} φ(y/x̄, η)φ(x̄)

(5) φ(x/y, η, a) = Σ_{x̄∈X̄} φ(x/x̄, a)φ(x̄/y, η)
                = Σ_{x̄∈X̄} φ(x/x̄, a)[φ(y/x̄, η)φ(x̄)/φ(y/η)]
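The consistency computations in (3)-(5) amount to marginalizing over the past events and applying Bayes' rule. The Python sketch below carries them out for an invented two-state example; every distribution, label, and function name is hypothetical.

```python
# Equations (3)-(5): deriving the consistent distributions from the three basic
# ones.  x_bar: past events; x: future events; y: signals.  Hypothetical values.

phi_xbar = {"stable": 0.6, "volatile": 0.4}                      # phi(x_bar)

phi_x_given_xbar_a = {                                           # phi(x / x_bar, a)
    ("stable", "a1"):   {"x_lo": 0.8, "x_hi": 0.2},
    ("volatile", "a1"): {"x_lo": 0.3, "x_hi": 0.7},
}

phi_y_given_xbar_eta = {                                         # phi(y / x_bar, eta)
    ("stable", "eta1"):   {"y1": 0.9, "y2": 0.1},
    ("volatile", "eta1"): {"y1": 0.2, "y2": 0.8},
}

def phi_x_given_a(x, a):                                         # equation (3)
    return sum(phi_x_given_xbar_a[(xb, a)][x] * p for xb, p in phi_xbar.items())

def phi_y_given_eta(y, eta):                                     # equation (4)
    return sum(phi_y_given_xbar_eta[(xb, eta)][y] * p for xb, p in phi_xbar.items())

def phi_x_given_y_eta_a(x, y, eta, a):                           # equation (5)
    total = 0.0
    for xb, p in phi_xbar.items():
        post_xbar = phi_y_given_xbar_eta[(xb, eta)][y] * p / phi_y_given_eta(y, eta)
        total += phi_x_given_xbar_a[(xb, a)][x] * post_xbar
    return total

print(phi_x_given_a("x_hi", "a1"))                       # prior prediction of x_hi
print(phi_x_given_y_eta_a("x_hi", "y2", "eta1", "a1"))   # revised after signal y2
```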

Decision Rules. The information evaluator must also predict the relationship between the signals generated by the information system and the action ultimately selected by the decision maker. If he is the decision maker, he will select the action which maximizes the expected payoff, given the signals, i.e.,

(6) E*(u/y, η)³ = max_{a∈A} Σ_{x∈X} w(x)φ(x/y, η, a)

The solution of (6) establishes a functional relationship between the signals generated and the action selected. Denote this by a = α(y, η).
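A minimal sketch of this decision rule: α(y, η) is simply the action in A with the largest conditional expected payoff under the posterior of equation (5). The signature below is an assumption; w and phi_post stand for the payoff function and posterior discussed above.

```python
# alpha(y, eta): the action that maximizes the conditional expected payoff of
# equation (6).  w(x) and phi_post(x, y, eta, a) are supplied by the caller,
# e.g. by the equation (5) computation sketched earlier.

def alpha(y, eta, A, X, w, phi_post):
    def conditional_expected_payoff(a):
        return sum(w(x) * phi_post(x, y, eta, a) for x in X)
    return max(A, key=conditional_expected_payoff)
```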

If the decision maker is a separate individual, he will base his action selections on his own decision model. And if he selects the action which maximizes his expected payoff, we may denote his choice process as:

(7) Max_{a∈Â} [ Σ_{x∈X} ŵ(x)φ̂(x/y, η, a) ]

where, of course, ŵ(x) is the decision maker's payoff function, φ̂(x/y, η, a) is his prediction of the events given the signals received from the specified information system and action selected, and Â is the set of feasible actions perceived by the decision maker. In this case, the functional relationship a = α(y, η) represents the information evaluator's prediction of the results of the decision maker's model given in (7).⁴

In fact, the evaluator might obtain this function by constructing his prediction of the decision maker's model,

(7a) Max_{a∈Â′} [ Σ_{x∈X} ŵ′(x)φ̂′(x/y, η, a) ]

where each of the items in this model represents the information evaluator's predictions of the corresponding items in the decision maker's model.

2 The specific method employed to develop these necessary distributions is largely irrelevant, except that a "total" information-decision model would provide for determination of the optimum method. Since such refinement is not essential to our discussion, we adopt the expositionally convenient assumption that the optimum method is to rely on the conditional distributions specified in equations (3), (4), and (5).

3 In this case, since the evaluator and decision maker are the same individuals, A may also be interpreted as the set of actions which the information evaluator assumes to be feasible.

4 A more general approach is to permit this relationship to be probabilistic. That is, instead of relying on a = α(y, η) we might employ a = α(y, η, μ) where μ is some random variable. Also observe that, strictly speaking, in the case where the decision maker and evaluator are different individuals, α(y, η) collapses to α(y) because any communication of the information system employed will be accomplished through the signals y. However, we leave this distinction to the reader because its formal inclusion would unduly increase our notational burden.


Thus, solution of (7a) establishes α(y, η) for the case where the evaluator and decision maker are different individuals; and, in the ideal situation, (7a) and (7) are identical.

Information System Selection. Selection of the information system requires recognition of varying information system costs. Let w′(y, η) denote the cost of operating information system η when it generates signal y. It is assumed that the net payoff may be expressed as u′ = w(x) − w′(y, η). Then, given α(y, η) as determined by (6) or (7a), the expected net payoff for a given information system, as perceived by the information evaluator, is:

(8) E(u′/η) = Σ_{y∈Y} [ Σ_{x∈X} w(x)φ(x/y, η, α(y, η)) − w′(y, η) ] φ(y/η)

Equation (8) thus provides the basis for viewing two related information issues in cost-benefit terms. First, if the information evaluator specifies an exhaustive list of alternative information systems (say η₁, ..., η_m), then the ηᵢ which maximizes (8) is the optimum information system. Second, if the information evaluator is concerned with marginal system improvement, then E(u′/η₂) − E(u′/η₁) is the expected change in net payoff if he moves from information system η₁ to η₂.
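Both uses of equation (8) reduce to computing E(u′/η) for each candidate system. The sketch below mirrors (8) directly; the argument names are assumptions, and the callables (payoff, cost, posterior, signal distribution, decision rule) are supplied by the specific application.

```python
# Equation (8): expected net payoff of information system eta, given the
# predicted decision rule alpha(y, eta), posterior phi_post(x, y, eta, a),
# signal distribution phi_signal(y, eta), and operating cost w_cost(y, eta).

def expected_net_payoff(eta, Y, X, w, w_cost, phi_post, phi_signal, alpha):
    total = 0.0
    for y in Y:
        a = alpha(y, eta)                                   # predicted action choice
        gross = sum(w(x) * phi_post(x, y, eta, a) for x in X)
        total += (gross - w_cost(y, eta)) * phi_signal(y, eta)
    return total

def best_system(systems, **kw):
    # First use of (8): the eta that maximizes the expected net payoff.
    return max(systems, key=lambda eta: expected_net_payoff(eta, **kw))

def marginal_value(eta_1, eta_2, **kw):
    # Second use of (8): expected change in net payoff from eta_1 to eta_2.
    return expected_net_payoff(eta_2, **kw) - expected_net_payoff(eta_1, **kw)
```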

The entire evaluation process is depicted in Figure 1, where we observe the following somewhat sequential process: specification of a particular information system, η, results in a set of signals, y, being supplied to the decision maker; the decision maker may then use the resulting information in selecting his action, a; and this action may determine, in part, the events, x, of the subsequent period. The information evaluator must predict the relationships between each of the above elements: the signal generation process, φ(y/η); the decision maker's prediction and action choice process, α(y, η); and the relationship between the actions selected and the events which will occur, φ(x/y, η, a). In addition, he must predict the gross payoff, w(x), he will derive from the events of the subsequent period as well as the cost of operating the particular information system, w′(y, η).

II. MODELS FOR INFORMATION DECISIONS

The previous section outlined the general framework of an information evaluation model. In this section we examine the nature of the specific model the information evaluator might construct to implement that general theoretical model. Consider the following example:

Example. The information evaluator wishes to evaluate certain information that might be used by a decision maker concerned with the production and marketing of two products. The price, as well as the production quantity, must be established for product 1. The price will affect the average demand, but the demand that will occur is uncertain. If demand exceeds the units available, the excess demand will be satisfied by obtaining units from an outside source; excess units will be inventoried. The price for the second product has been advertised, but the resulting demand is uncertain. Demand in excess of available units will be lost and may, if significant and repetitive, have a detrimental effect on future demand.

The production and marketing of these products requires a number of inputs, including materials, supplies, utilities, machines, plant and warehouse space, and labor (administrative, sales, supervisory, and production).

5 A multiple period model (see [29, 30]) is probably a better representation if the current action or information system changes affect the payoff in future periods. While these conditions usually exist, our analysis can be sufficiently explained using the single period format outlined above.


[Figure 1. Information Evaluation Process: an information system η produces resultant signals y; these feed the (information evaluator's prediction of the) decision maker's prediction and choice process α(y, η), yielding a resultant action a and, in turn, resultant events x. The evaluator supplies his prediction of the signal generation process φ(y/η), his prediction of future events φ(x/y, η, a), and his (net) payoff function.]

The department has a labor force to which it is committed, but additional temporary labor may be hired at the start of each period. Available machine hours and warehouse space are limited; and the number of machine hours available is reduced by the hours required for maintenance.

Decisions. In many models the decision is stated as a vector of controllable variables and a specific decision is represented by specific magnitudes for each of these decision variables. In our example, this vector might be a = (Q₁, Q₂, P₁, H) where Qᵢ, i = 1, 2, denotes the planned production of product i, P₁ the net selling price of product 1, and H the number of temporary labor hours hired.⁶

Decision models seldom represent as controllable variables all actions that must be taken. Some actions are assumed to be implied by the controllable variables that are included. For example, if the raw material purchased and used is assumed to equal the quantity required by the production levels specified, the purchasing activity may be ignored except for appropriate inclusion of the cost of materials in the production costs. Other actions are excluded because they are under the control of someone other than the decision maker in question or because they are assumed to be independent of the actions currently being determined. For example, acquisition of additional machines may be excluded from the product mix decision because the information evaluator perceives that this decision is under the control of some other decision maker or that the decision maker in question does not consider the possibility of acquiring additional equipment at the time he makes the scheduling decision.

6 In other cases, the model may be changed directly for each alternative. This will usually occur when there are a limited number of alternatives and the alternatives are different in nature, e.g., alternative production facilities.


TABLE 1

THE PAYOFF FUNCTION

(1.1) r = F(a; θ) = R(a; θ) − C(a; θ)

where R(a; θ) represents the net revenue from sales plus the value of the ending inventory less the cost of demand exceeding units available for sale, and C(a; θ) represents the cost of the inputs used in production. These two functions are defined as follows:

(1.2) C(a; θ) = c₁Q₁″ + c₂Q₂″ + c₃H

(1.3) Qᵢ″ = Qᵢ − ΔQ*,  i = 1, 2

(1.4) ΔQ* = min ΔQ, ΔQ ≥ 0, such that m₁(Q₁ − ΔQ) + m₂(Q₂ − ΔQ) ≤ mM − kK

(1.5) R(a; θ) = R₁(a; θ) + R₂(a; θ)

(1.6) R₁(a; θ) = P₁z₁ + v₁(I₁ + Q₁″ − z₁)   if I₁ + Q₁″ ≥ z₁
               = P₁z₁ − s₁(z₁ − I₁ − Q₁″)   if I₁ + Q₁″ < z₁

(1.7) R₂(a; θ) = P₂z₂ + v₂(I₂ + Q₂″ − z₂)          if I₂ + Q₂″ ≥ z₂
               = P₂(I₂ + Q₂″) − s₂(z₂ − I₂ − Q₂″)   if I₂ + Q₂″ < z₂

Notation:
cᵢ = cost of material and variable overhead per unit of product i produced, i = 1, 2
Qᵢ″ = the quantity of product i actually produced, i = 1, 2
c₃ = cost of one hour of temporary labor
ΔQ* = reduction in planned production due to insufficient machine hours
mᵢ = number of machine hours required to produce one unit of product i, i = 1, 2
m = maximum number of hours available per machine
k = number of hours required to perform maintenance on one machine
M = maximum number of machines available
K = number of machines requiring maintenance
P₂ = average selling price less variable selling cost per unit of product 2
zᵢ = demand for product i, i = 1, 2
vᵢ = value of each unit of ending inventory of product i, i = 1, 2
sᵢ = cost (of product i) of each unit of demand in excess of units available for sale, i = 1, 2
Iᵢ = number of units of opening inventory of product i, i = 1, 2
θ = (c₁, c₂, c₃, m₁, m₂, m, k, M, K, P₂, z₁, z₂, v₁, v₂, s₁, s₂, I₁, I₂)

(Qᵢ″ and ΔQ* are dependent variables; they are merely intermediaries in the functional relationship.)

As before, the information evaluator must predict the relationship between the information and the decision maker's choice. The precise nature of this relationship, denoted by a = α(y, η), is discussed in the third major section of the paper.

Events and the Payoff Function. Most models avoid detailed event specification by expressing the payoff as a function of the controllable variables and the magnitude of a number of parameters. Let θ = (θ₁, ..., θ_m) denote this vector of parameters. The functional relationship is expressed as:

(9) r = F(a; θ)

where r is the predicted payoff. Table 1 presents the functional relationship for our example.
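The following Python sketch implements the Table 1 payoff function, assuming the reconstruction of (1.1)-(1.7) shown in that table (in particular, the shortage branches of (1.6) and (1.7) as written there); the parameter values in the example call are invented.

```python
# F(a; theta) for the example, per Table 1 as reconstructed above.
# a = (Q1, Q2, P1, H); th carries the parameters listed in the table.

def payoff(Q1, Q2, P1, H, th):
    # (1.4): equal cutback of both products if planned production exceeds
    # the net machine hours available, m*M - k*K.
    hours_needed = th["m1"] * Q1 + th["m2"] * Q2
    hours_avail = th["m"] * th["M"] - th["k"] * th["K"]
    shortfall = max(0.0, hours_needed - hours_avail)
    dQ = shortfall / (th["m1"] + th["m2"]) if shortfall > 0 else 0.0
    Q1p, Q2p = Q1 - dQ, Q2 - dQ                      # (1.3): actual production

    # (1.2): production cost.
    C = th["c1"] * Q1p + th["c2"] * Q2p + th["c3"] * H

    # (1.6): product 1 -- excess demand filled from an outside source at cost s1.
    avail1 = th["I1"] + Q1p
    if avail1 >= th["z1"]:
        R1 = P1 * th["z1"] + th["v1"] * (avail1 - th["z1"])
    else:
        R1 = P1 * th["z1"] - th["s1"] * (th["z1"] - avail1)

    # (1.7): product 2 -- excess demand is lost and penalized at s2.
    avail2 = th["I2"] + Q2p
    if avail2 >= th["z2"]:
        R2 = th["P2"] * th["z2"] + th["v2"] * (avail2 - th["z2"])
    else:
        R2 = th["P2"] * avail2 - th["s2"] * (th["z2"] - avail2)

    return R1 + R2 - C                               # (1.1)

# Example call with hypothetical parameter values:
theta = dict(c1=4.0, c2=5.0, c3=3.0, m1=0.5, m2=1.0, m=160.0, k=8.0, M=10.0,
             K=2.0, P2=12.0, z1=60.0, z2=80.0, v1=2.0, v2=1.5, s1=6.0, s2=7.0,
             I1=5.0, I2=4.0)
print(payoff(Q1=50.0, Q2=70.0, P1=10.0, H=20.0, th=theta))
```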

This function should include specification of what happens when the selected actions are infeasible. Since the decision maker's model is dependent on his prior information and since it is also likely to be a simplified reflection of the actual situation, as perceived by the decision maker, we must regard the typical model as being less than perfect. As a consequence, it is possible that the decision maker's (imperfect) model may induce him to attempt to implement an infeasible action. And since different information alternatives may induce different infeasibility selections, it follows that the information evaluator should recognize the consequence of such selections. In our example, the possibility that planned production may require more machine hours than are available is reflected in the model. The information evaluator predicts that the production personnel will cut back the production of both products equally if there is insufficient machine time available. (See (1.3) and (1.4).)

On the other hand, the example does not make any provision for planned production requiring more labor hours or more warehouse space than are available. This may reflect the fact that the decision maker's model includes constraints on these resources and the information evaluator believes that the decision maker's predictions are sufficiently accurate (or conservative) to induce production level selections that will not exceed these limitations. Alternatively, the exclusion may represent a model simplification which the information evaluator believes is "justified."

As in our example, most mathematical models do not attempt to include detailed predictions of individual events. The parameters represent aggregations of events and average relationships which the information evaluator hopes are suitable approximations of reality. At the same time, computational cost usually requires that the information evaluator use functional relationships which are simpler than he perceives to be accurate. Linearity assumptions are particularly common. For instance, the production cost in our example is assumed to vary linearly with the production level of each product. The information evaluator is either unaware of or is ignoring any externalities, economies of scale, or other complexities that may exist. Furthermore, even if nonlinearities are recognized, they usually take a simple form, such as a quadratic or exponential function.

The use of opportunity costs in place of a more complete model is also common. This simplification is related to the exclusion of certain decisions as outlined earlier. For instance, our example only considers one department; if this department consumes scarce resources also consumed in other departments, the opportunity cost of these resources must be included in the production cost. This opportunity cost may not, in fact, be linear.

Opportunity costs also arise when current actions have an impact on the payoff beyond the time horizon of the model. In our example, the one period formulation is a simplification; ending inventories and unsatisfied demand affect the payoffs of future periods. In order to reflect this, the model includes an opportunity value for the ending inventory and an opportunity cost for the unsatisfied demand. However, these amounts are difficult to predict without constructing the appropriate multiple period model; and they may not, as the model assumes, be linear.

Another obvious simplification is that, except for the infeasibility case, perfect implementation is assumed. Behavioral factors which might cause differences between the selected action and the implemented action are ignored.⁷

Parameter Prediction. Parameter prediction is represented by φ(θ/y, η, a), a conditional probability distribution over the set of possible parameter magnitudes (denoted by Θ) given the information sent to the decision maker, the information system used, and the selected action. The parameter prediction for our example is given in Table 2; it illustrates a number of typical simplifications.

1. The prediction of θᵢ is independent of a. In the example, the demand for product 1 depends on its price, but the prediction of all other parameters is independent of the selected actions.

2. The prediction of θᵢ is based entirely on prior information and is, therefore, independent of y. In the example, only the prediction of the number of machines that will require maintenance is affected by the proposed additional information. (This relationship is developed in Table 3.)

3. The prediction of θᵢ is independent of the predicted magnitude of θⱼ, j ≠ i. In the example, the prediction of both the total number of machines available and the number of machines that will require maintenance depend on the number of available machines in each age group, but the predictions of all other parameters are independent of each other.

7 See Bonini [11] and Demski [21] for attempts to model some of these complexities in information choice situations.


TABLE 2

PARAMETER PREDICTIONS

φ(θ/y, η, a) = φ(z₁/P₁) φ(z₂) φ(K/y, η)   if the magnitude of each deterministic parameter is equal to its predicted magnitude;
             = 0                           otherwise.

φ(z₁/P₁) = [(d − P₁)/Δd]^{z₁} e^{−(d−P₁)/Δd} / z₁!   where d and Δd are parameters of a linear demand function, i.e., average demand = (d − P₁)/Δd.

φ(z₂) = z̄₂^{z₂} e^{−z̄₂} / z₂!   where z̄₂ is the average demand.

φ(K/y, η) = Σ_{b∈B} φ(K/M₀₁, ..., M₀T′, b) φ(M₀₁, ..., M₀T′) φ(b/y, η)   where M₀τ is the number of machines of age τ that will be used (this is prior information) and b is an unknown parameter of the distribution. Table 3 develops the details of this prediction.

The information evaluator specifies the magnitude of the deterministic parameters (c₁, c₂, c₃, m₁, m₂, m, k, M, P₂, v₁, v₂, s₁, s₂, I₁, and I₂) and the magnitudes of the parameters of the probability distributions (d, Δd, and z̄₂) on the basis of his prior experience and information. (Observe that M = Σ_{τ=1}^{T′} M₀τ, where T′ is the number of age classes employed in the age distribution description.)


4. The magnitude of θᵢ is predicted with certainty. In the example, all parameters except the number of machines that will require maintenance and the demand for products 1 and 2 are assumed to be deterministic.

5. A standard probability distribution (such as the Poisson, Normal, or Binomial) is employed if the prediction is uncertain. In the example, the predicted demand for both products is represented by a Poisson distribution and the predicted number of machines that will require maintenance is based on a Binomial distribution.

These simplifications are intended to reduce the complexity of the model. The model must, however, be sufficiently rich to handle the information questions posed; and the probability distributions must be consistent for the different information and information systems considered. Consistency is obtained by using the computations outlined in equations (4) and (5).

These equations are used in Table 3 to predict the number of machines that will require maintenance. The probability that a given machine will require maintenance during a given period is assumed to be dependent on its age, but independent of prior maintenance and the maintenance requirements of other machines. The information evaluator has a prior distribution over the rate at which this probability increases with age; and additional information about the magnitude of that rate is to be evaluated. Two information systems, η¹ and η², are considered; both accurately report the total number of machines which required maintenance during the previous T periods, but only η¹ will report their respective ages.⁸
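A compact sketch of that prediction, assuming the structure developed in Table 3: an age-dependent maintenance probability p_τ^b = 1 − e^{−bτ}, a discrete prior over b, and a posterior conditioned either on the age-detailed history (η¹) or only on the period totals (η²). The history, the grid of b values, and the current age profile are invented, and the Binomial coefficient is written out explicitly.

```python
from itertools import product
from math import comb, exp

# Age-dependent maintenance probability: p_tau(b) = 1 - exp(-b * tau).
def p(tau, b):
    return 1.0 - exp(-b * tau)

def binom_pmf(k, n, prob):
    return comb(n, k) * prob**k * (1 - prob)**(n - k)

# Hypothetical history: M[t][tau] machines of age tau used in past period t,
# K[t][tau] of them requiring maintenance; and a hypothetical prior over b.
M = [{1: 10, 2: 8}, {1: 9, 2: 9}]
K = [{1: 1, 2: 2}, {1: 0, 2: 3}]
prior_b = {0.05: 0.3, 0.10: 0.4, 0.20: 0.3}

def posterior_b(eta):
    post = {}
    for b, pb in prior_b.items():
        if eta == "eta1":                      # age detail reported
            like = 1.0
            for Mt, Kt in zip(M, K):
                for tau in Mt:
                    like *= binom_pmf(Kt[tau], Mt[tau], p(tau, b))
        else:                                  # eta2: only period totals reported
            like = 1.0
            for Mt, Kt in zip(M, K):
                total = sum(Kt.values())
                taus = sorted(Mt)
                # Sum the age-detailed likelihood over all splits of the total.
                like_t = 0.0
                for split in product(*(range(Mt[tau] + 1) for tau in taus)):
                    if sum(split) != total:
                        continue
                    term = 1.0
                    for tau, k in zip(taus, split):
                        term *= binom_pmf(k, Mt[tau], p(tau, b))
                    like_t += term
                like *= like_t
        post[b] = like * pb
    norm = sum(post.values())
    return {b: v / norm for b, v in post.items()}

# Expected number of current machines needing maintenance, for a hypothetical
# current age profile M0, under each information system.
M0 = {1: 12, 2: 6}
for eta in ("eta1", "eta2"):
    post = posterior_b(eta)
    expected_K = sum(pb * sum(M0[tau] * p(tau, b) for tau in M0) for b, pb in post.items())
    print(eta, round(expected_K, 2))
```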

Expected Payoff. The expected payoff (excluding the cost of the information system) for a particular information system is

(10) Σ_{y∈Y} [ Σ_{θ∈Θ} F(α(y, η); θ) φ(θ/y, η, α(y, η)) ] φ(y/η)

8 Consistency between the predictions for each system is obtained by basing the distributions on the same prior events: the number of machines used and the number of machines repaired in each age group in each of the previous T periods. (The number of machines used was prior information.)

Finally, observe the relationship between the conceptual model of the information system choice process in equation (8) and the simplified model the information evaluator constructs in equation (10). The entire evaluation process remains as depicted in Figure 1, but in a simplified form. Specifically, the information evaluator attempts to predict, with "sufficient" accuracy, the signals that will be produced by the information system and the prediction-choice process of the decision maker. Then he assimilates these outcomes with an approximation of the "true" payoff function in equation (8). That is, instead of resorting to a detailed action-event-payoff specification, the information evaluator approximates the payoff by focusing on (some of) the decision maker's actions and subsumes the event issues with an "appropriate" set of parameters. Clearly, then, the information evaluator's decisions, based on the model in (10), are subject to error. Hence, control of the information evaluation process becomes an issue. This is, of course, a fact common to all decision situations.

Our next task is discussion of the decision maker's prediction-choice process.

III. THE DECISION FUNCTION: A MATHEMATICAL MODEL

The information evaluation model discussed in the previous section contains a function, α(y, η), which predicts the decision maker's selection process. This formulation is sufficiently general to handle (heuristic) decisions made by rules of thumb as well as those based on formal models. We are primarily interested in the case where the decision maker uses the latter approach, and the information evaluator constructs a "parallel" model to predict the results of the decision maker's selection process. This latter model, which we will refer to as the "decision model," is an operational version of (7a).

The expected payoff (given the information, the information system, and the selected action) in the decision model may be the same as, or similar to, that in the information evaluation model (see (6)). Differences may arise, however, because the information evaluator and the decision maker have different prior opinions about the system or because they face different cost-benefit issues with respect to their modeling. For example, while a sophisticated information evaluation model may be too costly to use for making the day-to-day decisions, it may be suitable for periodic evaluation of the information used in those decisions. Alternatively, the decision model may have to be simpler than both the information evaluation model and the decision maker's model because it must be solved for every information set that may be generated. (The information evaluation model requires calculation of the expected payoff for each information set, but does not require optimization with respect to each set. And the decision maker's model need only be optimized for the information set actually received.)

The decision model for our example is given in Table 4. Relevant details are explored below.

Decisions. The decision, or controllable, variables are the variables whose magnitude the decision maker must determine; they should correspond to the decision variables, a, included in the information evaluation model. The previous discussion of simplifications made with respect to the decisions included in a model also applies here.¹¹

10 If the model is simple enough, the expected payoff may be calculated directly. However, in many cases it will be too complex for direct calculation to be practical. An alternative procedure is to use simulation, i.e., determine specific magnitudes for the various random variables using the Monte Carlo technique, compute the payoff for these specific magnitudes, and repeat until a suitable sampling distribution is determined.
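The simulation alternative mentioned in footnote 10 can be sketched as follows: sample parameter vectors from the relevant conditional distribution and average the resulting payoffs. The function names are assumptions; the sampler and the payoff function F are supplied by the model at hand.

```python
import random
import statistics

def monte_carlo_expected_payoff(a, draw_theta, F, n=10_000, seed=0):
    """Estimate E[F(a; theta)] by Monte Carlo sampling.

    draw_theta(rng) returns one theta drawn from the conditional distribution
    of the parameters; F(a, theta) is the payoff function of the model.
    """
    rng = random.Random(seed)
    samples = [F(a, draw_theta(rng)) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples) / n ** 0.5
```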


TABLE 3

MACHINE MAINTENANCE-PARAMETER PREDICTION

Prior Events (x̄)

M_{tτ} = number of machines of age τ (τ = 1, ..., T′) used in the t-th (t = 1, ..., T) period prior to the current decision.
K_{tτ} = number of machines of age τ (τ = 1, ..., T′) which required maintenance in period t (t = 1, ..., T).

Basic Probabilities

p_τ^b = (1 − e^{−bτ})   The probability that a machine of age τ required or will require maintenance during a given period, where b is the magnitude of an unknown, but constant, parameter. Let B denote the set of possible magnitudes for that parameter.

φ(K_{tτ}/M_{tτ}, b) = (p_τ^b)^{K_{tτ}} (1 − p_τ^b)^{M_{tτ}−K_{tτ}}   The (Binomial) probability that K_{tτ} machines of age τ required maintenance in period t given that M_{tτ} machines were used and the magnitude of the unknown parameter was b.

φ(b) = The information evaluator's prior probability distribution over the possible values of b.

Estimation of Prior Events (φ(x̄))⁹

φ(K₁₁, ..., K_{TT′}) = [ Σ_{b∈B} ( ∏_{t=1}^{T} ∏_{τ=1}^{T′} φ(K_{tτ}/M_{tτ}, b) ) φ(b) ] φ(M₁₁, ..., M_{TT′})

Maintenance Information Systems (η(x̄) = y)

η¹(K₁₁, ..., K_{TT′}) = (K₁₁, ..., K_{TT′})   The information is the same as the prior events.
η²(K₁₁, ..., K_{TT′}) = (K₁, ..., K_T)   The information is the sum of the prior events in each period, where K_t = Σ_{τ=1}^{T′} K_{tτ}.

Maintenance Information Prediction (φ(y/η))

φ(K₁₁, ..., K_{TT′}/η¹) = φ(K₁₁, ..., K_{TT′})
φ(K₁, ..., K_T/η²) = Σ φ(K₁₁, ..., K_{TT′}), the sum taken over all (K₁₁, ..., K_{TT′}) such that Σ_τ K_{tτ} = K_t, t = 1, ..., T.

Parameter Prediction (φ(θ/y, η, a))

φ(K/y, η, a) = Σ_{b∈B} φ(K/M₀₁, ..., M₀T′, b) φ(M₀₁, ..., M₀T′) φ(b/y, η)

where

φ(b/y, η) = [ ∏_{t=1}^{T} ∏_{τ=1}^{T′} φ(K_{tτ}/M_{tτ}, b) ] φ(b) / φ(K₁₁, ..., K_{TT′}/η¹)   if η = η¹ and y = (K₁₁, ..., K_{TT′})
          = [ ∏_{t=1}^{T} φ(K_t/M_{t1}, ..., M_{tT′}, b) ] φ(b) / φ(K₁, ..., K_T/η²)        if η = η² and y = (K₁, ..., K_T)

and

φ(K_t/M_{t1}, ..., M_{tT′}, b) = Σ ∏_{τ=1}^{T′} φ(K_{tτ}/M_{tτ}, b), the sum taken over all (K_{t1}, ..., K_{tT′}) such that K_{t1} + ... + K_{tT′} = K_t, for t = 0, 1, ..., T.

9 Observe that the issue in this illustration is whether to report past breakdowns by period or by machine age and period. All other facets of the information system are held constant. For example, the critical distributions employed in developing the prediction of K are conditional on how many machines in each age class were or will be employed in the time period in question. Hence, strictly speaking, we must also work with a distribution over these employments; but we assume the existing information system reports these employments without error. As a result, φ(M₁₁, ..., M_{TT′}) = 1 for some (M₁₁, ..., M_{TT′}) and there is no need to sum over alternative M_{tτ} values. Extension may be made to either the non-precise or non-reported case.


TABLE 4

THE DECISION MAKER'S MODEL

Maximize over Q₁, Q₂, P₁, H:

P₁[(d′ − P₁)/Δd′] + P₂′[I₂′ + Q₂] − c₁′Q₁ − c₂′Q₂ − c₃′H − s₁′[(d′ − P₁)/Δd′ − Q₁ − I₁′] − s₂′[z̄₂′ − Q₂ − I₂′]

Subject to

Q₁ ≤ (d′ − P₁)/Δd′ − I₁′
Q₂ ≤ z̄₂′ − I₂′
h₁′Q₁ + h₂′Q₂ − H ≤ H′
m₁′Q₁ + m₂′Q₂ ≤ M′
w₁′Q₁ + w₂′Q₂ ≤ W′
Q₁, Q₂, P₁, H ≥ 0

This problem may be solved by using a Quadratic Programming algorithm.

Parameters

d′, Δd′ = demand function parameters for product 1
P₂′ = net selling price per unit of product 2
cᵢ′ = material and variable overhead per unit of product i, i = 1, 2
c₃′ = cost per hour of temporary labor
hᵢ′ = number of labor hours required to produce a unit of product i, i = 1, 2
mᵢ′ = number of machine hours required to produce a unit of product i, i = 1, 2
wᵢ′ = warehouse space per period for each unit of product i produced, i = 1, 2
H′ = total labor hours permanently available
M′ = net machine hours available
W′ = warehouse space available
Iᵢ′ = number of units of product i in beginning inventory, i = 1, 2
z̄₂′ = expected demand for product 2
sᵢ′ = cost per unit of demand in excess of available units for product i, i = 1, 2

Decision Variables

Qᵢ = quantity of product i produced, i = 1, 2
P₁ = net sales price for product 1
H = hours of temporary labor hired
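One way to solve the Table 4 model (a sketch, not the authors' procedure) is to hand it to a general constrained optimizer; the paper notes only that a quadratic programming algorithm applies. The example below uses scipy.optimize, with the objective and constraints as reconstructed above and invented parameter values (primes dropped from the names).

```python
from scipy.optimize import minimize

# Hypothetical parameter values for the Table 4 model.
p = dict(d=100.0, dd=0.5, P2=12.0, c1=4.0, c2=5.0, c3=3.0,
         h1=1.0, h2=2.0, m1=0.5, m2=1.0, w1=1.0, w2=1.5,
         H=200.0, M=120.0, W=300.0, I1=5.0, I2=4.0,
         z2=80.0, s1=6.0, s2=7.0)

def neg_profit(x):
    Q1, Q2, P1, H = x
    z1 = (p["d"] - P1) / p["dd"]                      # deterministic demand, product 1
    profit = (P1 * z1 + p["P2"] * (p["I2"] + Q2)
              - p["c1"] * Q1 - p["c2"] * Q2 - p["c3"] * H
              - p["s1"] * (z1 - Q1 - p["I1"])
              - p["s2"] * (p["z2"] - Q2 - p["I2"]))
    return -profit

cons = [
    {"type": "ineq", "fun": lambda x: (p["d"] - x[2]) / p["dd"] - p["I1"] - x[0]},
    {"type": "ineq", "fun": lambda x: p["z2"] - p["I2"] - x[1]},
    {"type": "ineq", "fun": lambda x: p["H"] + x[3] - p["h1"] * x[0] - p["h2"] * x[1]},
    {"type": "ineq", "fun": lambda x: p["M"] - p["m1"] * x[0] - p["m2"] * x[1]},
    {"type": "ineq", "fun": lambda x: p["W"] - p["w1"] * x[0] - p["w2"] * x[1]},
]

res = minimize(neg_profit, x0=[10.0, 10.0, 50.0, 0.0], method="SLSQP",
               bounds=[(0, None)] * 4, constraints=cons)
Q1, Q2, P1, H = res.x
print(res.success, dict(Q1=round(Q1, 2), Q2=round(Q2, 2), P1=round(P1, 2), H=round(H, 2)))
```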


Events and Payoff Function. As in the information evaluation model, the decision model typically is not designed to predict individual events, but states the payoff as a function of the controllable variables. Again, the specific predicted payoff depends on the magnitude of the parameters of the payoff function. Let θ′ = (θ₁′, ..., θ_{m′}′) denote the vector of parameters for this model. The payoff function, then, is denoted by

(11) r′ = F′(a; θ′)

The previous discussion of payoff simplifications also applies here; and, as indicated, these simplifications may exceed those in the information evaluation model. In addition, while the information evaluation model must consider the payoff of any action that may be selected, the decision model will exclude the payoff on any actions which the evaluator perceives the decision maker wishes to avoid. For instance, no provision is made in our example for the value of ending inventory, as the deterministic model does not consider the possibility of its occurrence.

11 An interesting issue at this point is control. Once we admit that the decision maker's model is imperfect, we raise the issue of his control activities. Subsequent control actions taken by the decision maker may be formally represented in either the decision function α(y, η) or the conditional probability distribution φ(x/y, η, α(y, η)) of equation (8). The general heuristic nature of these control activities reinforces the previous comments about the control of equation (10) (the operational version of (8)).

Feasible Decision Set. The feasible decision set is unbounded unless the decision model contains constraints which limit the magnitudes of the decision variables. These constraints are expressed as functional relationships which must be satisfied and are denoted by

(12) g_q(a; θ′) ≥ 0,  q = 1, ..., Q

As indicated, some of the parameters of the model appear in the constraints.

The constraints are designed to exclude those actions which clearly cannot be accomplished or which the evaluator perceives the decision maker does not want to occur. The former occurs when certain combinations of actions are technically infeasible; in our example, the labor hours, machine hours, and warehouse space used cannot exceed the available quantities of these resources. The parameters associated with these limitations must be deterministic if they are expressed as constraints. Otherwise, the model must either require the constraint to hold probabilistically (and possibly ignore what happens if it does not hold), or reflect the outcomes resulting from infeasibility directly in the payoff function.

Limitations on the feasible set of controllable variables may also represent perceived policy decisions by the decision maker, or his superiors. These limitations usually represent recognition of outcomes which affect the "actual" payoff but which have not been included in the payoff function. For example, if the demand for product 2 exceeds production, sales will be lost and there will likely be an impact on the future demand for the product. The model in Table 4 contains a penalty for unsatisfied demand, but that penalty is difficult to predict. Furthermore, the model does not recognize demand uncertainty. The decision maker might handle both problems by establishing a policy which requires the probability of unsatisfied demand to be less than a specified amount and by removing the penalty from the payoff function; and, if so, the information evaluator would probably reflect these assumptions in his prediction of the decision maker's choice process.

Parameter Prediction. The information evaluator must predict the decision maker's parameter predictions for each set of signals the decision maker may receive. Let this prediction be denoted by φ′(θ′/y, η, a).¹²

The same type of simplifications used for the parameter predictions in the information evaluation model may be used here. In fact, even more parameters are likely to be deterministic since the decision model is not used to analyze the information issues associated with these parameters. In addition, the constraints are based on deterministic parameters that may not appear in the information evaluation model. In our example in Table 4, all parameters are deterministic.

Computation. The information evaluator predicts that the decision maker's choices can be predicted by the solution to the following problem:¹³

12 Unlike the parameter predictions for the information evaluation model, these predictions need not be consistent. That is, while the information evaluator must be internally consistent, he may not believe that the decision maker will be. This is reasonable because the decision maker only makes predictions for the signals he receives and does not construct predictions for all possible signals he might receive.

13 If an efficient algorithm for solving this model does not exist, the model might be slightly modified to produce a second model which is an approximation of the first, but which can be optimized. The solution to the second model then provides an approximation of the optimal solution of the original model. Examples of this are the use of separable programming in place of nonlinear approaches and the use of linear programming to obtain an approximate solution to an integer programming problem. Alternatively, if it is too costly to develop an optimization algorithm, some form of search procedure may be devised so that the expected payoff is calculated for only a limited number of action alternatives. Finally, direct calculation of the expected payoff itself may be extremely difficult. (This will likely occur if the probabilistic relationships are extensive or complex.) In this case, simulation may be used to estimate the expected payoff for each alternative.


(13) Max_a [ Σ_{θ′∈Θ′} F′(a; θ′) φ′(θ′/y, η, a) ]

Subject to:

Σ_{θ′∈Θ′} g_q(a; θ′) φ′(θ′/y, η, a) ≥ 0,  q = 1, ..., Q

where Θ′ is the set of all possible parameter values.
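A small sketch of how the expected objective and expected constraints in (13) might be assembled before handing the problem to an optimizer; theta_dist represents φ′(θ′/y, η, a) as a finite list of (value, probability) pairs and, like the other names, is an assumption.

```python
# Equation (13): the decision model optimizes the expected payoff subject to
# the expected constraints, with expectations taken over phi'(theta'/y, eta, a).

def expected_objective(a, theta_dist, F):
    # theta_dist(a) -> iterable of (theta_prime, probability) pairs
    return sum(prob * F(a, th) for th, prob in theta_dist(a))

def expected_constraints(a, theta_dist, g_list):
    # One value per constraint q; each must be >= 0 at a feasible action.
    return [sum(prob * g(a, th) for th, prob in theta_dist(a)) for g in g_list]
```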

One remaining issue is the link between the decision model and the information evaluation model. That is, we have detailed the essential properties of these models, but have said very little about their interrelationships in the analysis of specific information evaluation issues. This is discussed below.

Information Evaluation. Differences in information may cause differences in a number of the decision maker's activities, including problem identification, specification of alternatives, model construction, and parameter prediction. Formal modeling is probably most useful in evaluating the information used in the last two activities.

The previously described differences in machine maintenance information would probably affect the decision maker's prediction of net machine hours available. Evaluation of these information differences requires the information evaluator to specify the decision maker's prediction of net machine hours available for each possible information set. Table 5 illustrates two different prediction methods that might be used by the decision maker.

Observe that the proposed analysis is conditional in nature. That is, the information issue revolves around alternative maintenance signals, but its solution requires comparison of expected payoff differences. By nature, then, the evaluation process is conditional upon how the evaluator perceives the decision maker will use the respective signals. Thus, the analysis simultaneously evaluates the information supplied and the decision maker's prediction and choice methods. The value of the information under Case I would probably differ from that under Case II.

The analysis can also be used to evaluate changes in the decision maker's model. For instance, the information evaluator might predict that the additional information on machine maintenance will cause the decision maker to recognize the uncertainty of this parameter in his decision model. In this case, the expected payoff calculation for the proposed information system will use a different decision model than the calculation for the current system.

Parameters representing opportunity costs present another problem. These parameters arise because of the structure of the model and, in order to define their meaning, we must look at the model's relationship to a larger model.

TABLE 5

DECISION MAKER'S PREDICTION OF NET MACHINE HOURS AVAILABLE

Case I: Simple Historical Average

η¹: M′ = m(M) − k Σ_{τ} [ M₀τ ( Σ_{t} K_{tτ} / Σ_{t} M_{tτ} ) ]

η²: M′ = m(M) − k [ M ( Σ_{t} K_t / Σ_{t} M_t ) ],  where M = Σ_{τ} M₀τ

Case II: Expected Value¹⁴

M′ = m(M) − k Σ_{K=0}^{M} K φ(K/y, η),  η = η¹, η²

14 This assumes that the information evaluator believes that the decision maker has the same prior probability distributions with respect to machine maintenance.
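The two Case I estimators in Table 5 can be written out directly; the sketch below uses the same hypothetical history layout as the earlier maintenance sketch, and the values of m and k are invented.

```python
# Case I of Table 5: simple historical-average predictions of net machine hours,
# M' = m(M) - k * (predicted number of machines requiring maintenance).
# M_hist[t][tau], K_hist[t][tau]: past usage and maintenance counts (hypothetical);
# M0[tau]: current machines of age tau; m, k as in Table 1's notation.

def predict_M_prime_eta1(M_hist, K_hist, M0, m, k):
    # Age-specific rates: sum_t K_t_tau / sum_t M_t_tau, applied to M0.
    expected_K = 0.0
    for tau in M0:
        used = sum(Mt[tau] for Mt in M_hist)
        fixed = sum(Kt[tau] for Kt in K_hist)
        expected_K += M0[tau] * fixed / used
    return m * sum(M0.values()) - k * expected_K

def predict_M_prime_eta2(M_hist, K_hist, M0, m, k):
    # Aggregate rate: sum_t K_t / sum_t M_t, applied to the whole stock M.
    used = sum(sum(Mt.values()) for Mt in M_hist)
    fixed = sum(sum(Kt.values()) for Kt in K_hist)
    M = sum(M0.values())
    return m * M - k * M * fixed / used

M_hist = [{1: 10, 2: 8}, {1: 9, 2: 9}]
K_hist = [{1: 1, 2: 2}, {1: 0, 2: 3}]
M0 = {1: 12, 2: 6}
print(predict_M_prime_eta1(M_hist, K_hist, M0, m=160.0, k=8.0))
print(predict_M_prime_eta2(M_hist, K_hist, M0, m=160.0, k=8.0))
```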


For example, the cost of product 2 stockouts does not represent a cash outflow in the current period; instead, it represents a difference in future cash inflows resulting from differences in stockouts in the current period. The larger model is a multiperiod model which introduces the interactions described, and the information evaluator might construct such a model to evaluate alternative information systems which provide information that the decision maker could use to predict the opportunity cost. This approach would also eliminate the need to predict the value of ending inventory in the information evaluation model. (It does not appear in the decision model.)

Similarly, the information evaluator might construct a multidepartment model in order to evaluate information used by the decision maker in predicting the opportunity costs of using resources which are used in several departments.

IV. INFORMATION RESEARCH

If we view the discussion in Section I as constituting a normative theory of information evaluation, the remaining issue is one of implementing that theory. Sections II and III describe a formal modeling approach which directly implements the cost-benefit perspective presented in Section I. Examples of research in which models of this type have been constructed are provided by Butterworth [15], Demski [21], Feltham [29], Greenball [37], and Mock [48].¹⁵ However, this extensive form of analysis tends to be expensive; and thus we find the situations in which it has been employed to be highly simplified. For example, all but one of the references cited (Demski [21]) assume away all control problems, and all assume that the cost of information is independent of the information alternatives examined. Moreover, the vast majority of information research has either employed less ambitious, surrogate methods of evaluation (a fact consistent with our perception of the cost of modeling the information decision) or has contented itself with determining relationships which might be helpful in constructing some portion of the information evaluation model. The latter research, in fact, often ignores the complete model and leaves the implications of the research to the reader or implies that the results of the research provide a suitable surrogate for evaluating the information alternatives considered.

Our purpose in this final section is to relate a number of information research methods to the model in equation (8) in an attempt to shed some light on the implementation issues. Description of these relationships is accomplished by considering the somewhat sequential elements of the model outlined in Figure 1. That is, in any specific decision setting, information system alternatives (Δη) are likely to produce different information signals (Δy); these, in turn, are likely to induce different selected actions (Δa) which will result in different events (Δx), and the different events will likely have different payoffs (Δw). Observe that the cost of information alternatives has not been included (or has been assumed to be irrelevant); this exclusion is consistent with each of the references cited in this section.

One simplified and well-known method for selecting information systems is to base the selection on personal opinion. We find many accounting systems apparently constructed in this manner; and, indeed, promulgations of authoritative bodies, such as the APB, are apparently based on this method of analysis.

A similar method is to examine a number of specific situations and observe the information systems that the information evaluators have selected. Khemakhem [43], Langholm [44], and reports of accounting practice by the NAA and AICPA provide illustrations. The results of such research might be useful in selecting information systems if the observed information evaluators have made optimal, or at least good, selections and if the critical circumstances of choice have been documented. Both methods, however, ignore any explicit modeling of the relationships between the information system alternatives and the payoff differences.

15 Hakansson [39] considers the value of knowing certain future events, but does not relate this value (or lack of it) to any specific information based on past events.

Probably the most simplified method of explicitly recognizing some of the elements of the information evaluation model is to develop the relationship between alternative systems (Δη) and signals (Δy), and then rely on the information evaluator to personally assimilate the differences. Examples are provided by Abel [1], Brigham [12], Davidson and Kohlmeier [18], Demski [19], and Dopuch and Drake [24].

The next obvious step is to analyze these resultant information signal differences (Δy) with some surrogate evaluator. One form of surrogate is possession, by the signal, of some a priori desirable property. For example, McDonald [46] and Sterling and Radosevich [54] have examined the signals produced by alternative information systems in terms of a statistical measure of the signals' "objectivity." Similarly, Frank [32] and Simmons and Gray [53] have examined the ability of certain information alternatives to produce signals that predict their future counterparts.

Another form of surrogate is a signal's ability to reflect, using some specific model of association, certain (current) a priori relevant events. Works by Beaver et al. [8], Benston [9], Demski [20], Gonedes [33], Greenball [36, 38], and Mlynarczyk [47] provide a variety of illustrations.¹⁶ Finally, we also have a number of illustrations of testing the ability of alternative sets of signals to forecast, using some specific forecast model, (future) a priori relevant events. See Beaver [4, 5, 6, 8], Brown and Niederhoffer [13], Demski [20], Green and Segall [34, 35], and Horrigan [40].

The next logical extension is to move from analysis of signal differences to analysis of the action differences (Δa) induced by system differences (Δη). The simplest approach is to determine whether the differences in parameter predictions (assuming some specific prediction model) resulting from alternative signals will be sufficient to change the optimal decision derived from some well defined decision model. This is, of course, a familiar use of sensitivity analysis. See Demski [22], Jensen [42], and Rappaport [51]. Another approach is to provide a number of decision makers, who face the same task, with different information and observe the differences in selected actions. Bruns [14], Churchill [16], Dyckman [27], and Ronen [52] have used this approach in laboratory settings. Dyckman [25, 26] and Jensen [41] use a similar approach. They ask "experts" what their decision would be under different information situations, and Estes [28] provides an example of using "expert" opinion to directly select a preferred information alternative.

Finally, the researcher may observe differences in events (Δx) which are manifestations of action differences (which may not be observed) resulting from information differences (Δη). Ball and Brown [2], Beaver [7], Bonini [11], and Cook [17] provide illustrations.

Thus, a variety of methods have been employed in gathering specific evidence on information alternatives. Two points emerge. First, each method provides a somewhat different type of evidence; and each relies on a somewhat different understanding of the specific situation. For example, evaluation of signal differences (Δy) by ability to forecast some event presupposes an ability to determine such an event as well as an appropriate error function; and analysis of payoff differences (Δw) presupposes an ability to construct the entire relationship between information system differences and payoff differences. Second, and more important, we have scant evidence on the utility of the various methods. About all we know is that they are different (a situation parallel to the Δη to Δy situation) and they can produce conflicting evidence. For example, Mlynarczyk's findings [47] are inconsistent with those of Dyckman [26, 27] and Jensen [41]. Extrapolation is another fundamental issue.

16 Ball and Brown [3] provide a slightly different analysis; they determine the relationship between signals from one firm with those from firms in the same industry and firms in general.

The extrapolation issue, simply stated, is the question of whether evidence gathered in one specific information choice situation is useful in another. This issue takes a variety of forms. Perhaps the most obvious is extrapolation through time for the "same" specific situation. To take the most extreme case, suppose the single period model in equation (8) were optimized for some period t. Would knowledge of this result be useful in analyzing the period t′ (t′ > t) case? Or, more naively, would the optimum solution in t correspond to that in t′? One possible source of evidence on this point is replication of prior findings. Green and Segall provide an illustration [34, 35].

Another, more fundamental, version of the issue is movement from one specific situation to another, where the two differ by more than a time factor. Do forecast ability results under one set of forecast models correspond to those under a different set of models? Do simulation or laboratory findings correspond to anything other than themselves?¹⁷ Do expert opinion solicitations correspond to decision maker actions? Birnberg and Nath are particularly eloquent on this issue [10]; but, again, we lack evidence.

A third version of the issue is intra-situation extrapolation. Suppose, to take an obvious case, that information system η produces signals that are a "superior" predictor of some a priori relevant event to those produced by system η'. Does it then follow that the information decision maker should prefer η to η'? Again, we face a lack of evidence. A more subtle version is encountered when we attempt to work with equation (8), as we did in sections II and III of this paper. Specifically, suppose we construct a "close" representation of the true payoff, w(x). It does not, of course, necessarily follow that the preferred information system using the close representation will be identical to the one that would be preferred had we used the precisely correct representation, w(x). (See Feltham [29].) Resolution of the issue presupposes knowledge of the true function; and, in this event, we would still face the other versions of the extrapolation issue.
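A small numerical illustration of this last point follows. The numbers are hypothetical and are not taken from Feltham [29]; they simply exhibit one way a "close" payoff representation can reverse the net-of-cost ranking of a finer, costly system η and a coarser, costless system η'.

# Hypothetical numbers only: a payoff representation that is "close" to the
# true w(x) can still reverse the ranking of two information systems once
# information costs are netted out.

prior = {"x1": 0.5, "x2": 0.5}

w_true   = {("a1", "x1"): 10, ("a1", "x2"): 0, ("a2", "x1"): 6, ("a2", "x2"): 6}
w_approx = {("a1", "x1"): 8,  ("a1", "x2"): 0, ("a2", "x1"): 6, ("a2", "x2"): 6}

def value_with_perfect_info(w, cost):
    """System eta: signals reveal the state, so the best action is taken in each state."""
    return sum(p * max(w[(a, x)] for a in ("a1", "a2")) for x, p in prior.items()) - cost

def value_with_no_info(w, cost):
    """System eta': no signal, so a single action is chosen against the prior."""
    return max(sum(p * w[(a, x)] for x, p in prior.items()) for a in ("a1", "a2")) - cost

for label, w in (("true w(x)", w_true), ("close representation", w_approx)):
    v_eta, v_eta_prime = value_with_perfect_info(w, 1.5), value_with_no_info(w, 0.0)
    preferred = "eta" if v_eta > v_eta_prime else "eta'"
    print(f"{label}: eta = {v_eta:.2f}, eta' = {v_eta_prime:.2f}, preferred = {preferred}")

The ranking reverses even though the two payoff representations differ in only one entry; nothing in the sketch says how "close" is close enough, which is precisely the evidence the text notes is lacking.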

Information research, then, is difficult, varied, and proliferating. Our ability to assimilate the numerous, diverse investigations is hindered by a real lack of evidence on how the various methods relate to the real problem, as well as to one another.

SUMMARY

We have discussed the nature of decision models and information evaluation models as well as their interrelationships. Some insights that can be developed by such an exploration were then related to information choice issues and information research methods. But these latter issues are largely empirical and unexplored.

17 Packer [49] illustrates one method of validating a simulation; he makes a comparison between the historical records and the simulated experience under the current decision rules and the current information system.


REFERENCES

[1] R. Abel, "A Comparative Simulation of German and U.S. Accounting Principles," Journal of Accounting Research (Spring 1969).

[2] R. Ball and P. Brown, "An Empirical Evaluation of Accounting Income Numbers," Journal of Accounting Research (Autumn 1968).

[3] R. Ball and P. Brown, "Some Preliminary Findings on the Association between the Earnings of a Firm, Its Industry and the Economy," Journal of Accounting Research Supplement (1967).

[4] W. Beaver, "Alternative Accounting Measures as Predictors of Failure," THE ACCOUNTING REVIEW (January 1968).

[5] W. Beaver, "Financial Ratios as Predictors of Failure," Journal of Accounting Research Supplement (1966).

[6] W. Beaver, "Market Prices, Financial Ratios, and the Prediction of Failure," Journal of Accounting Research (Autumn 1968).

[7] W. Beaver, "The Information Content of Annual Earnings Announcements," Journal of Accounting Research Supplement (1968).

[8] W. Beaver, P. Kettler, and M. Scholes, "The Association Between Market Determined Measures of Risk and Financial Statement Determined Measures of Risk," THE ACCOUNTING REVIEW (October 1970).

[9] G. Benston, "Published Corporate Accounting Data and Stock Prices," Journal of Accounting Research Supplement (1967).

[10] J. Birnberg and R. Nath, "Laboratory Experimentation in Accounting Research," THE ACCOUNTING REVIEW (January 1968).

[11] C. Bonini, Simulation of Information and Decision Systems in the Firm (Prentice-Hall, 1963).

[12] E. Brigham, "The Effects of Alternative Depreciation Policies on Reported Profits," THE ACCOUNTING REVIEW (January 1968).

[13] P. Brown and V. Niederhoffer, "The Predictive Content of Quarterly Earnings," Journal of Business (October 1968).

[14] W. Bruns, "The Accounting Period Concept and Its Effect on Management Decisions," Journal of Accounting Research Supplement (1966).

[15] J. Butterworth, "Accounting Systems and Management Decision: An Analysis of the Role of Information in the Management Decision Process," unpublished Ph.D. Dissertation (University of California at Berkeley, 1967).

[16] N. Churchill, W. Cooper, and T. Sainsbury, "Laboratory and Field Studies of the Behavioral Effects of Audits," in Bonini et al., Management Controls (McGraw-Hill, 1964).

[17] D. Cook, "The Effect of Frequency of Feedback on Attitudes and Performance," Journal of Accounting Research Supplement (1967).

[18] S. Davidson and J. Kohlmeier, "A Measure of the Impact of Some Foreign Accounting Principles," Journal of Accounting Research (Autumn 1966).

[19] J. Demski, "An Accounting System Structured on a Linear Programming Model," THE ACCOUNTING REVIEW (October 1967).

[20] J. Demski, "Predictive Ability of Alternative Performance Measurement Models," Journal of Accounting Research (Spring 1969).

[21] J. Demski, "The Decision Implementation Interface: Effects of Alternative Performance Measurement Models," THE ACCOUNTING REVIEW (January 1970).

[22] J. Demski, "Some Considerations in Sensitizing an Optimization Model," Journal of Industrial Engineering (September 1968).

[23] J. Demski, "Some Decomposition Results for Information Evaluation," unpublished.

[24] N. Dopuch and D. Drake, "The Effect of Alternative Accounting Rules for Nonsubsidiary Investments," Journal of Accounting Research Supplement (1966).

[25] T. Dyckman, Investment Analysis and General Price-Level Adjustments (American Accounting Association, 1969).

[26] T. Dyckman, "On the Investment Decision," THE ACCOUNTING REVIEW (April 1964).

[27] T. Dyckman, "The Effects of Alternative Accounting Techniques on Certain Management Decisions," Journal of Accounting Research (Spring 1964).

[28] R. Estes, "An Assessment of the Usefulness of Current Cost and Price-Level Information by Financial Statement Users," Journal of Accounting Research (Autumn 1968).

[29] G. Feltham, "A Theoretical Framework for Evaluating Changes in Accounting Information for Managerial Decisions," unpublished Ph.D. Dissertation (University of California at Berkeley, 1967).

[30] G. Feltham, "The Value of Information," THE ACCOUNTING REVIEW (October 1968).

[31] G. Feltham and R. Jaedicke, "The Use of Average Fixed Costs as Surrogates in Decision Making," unpublished.

[32] W. Frank, "A Study of the Predictive Significance of Two Income Measures," Journal of Accounting Research (Spring 1969).

[33] N. Gonedes, "The Significance of Selected Accounting Procedures: A Statistical Test," Journal of Accounting Research (forthcoming).

[34] D. Green and J. Segall, "The Predictive Power of First-Quarter Earnings Reports," Journal of Business (January 1967).

[35] D. Green and J. Segall, "The Predictive Power of First-Quarter Earnings Reports: A Replication," Journal of Accounting Research Supplement (1966).

[36] M. Greenball, "Appraising Alternative Methods of Accounting for Accelerated Tax Depreciation: A Relative Accuracy Approach," Journal of Accounting Research (Autumn 1969).

[37] M. Greenball, "Evaluation of the Usefulness to Investors of Different Accounting Estimators of Earnings: A Simulation Approach," Journal of Accounting Research Supplement (1968).

[38] M. Greenball, "The Accuracy of Different Methods of Accounting for Earnings-A Simulation Approach," Journal of Accounting Research (Spring 1968).

[39] N. Hakansson, "On the Relevance of Price-Level Accounting," Journal of Accounting Research (Spring 1969).

[40] J. Horrigan, "The Determination of Long-Term Credit Standing with Financial Ratios," Journal of Accounting Research Supplement (1966).

[41] R. Jensen, "An Experimental Design for Study of Effects of Accounting Variations in Decision Making," Journal of Accounting Research (Autumn 1966).

[42] R. Jensen, "Sensitivity Analysis and Integer Linear Programming," THE ACCOUNTING REVIEW (July 1968).


[43] A. Khemakhem, "A Simulation of Management Decision Behavior: Funds and Income," THE ACCOUNTING REVIEW (July 1968).

[44] O. Langholm, "Cost Structure and Costing Method: An Empirical Study," Journal of Accounting Research (Autumn 1965).

[45] J. Marschak and R. Radner, Economic Theory of Teams, working papers, Center for Research in Management Science (University of California at Berkeley).

[46] D. McDonald, "A Test Application of the Feasibility of Market Based Measures in Accounting," Journal of Accounting Research (Spring 1968).

[47] F. Mlynarczyk, "An Empirical Study of Accounting Methods and Stock Prices," Journal of Accounting Research Supplement (1969).

[48] T. Mock, "Comparative Values of Information Structures," Journal of Accounting Research Supplement (1969).

[49] A. Packer, "Simulation and Adaptive Forecasting as Applied to Inventory Control," Operations Research (July-August 1967).

[50] H. Raiffa and R. Schlaifer, Applied Statistical Decision Theory (Harvard University, 1961).

[51] A. Rappaport, "Sensitivity Analysis in Decision Making," THE ACCOUNTING REVIEW (July 1967).

[52] J. Ronen, "Some Effects of Sequential Aggregation in Accounting on Decision Making Behavior," unpublished Ph.D. Dissertation (Stanford University, 1969).

[53] J. Simmons and J. Gray, "An Investigation of the Effect of Differing Accounting Frameworks on the Prediction of Net Income," THE ACCOUNTING REVIEW (October 1969).

[54] R. Sterling and R. Radosevich, "A Valuation Experiment," Journal of Accounting Research (Spring 1969).

[55] L. Savage, The Foundations of Statistics (Wiley, 1954).
