Quadratic Teams, Information Economics, and Aggregate Planning Decisions
Author(s): Charles H. Kriebel
Source: Econometrica, Vol. 36, No. 3/4 (Jul.-Oct., 1968), pp. 530-543
Published by: The Econometric Society
Stable URL: http://www.jstor.org/stable/1909521
Accessed: 05-05-2016 08:59 UTC



Econometrica, Vol. 36, No. 3-4 (July-October, 1968)

QUADRATIC TEAMS, INFORMATION ECONOMICS, AND

AGGREGATE PLANNING DECISIONS¹

BY CHARLES H. KRIEBEL

The construct of team decision theory developed by J. Marschak and R. Radner provides a useful framework for the conceptualization and evaluation of information within organizations. Quadratic teams concern team decision problems where the organizational objective criterion is represented by a quadratic function of the action and state variables. The purpose of this paper is to relate explicitly the analysis of quadratic team decision problems to recent research on optimal decision rules for aggregate planning, for example, as reported by Holt et al. [5], Theil [9], van de Panne [10], and others. A numerical example is presented based on the well known microeconomic model of production and employment scheduling in a paint factory.

1. QUADRATIC TEAM DECISION PROBLEMS²

CONSIDER AN ORGANIZATION (or team) composed of i = 1, 2, ..., N members who choose actions during each of t = 1, 2, ..., T time periods. Let the action of member i in period t be denoted by a_it, and let x_it correspond to a state of the world observed by member i in period t. The states x_it are random variables and are elements of the space X. Suppose the objective function which the organization seeks to optimize is³

(1)  c(a, x) = λ_0 - 2a'λ(x) + a'Qa

where λ_0 is a scalar; a is a (column) vector of team actions with elements a_it; λ(x) is a measurable vector-valued function on the space X, which is partitioned to conform with a; and Q is a square symmetric matrix with (NT)² elements, also partitioned to conform with a. That is,

c(a, x) = λ_0 - 2 Σ_{t=1}^{T} Σ_{i=1}^{N} a_it λ_it(x) + Σ_{t,τ=1}^{T} Σ_{i,j=1}^{N} a_it q(ij)_tτ a_jτ

where q(ij)_tτ are elements of the partitioned matrix Q_tτ, for i, j = 1, 2, ..., N; and Q_tτ are ordered matrices from the partitioned Q for t, τ = 1, 2, ..., T.
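The quadratic form in (1) is straightforward to evaluate numerically. The following sketch (Python with NumPy) does so; the values of N, T, Q, and λ below are invented for illustration and are not the paper's.

```python
import numpy as np

# Hedged sketch of the team objective (1): c(a, x) = lambda0 - 2 a'lam + a'Q a.
# With N = 2 members and T = 3 periods, the stacked action vector a and
# lam(x) have N*T = 6 elements and Q is a 6x6 symmetric matrix.
rng = np.random.default_rng(0)
N, T = 2, 3
lambda0 = 10.0

M = rng.normal(size=(N * T, N * T))
Q = M @ M.T + N * T * np.eye(N * T)     # symmetric positive definite stand-in

def team_cost(a, lam):
    """Evaluate c(a, x) = lambda0 - 2 a'lam + a'Q a for an action vector a."""
    return lambda0 - 2.0 * a @ lam + a @ Q @ a

lam = rng.normal(size=N * T)   # lambda(x) for one realized state x
a = rng.normal(size=N * T)     # an arbitrary team action
print(team_cost(a, lam))
```

Because Q is positive definite here, no action can do better than a = Q^{-1}λ, which is the fact exploited in equation (2) below in the text.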

Since the states x_it occur randomly, the optimization of (1) necessarily involves the computation of expectations on c(a, x), viz., for the elements of λ(x). To consider information explicitly, define an observation on the state x_it as h_it(x) = z_it, for i = 1, 2, ..., N; t = 1, 2, ..., T; where {z_it} = Z, the space of observations. That is,

1 This report was prepared as part of the activities of the Management Sciences Research Group, Carnegie-Mellon University, under Contract NONR 760(24), NR 047-048 with the U.S. Office of Naval Research and under a Ford Foundation Grant. Reproduction in whole or in part is permitted for any purpose of the U.S. Government.

2 The basic propositions of team theory are presented in Marschak [3]. The discussion in this and the next section is based on Radner [6] and [7].

3 Single primes correspond to the transpose operation.


the information function h_it defines a mapping from x_it in X into h_it(x) in Z. More generally, h(·) corresponds to a random function, such that h_r(x_s) = z_rs with probability P(z_r | x_s, h_r). The (NT)-tuple of all information functions in the organization,

h = ((h_11, ..., h_i1, ..., h_N1), ..., (h_1T, ..., h_iT, ..., h_NT)),

is called the information system (or structure), where h ∈ H, and provides an explicit description of the information (observations, forecasts, messages, or the like) that is available to each individual in the organization at each instant of time.

To distinguish the computation of expected values in the remaining discussion, the following conventions are employed:⁴ E_x is taken with respect to the marginal joint probability measure on the state space X; E_{x|z,h} or E_{x|z} is taken with respect to the conditional joint probability measure on the state space X given the information system h and the observations z ∈ Z; and E_z is taken with respect to the marginal joint probability measure on the observation space Z, given h ∈ H, where

x = ((x_11, ..., x_i1, ..., x_N1), ..., (x_1T, ..., x_iT, ..., x_NT)) and

z = ((z_11, ..., z_i1, ..., z_N1), ..., (z_1T, ..., z_iT, ..., z_NT)).

Finally, we define a component decision rule, α_it, as an explicit procedure which unambiguously specifies a particular terminal action or decision to be selected in response to each item of information available. That is, α_it is a real-valued measurable function on Z which defines a mapping from z_it in Z into α_it(z) in A, the action space, for i = 1, 2, ..., N; t = 1, 2, ..., T. The collection of all component decision functions for the organization is called the team decision function, α ∈ A, and will be written α(h(x)) = α(z) = a.

The function c(a, x) in (1) is convex if the matrix Q is positive definite. In this case, necessary and sufficient conditions for optimization of the function with respect to team actions direct that the partial derivatives with respect to the components of a are equated to zero.⁵ Let ∂/∂a denote taking partial derivatives with respect to a column vector. Then

∂c(a, x)/∂a = -2λ(x) + 2Qa = 0, or

(2)  a* = Q^{-1} λ(x).
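Numerically, (2) is a single linear solve. A minimal illustration, with an invented 2x2 Q and λ standing in for a small team; a linear solve is preferable to forming Q^{-1} explicitly:

```python
import numpy as np

# Sketch of (2): with Q positive definite, c(a, x) = lambda0 - 2 a'lam + a'Q a
# is minimized at a* = Q^{-1} lam. Q and lam are made-up numbers.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam = np.array([2.0, 5.0])

a_star = np.linalg.solve(Q, lam)

# Check the first-order condition dc/da = -2 lam + 2 Q a* = 0, i.e. Q a* = lam.
print(np.allclose(Q @ a_star, lam))  # True
```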

Now suppose λ(x) can be written

(3)  λ(x) = Rx

where R is a square matrix with (NT)² elements, and x is the column vector of state variables defined above. Then from (2), the best team decision rule given an a priori probability distribution on x is

4 For example, see [8] for further discussion on the relative importance of these distinctions within the Bayesian framework of applied statistical decision theory.

5 A general discussion of quadratic optimization is available in [1].


(4)  E_x a* = Q^{-1} R E_x x,

or

(5)  E_x a*_t = Σ_{τ,s=1}^{T} Q^{-1}_tτ R_τs E_x x_s, for t = 1, 2, ..., T,

where Q^{-1} and R have been partitioned as Q by actions and time periods, and the submatrices of the partitioning by time periods are identified as Q^{-1}_tτ and R_τs, respectively. The form of the solution in (5) is particularly useful in distinguishing optimal strategy dynamic decision rules, which are discussed more fully below.

(Cf. [9] and [10].) In [7], Radner derives a set of simultaneous "stationarity" conditions for optimal component decision rules given an information system, under which a team decision rule remains optimal if and only if it cannot be improved by changing any of the individual component rules. Restating Radner's theorem for the partitioned objective function in (1) gives:⁶

THEOREM: Let E λ²_it(x) < ∞, for i = 1, 2, ..., N; t = 1, 2, ..., T; then for any information system h, the unique (a.e.) best team decision function is the solution of

(6)  q(ii)_tt α_it + Σ_{τ≠t} q(ii)_tτ E_{x|z}(α_iτ | h_it) + Σ_{τ=1}^{T} Σ_{j≠i} q(ij)_tτ E_{x|z}(α_jτ | h_it) = E_{x|z}(λ_it | h_it)

for i = 1, 2, ..., N; t = 1, 2, ..., T.

The stationarity conditions in (6) have also been referred to as "person-by-person" optimality requirements which, when taken in conjunction with (2) or (4), specify a procedure for the evaluation of information system alternatives. In this regard, the decision theory constructs of "perfect" and "null" information provide explicit criteria for normative analysis under uncertainty. Let h^(∞) represent a (theoretically) perfect information system, such that h^(∞)(x) = x, where x corresponds (a posteriori) to the true states of the world; let h^(0) represent a (theoretically) null information system, such that h^(0)(x) = y, where y is an (NT)-vector of constants (independent of x). The value of a particular information system, V(h^(k)), is defined as the net change in the expected payoff of the optimal team decision rule that obtains from implementing the system h^(k) in lieu of the null system, h^(0).⁷ More specifically, for c(a, x) in (1), an objective cost (or loss) function, the expected value of perfect information is

6 This theorem is proved in [7] and can be interpreted as follows: The computed expected value of the decision rule α_it is the action a_it. Then if the conditions on α_it in (6) are met simultaneously by each a_it (for i = 1, 2, ..., N; t = 1, 2, ..., T), the function c(a, x) is stationary at the optimal value, since a'λ = a'Qa at the optimum in (1).

7 Note, the statement of (1) implies that information costs are independent of the actions, a, and hence can be evaluated separately, i.e., simply by subtracting the cost of the system from V(h^(k)). This assumption would be inappropriate if Q was not stationary over time and, for example, depended upon x. The practical consequences of assumed independence between decision and information processing costs are discussed at length in [2].


(7)  V(h^(∞)) = E_x[λ'(x) Q^{-1} λ(x)] - λ̄' Q^{-1} λ̄

where λ̄ = E_x λ(x).⁸ The relation in (7) gives an upper bound on the performance improvement that can be realized in this case through reductions in information uncertainty.

2. ECONOMICS AND INFORMATION SYSTEMS

In [6] Radner analyzes several information systems for quadratic team decision

models. Variants on two of these partially decentralized information system models

are detailed below for the case where the team decision rule is partitioned by

member actions and time periods. In the final section, these models are specialized

to the aggregate production planning problem of Holt, Theil, and others (cf.,

[5, 9, and 10]), and a numerical example is presented.

Suppose the state of the world is such that each team member observes a random variable y_it from the information process which generates x_it, where y_it = O_it(x) on X, and O_it(·) corresponds to an observational function for member i in period t, O_it ∈ h_it. For the physical structure of many organizations it is often reasonable to assume that the observational functions O_i = {O_i1, ..., O_it, ..., O_iT} are statistically independent for all i. Clearly, this assumption does not imply that the converse applies generally. Suppose, however, that we initially assume that no communication is permitted between any members of the team. For a particular organization this assumption corresponds to a completely decentralized information system, which we will identify as h^(1). Then h_i^(1) = O_i for all i, and the (vector) information functions are also statistically independent. Referring to the theorem in (6), independence implies that

(8)  E(α_jτ | O_it) = E(α_jτ), for j ≠ i.

Thus, the optimal team decision rule under the completely decentralized information system h^(1) is given directly by (6) and (2). For example, the optimal strategy dynamic decision rules (cf. [9]) for the first period are

(9)  α_i1 = (1/q(ii)_11) [ E_{x|z}(λ_i1 | O_i1) - Σ_{τ=2}^{T} q(ii)_1τ E_{x|z}(α_iτ | O_i1) - Σ_{τ=1}^{T} Σ_{j≠i} q(ij)_1τ ā_jτ ]

for i = 1, 2, ..., N, where

(10)  ā_jt = Σ_{τ=1}^{T} Σ_{k=1}^{N} q^(jk)_tτ E_x λ_kτ(x), for j ≠ i; t = 1, 2, ..., T,

with q^(jk)_tτ denoting elements of the partitioned inverse Q^{-1}.

Following the previous discussion, the expected value of the completely decentralized information system h^(1) gives

8 If c(a, x) in (1) corresponded to a payoff function, the signs in (7) would be reversed.


T N

( 11) V(h( ')= I I ( q(ii)tt){EzEX1Z(2it Oit)2_[Ez Exlz (it oit) ] t=l i=l

where the notation Oit = {Oil, Oi2, Oit} the t-tuple of observations by member i up to and including period t, for t= 1, 2, ..., T. That is, assuming the team ob- jective function in (1) represents operating costs, the relative worth to the or- ganization of employing this system is the reduction in expected costs that is possible by acknowledging the variance in the state variables (or a transformation of these variables) a posteriori. The value of the system is linear in the variance component and hence exhibits constant returns to scale relative to the posterior variance of the linear component in (1). The obvious distinction between h(l) and the null system is that in the latter case no observations are made on the informa- tion processes, and hence there is no formal basis on which to revise an a priori distribution on x.
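Each term of (11) is (1/q(ii)_tt) times the variance of the posterior mean of λ_it. A hypothetical one-member, one-period sketch, with a normal state and an additive-noise observation model (none of these numbers come from the paper):

```python
import numpy as np

# Illustrative check of one term of (11): (1/q) * Var_z( E[lam | z] ).
# Assume lam ~ N(mu, s2) and the observation is z = lam + noise with
# noise ~ N(0, v2). Then E[lam | z] = mu + (s2/(s2+v2))(z - mu), and the
# variance of this posterior mean is s2^2 / (s2 + v2). Numbers are invented.
mu, s2, v2, q = 5.0, 4.0, 1.0, 2.0

var_post_mean = s2**2 / (s2 + v2)
value = var_post_mean / q
print(value)  # 1.6

# Monte Carlo confirmation of the same quantity.
rng = np.random.default_rng(2)
lam = rng.normal(mu, np.sqrt(s2), size=500_000)
z = lam + rng.normal(0.0, np.sqrt(v2), size=lam.size)
post_mean = mu + (s2 / (s2 + v2)) * (z - mu)
print(np.var(post_mean) / q)  # close to 1.6
```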

For example, suppose the random state variables x_it are independently distributed according to a normal probability law, with density function f_N(x_it | μ_it, σ²_it) for all i, t. The joint distribution of the x_it, which we identify as the random vector x, is also normally distributed, according to f_N(x | μ, Σ), where the expected value of the random vector is μ and the covariance matrix is Σ. That is,

(12)  f_N(x | μ, Σ) = (2π)^{-NT/2} |Σ|^{-1/2} exp{ -(1/2)(x - μ)' Σ^{-1} (x - μ) }

for i = 1, 2, ..., N; t = 1, 2, ..., T; and -∞ < x_it < ∞. Now suppose (3) holds, so that

(13)  λ(x) = Rx,

where R is the matrix defined above. Then from (12) it follows that the distribution of λ(x) or, more compactly, λ ≡ λ(x), is also normal, where in particular

(14)  λ is f_N(Rμ, RΣR').

Assume further, for computational convenience, that the precision of the process is known exactly, i.e., that Σ is in fact the true covariance matrix, but the process mean, say μ_λ, is not known with certainty. Then a natural conjugate a priori distribution (cf. [8]) for the random vector μ_λ is the normal probability distribution f_N(μ_λ | m, S) with m = Rμ and S = RΣR'. The quantity S, as a measure of the precision of the information available on μ_λ, can be expressed in units of the process mean precision, say s; that is, S can be characterized by the parameter n, for n ≡ S/s. Then, using single and double primes to denote prior and posterior parameters, respectively, the prior distribution on μ_λ can be expressed as f_N(μ_λ | m', S'). As the observations O_it(x) are recorded by the team members, they are comparable to "samples" providing statistics (m, n) on the normal process. In this regard (cf. [8, Chapter 12]), the posterior distribution of μ_λ will also be normal, with parameters


(15a)  m″ = (1/n″)(n′m′ + nm),

(15b)  n″ = n′ + n.

The mean and variance of the posterior distribution are therefore

(16a)  E(μ_λ | m″, n″) = μ̄_λ = m″,

(16b)  Var(μ_λ | m″, n″) = S″.

Identifying the diagonal elements of the covariance matrix in (16b) as σ̄²_it, the result in (11) corresponding to the expected value of the completely decentralized information system h^(1) can be rewritten

(17)  V(h^(1)) = Σ_{t=1}^{T} Σ_{i=1}^{N} (1/q(ii)_tt) σ̄²_it.
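The updating in (15) can be sketched as follows; the function and its arguments are hypothetical stand-ins for the m′, n′, m, n of the text.

```python
# Minimal sketch of the conjugate-normal update in (15): with precision
# expressed in units n (prior n', sample n), the posterior parameters are
#   m'' = (n' m' + n m) / n''   and   n'' = n' + n,
# so the posterior mean is a precision-weighted average of prior and sample.
# All numbers below are invented for illustration.
def update(m_prior, n_prior, m_sample, n_sample):
    """Normal-mean update with known process precision (Raiffa-Schlaifer style)."""
    n_post = n_prior + n_sample
    m_post = (n_prior * m_prior + n_sample * m_sample) / n_post
    return m_post, n_post

m_post, n_post = update(m_prior=100.0, n_prior=1.0, m_sample=120.0, n_sample=3.0)
print(m_post, n_post)  # 115.0 4.0
```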

The second information system model we now wish to examine relaxes the previous assumption and allows communication of member observations. One of the most common forms of communication in any organization is the dissemination of "summary information" through published reports. Suppose that each team member communicates some function of his observations, say d_it = δ_i(z_it), to a centralized agency in the organization, which compiles all such information received and then periodically disseminates this compilation to each member as a report.⁹ We will define this information system as "partially decentralized information reporting" and identify it by the symbol h^(2). The information function for each member under h^(2) is then given by h_it^(2)(x) = {O_it(x), d_t}, for i = 1, 2, ..., N; t = 1, 2, ..., T; where the collection of reports by the central staff to each member is

d_t = {(d_11, ..., d_N1), ..., (d_1t, ..., d_Nt)}, for t = 1, 2, ..., T, and

d_it = δ_i(z_it) = δ_i(O_it(x)), for all i, t,

given δ(·) = (δ_1(·), ..., δ_N(·)); and O_it(x) = {O_i1, ..., O_it} as above. Before proceeding to the derivation of the best team decision rule under h^(2), we note the following lemma (cf. [6, p. 504]).

LEMMA: Let A, B, and C be independent random variables, let b be a contraction of B, and let c be a contraction of C; and let D be a real-valued random variable defined by D = f(A, B, c), where f(·) is some measurable function. Then

(18)  E[D | A, b, C] = E[D | A, b, c].

Now suppose we assume, as before, that the observations Oit(x) are independent

9 For example, the functions δ_i(·) might correspond to a weighted average of the observations z_iτ over τ = 1, 2, ..., t.


for all j ≠ i; i, j = 1, 2, ..., N. Applying the lemma in (18) to the stationarity theorem in (6) gives¹⁰

(19)  E_{x|z}(α_jt | h_it^(2)) = E_{x|z}(α_jt | d_t), for i ≠ j, and

(20)  E_{x|z}(α_iτ | h_it^(2)) = E_{x|z}(α_iτ | d_t), for τ ≠ t.

Substituting from (19) and (20) into the theorem in (6), we obtain

(21)  q(ii)_tt α_it + Σ_{τ≠t} q(ii)_tτ E_{x|z}(α_iτ | d_t) + Σ_{τ=1}^{T} Σ_{j≠i} q(ij)_tτ E_{x|z}(α_jτ | d_t) = E_{x|z}(λ_it | h_it^(2))

for i = 1, 2, ..., N; t = 1, 2, ..., T.

For any given set of observations, the right-hand side of (21) can be expressed as

(22)  E_{x|z}(λ_jt | h_it^(2)) = E_{x|z}(λ_jt | d_t), for j ≠ i, all t,

since by direct substitution into the lemma in (18) we have

f = λ_jt, B = O_it, b = δ_i(O_it) = d_it, A = (a constant), C = {O_j}_{j≠i}, c = {d_jt}_{j≠i}.

The conditional expectation of (21), for d_t given, is therefore

(23)  Σ_{τ=1}^{T} q(ii)_tτ E_{x|z}(α_iτ | d_t) + Σ_{τ=1}^{T} Σ_{j≠i} q(ij)_tτ E_{x|z}(α_jτ | d_t) = E_{x|z}(λ_it | d_t)

for i = 1, 2, ..., N; t = 1, 2, ..., T. Subtracting (23) from (21) gives

(24)  α_it = E_{x|z}(α_it | d_t) + (1/q(ii)_tt) [E_{x|z}(λ_it | h_it^(2)) - E_{x|z}(λ_it | d_t)]

for i = 1, 2, ..., N; t = 1, 2, ..., T. On the other hand, the relation in (23) could be solved directly to obtain

(25)  E_{x|z}(α | d_t) = Q^{-1} E_{x|z}(λ | d_t).

Substitution, by individual components, from (25) into (24) yields the best team decision function under h^(2), say α̂, where for i = 1, 2, ..., N and t = 1, 2, ..., T

(26)  α̂_it = Σ_{τ=1}^{T} Σ_{k=1}^{N} q^(ik)_tτ E_{x|z}(λ_kτ | d_t) + (1/q(ii)_tt) [E_{x|z}(λ_it | h_it^(2)) - E_{x|z}(λ_it | d_t)],

and the coefficients q^(ik)_tτ are elements from the partitioned Q^{-1}, as defined above.

Proceeding as before, the expected value of the "partially decentralized information reporting" system h^(2) becomes, upon simplification,

10 More specifically, for (19) the elements in (18) are: f = α_jt; A = {O_k} for k ≠ i, j; B = O_jt; C = O_it; c = δ_i; and D = α_jt. For (20) the elements in (18) are: f = α_iτ; A = (a constant); B = (O_jτ, O_it, d_t); b = δ_i; C = {δ_j}, c = {d_jt} for j ≠ i; and D = α_iτ.


(27)  V(h^(2)) = Σ_{t=1}^{T} Σ_{i=1}^{N} (1/q(ii)_tt) [Var(λ_it | O_it) - Var(λ_it | d_t)] + 2 Σ_{t<τ} Σ_{i,j} q^(ij)_tτ Cov(λ_it, λ_jτ | d_t)

where "Var(·)" and "Cov(·)" correspond to the variance and covariance, respectively.

The first summation component in the decision rule of (26) is the counterpart of, say, (4) given (3), where the expectation is over the distribution of states conditioned on the reported observations. The second component in (26) represents an adjustment for the current period (t) between the reported observations for the ith member and his own observation, where the weighting factor, say w_it, for the current period conditioned on the report is w_it = q^(ii)_tt - (1/q(ii)_tt), for all i, t. The expression for V(h^(2)) in (27) for the a posteriori distribution on the state

variables is most easily explained by again continuing the example discussed under the previous system h^(1). Assuming (12), (13), and (14) hold, recall that the posterior distribution on μ_λ was determined under h^(1) on the basis of the individual members' observations only. Under the current system, h^(2), communication of observations takes place, and hence the posterior distribution is obtained with respect to the pooled observations for each period, in the form of a report, d_t. Following the earlier notation, the covariance matrix of the posterior distribution of μ_λ based on d_t, comparable to (16b) based on O_it only, is now

(28)  Var(μ_λ | m″, n″) = S̄″,

where S̄″ reflects the relative precision of the process on the basis of the augmented (pooled) information. Then, identifying the elements of (28) as σ̄(ij)_tτ, the value of h^(2) in (27) can be written as

(29)  V(h^(2)) = Σ_{t=1}^{T} Σ_{i=1}^{N} (1/q(ii)_tt) [σ̄²_it - σ̄(ii)_tt] + 2 Σ_{t<τ} Σ_{i,j} q^(ij)_tτ σ̄(ij)_tτ

where σ̄²_it are the elements in (16b). It is apparent that V(h^(2)) ≥ V(h^(1)), since (17) is contained within (29), and that the system h^(2) also exhibits constant returns to scale.

3. AGGREGATE PLANNING SYSTEMS: THE PAINT FACTORY CASE ILLUSTRATION

In [9] Theil presents a general discussion of optimal decision rules for aggregate planning problems under quadratic preferences (i.e., criterion functions). One empirical study reported in detail is the well known analysis of production and employment scheduling in a paint factory by C. Holt, F. Modigliani, J. Muth, and H. Simon (cf. [5]). We now wish to consider this planning problem within the framework of the preceding team decision models.


The basic decision problem posed by Holt and his associates was to determine values for production (P_t) and work force (W_t) levels in each of t = 1, 2, ..., T time periods which minimize the total variable costs of operations, estimated by the quadratic function

(30)  C(T) = Σ_{t=1}^{T} { c_13 + c_1 L_t + c_2 (W_t - W_{t-1} - c_11)² + c_3 (P_t - c_4 L_t)² + c_5 P_t - c_6 L_t + c_12 P_t L_t + c_7 (I_t - c_8 - c_9 S_t)² }

where unit sales in each period (S_t) occur randomly; unit inventories (I_t) are given by the balancing equation

(31)  I_t = I_{t-1} + P_t - S_t, for t = 1, 2, ..., T;

the number of workers available for work (L_t) differs from the total payroll work force (W_t) by randomly determined absentees (r_t) in each period, that is,

(32)  L_t = W_t - r_t, for t = 1, 2, ..., T;

and initial inventory (I_0) and work force (W_0) levels are known conditions.¹¹

To formulate (30) as a team decision problem, consider the team composed of

two members, i = 1, 2, where each member is responsible for a single decision variable, P_t and W_t, respectively, and records observations on one of the independent state variables for sales (S_t) and absentees (r_t), in t = 1, 2, ..., T time periods. That is, following our earlier notation, let a_1t ≡ P_t and a_2t ≡ W_t, and let x_1t ≡ S_t and x_2t ≡ r_t for t = 1, 2, ..., T and N = 2. Substituting into (30) for the relations in (31) and (32), it is clear that this cost function is a special case of the general quadratic in (1).¹² Then, given a planning horizon, T, an optimal solution to the static decision problem in (30) follows directly from (2). More specifically, the estimates for the cost coefficients in (30) provided by the original authors are

(33)  c_1 = 340.0,  c_2 = 64.3,  c_3 = 0.2,  c_4 = 5.67,  c_5 = 51.2,  c_6 = 281.0,  c_7 = 0.0825,  c_8 = 320.0,  c_9 = c_11 = c_12 = c_13 = 0.0.
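Under the estimates in (33), the per-period cost in (30) loses the c_9, c_11, c_12, and c_13 terms. A sketch of the resulting cost evaluation follows; the sales, absentee, and initial-condition figures are invented for illustration.

```python
# Sketch of the paint-factory cost function (30) with the estimates in (33).
# Since c9 = c11 = c12 = c13 = 0, the per-period cost reduces to
#   c1*L + c2*(W - W_prev)^2 + c3*(P - c4*L)^2 + c5*P - c6*L + c7*(I - c8)^2.
c1, c2, c3, c4 = 340.0, 64.3, 0.2, 5.67
c5, c6, c7, c8 = 51.2, 281.0, 0.0825, 320.0

def total_cost(P, W, S, r, I0, W0):
    """Evaluate (30) over the horizon, given production P, work force W,
    sales S, absentees r, and initial conditions I0, W0 (per-period lists)."""
    cost, I, W_prev = 0.0, I0, W0
    for Pt, Wt, St, rt in zip(P, W, S, r):
        Lt = Wt - rt                 # (32): workers actually present
        I = I + Pt - St              # (31): inventory balance
        cost += (c1 * Lt + c2 * (Wt - W_prev) ** 2
                 + c3 * (Pt - c4 * Lt) ** 2
                 + c5 * Pt - c6 * Lt + c7 * (I - c8) ** 2)
        W_prev = Wt
    return cost

# Invented two-period schedule, just to exercise the function.
print(total_cost(P=[470, 450], W=[80, 78], S=[430, 460], r=[2, 3],
                 I0=320.0, W0=81.0))
```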

Given a planning horizon of T = 6 time periods, the optimal static decision rules for production and work force levels from (2) are presented in Table I. Theil and others have shown that the optimal strategy for the dynamic decision problem in (30) is obtained by selecting the decision rule of the first period, updating forecasts of the state variables and initial conditions in each successive time period on the basis of the most recent information available. For a planning

11 In the original analysis presented in [5], no explicit provision was made for absenteeism, so that L_t = W_t for all t, in (30) and (32). The modification for absentees is introduced here to permit a simple interpretation of the team decision model and, as such, is nominally a generalization of the original problem.

12 See Theil [9, Chapters 3 and 5] (particularly pages 163-166) for the specific algebraic detail of (1), given (30). In the interests of brevity this detail is not repeated.


TABLE I

OPTIMAL STATIC DECISION RULES FOR PRODUCTION AND THE WORK FORCE UNDER A SIX-MONTH PLANNING HORIZON

t | Constant Term | W0 | S1-I0 | r1 | S2 | r2 | S3 | r3 | S4 | r4 | S5 | r5 | S6 | r6

P*_t (optimal production)
1 | 152.82 | 0.902 | 0.460 | -3.061 | 0.231 | 1.309 | 0.106 | 0.601 | 0.040 | 0.227 | 0.008 | 0.044 | -0.004 | -0.022
2 | 78.55 | 0.255 | 0.258 | 1.460 | 0.362 | -3.618 | 0.189 | 1.070 | 0.093 | 0.526 | 0.041 | 0.231 | 0.013 | 0.076
3 | 31.57 | 0.016 | 0.143 | 0.813 | 0.202 | 1.148 | 0.339 | -3.749 | 0.182 | 1.031 | 0.092 | 0.523 | 0.039 | 0.218
4 | -3.84 | 0.040 | 0.082 | 0.467 | 0.116 | 0.658 | 0.194 | 1.097 | 0.338 | -3.751 | 0.182 | 1.029 | 0.081 | 0.461
5 | -39.93 | 0.288 | 0.055 | 0.311 | 0.075 | 0.428 | 0.121 | 0.684 | 0.202 | 1.143 | 0.339 | -3.746 | 0.158 | 0.893
6 | -89.09 | 0.812 | 0.052 | 0.295 | 0.068 | 0.383 | 0.097 | 0.550 | 0.142 | 0.804 | 0.205 | 1.164 | 0.293 | -4.008

W*_t (optimal work force)
1 | 1.183 | 0.760 | 0.0106 | 0.0601 | 0.0094 | 0.0535 | 0.0080 | 0.0451 | 0.0065 | 0.0366 | 0.0049 | 0.0278 | 0.0030 | 0.0168
2 | 0.248 | 0.580 | 0.0141 | 0.0802 | 0.0158 | 0.0893 | 0.0148 | 0.0841 | 0.0128 | 0.0728 | 0.0101 | 0.0575 | 0.0063 | 0.0357
3 | -1.589 | 0.454 | 0.0146 | 0.0826 | 0.0173 | 0.0979 | 0.0199 | 0.1127 | 0.0189 | 0.1070 | 0.0157 | 0.0889 | 0.0100 | 0.0569
4 | -3.683 | 0.373 | 0.0139 | 0.0788 | 0.0169 | 0.0960 | 0.0209 | 0.1187 | 0.0236 | 0.1338 | 0.0212 | 0.1200 | 0.0141 | 0.0798
5 | -5.619 | 0.328 | 0.0132 | 0.0748 | 0.0162 | 0.0921 | 0.0207 | 0.1172 | 0.0247 | 0.1401 | 0.0256 | 0.1449 | 0.0181 | 0.1026
6 | -6.953 | 0.312 | 0.0128 | 0.0727 | 0.0158 | 0.0899 | 0.0203 | 0.1154 | 0.0247 | 0.1402 | 0.0265 | 0.1504 | 0.0212 | 0.1199


horizon of six time periods, the optimal strategy dynamic decision rules for production and work force levels, t = 1, 2, ..., are

(34.1)  P*_t = 152.82 + 0.902 W_{t-1} - 0.460 I_{t-1}
               + E_t[ 0.460 S_t - 3.061 r_t
                    + 0.231 S_{t+1} + 1.309 r_{t+1}
                    + 0.106 S_{t+2} + 0.601 r_{t+2}
                    + 0.040 S_{t+3} + 0.227 r_{t+3}
                    + 0.008 S_{t+4} + 0.044 r_{t+4}
                    - 0.004 S_{t+5} - 0.022 r_{t+5} ]

and

(34.2)  W*_t = 1.183 + 0.760 W_{t-1} - 0.0106 I_{t-1}
               + E_t[ 0.0106 S_t + 0.0601 r_t
                    + 0.0094 S_{t+1} + 0.0535 r_{t+1}
                    + 0.0080 S_{t+2} + 0.0451 r_{t+2}
                    + 0.0065 S_{t+3} + 0.0366 r_{t+3}
                    + 0.0049 S_{t+4} + 0.0278 r_{t+4}
                    + 0.0030 S_{t+5} + 0.0168 r_{t+5} ]

where E_t[x_{t+τ}] corresponds to a forecast of the state variable x in period t+τ based on information available at the start of period t. The decision rules in (34) follow directly from (33) and (5) for t = 1. It is these dynamic rules which will occupy our attention for the remainder of this discussion. In this regard, the optimal decision rule coefficient values are relatively insensitive to changes in the planning horizon parameter for 6 ≤ T < ∞ (cf. [9, Chapter 5]). For computational convenience, therefore, a six-period planning horizon will be employed in subsequent analysis of the paint factory case.

As indicated above, the expected value of perfect information criterion provides an upper bound on the cost savings that can be realized through reductions in information uncertainty. The perfect information criterion in (7) for the paint factory cost coefficient estimates in (33) gives

(35)  V(h^(∞)) = Σ_{i,j=1}^{2} Σ_{t,τ=1}^{6} k(ij)_tτ σ(x_it x_jτ)

where the elements σ(x_it x_jτ) are the covariance terms for S_t and r_t, t = 1, 2, ..., 6, and the coefficients k(ij)_tτ = k(ji)_τt are given in Table II. If we assume that sales and absentees are independently distributed random variables and that the underlying stochastic process generating each variable is stationary over time, (35) simplifies to

(36) V(h("))= 1.4384a2 (St) + 29.1217a2 (rt) ? 2.2856a (St St + 1)- 6.0818a (rtrt + 1)

? 1.6156a (St St+ 2)- 1.1014a (rtrt+ 2)

? 1.0026a (St St + 3) + 0.2030a (rtrt + 3)

? 0.5120a (St St + 4) + 0.5428a (rtrt + 4)

? 0.0470a (St St + 5) + 0.2660a (rtrt + 5)
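Given estimates of the variances and lagged autocovariances of S_t and r_t, (36) is a weighted sum. A sketch follows, using the weights of (36) but invented covariance inputs.

```python
# Evaluating the perfect-information bound (36) under the stationarity
# assumption. The weights are from (36); the covariance figures passed in
# below are hypothetical, not from the paint factory study.
S_WEIGHTS = [1.4384, 2.2856, 1.6156, 1.0026, 0.5120, 0.0470]   # lags 0..5
R_WEIGHTS = [29.1217, -6.0818, -1.1014, 0.2030, 0.5428, 0.2660]

def evpi(cov_S, cov_r):
    """cov_S[k], cov_r[k]: lag-k covariances of sales and absentees."""
    return (sum(w * c for w, c in zip(S_WEIGHTS, cov_S))
            + sum(w * c for w, c in zip(R_WEIGHTS, cov_r)))

# e.g., sales variance 2500 with geometrically decaying autocovariance,
# absentees i.i.d. with variance 4 (both invented):
cov_S = [2500 * 0.5 ** k for k in range(6)]
cov_r = [4.0] + [0.0] * 5
print(round(evpi(cov_S, cov_r), 1))
```

Any cost savings from better forecasting of S_t and r_t is bounded above by this quantity, which is the sense in which (36) prices information for this problem.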


TABLE II

VALUES OF k(ij)_tτ: COVARIANCE WEIGHTS FOR THE PERFECT INFORMATION CRITERION IN THE PAINT FACTORY CASE

(Upper triangle of the symmetric matrix K, with k(ij)_tτ = k(ji)_τt; rows are indexed by (t, i), columns by (τ, j) for τ = t, ..., 6 and j = 1, 2.)

t=1, i=1: 0.4150 -0.4536 | 0.3770 -0.2011 | 0.3178 -0.0690 | 0.2468 -0.0039 | 0.1690 0.0226 | 0.0866 0.0235
t=1, i=2: 3.8578 | -0.2011 -1.1400 | -0.0690 -0.3910 | -0.0039 -0.0223 | 0.0226 0.1281 | 0.0235 0.1330
t=2, i=1: 0.3580 -0.3090 | 0.3091 -0.1185 | 0.2435 -0.0227 | 0.1683 0.0189 | 0.0870 0.0253
t=2, i=2: 4.6775 | -0.1185 -0.6720 | -0.0227 -0.1285 | 0.0189 0.1074 | 0.0253 0.1433
t=3, i=1: 0.2848 -0.2563 | 0.2325 -0.0848 | 0.1643 -0.0038 | 0.0862 0.0208
t=3, i=2: 4.9765 | -0.0848 -0.4808 | -0.0038 -0.0216 | 0.0208 0.1179
t=4, i=1: 0.2066 -0.2320 | 0.1527 -0.0697 | 0.0822 -0.0017
t=4, i=2: 5.1143 | -0.0697 -0.3952 | -0.0017 -0.0096
t=5, i=1: 0.1261 -0.2205 | 0.0715 -0.0622
t=5, i=2: 5.1796 | -0.0622 -0.3529
t=6, i=1: 0.0479 -0.1964
t=6, i=2: 5.3160


The structure of the matrix K in Table II evolves directly from the R and Q matrices (cf. equation (7)), which in turn depend on the cost coefficients {c_7, c_3, c_4, c_5, c_2} and {c_7, c_9, c_3, c_4, c_12}, respectively, in (30). The covariance terms in (35) are those of the underlying stochastic process generating the state variables. (If these covariances are unknown a priori, they can be estimated from subjective and/or observational data.) Clearly, covariance as an index of state variable dispersion is a convenient measure of information uncertainty for decision making. In this case, systems which provide more complete information on the state variables exhibit constant returns to scale in the process covariance. More specifically, for production and work force decisions the expected value of increasingly accurate forecasts (conditional expectations) of the state variables is V(h^(∞)) in the limit.

In the discussion of information systems for the quadratic team, two particular systems were described in some detail: h^(1), a completely decentralized information structure, and h^(2), a partially decentralized structure which allowed communication of reports on observations. Following this analysis for the paint factory cost coefficients in (33), the optimal strategy dynamic decision rules under h^(1) in (9) are

(37.1)  P_t = E[P*_t | O_1t] and

(37.2)  W_t = E[W*_t | O_2t], for t = 1, 2, ....

Similarly, applying system h^(2) to the paint factory case, the optimal strategy dynamic rules in (26) are

(38.1)  P_t = E[P*_t | d_t] + 0.712 E[(S_t | d_t) - (S_t | h^(2)_1t)] - 1.632 E[(r_t | d_t) - (r_t | h^(2)_1t)]
              + 0.594 E[(S_{t+1} | d_t) - (S_{t+1} | h^(2)_1t)]
              + 0.475 E[(S_{t+2} | d_t) - (S_{t+2} | h^(2)_1t)]
              + 0.356 E[(S_{t+3} | d_t) - (S_{t+3} | h^(2)_1t)]
              + 0.237 E[(S_{t+4} | d_t) - (S_{t+4} | h^(2)_1t)]
              + 0.119 E[(S_{t+5} | d_t) - (S_{t+5} | h^(2)_1t)],

and

(38.2)  W_t = E[W*_t | d_t] + 0.0476 E[(r_t | d_t) - (r_t | h^(2)_2t)]

for t = 1, 2, ..., where P*_t and W*_t are given by the decision rules in (34.1) and (34.2), respectively, and the terms E(· | d_t) and E(· | h^(2)_it) are forecasts of the state variables based on the report of observations (d_t) and on each member's knowledge of the reporting system and his own observations (h^(2)_it). Concerning the latter, the optimal strategy dynamic rules in (38) permit adjustment of the forecasts of the local state variables in each case on the basis of improved information.

From the decision rules given in (34), (37), and (38), it is apparent that the economics of introducing a formal description of information systems into the analysis of aggregate planning decision rules is reflected directly in the feed-forward segment of the decision functions and in the relative increased accuracy such systems provide for state variable forecasts. In this regard, the team decision


model has been a convenient analytical framework for detailing the normative

evaluation. Some practical considerations based on this discussion for the design

of computer-based decision and information systems in the firm are analyzed in [2].

Carnegie-Mellon University

REFERENCES

[1] BOOT, J. C. G.: Quadratic Programming, North-Holland Publishing Company and Rand McNally and Co., 1964.

[2] KRIEBEL, C. H.: "Information Processing and Programmed Decision Systems," Management Sciences Research Report No. 69 (April, 1966), Carnegie Institute of Technology, Pittsburgh, Pa.

[3] MARSCHAK, J.: "Towards an Economic Theory of Organization and Information," Chapter XIV in R. M. Thrall, C. H. Coombs, and R. L. Davis (eds.), Decision Processes, J. Wiley, 1954.

[4] MARSCHAK, J.: "Problems in Information Economics," Chapter 2 in C. P. Bonini, R. K. Jaedicke, and H. M. Wagner (eds.), Management Controls: New Directions in Basic Research, McGraw-Hill, 1964.

[5] HOLT, C. C., F. MODIGLIANI, J. F. MUTH, AND H. A. SIMON: Planning Production, Inventories, and the Work Force, Prentice-Hall, 1960.

[6] RADNER, R.: "The Evaluation of Information in Organizations," pp. 491-533 in J. Neyman (ed.), Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley, 1961.

[7] RADNER, R.: "Team Decision Problems," Annals of Mathematical Statistics (September, 1962), pp. 857-881.

[8] RAIFFA, H., AND R. SCHLAIFER: Applied Statistical Decision Theory, Division of Research, Harvard Business School, Harvard University, 1961.

[9] THEIL, H.: Optimal Decision Rules for Government and Industry, North-Holland Publishing Co. and Rand McNally and Co., 1964.

[10] VAN DE PANNE, C.: "Optimal Strategy Decisions for Dynamic Linear Decision Rules in Feedback Form," Econometrica (April, 1965), pp. 307-320.
