Social Networks 12 (1990) 1-25

North-Holland

ENACTING NETWORKS: THE FEASIBILITY OF FAIRNESS *

Eric M. LEIFER **

Columbia University

Network analysis has ignored the process of network enactment. Yet there are many fairness norms, such as reciprocity, that are oriented toward the timing of encounters as much as toward their structure. The difficulties involved in enacting a network within the bounds of such fairness norms can constrain what kinds of network structures are sustainable. In this paper, these difficulties are assessed across networks varying in size, density and differentiation using a computer program that searches for fair network enactments. In one application, the results help explain actual fairness properties of National Football League season schedules (1960-1987), such as the decrease in home-away game alternation after the 1969 merger between AFL and NFL and the threshold that was reached in 1977 and not substantially exceeded since. In another application, null expectations for short-run exchange imbalances (between giving and taking) are generated for networks where a long-run generalized norm of reciprocity strictly holds. A strong faith in the long run is needed in large, moderately dense, undifferentiated networks because eliminating the short-run imbalances can be infeasible. The pursuit of fairness is limited as much by the means of network designers as by their intentions.

1. Introduction

Fairness complaints are typically framed in dyadic terms, in terms of what one side should have done for another. When the dyad is part of a larger network, however, such complaints ignore the competing obligations of other ties. Actor A, who is normally quite generous, asks for some help from B, who enjoys others' generosity. Yet B is tied up helping out C and D, and thus is unavailable even to E, who would like to help B out. This kind of "embeddedness" (Granovetter 1985) is rarely taken into account in fairness complaints. Yet it might be impossible or infeasible to enact a network where unfairness is not perceived at some point by some actors.

* Support from the National Science Foundation (SES 8610363) is gratefully acknowledged. The paper has benefited from the comments of Steve Appold and Rich Milby.

** Department of Sociology, Columbia University, New York, NY 10027, U.S.A.

0378-8733/90/$3.50 © 1990, Elsevier Science Publishers B.V. (North-Holland)


Once the competing obligations of others are acknowledged, fairness ceases to be purely a matter of intentions. Fairness can rest on having adequate means for working out the intricate problem of who should encounter whom, when, and on what terms. This paper examines the feasibility of fairness in network enactment where there is an "intendedly fair" coordinator. 1 Focus is not on the absolute difficulty, which will vary with the technology available to the coordinator, but on the relative difficulty across networks varying in size, density and differentiation within a given technology. The technology is a new computer program, ENACTER, that generates networks and schedules encounters within the constraints of specified fairness norms. The fairness norms examined, beyond those used to generate the network, are ones that (1) require that no actor be denied an assignment each encounter period, and (2) fix a degree of alternation in an asymmetric tie that no actor can step beyond.

The results yield a number of testable hypotheses on where we should expect laxness in strict fairness to occur, due to inadequate means, and thus where we should observe a stream of fairness complaints. Assuming fairness complaints increase the instability of a network, the results point to some basic limitations in the design of stable networks. These are explored in the two application sections. In one, the merger of the American and National Football League schedules in 1970 is shown to make the alternation of home and away games more difficult and create a fairness threshold reached in 1977 that has not been markedly surpassed to the present. In a second application, the tendency is examined for imbalances in giving and taking to emerge even where a generalized norm of reciprocity holds in the long run. In enactments where the short-run imbalances are large, there needs to be considerable faith in the long run for the enactment to continue.

2. Fairness norms as constraints in network enactment

Network enactment can be divided into two steps, with each step subject to constraints in the name of fairness. The first step is to

1 Fairness is an issue whether there is an actual designer, or the network is enacted through a decentralized process. The two applications span these two possibilities. In the discussion, it is suggested that fairness norms extending beyond dyads may be least relevant in between the extremes of centralization and decentralization.


     1   2   3   4   5   6   7   8
1    0   0  10   0   0   6   7   9
2    0   0   0   5   9   0   6  10
3    1   2   0   0   0   0   5   6
4    8   0   9   0   1   0  10   0
5    0   0   7   6   0   8   0   5
6    5   7   0   4  10   0   0   0
7    0   8   3   0   4   9   0   0
8    4   3   0   7   0   1   0   0

Fig. 1. A schedule for 8 actors who each send and receive 8 ties over 10 encounter periods. Self-encounters, the two encounter periods each actor is not engaged, are suppressed; these would be diagonal entries.

generate a network that determines who encounters whom, and on what terms if "A encounters B" is different from "B encounters A." The network can be represented by a (0, 1) matrix M where a "1" in cell mij denotes a tie "sent" from actor i to actor j (e.g., actor i initiates, gives, hosts). The second step, enactment, is to assign a time, or period, to each tie, turning the network into a schedule. The schedule can be represented by a matrix S where the timing of ties sent is given by letting cell sij equal the period (e.g. minute, hour, day, weekend...) when the encounter occurs, with sij = 0 denoting no encounter. No actor can be involved in more than one encounter in any given period. 2

A sample schedule is shown in Figure 1. The "5" in the second row, fourth column cell means that actor 2 encountered actor 4 in the 5th encounter period. The "0" in the fifth row, second column cell means that actor 5 did not encounter actor 2 (but note that actor 2 did encounter actor 5). The reader is encouraged to check that this schedule

2 This is not restrictive, since "periods" can always be defined so as to make this true. Actors not encountering others during a period can be thought of as engaging in a "self encounter", so each actor is involved in exactly one encounter each period. What is restrictive is that any pair of actors can only encounter each other twice (above and below the diagonal). This can be circumvented by dividing total encounter periods into segments, and applying ENACTER to one segment at a time.


can be enacted, in that it does not demand any actor be involved in more than one encounter in any period. Note also that each actor has 8 encounters in 10 encounter periods, and thus has two "self encounters," or periods "off" (these are diagonal entries, but are suppressed for presentation convenience).
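The check the reader is invited to perform can be sketched in a few lines of modern code (Python here, not the APL of ENACTER). The function below simply verifies that no actor is claimed by two encounters in the same period; the matrix restates Figure 1, with rows as senders and columns as receivers.

```python
# A minimal sketch of the enactability check: S[i][j] is the period in
# which i's tie to j is enacted (0 = no tie). A schedule is enactable
# if no actor appears in two encounters in the same period.

def schedule_is_enactable(S):
    n = len(S)
    busy = set()  # (actor, period) slots already claimed
    for i in range(n):
        for j in range(n):
            p = S[i][j]
            if p == 0:
                continue
            for actor in (i, j):  # both parties are engaged in period p
                if (actor, p) in busy:
                    return False
                busy.add((actor, p))
    return True

# The schedule of Figure 1:
FIG1 = [
    [0, 0, 10, 0, 0, 6, 7, 9],
    [0, 0, 0, 5, 9, 0, 6, 10],
    [1, 2, 0, 0, 0, 0, 5, 6],
    [8, 0, 9, 0, 1, 0, 10, 0],
    [0, 0, 7, 6, 0, 8, 0, 5],
    [5, 7, 0, 4, 10, 0, 0, 0],
    [0, 8, 3, 0, 4, 9, 0, 0],
    [4, 3, 0, 7, 0, 1, 0, 0],
]
assert schedule_is_enactable(FIG1)
```

Perturbing any single entry to create a period clash makes the check fail, which is the difficulty the enactment step must continually avoid.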

Many common fairness norms can be expressed in terms of schedule constraints. Starting with the underlying network, fairness norms are likely to influence who is included in the network and hence the size of the network. For those included, fairness norms are likely to constrain the number of row and column encounters each actor is involved in, and hence the density of the network. One possible fairness norm, reflected in Figure 1, is that each actor is involved in the same number of encounters. In some settings, however, fairness may dictate an uneven number of encounters across actors. In other settings where fairness norms differentiate actors into groups and subgroups, a different number of encounters within and across subgroups might be dictated. In pro football, for example, all division members play each other twice, play 14 games within their own conference (there are three divisions in a conference) and play 2 games across conferences (there are two conferences in the National Football League).

In addition, fairness might dictate a division of ties between those sent and received. In generalized exchange, for example, it is important that in the long run giving and taking balance out. In Figure 1, this balance is reflected in the fact that each actor has the same number of row (sent) and column (received) encounters. To continue the pro football example, fairness dictates that encounters be divided equally between home and away locations, since the home field confers a definite advantage on the home team (Schwartz 1977). Every actor in this paper sends the same number of ties as they receive.

Finally, fairness norms can dictate restrictions on possible encounters. Other norms in a setting, such as an incest taboo, or simply pragmatic considerations, may rule out possible encounters. The only fairness norm I make use of in this paper is, however, one that ensures all actors in a subgroup the same access to all actors in their own and other subgroups (although this might be no access). That is, all members of a subgroup are treated equivalently. Restrictions on encounters are used to differentiate actors into distinct subgroups.

Specifications of these fairness norms can be used to generate a network. Additional fairness norms, however, might be invoked to


constrain the enactment of encounters. One is that every actor be assigned each period, so that nobody gets left out. As will soon be evident, it can be difficult to satisfy this norm. One way to relax this norm is to admit "self encounters", or periods when an actor is assigned to encounter himself. These may be justified in terms of needed rest for actors, or necessitated when encounters entail the use of equipment that is in short supply. In Figure 1, each actor has two self encounters. The total number of periods, or duration, is defined by the number of self encounters plus encounters. Self encounters are necessary when there are unequal numbers of encounters across actors (to give less active actors periods "off"). Also, when there is an odd number of actors, or internal cliques of an odd number, at least one must have a self encounter each period.

In many settings there is a fairness norm that regulates the alternation of an asymmetric tie. In generalized exchange, for example, there might be norms regulating the running balance of giving and taking in addition to the long-run balance ensured by the network. In the case of sports competition, an alternation between home and away games might be deemed "fair" to guard against giving any team momentum through a concentration of home games early in the season. In general, fairness norms might regulate the maximum imbalance in the alternation of an asymmetric encounter. Strict alternation, as I will show, is a fairness norm that is not always feasible.

There are additional norms that could be defined at the level of dyads, traditionally attached to the norm of reciprocity (Gouldner 1960; Leifer 1988). These set minimum and maximum delays in reciprocation. These constraints are not easily accommodated in ENACTER as they entangle the network and timing steps in scheduling. 3 For example, a maximum delay norm would require that every encounter be eventually reciprocated, requiring a symmetric encounter network. This, however, makes the assignment of encounters interdependent and thus vulnerable to other fairness complaints, as could be found in, say, the particularism of patronage systems (Eisenstadt and

3 Skvoretz (1985) developed a network-generating program that is designed to satisfy dyadic and triadic constraints. Skvoretz’s program, however, does not make timing assignments, and operates on the basis of probabilistic constraints so that generated networks will stochastically vary in their actual properties. ENACTER strictly satisfies fairness norms.


Roniger 1984). Additional fairness norms can only make the feasibility of fairness a more vexing concern. 4

3. Modelling and monitoring the scheduling task

In what networks is fair enactment more or less feasible? To get an answer, the scheduling task must be modeled and the performance of the model monitored in a range of network settings. The model used here is ENACTER, a computer program written in APL. Although the absolute difficulty of scheduling will vary with different software and hardware, it is conjectured that network variation in size, density and differentiation will pose common relative difficulties across disparate technologies. Personal correspondence with the National Football League provided supporting evidence here. Although it takes two men eight weeks (working late into many evenings) to generate a season schedule that ENACTER can produce with a few minutes of mainframe CPU, changes in league size and degree of home-away game alternation affect both technologies similarly (see Application I).

A flow chart of ENACTER is given in Figure 2. As is immediately apparent, ENACTER separates the steps of network generation and timing assignment. The main principle underlying network generation is the decomposability of hierarchy, where the internal encounters of any subgroup can be isolated and constraints satisfied without reference to the constraints on external members (see Simon 1969). Hence the program can proceed through any number of levels, and groups within levels, tending to only one set of constraints (i.e., marginals) at a time. Within each subgroup, row and column constraints are satisfied exactly, by adding to or deleting from a random initial assignment. The decomposability of hierarchy ensures that the overall difficulty of network generation will be no more than the sum of the subgroup difficulties.
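The kind of network the generation step must produce, in which every actor sends and receives exactly k ties, can be illustrated with a far simpler construction than ENACTER's add/delete repair: lay ties out as a circulant (each actor sends to the next k actors around a circle) and then randomly relabel the actors. This sketch is an illustration of the marginal constraints, not the program's actual procedure.

```python
import random

# A sketch (not ENACTER's add/delete repair) of generating a network
# where every actor sends and receives exactly k ties: a circulant
# satisfies the marginals exactly, and a random relabelling of actors
# keeps the realized structure from always being the same.

def fair_network(n, k, rng=random):
    assert 0 <= k < n
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for d in range(1, k + 1):
            M[i][(i + d) % n] = 1   # row and column sums are both k
    relabel = list(range(n))
    rng.shuffle(relabel)
    return [[M[relabel[i]][relabel[j]] for j in range(n)] for i in range(n)]

M = fair_network(8, 4)
assert all(sum(row) == 4 for row in M)        # each actor sends 4 ties
assert all(sum(col) == 4 for col in zip(*M))  # each actor receives 4 ties
assert all(M[i][i] == 0 for i in range(8))    # no self ties
```

Permuting rows and columns by the same relabelling preserves the row sums, column sums, and empty diagonal, which is why the fairness norms survive the randomization.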

Enactment presents a more complex challenge. A form of annealing is the main principle used, where, in the face of a barrier to further assignments, previous assignments are unscheduled in a way that will allow the program to get past the barrier in moving forward again (see Kirkpatrick et al. 1983). The various levels marked in Figure 2 correspond to how far back the program must go in order to move forward.

4 It is possible, for example, to set a minimum delay for those encounters that happen to be reciprocated. ENACTER has this capability, but no use is made of it in this paper. In all the schedules generated for this paper, the minimum delay is set at zero so that an encounter can be reciprocated in the next period (as occurred between actors 1 and 6 in Figure 1).


Fig. 2. Flow chart of the encounter scheduling program, ENACTER. The left-hand column marks places in the program that are referenced in the text's explanation of performance measures. Features in the actual program that were not used in the present research are suppressed from the flow chart.


For example, suppose we randomly assign encounters for period 1 among 8 actors and reach a point where only actors 3 and 6 remain unscheduled, but they do not have an encounter with each other (and do not have self encounters). In level 2, we look for a period 1 scheduled encounter, say between actors 2 and 7, where 2 has an unscheduled encounter with 3 (or 6) and 7 has an unscheduled encounter with 6 (or 3). Thus we can unschedule 2 and 7 and schedule them with 3 and 6, completing the period assignment. In level 3, activated in the case of a level 2 failure, we must find two encounters to unschedule where the four freed actors can combine with 3 and 6 into three schedulable encounters.
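The level-2 repair just described can be sketched as follows. The representation is an assumption of the sketch, not a detail of ENACTER: the period's assignments are held as unordered pairs, and an `unused` predicate reports whether a tie exists in the network and has not yet been given a period.

```python
# A sketch of the "level 2" repair: actors a and b are left unscheduled
# in the current period and share no tie, so we look for a scheduled
# pair that can be broken apart and recombined with a and b.

def level2_repair(a, b, scheduled, unused):
    for pair in list(scheduled):
        p, q = tuple(pair)
        for x, y in ((p, q), (q, p)):   # try both ways of splitting the pair
            if unused(x, a) and unused(y, b):
                scheduled.remove(pair)
                scheduled.add(frozenset((x, a)))
                scheduled.add(frozenset((y, b)))
                return True             # period assignment completed
    return False                        # level-2 failure: escalate to level 3

# The text's example: 3 and 6 are stuck, 2-7 is scheduled, and the ties
# 2-3 and 7-6 are still unused, so 2-7 is unscheduled in their favor.
sched = {frozenset((2, 7)), frozenset((1, 4)), frozenset((5, 8))}
free = {frozenset((2, 3)), frozenset((7, 6))}
ok = level2_repair(3, 6, sched, lambda i, j: frozenset((i, j)) in free)
```

After the repair, the 2-7 encounter is unscheduled and the 2-3 and 7-6 encounters take its place, completing the period.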

More levels are possible, though in ENACTER a level 3 failure restarts the effort to set period assignments. Higher levels grow increasingly complicated to program, describe, and execute (particularly within primitive technologies). The level (1 to 4, with 4 = failure) at which ENACTER completed assignment attempts is used as an index of the difficulty of the scheduling task. That is, if the probability of level 1 success is high, then the program could move directly through the task. On the other hand, high rates of level 1, 2 and 3 failures indicate difficult scheduling tasks that might frustrate even the best-intended scheduler. The average number of attempts required for period assignments (determined by attempt failures) is also shown in the results.

A limit is placed on the number of failed attempts each period, although this was set so high (64) that it was not reached in the reported trials. The limit would be reached, however, were it not for a check for impossible period assignments. A period assignment is impossible if there is a clique with an odd number of members and no self encounters in the encounter network at the start of the period (i.e. with past assignments removed). Two level-3 failures prompt the program to check for these cliques which, if found, yield a WARN failure. In the case of a WARN failure (or reaching the limit of attempts), the program does some unintelligent backtracking: unscheduling three periods (six for a second failure, then nine for a third, and so on). 5 If five such failures occur, the scheduling task fails. The

5 This particular formula for backtracking is a bit heavy-handed. For the more CPU-intensive runs for Application I, the program was set to backtrack two periods (i.e. the preceding period), regardless of the number of past failures. This seemed to facilitate the task, although the amount of backtracking in either regime was insignificant next to the other impediments to progress. The adjusted average attempt statistic (see p. 10 and footnote 6) was designed to reduce the influence of the arbitrary backtracking formula in assessing total difficulty.


total amount of period backtracking is counted.

The numerous types of failure give a good indication of the difficulty of enacting a particular network. As will soon be evident, completing the task can be exceedingly difficult. It is difficult to anticipate when a dead end will be encountered until it is reached (with the exception of the check for odd-membered cliques), much as in navigating a maze or solving an n-dimensional Rubik's cube. With the final assignments, much must fall into place so that all constraints are satisfied. ENACTER was preceded by numerous alternative versions that performed less satisfactorily, in terms of computing time and chances of total failure. If ENACTER approaches the task in a sensible manner, one that might be echoed across a variety of hardware technologies, its performance can be used to explore the relative feasibility of strict fairness. One indicator of the program's merits is that it has generated "fairer" schedules than have ever been implemented in NFL Football (see Application I).
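The check for odd-membered cliques, the one kind of dead end that can be anticipated, rests on a parity argument and can be sketched as follows. The sketch reads the text's "clique" as a connected component of the remaining encounter graph, and its bookkeeping (`remaining` as an adjacency mapping, `selfs` as the actors with spare self-encounters) is an assumption, not ENACTER's actual data layout.

```python
# A sketch of the WARN check: with past assignments removed, if some
# connected component of the remaining encounter graph has an odd
# number of actors and none of them holds a spare self-encounter, the
# current period cannot pair everyone off.

def period_is_impossible(remaining, selfs):
    seen = set()
    for start in remaining:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:                      # depth-first component sweep
            actor = stack.pop()
            if actor in component:
                continue
            component.add(actor)
            stack.extend(remaining[actor])
        seen |= component
        if len(component) % 2 == 1 and not (component & selfs):
            return True                   # triggers a WARN failure
    return False
```

For example, three actors who can only meet each other cannot all be paired in one period unless one of them has a self-encounter left to absorb the odd man out.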

4. Results

In all the networks enacted in this section, the fairness norms of equal total encounters across actors and an equal number of row and column (e.g. give and take, home and away) encounters for each actor are strictly satisfied. This applies within each level when actors are differentiated. While these fairness norms remained the same, networks varied by size, density and differentiation. In enacting the networks, all actors are assigned each period (this can entail self encounters) and no actor is involved in more than one encounter per period. No fairness norms are imposed on the alternation of row and column encounters in this section.

For each network, ten enactments were generated. Assessment of enactment difficulty is thus based on ten trials. While more trials would be desirable, generating ten schedules in the more difficult network settings often took more than 24 hours on an IBM XT. Analyses for the applications performed on a mainframe greatly speeded up the process, reducing hours to seconds, although cost soon became a factor. Even the most advanced software and hardware cannot ensure that fairness will always be feasible-particularly considering how simple my most complex environments are (with at most only 28 actors) relative to many real-world settings.


Table 1. The effects of system size and network density on scheduling difficulty. Bar graphs in each cell display the probabilities of success in period assignments at program levels 1-4, where a level-4 "success" is actually a level-3 failure. Above the bar graph is the mean success level of all period assignment attempts. On the right side of each cell are, in sequence: average attempts per period; adjusted average attempts per period (controls for backtracking); number of times backtracking was invoked and total periods backtracked (in ten trials); and the average of the largest number of attempts on any period in each trial. Results in each cell are based on 10 trials.

ENACTER generates networks with little difficulty. The number of additions, deletions and retries (see Figure 2) increased in a nearly linear way with system size. The exact effect of size is complicated by the effect of varying density within each size. Difficulty is curvilinear with density, peaking around the point where half of all possible encounters occur. This is not surprising, for when all or no encounters occur the network generation process is trivial. Differentiation posed no new problems, as the hierarchy principle employed by the program allowed a straightforward decomposition of the task.

Enacting the networks is where difficulty becomes an issue. The program levels where period assignments are successfully completed, and total attempts, are used to measure difficulty. In Tables 1-3, the probability of success at each level is shown with a bar graph, with the mean level given above (the probabilities sum to one across the four levels; recall that a level-4 "success" is actually a level-3 failure). On the right-hand side of each cell in Tables 1 to 3, measures that tap the


Table 2. The effects of allowing four self-encounters for each actor in a range of network size and density combinations (compare results to those in Table 1). Shaded regions in level 2 denote situations where no self-encounters remained for unscheduled actors. There is no level 3 in this situation when a special level 2 algorithm fails. Statistics in each cell are the same as those used in Table 1. Results are based on 10 trials.

total number of period assignment efforts are given. The top number is the average attempts per period needed for a successful assignment. This is simply the total attempts divided by the number of periods (averaged across ten trials). The second number incorporates an adjustment for the total backtracking across the ten trials. 6 The third line gives two numbers, the first the number of assignment failures and the second the total number of periods backtracked over the ten trials. The fourth line gives the average (over ten trials) largest single period attempt total. The larger all of these right-side numbers, the more difficult was the enactment process.

6 The adjusted statistic removes the effects of backtracking from average attempts. Instead of dividing total attempts by the number of encounter periods, the adjusted statistic divides total attempts by encounter periods plus periods backtracked (i.e. the actual number of periods that were at one point scheduled). As is evident in the tables, the number of periods backtracked varied radically across network settings, and so its influence should probably be removed in relative comparison. The absolute amount of backtracking is shaped by the arbitrary backtracking formula (see Application I for a variation), yet any assessment of enactment difficulty cannot ignore backtracking entirely.
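The two attempt statistics of footnote 6 can be written out as a small sketch; the function name and the illustrative numbers are my own, not drawn from the reported trials.

```python
# The raw average divides total attempts by the number of encounter
# periods; the adjusted average also counts the periods that were
# scheduled and then backtracked over, diluting the attempts that
# backtracking forces the program to re-spend.

def average_attempts(total_attempts, periods, periods_backtracked=0):
    raw = total_attempts / periods
    adjusted = total_attempts / (periods + periods_backtracked)
    return raw, adjusted

# e.g. 120 attempts over 10 periods, 5 of which were backtracked over:
raw, adj = average_attempts(120, 10, 5)   # raw = 12.0, adjusted = 8.0
```

The adjusted figure is always the smaller of the two whenever any backtracking occurred, which is why it is reported alongside the raw average rather than in place of it.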


Table 3. The effects of introducing differentiation in patterns of subgroup encounters on scheduling difficulty. Each network is divided into two equally sized subgroups. Statistics in each cell are the same as those used in Tables 1 and 2. Compare results for each group of four cells with the Table 1 cell with the same system size and overall density (i.e. encounters).

Start with the undifferentiated systems in Table 1. An immediate conclusion is that increasing network size greatly increases the difficulty of the task. Holding the number of encounters constant, probabilities of level-1 success, or conversely, probabilities of assignment failures, decrease or increase faster than multiplicative expectations. Relatedly, the number of attempts per period increases fivefold in moving from network size 8 to 24 where there are 6 encounters per actor, an important case since it is usually the last 6 or so periods that present the most difficulties. In terms of processing time, scheduling 8 actors averaged around a minute on the IBM PC, while 24 actors averaged over two hours. Size obviously presents a formidable barrier to the enactment process.


Another striking finding from Table 1 is the curvilinearity between difficulty and network density (i.e. number of encounters per actor). Period assignments are easiest when the network is extremely dense or extremely sparse, and most difficult when actors have between 4 and 8 encounters each. This curvilinearity can at least be partly explained by the relative ease of early period assignments where there are many encounters to draw from, and the automatic success of the final period assignment where each actor has only one encounter left. However explained, the curvilinearity seems to support the bifurcation of ties into "strong" and "weak" categories, associated with very dense and very sparse networks, as a middle range of ties might be attached to networks that are the most vulnerable to fairness complaints (see Granovetter 1973).

In the above analysis, each actor was paired each period. One way to "loosen up" the enactment process is to increase the duration by adding self encounters. This can allow actors who can find no available encounter in a period to take the period off (see footnote 7 for an explanation of how levels 2 and 3 are modified in ENACTER to handle situations where no self encounters are available). Self encounters help eliminate the interdependencies that derive from the fact that encounters otherwise involve two actors. To explore the consequence of self encounters for enactment difficulty, four self encounters per actor are added to each network. Results are shown in Table 2. Self encounters reduced, often greatly, the difficulty of enactment in sparse and mildly sparse networks. In the sparsest case, the effect was so great that it almost eliminated the effect of increasing network size on difficulty. Over 91 percent of period assignment attempts in the largest network were resolved at level 1 when self encounters were allowed, compared to 26.3 percent when no self encounters were allowed. Oddly, though, the curvilinearity observed without self encounters was pushed toward the dense end of the spectrum. That is, difficulty steadily increased with network density, and it is not even clear that this increase stops as the extreme of full connectivity is approached. Self encounters offer no solution to enactment difficulties in denser networks. 7

7 Perhaps self encounters should be added in a fixed proportion to encounters, as this might be the relevant basis for comparison. In addition, levels 2 and 3 are modified where level 1 fails due to only one unscheduled actor who lacks a self encounter (a "diagonal" failure). Level 2 seeks a scheduled pair, one of whom can encounter the unscheduled actor and the other of whom has a self-encounter. There is no level 3 (i.e., level 3 is a level-2 failure). This has little consequence, however, as the level 3 for regular level-2 failures does not resolve period assignments often.


All analysis so far has involved undifferentiated networks, where every actor had an equal chance of encountering any other. Fairness norms, however, often acknowledge group differences and call for more, or fewer, intragroup encounters than intergroup encounters. What effect do in- or out-group biases have on enactment? The baseline, again, is the undifferentiated results (Table 1, since no self encounters are admitted here). The program does not distinguish between the undifferentiated case and the case where there are no encounter biases between a priori differentiated groups, as is confirmed in Table 3. Look, for example, at the environment with 16 actors in two groups of 8, with 6 intragroup encounters and a total of 14 encounters (i.e. 8 intergroup). Period assignment difficulty approximates that found for an undifferentiated group of 16 actors with 14 encounters (although network generation is more difficult).

As intragroup bias appears, however, period assignments become easier in the limiting cases (shown), and probabilities of first-level success approximate the product of scheduling the subgroups independently, as one would expect. The difference between this limiting case and the unbiased case is evidence of the size effect, since, say, generating one 16-actor schedule with no bias is more difficult than generating two unbiased schedules with eight actors each. The same logic applies to intergroup bias, since the program is blind to on- or off-diagonal blocks of encounters.

In summary, network size is the most consistent obstacle to strict fairness. Increasing network size makes the near-final timing assignments extremely difficult (nearly every assignment in a moderately sparse network). This difficulty can be eased if self encounters are allowed. Strangely, though, allowing self encounters does not facilitate period assignments in dense networks. The obstacle of size can also be reduced somewhat by differentiation that is linked to intra- or intergroup encounter biases. The more insulated subgroups are from others or themselves, the more feasible is strict fairness in the enactment process. Self encounters or differentiation must, of course, be condoned by fairness norms.

These results provide the basis for empirical expectations in settings where fairness norms apply to the underlying schedule of encounters. The expectations, however, must be framed in relative terms, for the technology of scheduling must be considered to derive absolute limits. We can, for example, predict that fairness complaints will increase with network size, although these can be alleviated in moderately sparse networks by "loose" schedules that allow for self encounters. We can also predict that, ceteris paribus, fewer fairness complaints should be found in differentiated than undifferentiated networks (as long as the basis for the differentiation itself is grounded in fairness norms!). In a long-term evolutionary framework, the results can be used to predict what kind of networks are most likely to be sustained, assuming that excessive fairness complaints eventually undermine the viability of the network.

This section has ignored a serious obstacle to enactment. This comes from a fairness norm that would regulate the alternation of some asymmetric tie. No constraints have so far been placed on this alternation. This omission is remedied in the two applications below.

Application I: pro football playing schedules

Season schedules are an important concern of the National Football League. Fairness norms applying to the number of games played at each level (division, conference and league) and the apportionment of home and away games are institutionalized as strict requirements on season schedules. Currently, every team plays 16 games, 8 at home and 8 away, with all division rivals encountering each other once at home and once away and with each team playing two games across conference (thus 14 games within conference). Schedules are "tight" in that every team is scheduled for a game every week of the season. An analysis of actual playing schedules (1960-1987) reveals that these norms are strictly satisfied wherever possible. 8

Analysis of the season schedules also, however, reveals evidence of a norm that has yet to be strictly satisfied. This norm would call for a strict alternation between home games and away games for each team. 9 Since there is abundant documentation that home games confer an advantage on the home team (Schwartz 1977; Edwards 1979), and a widely acknowledged "momentum" effect that confers an advantage to winning itself (Adler 1981), a series of home games especially at the beginning of the season might confer an unfair advantage. It is the case that the Green Bay Packers, throughout their dynasty, had the highest concentration of early home games of any team in football, and the

8 In the late 1960s there were an odd number of teams in the NFL, so not every team could play every week.


New York Jets had the longest stretch of consecutive home games (seven) of any team in the entire period studied the year they won the Super Bowl (1969). 10

There is a definite trend toward stricter alternation between home and away games. In Table 4, four measures are given, all based on the absolute value of home minus away games at each week of the season. Consider the fourth measure, the average home-away imbalance across all teams and weeks. The range is 0.5 (maximum alternation) to 4 (minimum alternation in 16-game seasons), since at best the imbalance would be one after odd-numbered weeks and zero after even-numbered weeks, and at worst the imbalance would increase by one each week until the middle of the season (week 8) and decrease by one thereafter. Table 4 reveals the NFL doing consistently better than the AFL in approaching the ideal. The merger of the leagues for the 1970 season increased the difficulty of scheduling by increasing network size, and can account for the step backward from the NFL's standard. Progress resumed, however, until 1977, when the NFL reached a threshold in fairness that has not been significantly exceeded in the decade that has followed. In terms of consecutive home or away games, four-game runs have virtually disappeared, although three-game runs are still a part of season schedules (experienced by 12-22 teams each season).
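The arithmetic behind this 0.5-to-4 range can be checked with a few lines. This is an illustrative helper, not part of the paper's apparatus; the `'H'`/`'A'` schedule encoding is invented for the sketch.

```python
def average_imbalance(schedule):
    """Average weekly |home - away| imbalance for one team's schedule,
    given as a string of 'H' (home) and 'A' (away) entries, one per week."""
    home = away = 0
    total = 0
    for game in schedule:
        if game == 'H':
            home += 1
        else:
            away += 1
        total += abs(home - away)  # running imbalance after this week
    return total / len(schedule)

strict = 'HA' * 8           # strict alternation over a 16-game season
worst = 'H' * 8 + 'A' * 8   # all home games first, then all away
```

Here `average_imbalance(strict)` gives 0.5 (imbalance alternates 1, 0, 1, 0, ...) and `average_imbalance(worst)` gives 4.0 (imbalance climbs to 8 at mid-season and falls back to 0), matching the bounds in the text.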

Since 1970 the network underlying season schedules has not changed substantially. Only two new teams have been added (size change), and the playing season has been lengthened by two games (density change). The

9 de Werra (1982, 1985) has analytically shown that strict alternation is impossible in round robin tournaments where everyone plays everyone else only once. Although de Werra explicitly set out to address the problem of home-away game alternation in league sports, his proofs cannot be applied to the NFL schedules, which are not round robin, nor is his definition of strict alternation as home-away-home-away... appropriate. The team with the good fortune of beginning the season with a home game would, after each odd-numbered period, also have one more home game than a team starting with an away game. This could easily be perceived as unfair, insofar as the momentum accrued from the advantage of home games reduces the disadvantage of the even-numbered away games. Strict alternation is better defined in terms of eliminating the imbalance between home and away games after each even-numbered period, leaving the ordering of home and away in each odd-even couplet open to random determination. Thus a team could play its second and third game at home under strict alternation, as long as it played its first and fourth games away.

10 Statistical analysis across all teams from 1960 to 1981, however, failed to find a significant relation between earliness of home games and season performance. Since the home advantage is not uniform across all teams (Leifer 1986), it is possible that the early home games did confer an advantage to these particular teams. At least this could be the basis for perceived unfairness.


Table 4
The degree of home-away game alternation in actual playing schedules for the American and National Football Leagues since 1960. All statistics are based on the weekly imbalance between home and away games (|home games - away games|) for each week and each team in the schedule. The first descriptor for imbalance applies across teams, the second applies across weeks for each team. Hence, the "average average imbalance" is the average across teams of each team's average imbalance.

same two men have laboriously generated the season schedules by hand since 1970, in a process that takes around eight weeks (including many evenings). Their improvements in alternating home and away games can be credited to experience, but what can explain the threshold they reached in 1977? Something in the enactment process must be stopping them.

This conjecture is supported by attempting to move closer to the fairness ideal than has ever been achieved by the league, and monitoring just how difficult this is. A MAXIMUM imbalance constraint was introduced, where only encounters that did not violate this constraint were considered for period assignments, taking into account that schedules have to be imbalanced after odd-numbered periods. If, for example, MAXIMUM is set at "2", on even-numbered periods teams can be assigned a second consecutive home or away game (and on odd-numbered periods they can be assigned a third, MAXIMUM + 1, consecutive home or away game). Only even numbers are used for MAXIMUM, as each even-numbered period offers the opportunity to swing the imbalance from the prior even-numbered period two counts or zero counts. Because a perfect schedule (MAXIMUM = 0) could not be generated, and observed schedules were better than could be generated with MAXIMUM = 2, the program was allowed to increase MAXIMUM by two if there was a period assignment failure (64 level-3 failures or the discovery of an odd-membered clique). For trials with a less stringent alternation constraint, MAXIMUM was fixed. The MAXIMUM value and the actual imbalance statistics are used to describe the fairness norm used and realized, respectively.
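The per-assignment feasibility test implied by the MAXIMUM constraint can be sketched as a simple check. This is a hypothetical helper, not ENACTER's actual implementation; the function name and week numbering (from 1) are assumptions.

```python
def allowed(home_count, away_count, is_home_next, week, maximum):
    """Check whether assigning a team its next game (home if is_home_next)
    keeps its running |home - away| imbalance within the MAXIMUM constraint:
    at most `maximum` after even-numbered weeks, and `maximum + 1` after
    odd-numbered weeks (since schedules must be imbalanced after odd weeks).
    Weeks are numbered from 1."""
    home = home_count + (1 if is_home_next else 0)
    away = away_count + (0 if is_home_next else 1)
    limit = maximum if week % 2 == 0 else maximum + 1
    return abs(home - away) <= limit
```

With MAXIMUM = 2, for example, a team at 1 home and 0 away can take a second consecutive home game in week 2, and a third in week 3 (the MAXIMUM + 1 allowance), but a fourth consecutive home game in week 4 would be rejected.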

Table 5
The difficulty of achieving greater home-away game alternation in National Football League schedules. Cell statistics are the same as those in Table 1. MAXIMUM is the maximum allowed imbalance on even-numbered weeks (MAXIMUM + 1 allowed on odd-numbered weeks). Where a "+" follows the MAXIMUM value, the program was allowed to add two to MAXIMUM (only once) where it would otherwise have to backtrack. "F" in the backtracking formula refers to the number of period assignment failures, and is multiplied by a constant to determine the number of periods backtracked. In parentheses next to the number of trials are the number of scheduling failures (where "F" exceeded a limit of 5). Use of the failed trials in the results lends a conservative bias: difficulty is greater than indicated. In the far right column are the same imbalance statistics used in Table 4, averaged across all trials.


The results are given in Table 5. The fairness threshold reached by the NFL in 1977 was surpassed by the scheduling program, but only with great difficulty. Schedules that surpassed the observed barrier were roughly seven times more difficult (in terms of number of attempts needed) to generate than schedules with the alternation properties of the best of those actually used. The failure to generate a perfect schedule suggests that going beyond the new threshold reached may be exceedingly difficult. Interestingly, getting to the observed threshold from a point where no alternation constraints are imposed is not that difficult. An exponential increase in difficulty apparently begins near where the observed threshold lies. Getting to the observed threshold was feasible; going beyond was not. This helps explain why the 1977 threshold was not surpassed.

Application II: the problem of generalized exchange

Suppose there is a fairness norm operating in a social group that demands a long-run balance between giving and taking help for each member. Each member is free to ask help from other members, and is expected to give help when asked. Exchange is generalized in that help is given to or taken from the group, without regard to the balances between individual actors. As long as actors are taking as much help from the group as they are giving, fairness will be perceived. Given the fortuitousness in who happens to be available for helping, and in when help will be needed, it will be the case that some actors get ahead or behind in their giving or taking. This will be true even if, from an omniscient viewpoint, a long-run balance would materialize.

Null models are needed to show how much short-run imbalance would be found even if a long-run balance held. Empirical efforts are needed to assess the sensitivity to short-run imbalances against these null expectations. It is conceivable that some not improbable amounts of short-run imbalance could disrupt generalized exchange, through a myopic perception of unfairness, and perhaps even change the encounter dynamics that would otherwise be realized. That is, role differentiation could emerge to account post hoc for those who are ahead and behind, and this might have consequences for future encounters and their outcomes.

Given the numerous interdependencies in the enactment process, it is extremely difficult to analytically obtain null expectations for short-run imbalance given long-run balance. Instead, ENACTER was used to enact networks under the condition of no short-run timing constraints. That is, from a balanced pool of giving and taking encounters (the network), period assignments are randomly drawn without replacement (and in a way that ensures system consistency). The actual short-run imbalance can then be analyzed to obtain null expectations.
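A minimal version of this null-model procedure might look as follows. This is a simplified sketch that ignores the pairing interdependencies ENACTER respects (gives are not matched to takes within periods); the function `null_imbalance` and its pool representation are assumptions for illustration only.

```python
import random

def null_imbalance(n_actors, encounters_per_actor, seed=0):
    """Null expectation for short-run imbalance: each actor starts with a
    balanced pool of 'give' and 'take' encounters (long-run balance holds by
    construction); encounters are drawn at random without replacement, and
    each actor's running (gives - takes) balance is tracked. Returns the
    average absolute imbalance per actor, averaged over the enactment."""
    rng = random.Random(seed)
    half = encounters_per_actor // 2
    pool = [(a, kind) for a in range(n_actors)
            for kind in ['give'] * half + ['take'] * half]
    rng.shuffle(pool)
    balance = [0] * n_actors
    total = 0.0
    for a, kind in pool:
        balance[a] += 1 if kind == 'give' else -1
        total += sum(abs(b) for b in balance) / n_actors
    return total / len(pool)
```

Even though every actor's balance returns to zero at the end of the enactment, the running average imbalance is strictly positive, which is the point of the null model: short-run imbalance exists even under strict long-run balance.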

Measures for short-run imbalance (used in Application I) are reported in the left side of each cell in Table 6 for the same networks as

Table 6
The extent of short-run imbalance in generalized exchange where a long-run balance between giving and taking is strictly satisfied, across scheduling environments varying in size and density. In the left column of each cell are the same imbalance statistics used in Table 4. In the right column are the same statistics normalized between zero and one (the ranges of the imbalance statistics vary across statistics and across scheduling environments). Results are averages across ten trials.


used in Table 1. Increasing network density (i.e. encounters per actor) not surprisingly increases short-run imbalance. When the different ranges are controlled by rescaling each statistic between zero and one (right side of cell), proportionate imbalance decreases with density, although this may mean little to the actors themselves. The size of the network tends to increase the imbalance statistics, an effect that is not easy to account for. While network size does not affect the ranges of the statistics, it does increase the sample from which the statistics are computed. This should have the greatest effect on statistics based on extreme occurrences, like the largest imbalance across all teams. Yet all the statistics seem to be affected similarly by network size. Clearly, analytic work is needed before definite conclusions are drawn.

Actual tolerances to short-run imbalance are needed to assess the significance of the Table 6 levels. Although short-run imbalance increases with longer long runs (i.e. number of encounter periods, or density), how tolerant (i.e. myopic) are actors, and is this tolerance affected by network characteristics? If network size does increase short-run imbalance, how does the presence of a greater number of actors affect the flow of fairness complaints? Table 6 only provides baselines that can be used to increase the tolerance of the otherwise myopic actor.

5. Discussion

Network analysts have shown more interest in the enumeration of ties than in the enactment of ties. In listing ties, completeness is encouraged and chronology is suppressed. This paper used enactment as a way to delimit what structures are sustainable. The key prediction to emerge is that where fairness norms constrain enactment, relatively large, moderately dense and undifferentiated networks can make fair enactment problematic. In the long run, this should render these networks unstable and hence evolutionarily disadvantaged. While this prediction is admittedly crude, it points in the direction of establishing a theoretical underpinning for what has largely been the empirical analysis of network structure.

This paper is written from the standpoint of a benevolent coordinator who has every intention to implement well-defined fairness norms but is frustrated by inadequate means. If this coordinator finds fair enactments infeasible in certain network settings, we must wonder what becomes of fairness when there is no identifiable coordinator at work. A comparison of college and professional sports season schedules, for instance, reveals the greater centralized control of professional leagues (Stern 1979), which allows superior fairness properties in their season schedules. Is the feasibility-of-fairness issue more, or less, relevant where there is no identifiable coordinator responsible for fair enactments?

In many social networks, enactment falls somewhere between fully centralized and fully decentralized control. Some actors have more control over enactment than others. The differences are often legitimated in the form of strong patron-client bonds that impose obligations on both sides. Just how many "powerful" actors, or strong ties, a network can maintain before the dictates of either start impinging upon other actors or ties needs to be analyzed. There can be considerable pressure on actors to block out other obligations and appear fully responsible for living up to dyadically defined and controlled fairness standards (see Boissevain 1974; Foster 1977). While this can have implications for network characteristics, fairness becomes personalized rather than abstract and network wide. Key actors mold the network, as opposed to abstract fairness norms that hold everyone under equal sway.

In this paper, power and particularism were neglected. Dyadic and triadic constraints prevalent in social networks (Holland and Leinhardt 1976) were ignored by ENACTER. This is because, once we lose the benevolent coordinator, abstract fairness norms are not likely to become relevant again until control over enactment is fully decentralized. Here enactment fairness is most likely to become a public issue, and provide needed guidance in allowing actors to work out the timing of their social ties. Decentralized enactment is found, for example, in modern urban-centered networks. Here actors are at the same time heavily committed and open for the right opportunity. The juggling of commitments is a crucial social skill, but the power to forge commitments is something you have to be discreet about. Simple fairness norms like "don't leave anyone in your circle out" or "be sure and appear as generous as you appear covetous" can go far in shaping guest lists and weekend activities.

Where enactment fairness is a public issue, there must be strong pressures to limit network size, to avoid middle-range densities, and to differentiate. In unwieldy networks the burden on social skills will be too great and stepping outside the bounds of enactment fairness will be unavoidable. Drawing the distinction between uncontrollable circumstance and intentional malfeasance will become an irksome task that everyone would prefer avoiding. Self encounters will be used as a measure of last resort, but keeping their distribution even and suffering through them become increasingly difficult the more they have to be used. At some point the energy needed to keep within the bounds of enactment fairness and the tolerance for others hovering too close to these bounds will diminish, and the network, as it was, will be no more.

Besides being tied to the extremes of centralized or decentralized enactment, the relevance of fairness norms depends on a stable and well-defined network to enact. The network provides a fixed accounting framework from which actors monitor enactment fairness as a developing property. Without this fixed framework, actors could not come to be seen as left out or slighted, as these might not be distinguishable from merely being outside the network. Hence an interest in enactment does not exclude an interest in the structure of networks, but is entirely complementary to it.

ENACTER separated the tasks of network generation and network enactment, and hence the latter could not affect the former. It is ironic that this short-run independence may be necessary for a longer-run interdependence. Networks must be stable enough for tensions over enactment fairness to build up. If networks were instantaneously redefined so as always to be consistent with enactment norms (e.g. if someone was left out or slighted, they would simply no longer be in the network), then enactment would just drift aimlessly across network possibilities. On the other hand, networks must not be so fixed that they cannot respond to the pressure for change that the violation of enactment norms may bring.

Evolution has been invoked to make network structures compatible with enactment fairness. More needs to be done, however, in finding the actual mechanisms that ensure the long-run interdependence between enactment norms and network structure. An interesting start is Rosenbaum's (1984) tournament model of mobility within corporations. In this model, an entering cohort is put on an equal footing that officially continues until first promotions are made. Early promotions may be somewhat arbitrary, but function to differentiate the large entering cohort into winners and losers entirely ex post on the basis of who gets promoted. This differentiation makes enactment fairness easier, as winners compete only with other winners for progressively higher-level promotions, but it is carried off only with the highly unequal distributions of rewards and attention in the organization needed to reinforce acquired unequal roles.

6. Conclusions

Fairness is at once disarmingly simple and extraordinarily complex. As a necessary component of stable networks, it must be something that network participants can readily perceive. In an in-depth empirical investigation of the social construction of transfer prices within large corporations, where economic theories are conspicuously irrelevant, Eccles (1986) comes to the simple yet profound conclusion that fairness is the key property in assessing transfer prices and policy. Fairness is instinctual to good managers. Yet when scholars attempt to dissect "fairness", it becomes extraordinarily complex. Anyone who has braved Rawls' A Theory of Justice (1971) or Rae's Equalities (1981) understands how complex fairness can be made.

The problem of “what is fairness” has been avoided by treating fairness norms as “given” inputs. In some systems differentiating the actors might come to be accepted as “fair”, while in others it may not. Likewise, loosening an enactment process with self encounters might be “fair”, or it might not. Our problem begins after these issues have been worked out. Even where there is agreement over fairness norms and sincere efforts to satisfy them, fairness is not always feasible. Fairness, as a property of enacted networks, is a matter of means and not just ends. The construction of particular fairness norms is something that politics can resolve. Social science, however, can allow us to map out possibilities in the design of fair network enactments.

References

Adler, P. 1981. Momentum: A Theory of Social Action. Beverly Hills, CA: Sage.

Boissevain, J. 1974. Friends of Friends: Networks, Manipulators and Coalitions. Oxford: Basil Blackwell.

de Werra, D. 1982. "Minimizing irregularities in sports schedules using graph theory." Discrete Applied Mathematics 4: 211-226.

de Werra, D. 1985. "On the multiplication of divisions: The use of graphs for sports scheduling." Networks 15: 125-136.

Eccles, R.G. 1986. The Transfer Pricing Problem: A Theory for Practice. Lexington, MA: Lexington.

Edwards, J. 1979. "The home field advantage." In J.H. Goldstein (ed.), Sports, Games and Play: Social and Psychological Viewpoints. Hillsdale, NJ: Lawrence Erlbaum.

Eisenstadt, S.N. and L. Roniger 1984. Patrons, Clients and Friends: Interpersonal Relations and the Structure of Trust in Society. Cambridge, UK: Cambridge University Press.

Foster, G.M. 1977. "The dyadic contract: A model for social structure." In S.W. Schmidt et al. (eds.), Friends, Followers, and Factions. Berkeley: University of California Press.

Gouldner, A. 1960. "The norm of reciprocity." American Sociological Review 25: 161-178.

Granovetter, M. 1973. "The strength of weak ties." American Journal of Sociology 78: 1360-1380.

Granovetter, M. 1985. "Economic action and social structure: The problem of embeddedness." American Journal of Sociology 91: 481-510.

Holland, P.W. and S. Leinhardt 1976. "Local structure in social networks." In D. Heise (ed.), Sociological Methodology, pp. 1-45.

Kirkpatrick, S. et al. 1983. "Optimization by simulated annealing." Science 220: 671-680.

Leifer, E.M. 1986. "Sustaining inequality among equals: The effects of social context on competition." NSF grant proposal.

Leifer, E.M. 1988. "Interaction preludes to role setting: Exploratory local action." American Sociological Review 53: 865-878.

Rae, D. 1981. Equalities. Cambridge, MA: Harvard University Press.

Rawls, J. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.

Rosenbaum, J.E. 1984. Career Mobility in a Corporate Hierarchy. New York: Academic Press.

Schwartz, B. and S. Barsky 1977. "The home advantage." Social Forces 55: 641-661.

Skvoretz, J. 1985. "Random and biased networks: Simulations and approximations." Social Networks 7: 225-261.

Stern, R.N. 1979. "The development of an interorganizational control network: The case of intercollegiate athletics." Administrative Science Quarterly 24: 242-266.

Weiss, H.J. 1986. "The bias of schedules and playoff systems in professional sports." Management Science 32: 696-713.