Source: Public Administration Review, Vol. 45, No. 2 (March-April 1985), pp. 275-281. Published by Wiley on behalf of the American Society for Public Administration. Stable URL: http://www.jstor.org/stable/976148


A Task Environment Approach to Organizational Assessment

Alana Northrop, California State University, Fullerton
James L. Perry, University of California, Irvine

* This study takes a task environment approach to organizational assessment by using data gathered from knowledgeable government officials external to the agencies being assessed. These officials were asked to rate the performance of seven federal organizational units in the summer of 1980 and again in 1981 in order to test hypotheses under two different administrations. The study seeks to address the following three questions. (1) Does field office or headquarters status affect the evaluation of an agency or its employees? (2) Does the type of service provided, physical or human, affect the evaluation of employees or agencies? (3) Do employees and the agencies they work for receive different performance ratings? Our findings suggest that the location and type of service provided by agencies, as well as the focus of evaluation, affect the results of organizational assessments.

Alana Northrop is coordinator of the Public Administration Program at California State University, Fullerton. Her previous research has been on management of information systems, municipal reform, and quantitative methods.

James L. Perry is professor of management in the Graduate School of Management, University of California, Irvine. His research on public organizations and management has focused on innovation, organizational effectiveness, and personnel and labor relations. He recently coauthored (with Kenneth Kraemer) Public Management: Public and Private Perspectives (Mayfield, 1983).

In recent years, interest has grown in the comprehensive and systematic measurement of organizational behavior. One reflection of this interest is the increased use of organizational assessment, which is the process of measuring the effectiveness of an organization from a social systems perspective.1 An organizational assessment is grounded in conceptual models of organizations and is typically holistic, i.e., focused on "both the task-performance capabilities of the organization . . . and the human impact of the system on its individual members."2

When assessing the task-performance dimension of effectiveness, client evaluations and objective indicators are normally used to judge the extent to which an organization has met its goals.3 Input from officials in an organization's task environment is a third source of data that is traditionally ignored in organizational assessments, but one which is widely recognized as important.4

This study explores the use of effectiveness measures gathered from officials whose jobs require them to interact with federal organizations. By focusing on such insider respondents, a different view of federal agencies may be secured than that provided by private citizens.5 Respondents include members of Congress, state and local welfare officials, private contractors, and naval ship commanding officers. Being mainly government employees, they should have a relevant comparative perspective from which to evaluate other agencies' and workers' effectiveness, and they should be knowledgeable about service quality because of their stable interactions with the agencies. Of course, evaluations by both private citizens and "insiders" suffer from the limitations of perception and knowledge. But so far, evaluation studies have focused on private citizens with varying degrees of success.6 By using task environment respondents, we hope to provide another part of the evaluation picture, one previously overlooked, but one which can add to the external validity of organizational assessments.7

The concept of task environment not only directs use of a different set of respondents but also requires sensitivity to domains of activity as they vary across organizations.8 Prior organizational assessments have not explored how domain characteristics such as the location or type of service provided by the governmental unit may affect its perceived effectiveness. However, support exists for the view that the type of service provided by an agency can inherently affect its performance rating independently of the quality of the staff or of the organization's procedures.9 This is because relationships between goals of human service organizations and the means to achieve them are often not well understood. In contrast, relationships between goals of physical support organizations and the means to achieve them tend to be better understood.10 For example, methods for preventing soil erosion (a physical service) are based on years of scientific study, while the process for making mortgage investments (a human service) where applicants' creditworthiness falls below market standards is much less well understood. This difference in the connection between means and ends would seem to have an important effect on the ability of an agency to be effective.

In addition, the hierarchical location of a governmental unit may also have an effect on the ability of an agency to be effective. In the federal government, headquarters offices are typically engaged in policy or rule-making activity within politicized, complex, and ambiguous performance environments. However, the primary job of field offices is rule application. Therefore, the performance environment of field offices is ordinarily less politicized and more predictable. For instance, the routinization of field service provision for naval fleet support or social security benefits contrasts with the much less routine regulatory activities of GSA's transportation procurement offices. Hence, it seems reasonable that field offices have a greater potential for effective performance than do headquarters offices.11

It is also expected that location and type of service provided have smaller effects on evaluation of employees than on evaluation of agencies because one cannot completely hold employees responsible for any inherent problems in their work. Thus, employees are expected to get higher ratings than their agencies. As Michelson argues: "Government functionaries, though they work hard and long, accomplish little because they have inherently unproductive jobs."12 In explaining this "working bureaucrat, nonworking bureaucracy" thesis he writes: "This isn't sabotage. It's just plain bad management. It's not bad management at the functional level, assigning and monitoring doable tasks, although that may occur. It's bad management at the conceptual level, devising relevant tasks."13 This theme was reiterated in 1983 in a Wall Street Journal article by Clayton Christensen, a principal with the Boston Consulting Group and former White House Fellow.14 Mr. Christensen also argues that the Carter and Reagan administrations have been especially distrustful of bureaucrats, thereby restraining their productivity.

In sum, this study introduces the task environment as an alternative approach to organizational assessment. The task environment concept directs not only the use of a different set of respondents than typically employed in assessments, but also consideration of (1) whether the type of service provided, physical or human, affects performance ratings; (2) whether headquarters or field office status affects performance ratings; and (3) whether the focus of evaluation, employees or agencies, affects performance ratings. In the following pages, these questions are addressed using data gathered under both the Carter and Reagan administrations.

Methods and Data

The data for this analysis were collected as part of a larger study of the effects of civil service reform in the federal government.15 Contractual and resource limitations confined the number of organizations participating in the study to seven. The following discussion describes the basis for selection of these organizations, the procedures for sampling respondents, and the construction of the evaluation indexes.

Site Selection

Four federal departments were chosen, and a fifth department was added in case one of the departments terminated its voluntary participation for unforeseen reasons. Through negotiations with the five departments, seven sites were chosen. The sample consisted of the following organizational units: the Transportation and Public Utilities Service (TPUS) of the General Services Administration, Washington, D.C. (headquarters/physical services); the Naval Ship Weapon Systems Engineering Station (NSWSES) in Port Hueneme, California (field/physical services); the NASA-Ames Research Center in Moffett Field, California (field/physical services); the Southern California network offices of the Social Security Administration (SSA) (field/human services); the national office of the Farmers Home Administration (FmHA) (headquarters/human services); and the national and California state offices of the Soil Conservation Service (SCS) (headquarters, field/physical services). All agencies were voluntary participants in the study.

Because sites were chosen expressly to test hypotheses about services and location, the primary site selection objective was internal validity. The extent to which sites were representative of all federal organizations was less of a concern because generalization to all federal organizations was not an immediate objective. The sites, however, appear to represent a range of federal organization characteristics, including military and civilian missions and varying ages and sizes. While it is doubtful that a small sample could be representative of all federal agencies, this sample meets the explicit requirements of the research design and yet reflects the diversity found throughout the federal government.

Sample and Sampling Procedures

Members of each organizational unit's task environment, i.e., other federal officials, state and local officials, and oversight bodies, were surveyed. Those surveyed were either selected from archival information or were nominated by managers and staff from the focal agencies. The criterion for selection was that the individual interact with the focal agency in the performance of its tasks or mission (hence the "task environment" designation). We sought out individuals from each program unit to identify potential respondents to assure as representative a sample as possible.

Biases are possible when managers nominate respondents to evaluate their agencies. A manager could knowingly stack the deck with favorable informants or withhold identification of potentially critical informants. Nominations could be biased in other ways, for example, by a manager's inability to remember potential respondents. These biases would not necessarily affect the relative evaluation across agencies, but nevertheless efforts were made to minimize them. Archives were used for random sample selection whenever feasible, and with informants, emphasis was given to the nonthreatening purposes for which the names of potential respondents were being solicited.

Potential respondents in positions with few incumbents were sampled disproportionately. Those in positions that had many incumbents (e.g., naval ship captains and contractors) were randomly sampled. Representative external roles that were included in each organizational unit's task environment are listed below.

TPUS: public utilities, state and local utilities agencies, General Accounting Office, transportation-related divisions and financial branches of federal departments and programs.

NSWSES: naval shipyards, weapons stations and other naval facilities, Department of Defense, Naval Sea Systems Command, contractors, ships' commanding officers.

Ames: contractors, NASA facilities, federal agency liaisons, military installations, high schools, colleges and universities, aircraft associations.

SSA: members of Congress, state and local welfare agencies.

FmHA: housing and investment associations, professional groups, federal departments and agencies, other Department of Agriculture agencies, public interest groups, health associations, water and waste associations.

SCS: professional groups and societies, environmental associations, colleges and universities, public interest groups, equal opportunity associations, federal, state and local agencies.

The sample of task environment members was developed in April and May of 1980 for the June 1980 survey administration, and it was revised before the June 1981 administration. Given the potential difficulty of resurveying the same individuals for the second wave, the probable attrition using a panel design, and the changes in the task environment, a modified panel sampling design was used. The same organizational positions were sampled in each wave, rather than specific individuals. This assured comparability between the 1980 and 1981 samples, after adjusting for any structural changes in the task environments.

The survey was conducted by mail and accompanied by a letter of explanation. A follow-up letter and new survey were mailed to non-respondents about three weeks after the initial mailing. Table 1 summarizes the size of the task environment samples and the number of completed questionnaires.16

Construction of Evaluation Indexes

Ten indicators of employee and agency performance were factor analyzed. Two factors were obtained: (1) a rating of employee integrity and work, and (2) a rating of agency effectiveness. A score for each respondent on the two factors was obtained by averaging the category responses on the individual questions. Each index has a standardized item alpha of at least .80, which suggests high scalability. The questions used to build each index are listed in Table 2.
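For readers who want to reproduce this kind of index construction, the following minimal sketch shows the two computations the passage describes: per-respondent index scores as the mean of the item responses, and the standardized item alpha, k * r_bar / (1 + (k - 1) * r_bar), where r_bar is the mean inter-item correlation among the k items. The data are simulated stand-ins for survey responses, not the study's data.

```python
import numpy as np

def standardized_item_alpha(items: np.ndarray) -> float:
    """Standardized item (Cronbach's) alpha: k * r_bar / (1 + (k - 1) * r_bar),
    where r_bar is the mean inter-item correlation and k the number of items."""
    k = items.shape[1]
    corr = np.corrcoef(items, rowvar=False)        # k x k inter-item correlation matrix
    r_bar = corr[~np.eye(k, dtype=bool)].mean()    # mean of the off-diagonal correlations
    return (k * r_bar) / (1 + (k - 1) * r_bar)

# Simulated responses: 100 respondents x 6 items on the article's 7-point
# strongly disagree (1) to strongly agree (7) scale (illustrative only).
rng = np.random.default_rng(0)
base = rng.integers(3, 8, size=(100, 1))                       # a respondent's overall leaning
items = np.clip(base + rng.integers(-1, 2, (100, 6)), 1, 7)    # items correlate through `base`

alpha = standardized_item_alpha(items.astype(float))
index_scores = items.mean(axis=1)    # each respondent's index = mean of item responses
print(f"alpha = {alpha:.2f}; mean index score = {index_scores.mean():.2f}")
```

Because the simulated items share a common component, alpha here comes out around .9, comfortably above the .80 floor the authors report for their indexes.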

TABLE 1 Task Environment Survey Response Rates

| Site Name | Sample Size (June 1980) | Returned No. (June 1980) | % (June 1980) | Sample Size (June 1981) | Returned No. (June 1981) | % (June 1981) |
|---|---|---|---|---|---|---|
| Farmers Home Administration | 71 | 33 | 46 | 133 | 65 | 49 |
| Soil Conservation Service Headquarters | 8a | 5 | 63 | 230 | 110 | 48 |
| Soil Conservation Service California Field Office | 38 | 28 | 74 | 63 | 45 | 71 |
| NASA-Ames Research Center | 236 | 184 | 80 | 182 | 130 | 71 |
| Naval Ship Weapon Systems Engineering Station | 130 | 94 | 72 | 223 | 119 | 53 |
| Social Security Administration, Southern California Network | 12 | 10 | 83 | 11 | 9 | 82 |
| Transportation and Public Utilities Service | 178 | 80 | 45 | 129 | 79 | 61 |
| TOTAL | 673 | 434 | 64% | 971 | 557 | 57% |

a The first wave of questionnaires was distributed only to a few external contacts for this agency because of the agency's apprehension about the consequences of such a survey. Since this agency's participation was voluntary, we chose not to press for a larger sample and potentially risk discontinuation of the long-term relationship. By the time the second year's survey was administered, the agency's concerns were significantly reduced because the first administration had no negative effects and good working relationships had been developed with the agency. The sample size for the second administration, therefore, was increased considerably.

TABLE 2 Questions Used in Constructing Evaluation Indexes

Rating of Employee Effectiveness* (Alpha = .85)

I would never question the integrity of employees in this organization.
Employees of this organization know what their jobs require of them.
Employees in this organization maintain high standards of conduct.
Employees in this organization have the skills necessary to do their jobs.
Employees in this organization work hard.
Employees in this organization have enough work to keep them busy.

Rating of Agency Effectiveness* (Alpha = .80)

Overall, this organization is effective in accomplishing its objectives.
In this organization, it is often unclear who has the formal authority to make a decision.
It takes too long to get decisions made in this organization.
The management in this organization is flexible enough to make changes when necessary.

*Response categories on a seven-point scale ranged from strongly disagree (1) to strongly agree (7).

Findings

Do Location and Type of Services Affect the Evaluation of Federal Agencies?

We hypothesized that field offices and units which provide physical support services would be judged more effective than headquarters or units which provide human services. On a rating scale of very negative (1) to very positive (7), the headquarters received an average rating of 4.0 in 1980 and 4.3 in 1981 (Table 3). In contrast, the field offices received the higher average ratings of 4.8 in 1980 and 4.9 in 1981 (Table 3). The differences between headquarters and field offices are significant, with less than a 5 in 100 probability that they are due to chance alone. Moreover, the magnitude of Yule's Q, which measures the strength of association between location and rating (Table 4), is substantial by normal conventions, given that Yule's Q varies from 0 (no relationship) to ±1 (a perfect relationship).17 Thus, two different statistical tests, Yule's Q and the t-test for difference in means, show that field offices receive significantly higher ratings than headquarters offices under two different administrations. And looking at the ratings of the seven units individually confirms that the field/headquarters findings are not biased by the disproportionate sample size of any one unit; the individual headquarters ratings are lower than those of the field offices.

Considering the type of services provided by an agency, we find that it has a slightly weaker effect on an agency's effectiveness ratings than does location (Table 4). Again, on a scale of very negative (1) to very positive (7), agencies that provide human services received a 3.9 mean rating in 1980 and a 4.1 mean rating in 1981 (Table 3). But agencies that provide physical services received the higher mean rating of 4.7 in both 1980 and 1981 (Table 3). These differences in mean ratings by service type are statistically significant, and the magnitude of Yule's Q indicates a "moderate" to "strong" relationship between type of service provided and effectiveness ratings (Table 4).18 In addition, the different ratings for agencies that provide human services versus physical services are not due to the disproportionate sample size of any one unit.

In sum, field offices and units which provide physical services have a higher likelihood of being judged more effective than headquarters and units which provide human services. We believe that location affects performance ratings because field offices have more routine and less politicized jobs, making their goals easier to accomplish. We also believe that type of services provided affects performance ratings because governmental agencies that provide physical services tend to have clearer means for achieving their goals than do agencies that provide human services, thereby making their goals easier to attain.

Do Location and Type of Services Affect the Evaluation of Federal Employees?

Respondents were also asked to assess the integrity and work effectiveness of federal employees (see Table 2 for questions). Again we wanted to look at the impact of location and type of services, but this time on the evaluations of employees. We believed that because location and type of services affected the evaluation of federal agencies, a spillover or tarnished halo effect would carry over to the federal employees. In addition, we expected that location and type of services provided would have a weaker effect on the evaluation of employees than on the evaluation of agencies. Specifically, we felt that employees can be viewed as distinct from the agency, and there could, therefore, be a rational tendency not to blame employees for the inherent administrative problems of their work.

On the scale of very negative (1) to very positive (7), our respondents gave headquarters employees a 4.8 mean rating in 1980 and a 4.9 mean rating in 1981 (Table 3). In contrast, employees in field offices received the significantly higher mean rating of 5.4 in both 1980 and 1981 (Table 3). These different ratings for employees in headquarters versus field offices are not due to any disproportionate sample size of unit respondents. Complementarily, our additional statistical tests indicate that functional location has a statistically significant and "moderate" effect on the evaluation of federal employees (Table 4).19 We should also note, though, that location's moderate effect on the rating of employees is smaller than its effect on the rating of agencies (Table 4).

Using the same 1 to 7 scale, employees who are involved in human services received a 4.7 mean rating in 1980 and a 4.9 mean rating in 1981 (Table 3). As anticipated, employees who are involved in physical support services received the higher average ratings of 5.3 in 1980 and 5.2 in 1981 (Table 3). The effect of type of services provided on respondents' evaluations of federal employees is statistically significant and "moderate" in strength (Table 4).20 These findings again are not due to the disproportionate sample size of any one unit's respondents. In addition, type of services provided has a smaller effect on the evaluation of employees than it does on the evaluation of the agencies in which they work (Table 4).


TABLE 3 Means on Evaluation Indexes by Location and Service (a)

| | Rating of Employees, 1980 (b) | Rating of Employees, 1981 (b) | Rating of Agencies, 1980 (b) | Rating of Agencies, 1981 (b) |
|---|---|---|---|---|
| Location: Headquarters | 4.8 (N=115) | 4.9 (N=249) | 4.0 (N=118) | 4.3 (N=250) |
| Location: Field | 5.4* (N=308) | 5.4* (N=298) | 4.8* (N=309) | 4.9* (N=301) |
| Service: Human Services | 4.7 (N=42) | 4.9 (N=73) | 3.9 (N=43) | 4.1 (N=73) |
| Service: Physical Services | 5.3* (N=381) | 5.2* (N=474) | 4.7* (N=384) | 4.7* (N=478) |
| Location and Service: Headquarters/human | 4.7 (N=33) | 4.9 (N=64) | 3.9 (N=33) | 4.1 (N=64) |
| Location and Service: Field/human | 4.7 (N=9) | 4.7 (N=9) | 3.9 (N=10) | 4.3 (N=9) |
| Location and Service: Headquarters/physical | 4.7 (N=82) | 4.9 (N=185) | 4.1 (N=85) | 4.4 (N=186) |
| Location and Service: Field/physical | 5.4 (N=299) | 5.4 (N=289) | 4.8 (N=299) | 4.9 (N=292) |
| All Respondents | 5.2 (N=423) | 5.2 (N=547) | 4.6 (N=427) | 4.6 (N=551) |

a Medians for location and service parallel the mean findings, so only one statistic is used for parsimony of data presentation.
b Index ranges from (1) strongly disagree (very negative rating) to (7) strongly agree (very positive rating).
*p < .05, one-tailed t-test for difference in means within years (headquarters vs. field; human vs. physical services).
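The asterisks in Table 3 flag one-tailed t-tests for a difference in means within a year. A minimal sketch of such a test on simulated scores follows; the group sizes and means loosely mimic the 1980 headquarters/field agency ratings, this is not the study's data, and the `alternative` argument requires SciPy 1.6 or later.

```python
import numpy as np
from scipy import stats

# Simulated 1-to-7 agency-rating index scores for two respondent groups.
rng = np.random.default_rng(1)
headquarters = np.clip(rng.normal(4.0, 1.2, size=118), 1, 7)
field = np.clip(rng.normal(4.8, 1.2, size=309), 1, 7)

# One-tailed two-sample t-test: H1 says field ratings exceed headquarters ratings.
t_stat, p_value = stats.ttest_ind(field, headquarters, alternative="greater")
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")   # flagged with * if p < .05
```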

To summarize, employees in field offices and those involved in physical service delivery receive higher evaluations in terms of integrity and quality of work than do employees in headquarters and those involved in human service delivery. Consequently, the location and type of services provided by an agency affect not only its image but also that of its employees. But being a headquarters office or providing human services has a greater effect on the evaluation of an agency than it does on the evaluation of an agency's employees.

Are Employees Rated Higher Than Their Agencies?

The third issue of importance to organizational assessment that we wanted to address was whether evaluation of the employees of an agency differs from evaluation of their agency. We hypothesized that an agency and its employees could well receive different ratings because the quality of staff can be distinguished from the quality of the organization's procedures.


A large overlap was found in ratings of the employees and the agencies (1980 Q = .83*, 1981 Q = .76*). However, employees were consistently rated higher than their respective agencies (Table 3). This is true for each of the seven units individually as well as grouped by location or type of services (Table 3). Using our 1 to 7 scale, employees received a mean rating of 5.2 in both years, while the agencies received a statistically significantly lower mean rating of 4.6. The median scores as well as the mean scores were lower for agencies than for employees in both years.

Summary and Conclusions

The concept of task environment not only directed us to use a different set of respondents but also to be sensitive to domains of activity as they vary across organizations. As a result, we hypothesized that field offices and physical support organizations had an important edge in performance achievement over headquarters and human service organizations. Our data strongly support both of these hypotheses. Field offices and physical services organizations received much higher evaluations than did headquarters and human service organizations. Moreover, the findings remained the same even though one survey took place under the Carter administration and another under the Reagan administration. The stability of these findings suggests that systemic factors such as location and type of services provided may be important explanations of differences in performance ratings.


TABLE 4 Yule's Qs Between Evaluation Indexes and Location and Service Measures (a)

| | Rating of Employees, 1980 | Rating of Employees, 1981 | Rating of Agencies, 1980 | Rating of Agencies, 1981 |
|---|---|---|---|---|
| Headquarters/field | -.47* | -.47* | -.64* | -.52* |
| Human services/physical services | -.39* | -.28* | -.51* | -.44* |

a All indexes ranged from 1 to 7 and were dichotomized by combining the positive ratings of 5, 6, and 7 and the negative and undecided ratings of 1, 2, 3, and 4. Variables were dichotomized because there were very few high or low scores on the 7-point scales, leading to the problem of insufficient units in table cells to be analyzed. Yule's Q is an appropriate measure of association because the independent variables are measured at the nominal level and all variables are dichotomous. The data adhered to the 30:70 rule of marginal splits.
*p < .05.
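Table 4's note fully determines the computation: dichotomize each 7-point index (5, 6, and 7 positive; 1 through 4 negative or undecided), cross it with the location or service dichotomy, and take Yule's Q = (ad - bc) / (ad + bc) on the resulting 2x2 table. The sketch below walks through those steps on simulated ratings; the counts and distributions are illustrative, not the study's.

```python
import numpy as np

def yules_q(table: np.ndarray) -> float:
    """Yule's Q for a 2x2 table [[a, b], [c, d]]: (ad - bc) / (ad + bc).
    Ranges from -1 to +1; 0 means no association."""
    (a, b), (c, d) = table
    return (a * d - b * c) / (a * d + b * c)

# Simulated 7-point agency ratings from headquarters and field respondents.
rng = np.random.default_rng(2)
hq = np.clip(np.round(rng.normal(4.0, 1.2, 118)), 1, 7)
field = np.clip(np.round(rng.normal(4.8, 1.2, 309)), 1, 7)

# The article's dichotomization: 5-7 positive, 1-4 negative or undecided.
hq_pos, field_pos = hq >= 5, field >= 5

# 2x2 contingency table: rows headquarters/field, columns negative/positive.
table = np.array([
    [np.sum(~hq_pos), np.sum(hq_pos)],
    [np.sum(~field_pos), np.sum(field_pos)],
])
print(table)
print(f"Yule's Q = {yules_q(table):.2f}")
```

The sign of Q depends only on how the rows and columns are coded; the negative entries in Table 4 reflect the authors' coding of the location and service dichotomies.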

Furthermore, we found that the effects of location and type of services carry over, although not as strongly, to evaluations of federal employees. Employees in field offices and those who were involved in physical services received higher ratings than did employees in headquarters and those who were involved in human services.

Finally, we found that employees were consistently rated higher than their agencies. We think it is important not only that employees received different ratings from their agencies but also that they received higher ratings. And we believe that the higher employee ratings reflect problems, real and perceived, with the management of federal agencies. Such management problems appeared to be present under both the Reagan and Carter administrations.

In a speculative vein, these results have implications for the reorganization of the federal bureaucracy. Both Jimmy Carter and Ronald Reagan rode to victory as Washington outsiders. Carter viewed the problem of the federal bureaucracy as one of personnel and consequently focused on improving the quality of the workforce. These findings suggest that the system may be a more serious problem than the motivations or abilities of its personnel. Instead of merit pay and other employee-oriented incentives, Congress and the present administration should consider decentralizing the federal bureaucracy. By expanding and creating new field offices, the image of the federal bureaucracy might improve, at least among government insiders and perhaps among a wider group of the citizenry. And to the extent that perceived effectiveness relates to effectiveness, the creation of new field offices could lead to payoffs in effectiveness. Note that we are suggesting decentralization through the creation of federal field offices, not decentralization through transferring federal programs to the states.

Our findings also suggest that cutting human services, a Reagan strategy, may not deal with the problem of human services agencies. Human services organizations appear to be perceived as less effective, and thus are open to criticism, due to the inherent nature of their tasks rather than to their present size, structure, or personnel. However, to the extent that the Reagan administration strategy leads to an aggregate shift toward physical services, the image of the federal government might be improved among government insiders and perhaps, in turn, among a wider group of the citizenry.

Future research will need to assess how results from this task environment approach may be used either in conjunction with or as a substitute for other sources of evaluation information, such as private citizen evaluations and objective indicators. One possible area for exploration is to see whether private citizens or objective indicators discriminate on the basis of location and type of services as did our task environment respondents. It is likely that this sample of federal agencies is not indicative of all federal agencies. Other studies will have to confirm whether our findings on location and type of services performed are unique to our agencies or are factors that are generalizable to all federal agencies and, for that matter, other similarly structured government bureaucracies.

Notes

1. Edward E. Lawler III, David A. Nadler, and Cortlandt Cammann (eds.), Organizational Assessment (New York: John Wiley and Sons, 1980).

2. Ibid., 6.

3. For a summary of many client-based evaluations, see Charles Goodsell, The Case for Bureaucracy (Chatham, N.J.: Chatham House Publishers, Inc., 1982), Chapter 2. Illustrative of these client surveys are Charles Goodsell, "Client Evaluation of Three Welfare Programs," Administration and Society 12 (August 1980), 123-136; and Stuart Schmidt, "Client-Oriented Evaluation of Public Agency Effectiveness," Administration and Society 8 (February 1977), 403-422.

4. The task environment concept was first introduced by William Dill to identify the parts of the environment which are relevant to goal setting and attainment. It has been used extensively in the organization theory literature. See William R. Dill, "Environment as an Influence on Managerial Autonomy," Administrative Science Quarterly 2 (March 1958), 409-443.

5. See F. P. Kilpatrick, et al., The Image of the Federal Service (Washington, D.C.: The Brookings Institution, 1964) for the public's and federal employees' ratings of federal employees.

6. For a summary of the criticisms of citizen surveys, see Jeffrey L. Brudney and Robert E. England, "Urban Policy Making and Subjective Service Evaluations: Are They Compatible?" Public Administration Review 42 (March/April 1982), 127-135.

7. Donald T. Campbell and D. W. Fiske, "Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix," Psychological Bulletin 56 (1959), 81-105.

8. See James D. Thompson, Organizations in Action (New York: McGraw-Hill, 1967), and Andrew H. Van de Ven and Marilyn A. Morgan, "A Revised Framework for Organization Assessment," in Organizational Assessment, Edward E. Lawler III, David A. Nadler, and Cortlandt Cammann (eds.) (New York: John Wiley and Sons, 1980).

9. James D. Thompson, Organizations in Action (New York: McGraw-Hill, 1967); Derek Pugh, D. J. Hickson, C. R. Hinings, and C. Turner, "Dimensions of Organizational Structure," Administrative Science Quarterly 13 (1968), 65-105.

10. Thompson, Organizations in Action, and Charles Perrow, Organizational Analysis: A Sociological Perspective (Belmont, Calif.: Brooks/Cole, 1970).

11. The headquarters/field distinction has been found to be useful for predicting internal attitudinal differences. See Frank T. Paine, Stephen J. Carroll, Jr., and B. A. Leete, "Need Satisfactions of Managerial Level Personnel in a Government Agency," Journal of Applied Psychology 50 (1966), 246-249. For a general discussion of decentralization issues in the federal government, see Robert K. Yin, "Decentralization of Government Agencies," in Making Bureaucracies Work, Carol H. Weiss and Allen H. Barton (eds.) (Beverly Hills, Calif.: Sage Publications, 1979), 113-124, and U.S. General Accounting Office, Streamlining the Federal Field Structure: Potential Opportunities, Barriers, and Actions That Can Be Taken (Washington, D.C.: U.S. General Accounting Office, August 5, 1980), FPCD-80-4.

12. Stephan Michelson, "The Working Bureaucrat and the Nonworking Bureaucracy," in Making Bureaucracies Work, Carol H. Weiss and Allen H. Barton (eds.) (Beverly Hills, Calif.: Sage Publications, 1979), 175-199.

13. Ibid., 176.

14. Clayton Christensen, "'Bureaucrat' Need Not Be a Dirty Word," The Wall Street Journal (Nov. 7, 1983), 26, cols. 5-7.

15. James L. Perry and Lyman W. Porter, Organizational Assessments of the Civil Service Reform Act of 1978 (Washington, D.C.: U.S. Office of Personnel Management, 1981).

16. A retest of the June 1980 survey, using a sample of the original respondents, was conducted to determine the reliability of the initial questionnaire items. Twenty-one individuals were contacted and consented to complete the survey a second time, two weeks after the first survey. Comparison of an individual's responses at this short interval permits an assessment of the probable variance in responses due to measurement error alone. The appropriate statistic for assessing retest reliability is the Pearson correlation coefficient. A mean coefficient of .70 was achieved, indicating a high reliability for the survey items. For further discussion of this procedure, see Jum Nunnally, Psychometric Theory (New York: McGraw-Hill, 1977).

17. See James A. Davis, Elementary Survey Analysis (Englewood Cliffs, N.J.: Prentice-Hall, 1971), 49, for adjectives to describe strength of relationship with Yule's Q.

18. Ibid.

19. Ibid.

20. Ibid.
