Northern Illinois University
City University of New York
ABSTRACT: This article discusses the system and practice of performance assessment (PA) in academic libraries, both conceptually and practically. A descriptive analysis of an existing PA system used in a university library is conducted. Three system-inherent problems are illustrated: the weighting of criteria, the immediate supervisor as the rater, and the purpose of PA. It is concluded that, as a managerial means, a PA system can directly reflect the social and cultural environment of a workplace as well as the managerial standards of its administration.
PA has been a management topic covered mainly by library educators, personnel staff, and library administrators (Aluri & Reichel, 1994). Its wide application in libraries and its influence on library employees, however, have made it a common concern not only of administrators but also of librarians in general.
In the library and management literature, the managerial value of an effective PA system to an organization is strongly affirmed (e.g., McGregor, 1960; Reneker & Steel, 1989; Anderson, 1993; Fletcher, 1997; Edwards & Williams, 1998). Evans & Rugaas (1982) have identified many beliefs about an effective performance appraisal system. According to them, such a system is essential to ensure good management and good job performance, to assist in organizational personnel planning, to assess an employee’s future and potential progress, to maintain control of staff productivity, to help with personal growth, to serve as an effective system of motivation, to assess an individual’s strengths and weaknesses objectively, and to identify areas that need improvement. But a good understanding of a system does not come from theoretical generalizations. People readily agree that PA should focus on the evaluatee’s work performance, not his or her personality, and that excellent work performance should be rewarded. Disputes arise, however, when the subjectivity involved in PA and merit compensation in practice are examined. For this reason, and in order to touch upon some substantive issues in PA, the authors have chosen to ground the present discussion in the evaluation practice and system of an existing academic library, referred to here as the Case Library (CL), meaning the library used for a case study. The focus will be on the PA of librarians with faculty status.
[Figure 1. Evaluating bodies at CL: the AD for PS, the AD for CTS, and the ETPS Committee]
CL conducts annual evaluations of librarians who have faculty status. The assessment of a faculty member’s effectiveness is based on three categories: (1) effectiveness in librarianship, (2) scholarly performance and achievement, and (3) service to the university, community, and profession. Sixty percent of the evaluation is based on Category 1. Each of the remaining two categories is weighted within a range of ten to thirty percent, as shown in Figure 2.
| Category | Area | Weight |
|---|---|---|
| Category 1 | Librarianship | 60% |
| Category 2 | Scholarly Activities | 10-30% |
| Category 3 | Service | 10-30% |
The first step of the PA process is preparation of the Annual Report, which consists of:
All completed reports are submitted to the ETPS Committee and reviewed by all participants involved in the rating, including the Dean, the ADs, and members of the Library Council and the ETPS Committee.
A five-point numeral scale is adopted for rating in each of the three categories. The descriptors and their numerical equivalents are as follows:
According to the Library Policies & Procedures, the Dean’s performance review should be conducted by the Library Council.
The ETPS and the Dean vote to give rating values to the ADs.
The ETPS and the AD for PS vote to give rating values to faculty in the PS Division.
The ETPS and the AD for CTS vote to give rating values to faculty in the CTS Division.
The administration has 50% open vote, and the ETPS has 50% blind vote.
If an evaluator or an evaluatee is an ETPS Committee member, s/he shall leave the meeting while the discussion and the vote on that evaluation are being conducted. A sample rating is given below.
| Areas | ETPS* (50%) | AD (50%) | Final Rating |
|---|---|---|---|
| Category 1 (60%) | 4, 5, 5, 4, 4, 5, 5 = 5 | 5 | 5 |
| Category 2 (20%) | 4, 4, 4, 3, 3, 4, 4 = 4 | 5 | 4 |
| Category 3 (20%) | 3, 3, 4, 4, 4, 4, 3 = 4 | 5 | 4 |
| Category 1 | Category 2 | Category 3 | Formula | Composite Rating |
|---|---|---|---|---|
| 5 | 5 | 5 | 3.0 + 1.0 + 1.0 = 5.0 | 5 |
| 5 | 4 | 4 | 3.0 + 0.8 + 0.8 = 4.6 | 5 |
| 5 | 3 | 4 | 3.0 + 0.3 + 1.2 = 4.5 | 5 |
| 5 | 4 | 3 | 3.0 + 1.2 + 0.3 = 4.5 | 5 |
| 4 | 5 | 5 | 2.4 + 1.0 + 1.0 = 4.4 | 4 |
As the above shows, with its 60% weight, Category 1 overrides Categories 2 and 3 in determining a composite rating of 5. In actual practice, the ratings given do not coincide with the standards specified in “Descriptors and Their Numerical Equivalents”, where 3 is equivalent to “Satisfactory Performance”. A 3 is given in Category 2 for no evidence of scholarly publication and achievement, and a 3 is given in Category 3 for no evidence of service at all. Thus a rating of 5 in librarianship guarantees a final rating of 5 with no publication and minimum service, or with minimum publication and no service at all. By contrast, if one has a rating of 4 in librarianship, one’s composite rating can be no higher than 4, even if one has published five books and served as ALA president in the year under evaluation. The philosophy of such a system does not encourage scholarly development or professional service.
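The composite arithmetic above can be sketched in a few lines of Python. This is an illustrative reconstruction, not CL policy text: the fixed 60% weight for Category 1 comes from the document, while the variable weights `w2` and `w3` (assumed to range from 10% to 30% and to bring the total to 100%) and the half-up rounding rule are inferred from the sample rows, where 4.5 rounds to 5 and 4.4 rounds to 4.

```python
# Illustrative sketch of the CL composite-rating arithmetic.
# Assumptions (inferred from the sample figures, not quoted policy):
# Category 1 is fixed at 60%; w2 and w3 vary between 10% and 30% so
# that all three weights sum to 100%; composites are rounded half-up.
def cl_composite(cat1, cat2, cat3, w2=0.2, w3=0.2):
    """Return (raw weighted score, final rating) on the five-point scale."""
    assert abs(0.6 + w2 + w3 - 1.0) < 1e-9, "weights must sum to 100%"
    score = round(0.6 * cat1 + w2 * cat2 + w3 * cat3, 6)  # drop float noise
    return score, int(score + 0.5)  # half-up rounding: 4.5 -> 5, 4.4 -> 4

# A 5 in librarianship with a 3 in scholarship still yields a final 5,
# while a 4 in librarianship caps the final rating at 4:
print(cl_composite(5, 3, 4, w2=0.1, w3=0.3))  # (4.5, 5)
print(cl_composite(4, 5, 5))                  # (4.4, 4)
```

Under this arithmetic, no combination of Category 2 and 3 ratings can lift a librarianship rating of 4 to a composite of 5, which is the imbalance the text criticizes.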
CL is not the only library that sets a 60% weight for librarianship. The University of New Mexico (UNM) (Baldwin, 2003, p. 133), for example, has established the ranges of weights shown in Figure 6:
Despite also giving 60% weight to librarianship, the UNM assessment system differs from the CL system in two important ways. First, research and service have fixed weights of 25% and 15% respectively. As a result, with a rating of 5 in librarianship, the evaluatee needs a second 5 in either research or service to earn an overall rating of 5. One phenomenon observed in both systems is that an overall outstanding performance rating still allows a rating of 3 in either research or service, as indicated in Figure 7, where 3 may in practice equate to “very poor performance”, as in the CL evaluation.
| Librarianship (60%) | Research (25%) | Service (15%) | Formula | Composite Rating |
|---|---|---|---|---|
| 5 | 5 | 3 | 3.0 + 1.25 + 0.45 = 4.7 | 5 |
| 5 | 3 | 5 | 3.0 + 0.75 + 0.75 = 4.5 | 5 |
| 5 | 3 | 4 | 3.0 + 0.75 + 0.6 = 4.35 | 4 |
| 5 | 4 | 3 | 3.0 + 1.0 + 0.45 = 4.45 | 4 |
| 4 | 5 | 5 | 2.4 + 1.25 + 0.75 = 4.4 | 4 |
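The fixed UNM weights (60% librarianship, 25% research, 15% service) make the same arithmetic simpler. The sketch below is hypothetical: the function name is invented for illustration, and the half-up rounding rule is assumed to match the figures rather than quoted from UNM policy.

```python
# Illustrative sketch of the UNM variant: fixed weights of 60%
# librarianship, 25% research, 15% service; half-up rounding assumed.
def unm_composite(librarianship, research, service):
    score = round(0.60 * librarianship + 0.25 * research + 0.15 * service, 6)
    return score, int(score + 0.5)

# With a 5 in librarianship, only a second 5 in research or service
# lifts the composite to an overall 5:
print(unm_composite(5, 5, 3))  # (4.7, 5)
print(unm_composite(5, 3, 4))  # (4.35, 4)
```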
The practice of giving the most weight to librarianship is supported in the library literature. Baldwin (2003), for example, states that “Librarianship will be considered the most heavily weighted of the three major areas” (p. 133). This statement permits more combinations of weight allocations for a PA system than one might imagine. However, Baldwin also writes: “An individual must achieve the minimum standard of excellence in librarianship before they may be considered for merit in any other area” (p. 133). Following all three key concepts in these two statements (i.e., most weight, minimum standard of excellence, and merit) is not an easy task. Although the CL system gives the most weight (60%) to librarianship, it can be considered flawed because it requires the topmost rating, not the “minimum standard of excellence”, in librarianship to guarantee a composite rating of 5, on which the merit raise is based.
The weighting structure of the evaluated criteria deserves serious attention, as it reflects our philosophy of what makes a faculty librarian and our expectations of an outstanding one.
UNM also differs from CL in having librarianship and the other two categories rated separately by different evaluators. The rating for librarianship is assessed, proposed, discussed, and finalized by the evaluatee and the immediate supervisor together; the ratings for research and service are conducted by a peer group. There is sound justification for letting the immediate supervisor rate performance in librarianship, because s/he knows the evaluatee’s daily work performance best. In addition, unlike research and service, which are quantifiable and measurable, librarianship in different jobs and at different job levels cannot be compared uniformly across the library.
In the CL system, the immediate supervisor is deprived of rating authority. Part of the reason for this practice is to eliminate the supervisor’s possible bias against the subordinate. Two different issues are conflated here: one concerns individual interpersonal relationships, the other the validity of the evaluation. A solution to one problem should not be sought at the expense of the other. In the CL rating process, performance in librarianship is rated by an AD to whom the evaluatee does not report and by a peer group, the ETPS Committee, whose members come mostly from other departments and divisions. Can the results of such ratings be valid and fair, ethical and legal?
Supervisor-subordinate interaction is regarded as an important, integral component of the appraisal process (e.g., Anderson, 1993; Moon, 1997; Edwards & Williams, 1998; Baldwin, 2003). The most discussed strategy for enhancing supervisor-subordinate communication is the appraisal interview. Baldwin (2003) argues that “The most significant benefit of a performance appraisal is that it offers the opportunity for a supervisor and employee to have a one-on-one discussion of important work issues that might not otherwise be addressed. For some employees, the appraisal interview may be the only time they get to have exclusive, uninterrupted access to their supervisor. The value of this scheduled interaction between a supervisor and employee is that it gives the employee an opportunity to have a discussion focused on performance issues in a way that just is not possible during the ordinary course of the workday” (p. 84). CL’s practice segregates the supervisor and the subordinate in the critical rating process. As a result, the evaluation ratings, especially those for librarianship, lack objectivity and consistency, and are thus questionable.
The problems discussed above are widely recognized. Many CL librarians have tried hard over the years to change the system, but their efforts have so far been largely unsuccessful. The reasons are multiple; the most critical lies in the purpose set for the implementation of PA. The top administrator believes that top ratings should be restricted to a small number of people. This raises the question: What is the purpose of performance evaluation? To assess work performance, or to place people on a manipulated performance curve and single out a small group of prospective merit-money recipients? This is a question every library administrator needs to ponder and address. When “more than a few” librarians have done “excellent” work but only a few are rated as excellent, errors occur in the PA process. The CL administration is challenged by the following questions:
All in all, why does one get the ratings one gets, good or bad?
Just as PA can be used as a managerial means (Reneker & Steel, 1989; Edwards & Williams, 1998; Baldwin, 2003), it can reflect facets of the organization. Edwards and Williams (1998) claim, for example, that although they lack faith in the ability of PA systems to reflect work performance, a PA system can tell people about the political and cultural environment of the workplace as well as the managerial standards of its administration.
A sound system can effectively help to create a healthy work environment, build good interpersonal relationships, encourage positive work attitudes, and promote productivity. Conversely, a poorly constructed system may breed unfairness, provoke conflicts, and damage interpersonal relationships in the workplace. How to establish and continually improve a performance assessment system is therefore an urgent issue that needs to be adequately addressed and more closely examined in the library and management literature.
Anderson, Gordon C. (1993). Managing Performance Appraisal Systems. Oxford, UK; Cambridge, Mass.: Blackwell.
Baldwin, David A. (2003). The Library Compensation Handbook: A Guide for Administrators, Librarians, and Staff. Westport, Connecticut: Libraries Unlimited.
Edwards, Ronald G., & Williams, Calvin J. (1998). “Performance appraisal in academic libraries: minor changes or major renovation?” Library Review, 47(1), 14-19.
Evans, G. Edward, & Rugaas, Benedict. (1982). “Another look at performance appraisal in libraries.” Journal of Library Administration, 3, 61-69.
Fletcher, Clive. (1997). Appraisal: Routes to Improved Performance. London: Institute of Personnel and Development. 2nd Edition.
McGregor, Douglas. (1960). The Human Side of Enterprise. New York: McGraw-Hill.
Moon, Philip. (1997). Appraising Your Staff. London: Kogan Page. 2nd Edition.
Reneker, Maxine & Steel, Virginia. (1989). “Performance Appraisal: Purpose and Techniques.” In Creth, Sheila & Duda, Frederick (Eds.). Personnel Administration in Libraries (pp. 152-220). New York: Neal-Schuman Publishers, Inc. 2nd Edition.
Pan, Junlin, & Qian, Gaoyin. (2006). “Librarian Performance Assessment: A Case Study,” Chinese Librarianship: an International Electronic Journal, no. 22 (December 1, 2006). URL: http://www.iclc.us/cliej/cl22PanQian.htm