Evaluate programs and services using measurable criteria
Importance of Measurable Criteria
Data that conveys no usable information about its subject is essentially worthless. Using measurable criteria in evaluation is therefore good practice in any profession: it provides solid evidence of a program’s or service’s successes and failures, which the library can then present in specific terms both internally and externally. Although designing measurable criteria can be challenging, especially as trends favor outcomes over outputs, applying such criteria to the evaluation of library programs and services can make all the difference in how the community and the library’s funders perceive the library and in how the library progresses.
Value and Funding
Once measurable criteria have been assessed and articulated in a way that explains the meaning of the outcomes or outputs, disseminating this information externally can bring a library extraordinary results. If, for example, the library measures outputs and determines that computer usage has increased by 20% over the previous year, it can use this data as evidence that the community should perceive the library as a place where members can expect access to computers. If the library can also point to a probable reason for the increase, such as the addition of new, in-demand software, it can tout the value of that software to community members who do not yet use the computers, increasing usage further. Meanwhile, the library can use the same figures to convince funding bodies, such as the board of directors of the Friends of the Library, that additional funding would make more computers available to more customers, raising overall usage again.
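The arithmetic behind an output measure like the 20% figure above is simple, but stating it precisely matters when reporting to funders. The sketch below shows the calculation in Python; the session counts are invented purely for illustration.

```python
# Hypothetical figures: annual computer sessions, invented for illustration.
previous_year = 12500  # assumed session count for last year
current_year = 15000   # assumed session count for this year

# Year-over-year percentage change: (new - old) / old * 100.
percent_change = (current_year - previous_year) / previous_year * 100
print(f"Computer usage changed by {percent_change:.0f}% year over year.")
# prints: Computer usage changed by 20% year over year.
```

Keeping the raw counts alongside the percentage lets the library report both the output ("15,000 sessions") and the trend ("up 20%") from the same record.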
Outcome measures can be just as valuable as outputs, if not more so, in proving the worth of the library to community members and funding bodies. Miller, Fialkoff, and Kelley (2012, p. 34) write, “Pure data such as gate counts, computer uses, and more aren’t as satisfying to those who hold the purse strings as are measurements that articulate impact.” Outcome measures focus on the “so what” of an evaluation. A library may be able to report that fifteen people attended a course on how to use Facebook, but that number is meaningless if the attendees did not walk away able to log into Facebook and post a status. Attendance numbers have their place in evaluation, but it is the outcome – in this case, the ability to log into Facebook and share a status – that cements the value of a program. Fifteen people could attend such an event and leave still feeling unable to accomplish the tasks addressed in the lesson. From that perspective, the program would not be of value and not worth funding with either time or money unless changes were made.
Miller, Fialkoff, and Kelley (2012, p. 34) recognize the difficulty of evaluating outcomes, however, stating, “The trick…is that outcomes are ‘squishy’ while outputs are ‘hard’ – and libraries are accustomed to hard proof of what they deliver.” Although it may take practice, the solution is to determine the why of a given program or service before implementing it. For example, if a library wants to offer a service in which staff help customers develop résumés, the staff should first create a rubric for assessing the effectiveness of such a document. Input from funding sources or community members about what makes an effective résumé may be especially helpful in later proving the success of the service, and researching professional or scholarly material on the subject can add weight to the selected criteria. If staff research shows, for example, that hiring managers respond better to active language than to passive language, the library can judge the effectiveness of the résumé service by comparing the use of active and passive language in each résumé before and after coaching. Outcomes answer questions such as, “Did the job seeker demonstrate the ability to use active language in a résumé when possible?” Essentially, outcomes prove that the customer has learned or otherwise changed for the better. Focusing on how a service or program changes a community, whether on a small or large scale, is an effective way to show, rather than tell, how the library is having a positive impact on the community.
While using evaluation to increase funding or a library’s perceived value in the community is both important and admirable, it is equally important to use this data to improve programs and services. Whether outputs or outcomes, data may not immediately point to what is or is not working. Returning to the Facebook example from an output perspective: if only one or two people attend, the attendance figure alone will not reveal why attendance is low. It does, however, give the organizer an opportunity to reassess content and advertising, and from there, trial and error is often the best way to discover what does and does not work. Alternatively, the organizer may ask attendees for feedback – how they heard about the event, what made them want to attend, and what they might change, to start. This is another form of evaluation that can help the organizer adjust the program moving forward.
Outcome evaluation may be more helpful in this endeavor. In the résumé example, if customers continue to use passive language more than active language, or fail to meet whatever other outcome criteria were determined before the service launched, there is probably something wrong with the method of coaching. Other factors may be at play, of course, but outcome criteria are generally easier to act on than output criteria when making improvements because they are more specific.
INFO 266 Community and Collection Evaluation
This assignment focuses on evaluating a library’s collection using measurable criteria, the most prominent of which is a simple proportion. In it, I point out the relative scarcity of foreign language materials for a sizeable community speaking or learning English as a second language, especially compared to the supply of English-language books for English speakers in the area. The library’s holds service is also examined and found insufficient as a way of providing foreign language and ESL materials to ESL and foreign language speakers. Because the assignment assesses the collection as a service to foreign language speaking customers with measurable criteria, I feel it fulfills Competency N.
[embeddoc url=”http://24hourlibrary.straydots.com/wp-content/uploads/2016/02/266-Community-and-Collection-Evaluation.docx” viewer=”microsoft”]
LIBR 261A Teen Space Evaluation
Providing a space for teens, in addition to a collection, is a service that requires careful thought about the needs of teens. This assignment evaluates both the reference services and the physical “teen space” of a public library. Using suggestions posed by YALSA as the measurable criteria, it finds that many improvements could be made overall, such as more space and furniture and a clearer indication of a service point for teens. Because these and other aspects of the teen space are addressed, this evaluation satisfies Competency N.
[embeddoc url=”http://24hourlibrary.straydots.com/wp-content/uploads/2016/02/261A-Teen-Space-Evaluation.docx” viewer=”microsoft”]
Evaluation models of all kinds can help a library improve its programs and services if applied correctly. Outcomes, outputs, and any other measurements a library decides to use should be recorded accurately, as should the efforts to change the results of those measurements. Regular evaluation leads to a progressive environment that not only serves its community well but proves the value of its programs and services with facts rather than impassioned speeches that may or may not convince anyone. Evaluation need not be expensive, and libraries have many methods from which to choose the technique appropriate for their needs. Whatever path a library takes when it comes to evaluation, pursuing it at all shows a commitment to improvement and to great service to its community.
Miller, R., Fialkoff, F., & Kelley, M. (2012). Moving from outputs to outcomes. Library Journal, 137(1). Retrieved from Library, Information Science & Technology Abstracts with Full Text.