Introduction
Evaluation is defined in different ways by different researchers. According to Crawford and Bryce (2003), evaluation refers to the periodic assessment of a project, using both internal and external data, to establish the effectiveness of the intervention with the goal of enhancing learning. Stem et al. (2005) provide another definition of evaluation as generating or gathering data on a particular issue to enable the development of a detailed understanding of it. A more traditional definition comes from the American Evaluation Association (AEA), which defines evaluation as the assessment of "the weaknesses and strengths of personnel, policies, programs, and products as organizations set to work on their effectiveness" (AEA, 2010).
Evaluation employs a variety of social science methods (qualitative, quantitative, mixed) to pursue different kinds of information (e.g., needs assessment, modelling, implementation and outcome information) in a range of different contexts (academic, government, non-profit, private enterprise) and from a variety of different perspectives (internal, external, both). Evaluation can also be considered a "trans-discipline," providing tools to other disciplines while also enjoying a stand-alone status (Scriven, 1998). Within the field there is a rich diversity of prescriptive theories of practice (Shadish, 1998). This diversity of methods, roles, and approaches has at times contributed to the difficulty of defining evaluation as a unified profession (Stevahn, King, Ghere, and Minnema, 2005). The problem is reflected in the many different definitions of evaluation from within the field. However, a review of these by Geva-May and Pal (1999) reveals several standard features, namely: systematic methods (Nagel and Freeman, 1975; Rossi, Lipsey and Freeman, 1989); valuation or judgement of the merit or worth of an object (Joint Committee, 1981; Eisner, 1979; House, 1980; Scriven, 1966); and a comparative aspect, as in assessments of two or more different approaches to a problem (Alkin and Ellett, 1990). Other standard features include an element of feedback for improvement, covering both formative functions (Scriven, 1966) and monitoring functions (Chelimsky, 1985), and an implied role in decision making (Cronbach, 1963; Stufflebeam et al., 1971).
Based on these definitions, Johnson (2012) states that an evaluation system provides information in a continuous manner that helps to ensure the functioning of an entity. Similarly, Leeuw and Furubo (2008) define an evaluation system as an arrangement in which evaluation is not a one-off activity but is embedded in the events of the program or project, ensuring that information is always available for decision-making throughout implementation. The two definitions of an evaluation system highlight the significance of evaluation as part of a program or project rather than an independent occurrence (Johnson, 2012). These arguments agree with the suggestion of Willette and Fleischman (1982) that an evaluation system should assess the effectiveness of a program so as to reduce the commitment of resources and inputs to a project that is not achieving the desired outcomes. From these definitions of evaluation, it is clear that evaluation is deeply tied to improving effectiveness and informing decision making.
Evaluation system
Measuring the actual output of a program depends on using an evaluation system whose indicators link the inputs to the outputs (Bledsoe & Graham, 2005; Chen & Cheng, 2007; Moore et al., 2015). Stame (2004) argues that basing evaluations on theories helps to cover the gap that is always present in linking inputs to outputs. Establishing results that reflect the goals of the program is essential to building the sustainability and stability of an organization in an environment that is continuously changing (Bledsoe & Donaldson, 2015; Jacobs et al., 2012; Jokela et al., 2008; Leeuw, 2012). Researchers and authors have proposed several theories that can be used to guide the development and implementation of evaluation systems (Deane & Harre, 2014; Hansen et al., 2013; Nakrosis, 2014; Walshe, 2007). Researchers argue that a recurring problem in evaluation is that programs are often found to be ineffective, yet the method used may itself be the factor limiting the determination of effectiveness (Chen & Rossi, 1980; De Silva et al., 2014; Donaldson & Gooler, 2003; Douthwaite et al., 2017; Stame, 2004). Stame (2004) proposed that basing evaluation systems on theory rather than on methods can go a long way toward solving this evaluation problem. It is therefore essential to understand the assumptions that are proposed for evaluation systems.
The literature offers few clear and comprehensive definitions of evaluation policy. The meanings of evaluation and evaluation systems indicate the need to design systems that provide a flow of information that can be used to make informed decisions on the effectiveness of a program (Johnson, 2012). Although researchers consider evaluation policy in different fields, it is essential to acknowledge the argument of Johnson (2012) that a definition is still lacking. Wholey (1970) suggests that each United States federal agency should have a precise definition of program objectives and output measures, and should develop evaluation work plans as part of an overall strategy that prioritizes evaluation questions. Leeuw and Furubo (2008) argue that all active evaluation systems share four clearly stated characteristics: a) permanence; b) an intention that results be used; c) a distinct epistemology; and d) established organizational responsibility for evaluation. Similarly, Cabrera and Trochim (2006) argue that evaluation systems must be based on theories that previous research and experience have shown to be instrumental in assessment. The OECD (2016) also states that organizations whose evaluation systems apply theory in design and continuous use promote adequate allocation of resources to activities and the realization of the overall goal of the program. In addition, Mark et al. (2009) reiterate that an evaluation system has to be set within an explicit evaluation policy to increase consistency and transparency in the evaluation process.
Dahler-Larsen (2006) argues that an evaluation system must be able to provide an ongoing source of information that an organization or a program can use to make informed decisions on its functionality. Dahler-Larsen (2006) lists among the necessary ingredients for construction of a working "evaluative information system" an evaluation unit situated in such a way as to command a critical mass of human resources, as well as managerial attention and legitimacy, since these will determine the design and content of the system, its chances for successful implementation, and its survival over time (p.70). Other key ingredients include clearly stated evaluation criteria for quality, a self-representation that explains the justification for the evaluation system, and sufficient financial and political support, including the full buy-in of those implementing and collecting data within the system. As such, the author states that an evaluation system must describe the use of indicators, including performance and auditing indicators, and how different methods of evaluation can be combined "in support of organizational learning and policy decision making" (p.65). According to Dahler-Larsen (2006), all these factors must somehow be brought into alignment with each other for an information system to adequately build and sustain a healthy evaluation function. Establishing a comprehensive set of mutually complementary evaluation policies is one way to accomplish this, especially if the policies are developed through a process that is open and inclusive of stakeholders (Mark, 2009). These sentiments are reflected in the suggestion of the Organisation for Economic Co-operation and Development (OECD) (2016) that introducing theory as a basis for evaluation is essential to enhancing the evaluation system and contributes to learning in the organization or program. Another important consideration in setting up an evaluation system is the establishment of an evaluation policy.
Trochim (2009) defines evaluation policy as the principles and rules "that a group or organization uses to guide its decisions and actions when doing an evaluation" (p.16). Trochim (2009) adds that all agencies and organizations that conduct evaluations or have evaluation systems have either a written or an unwritten evaluation policy. This definition therefore implies that an evaluation policy cannot be set up and used by an individual evaluator but has to be implemented by an organization or a collective entity (Johnson, 2012). In distinguishing an evaluation policy from a theory, Trochim (2009) argues that approaches or evaluation theories become policy as soon as an organization adopts the approach as the guide for its evaluation activity. Consequently, evaluation theories, such as the theory of change, can be used as the evaluation policy to guide the implementation of an organization's evaluation system (Johnson, 2012; Trochim, 2009).
Early in 2009, the American Evaluation Association issued a statement of principles entitled "An Evaluation Roadmap for a More Effective Government," calling for the integration of program evaluation as an essential management function of government (AEA, 2009). The AEA Roadmap (2009) offers a set of principles for developing agency-level policy on evaluation, including 1) scope and coverage of evaluation; 2) management of evaluation; 3) protecting the quality and independence of evaluation; and 4) transparency in goal setting, evaluation methods, and results. The Roadmap also suggests various ways of organizing the evaluation function within an agency, emphasizing that agencies' evaluation needs vary depending partly on the structure of their programs, so they should be free to shape their own evaluation policies. This statement of principles calls for a "government-wide effort" and suggests that the agencies themselves develop written evaluation policies across and within federal agencies.
Shortly after AEA's Roadmap (2009), Trochim (2009) published an intuitive taxonomy of evaluation policy, with eight evaluation policy types depicted as the slices of a layer cake. The eight policy-type "slices" in this taxonomy are Goals, Participation, Capacity Building, Management, Roles, Process and Methods, Use, and Meta-evaluation. The taxonomy is also divided into rings, with general policies in the outer rings and related but progressively more specific sub-policies falling in the rings closer and closer to the centre. Following the work of Carver (2006) on board governance structures, for hierarchical organizations Trochim assigns each layer of the cake a level of the organizational hierarchy. In a hierarchical organization, Trochim argues, general policy is best set at the top, with successively more specific policies delegated to successively lower levels of the organization. At the lowest level are specific step-by-step procedures.
Cooksy (2009) and Mark (2009) call for broader input by evaluation practitioners and researchers in further developing and refining the concept of evaluation policy and Trochim's taxonomy. They argue this is needed to identify any missing high-level categories, to assure a comparable level of generality across all groups, and to more clearly establish the boundaries of the conceptual domain (Cooksy, 2009).
Evaluation System in International Development Cooperation
Evaluation of development assists in the meeting of demands from the public for accountability a...
Evaluation System in International Development Cooperation Paper Example. (2022, Jun 22). Retrieved from https://proessays.net/essays/evaluation-system-in-international-development-cooperation-paper-example