In this paper I examine an article on policy evaluation published in New Directions for Evaluation: 'Shared Measures for Evaluating Common Outcomes of Informal STEM Education Experiences' by Amy Grack Nelson, Megan Goeke, Ryan Auster, Karen Peterman, and Alexander Lussenhop. The article presents shared measures as a way to evaluate outcomes of informal STEM (science, technology, engineering, and mathematics) education experiences. The discussion below defines shared measures and the technical qualities that characterize them, considers the benefits of using such measures in informal STEM education (ISE), and examines the survey and observational tools used to measure common ISE outcomes. It closes with recommendations for strengthening the approach, including a multifaceted strategy that builds evaluator capacity and increases access to the instruments used in evaluation, so that everyone involved understands how the method works.
Definition and Purpose
The use of shared measures to evaluate common outcomes has been developing rapidly and attracting considerable attention, particularly in Informal STEM Education (ISE), which has made the approach widely usable today (Furr & Bacharach, 2014). Before going further, it is necessary to define what shared measures are and to give an overview of how they are used.
A shared measure is an instrument developed to measure a specific outcome that is common across projects or programs. Evaluators typically have intended outcomes that they want to measure in programs, and the measurable parts of those outcomes are called constructs (Furr & Bacharach, 2014). Shared measurement involves creating or adopting rigorous instruments that can be shared by different programs addressing the same outcome; the programs must target the same construct for a shared measure to apply. Evaluators must therefore distinguish between a 'common measure' and a 'measure of a common construct'. Although the terms sound similar, the difference is substantial. The main purpose of a shared measure is to assess a specific outcome shared across a number of programs.
The development and rapid uptake of shared measures means that evaluators need to understand them well, including the technical qualities they must possess. The first quality is validity, which takes several forms. Content validity concerns how well the construct of interest is represented in the content of an instrument (Furr & Bacharach, 2014); evidence for it can be gathered by reviewing the literature and by seeking feedback from experts familiar with the construct being measured. Response-process validity relates to the mental processes respondents use when answering questions (Furr & Bacharach, 2014). The most common method for gathering this type of evidence is the think-aloud interview, which is designed to bring to light exactly how a respondent is reasoning. Internal-structure validity relates to statistical analyses that investigate the links between items in an instrument.
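One common starting point for internal-structure evidence is examining inter-item correlations: items intended to measure the same construct should correlate positively with one another. The sketch below illustrates this idea with invented survey data; the item names and response values are hypothetical, not drawn from the article.

```python
# Illustrative sketch (not from the article): inter-item correlations as a
# first look at internal structure. All data below are invented.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical responses: 5 people answering 3 items on a 1-5 scale,
# all intended to tap the same construct (e.g., STEM interest).
items = {
    "interest_1": [4, 5, 2, 3, 5],
    "interest_2": [4, 4, 1, 3, 5],
    "interest_3": [5, 4, 2, 2, 4],
}

names = list(items)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = pearson_r(items[names[i]], items[names[j]])
        print(f"{names[i]} vs {names[j]}: r = {r:.2f}")
```

In practice, evaluators would go further, using factor analysis or similar statistical models on much larger samples, but uniformly positive inter-item correlations are a basic prerequisite for claiming the items reflect one construct.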
The other important technical feature is reliability, the accuracy or precision of a measurement procedure. Reliability indicates the extent to which the scores produced by a measurement procedure are consistent (Thorndike & Thorndike-Christ, 2010). Consistency can be assessed with methods such as test-retest: a measure is considered reliable if it produces consistent results across repeated administrations, and the same requirement applies to observational measures. Together with validity, these technical qualities help ensure that evaluation processes yield sound results, which can in turn strengthen the outcomes of different programs (Thorndike & Thorndike-Christ, 2010).
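Test-retest reliability is typically quantified as the correlation between scores from two administrations of the same instrument to the same respondents. The following minimal sketch, using invented scores, shows the computation; a correlation near 1.0 suggests the instrument yields consistent results.

```python
# Illustrative sketch (not from the article): test-retest reliability as the
# Pearson correlation between two administrations. Scores are invented.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 17, 11]   # hypothetical scores, first administration
time2 = [13, 14, 10, 19, 18, 12]  # same respondents, two weeks later

r = pearson_r(time1, time2)
print(round(r, 3))  # → 0.975; values near 1.0 indicate consistent scores
```

Other consistency checks (internal consistency, inter-rater agreement for observational tools) follow the same logic of comparing repeated or parallel measurements.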
Benefits of Shared Measures
One might ask, "What are the benefits of shared measures?" They have gained popularity because they support evaluators in conducting high-quality evaluation, due in part to technical features such as validity and reliability (Thorndike & Thorndike-Christ, 2010). For instance, when a shared measure is available, the evaluator does not need to develop an instrument from scratch, which saves both time and money. Developing an evaluation instrument from scratch is time-consuming and demanding, and the resulting instrument may not achieve adequate validity or reliability.
Using an instrument that has already been tested and proven typically increases the confidence of both evaluators and clients; clients are drawn to instruments of demonstrated quality, which is one reason shared measures are widely preferred. Shared measures also allow results to be compared across programs, and such comparisons help in interpreting scores and program performance (American Evaluation Association, 2018). A further benefit is the development of common definitions of constructs and outcomes across ISE experiences. However, a major concern with shared measures is misuse, especially by evaluators who lack training in educational measurement. The American Evaluation Association's Evaluator Competencies contain no specific reference to guide the understanding of validity and reliability, nor do they address the skills needed to judge or construct quality data-collection instruments. This is a notable limitation of shared measures as used in the United States (American Evaluation Association, 2018).
Informal STEM Education evaluators gather data using a range of methods, including interviews, observations, surveys, focus groups, and artifact review. The most commonly used are observation tools and survey tools.
Shared measures work well and produce good results in evaluation, but several improvements are needed going forward. First, I recommend developing strong evaluator competencies to reduce misuse; educating all evaluators in educational measurement would go a long way toward sound evaluation practice. The American Evaluation Association should create evaluator competencies with specific reference to these skills, so that evaluators can be assessed against them and the evaluation process improved. Skills should also be built in judging and constructing quality data-collection instruments, since quality instruments greatly help evaluators conduct excellent evaluations. If these recommendations are adopted, they will improve shared measures and enable evaluators to produce strong evaluation reports in ISE. It would also be valuable to develop systems and structures that aggregate data from shared measures, which could generate new knowledge about ISE experiences and outcomes.
References
American Evaluation Association. (2018). The 2018 AEA evaluator competencies. Washington, DC: Author.
Furr, R. M., & Bacharach, V. R. (2014). Psychometrics: An introduction (2nd ed.). Thousand Oaks, CA: SAGE.
Grack Nelson, A., Goeke, M., Auster, R., Peterman, K., & Lussenhop, A. (2019). Shared measures for evaluating common outcomes of informal STEM education experiences. In A. C. Fu, A. Kannan, & R. J. Shavelson (Eds.), Evaluation in informal science, technology, engineering, and mathematics education. New Directions for Evaluation, 161, 59-86.
Thorndike, R. M., & Thorndike-Christ, T. (2010). Measurement and evaluation in psychology and education (8th ed.). Boston, MA: Pearson Education.