
Blog: Part 2: Using Embedded Assessment to Understand Science Skills

Rachel Becker-Klein, Senior Research Associate, PEER Associates
Karen Peterman, President, Karen Peterman Consulting
Cathlyn Stylinski, Senior Agent, University of Maryland Center for Environmental Science

In our last EvaluATE blog, we defined embedded assessments (EAs) and described the benefits and challenges of using EAs to measure and understand science skills. Since then, our team has been testing the development and use of EAs for three citizen science projects through our National Science Foundation (NSF) project, Embedded Assessment for Citizen Science. Below we describe our journey and findings, including the creation and testing of an EA development model.

Our project first worked to develop and test a process model for creating EAs that are both reliable and valid (Peterman, Becker-Klein, Stylinski, & Grack-Nelson, 2017). Stage 1 focused on articulating program goals and determining what evidence would document progress toward those goals. In Stage 2, we collected both content validity evidence (the extent to which a measure related to the identified goal) and response process validity evidence (how understandable the task was to participants). Finally, the third stage involved field-testing the EA. The exploratory process, with its stages and associated products, is depicted in the figure below.

We applied our EA development approach to three citizen science case study sites and successfully created an EA for each. For instance, for Nature’s Notebook (an online monitoring program in which naturalists record observations of plants and animals to generate long-term datasets), we worked with program staff to create an EA of participants’ ability to pay close attention. This EA was embedded in the in-person training workshop, where participants practiced observation skills by collecting data about flora and fauna at the training site. Participants completed a Journal and Observation Worksheet as part of their training; the EA process standardized that worksheet and added a rubric for scoring how well participants’ responses reflected their ability to pay close attention to the flora and fauna around them.
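To make the rubric idea concrete, here is a minimal sketch (in Python) of how a standardized worksheet entry might be scored against a simple rubric. The criteria, score levels, and the score_observation function are hypothetical illustrations only; they are not the actual instrument developed for Nature’s Notebook:

    # Hypothetical sketch only -- not the actual Nature's Notebook EA rubric.
    # Shows how a standardized worksheet entry could be scored for
    # "paying close attention" on a 0-3 scale.
    from dataclasses import dataclass

    @dataclass
    class WorksheetEntry:
        species_named: bool   # participant identified the plant or animal
        detail_count: int     # number of distinct descriptive details recorded
        habitat_noted: bool   # surrounding habitat was described

    def score_observation(entry: WorksheetEntry) -> int:
        """Return a 0-3 rubric score for how closely the participant observed."""
        if entry.species_named and entry.detail_count >= 3 and entry.habitat_noted:
            return 3  # rich, specific observation
        if entry.species_named and entry.detail_count >= 1:
            return 2  # basic observation with some detail
        if entry.detail_count >= 1 or entry.habitat_noted:
            return 1  # minimal observation
        return 0      # no usable evidence

    # Example: an entry naming the species, with two details and habitat notes
    print(score_observation(WorksheetEntry(True, 2, True)))  # prints 2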

[Figure: Embedded Assessment Development Process]

Lessons Learned:

  • The EA development process was flexible enough to accommodate the needs of each case study, generating EAs that spanned a range of methods and scientific inquiry skills.
  • Both the SMART goals and the Measure Design Template (see Stage 1 in the figure above) proved useful for guiding the articulation of project goals and activities and for identifying meaningful ways to document evidence of inquiry learning.
  • The response process validity work (Stage 2) resulted in key changes to each EA, both to the assessment itself (e.g., streamlining the activities) and to the scoring procedures.

Opportunities for using EAs:

  • Modifying existing activities. All three case studies had project activities that we could build on to create an EA. We worked closely with program staff to modify these activities to increase their rigor and standardization.
  • Formative use of EAs. Because a true EA is indistinguishable from the program itself, the process of developing and using an EA often strengthened the project activities themselves.

Challenges of using EAs:

  • Fine line between EA and program activities. If an EA is truly indistinguishable from the project activity itself, it can be difficult for project leaders and evaluators to determine where the program ends and the assessment begins. This ambiguity can create tension when volunteers do not perform scientific inquiry skills as expected, because it is hard to disentangle whether the results reflect shortcomings of the program or a failing of the EA designed to evaluate it.
  • Group versus individual assessments. Another set of challenges in administering EAs relates to the group-based implementation of many informal science projects. A single group score may not represent the skills of every member of the group, making the results biased and difficult to interpret, as the sketch below illustrates.
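A simple numerical illustration of this challenge (using made-up rubric scores, not data from our case studies) shows how a single group score can overstate individual skill:

    # Hypothetical rubric scores (0-3); illustrative only, not project data.
    individual_scores = [3, 3, 3, 0]   # one volunteer contributed very little
    group_score = 3                    # the shared group worksheet still earns a 3
    mean_individual = sum(individual_scores) / len(individual_scores)
    print(group_score, mean_individual)  # 3 vs. 2.25 -- the group score masks the gap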

Though the results of this study are promising, we are at the earliest stages of understanding how to capture authentic evidence to document learning related to science skills. The use of a common EA development process, with common products, has the potential to generate new research to address the challenges of using EAs to measure inquiry learning in the context of citizen science projects and beyond. We will continue to explore these issues in our new NSF grant, Streamlining Embedded Assessment for Citizen Science (DRL #1713424).

Acknowledgments:

We would like to thank our case study partners: LoriAnne Barnett from Nature’s Notebook; Chris Goforth, Tanessa Schulte, and Julie Hall from Dragonfly Detectives; and Erick Anderson from the Young Scientists Club. This work was supported by the National Science Foundation under grant number DRL #1422099.

Resource:

Peterman, K., Becker-Klein, R., Stylinski, C., & Grack-Nelson, A. (2017). Exploring embedded assessment to document scientific inquiry skills within citizen science. In C. Herodotou, M. Sharples, & E. Scanlon (Eds.), Citizen inquiry: A fusion of citizen science and inquiry learning (pp. 63-82). New York, NY: Routledge.



Viewing all articles
Browse latest Browse all 9

Latest Images

Trending Articles





Latest Images