Sexual Assault Prevention Evaluation Checklist
This Sexual Assault Prevention Evaluation Checklist provides a summary of what prevention evaluation is, the importance of evaluating prevention programs, and what is involved in the evaluation process. This user-friendly guide explains what prevention evaluation can look like and is designed as technical assistance for programs newly asked to include evaluation strategies in their prevention work. It focuses on how to incorporate evaluation with minimal cost when additional funding is not available. For program-specific technical assistance about evaluation strategies, please call MCASA at 301-328-7023 or email [email protected].
What is prevention evaluation?
- Program evaluation is a set of practices and approaches that help us to gauge the efficacy of our prevention programs and report results to others.
- If our prevention work is a mountain, evaluation is a set of tools, equipment, and maps that help us to interpret the signs around us and make sure that we are climbing towards the summit.
- Evaluation requires prevention educators to identify specific outcomes as the program goals and to consider how they can best determine whether those outcomes have been accomplished. For example:
- What skills, knowledge, attitudes, or behaviors does this program want to change?
- How are we measuring that change? How will we know that change has taken place?
- It lets us tell the story of our work—to funders, to the public, and to our fellow practitioners. It gives us a language to discuss change in more concrete terms. We can see the change taking place in our classrooms, in our communities, and in our rape crisis centers—and evaluation is a way that we can help others who cannot see this change first-hand to witness it too.
- It helps us figure out how to focus our efforts. When we see that one program has been particularly effective, we can focus on expanding it, applying its approaches to other programs, and ensuring it continues into the future. Evaluation also prevents us from spending lots of time, energy, and other resources working on programs that are not showing such strong results.
- It gives us new avenues to explore and keeps us moving forward.
- Evaluation helps us to see where progress is being made and where we can continue to scale up sexual assault prevention programs.
- Evaluation prevents us from making assumptions about our communities’ readiness, receptiveness, and response to sexual assault prevention programs.
How do you do evaluations?
Sometimes it can be hard to translate the abstract goals of evaluation into concrete strategies for incorporating evaluations into sexual assault prevention programming. Here, we offer three suggestions for how to administer prevention program evaluations. These are by no means the only methods for evaluation, and we encourage you to use any methods that allow you to accurately assess whether a program is meeting its goals.
Pre- and post-tests
- Pre- and post-testing is an evaluation method that measures participants’ knowledge, attitudes, behaviors, and/or beliefs both before beginning and after completing sexual assault prevention activities. By measuring the difference between the “starting” and “ending” scores, we can see where we have helped our participants to grow—and what areas of our work might need some tweaks in the future.
- There are some benefits and drawbacks to using pre- and post-tests for evaluation. A pre- and post-testing strategy:
- Allows you to learn more about participants’ baseline knowledge coming into the training and to determine how much the program changed their knowledge, attitudes, etc.
- Helps us to avoid making assumptions about students’ backgrounds, progress, or engagement with prevention activities.
- Provides information to help tailor future programs.
- Is well-suited to programs and activities focused on attitudes and information.
- One thing to note: A pre- and post-test strategy may be less useful for evaluating programs related to particular skills (for example, healthy communication in relationships, communicating consent and/or non-consent, or bystander intervention behavior).
- Additional post-test follow-up can be helpful if this method is used to measure skills and behavior (for example, giving a follow-up post-test to program participants 6 months after the program concludes).
- Make sure pre-tests are collected before you continue with the lesson. This keeps your data reliable and prevents participants from changing answers or taking notes on the sheets.
- Consider how you want to score the answers. Write-in questions are a better test of the content in most cases, but can be more challenging to score—so it’s important in these cases to be clear about what constitutes an “appropriate” or “inappropriate” answer for data collection purposes.
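To make the scoring arithmetic concrete, here is a minimal sketch of comparing pre- and post-test scores. The question names, answer key, and responses below are invented for illustration, and a spreadsheet can accomplish the same calculation:

```python
# Hypothetical pre-/post-test comparison; all data below is invented.

def score_test(responses, answer_key):
    """Count how many of a participant's answers match the answer key."""
    return sum(1 for q, a in responses.items() if answer_key.get(q) == a)

answer_key = {"q1": "b", "q2": "a", "q3": "d"}

pre_responses = {"q1": "a", "q2": "a", "q3": "c"}   # before the program
post_responses = {"q1": "b", "q2": "a", "q3": "d"}  # after the program

pre_score = score_test(pre_responses, answer_key)    # 1 correct
post_score = score_test(post_responses, answer_key)  # 3 correct
change = post_score - pre_score

print(f"Pre: {pre_score}, Post: {post_score}, Change: {change}")
```

Averaging the change across all participants gives a simple measure of how much the program shifted knowledge overall.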
Activity-based assessment methods
While pre- and post-tests can generate useful results, they are not always the most appropriate or engaging evaluation method. Here are two other options that can help integrate evaluation into regular sexual assault prevention program activities.
- There are many ways to make evaluations of prevention programs more interactive for participants. One way of doing this is “voting”: having participants “vote” on what they feel are effective prevention strategies (for example, bystander intervention strategies) helps promote discussion while measuring participants’ comprehension of the concepts being taught.
- Present participants with a scenario. Ask them to write responses on post-it notes, and then place the post-it notes on a poster (or in another location) based on categories.
- For example: when teaching a bystander intervention workshop, ask participants to write down a strategy for intervening, and then put that sticky in a location that corresponds to the type of strategy it is (such as the direct, delegate, or distract categories).
- Use an easel pad to create a list of options. Let participants “vote” on which option they think is best using dot stickers, checkmarks, or by writing their names.
- Set up a clear question with multiple options. Have participants write down their votes—and perhaps a brief explanation for why they are voting for that choice—and place them into an anonymous “voting box.”
- Regular sexual assault prevention activities and projects can also become a valuable tool for evaluation when they are paired with a rubric. Simply put, a rubric is an outline for how to determine if participants’ work (whether in the form of a skit, a poster campaign, a role-play, a writing assignment, or another creative project) reflects the key messages and goals of the prevention program.
- Figure out what components are important, as either things to include (e.g. positive bystander participation, healthy masculinity) or things to avoid (e.g. actions that condone victim-blaming, toxic masculinity). Then, assign point values to these in a checklist that can be used to score the activity.
- For example: One goal of a program might be to encourage support for survivors and discourage victim-blaming. Skits can be used as a program activity to evaluate change toward this goal, and creating a rubric as a scoring system will help with the evaluation. The example below shows a rubric that could be used to assess whether this particular goal has been achieved.
- When assigning a project, share these expectations with your participants. Emphasizing the overall goals of the program helps them keep in mind what you are looking for.
Example of a rubric to evaluate a skit on sexual assault prevention:
The program skit must include 3 or more of the following positive messages to receive a score of “2”:
- Exhibits bystander intervention skills to intervene in a situation
- Uses supportive language and positive messages such as “It wasn’t your fault”
- Demonstrates support for the victim by believing them
- Confronts victim-blaming messages in appropriate ways
The program skit must include 1-2 of the positive messages listed above to receive a score of “1”
If the program skit does not include any items from the positive list above, and includes any of the following, it receives a score of “0”:
- Not addressing victim-blaming language
- Actively participating in victim-blaming language
- Not utilizing bystander intervention skills in a situation
- Collect the data from the scoring rubric to assess if the results of the program activity reflect the stated goals.
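For programs that tally rubric scores electronically, the scoring rule above (3 or more positive messages scores a “2,” 1-2 scores a “1,” none scores a “0”) can be sketched as follows. The message labels are invented shorthand for the rubric items:

```python
# Hypothetical rubric scorer; message labels are shorthand for rubric items.

POSITIVE_MESSAGES = {
    "bystander intervention",
    "supportive language",
    "believes the victim",
    "confronts victim-blaming",
}

def score_skit(observed_messages):
    """Return a rubric score of 0, 1, or 2 based on positive messages observed."""
    positives = len(POSITIVE_MESSAGES & set(observed_messages))
    if positives >= 3:
        return 2
    if positives >= 1:
        return 1
    return 0

# Score three example skits: one strong, one partial, one with no positives.
scores = [score_skit(s) for s in [
    {"bystander intervention", "believes the victim", "supportive language"},
    {"confronts victim-blaming"},
    set(),
]]
print(scores)  # [2, 1, 0]
```

Collecting these scores across groups makes it straightforward to report what share of skits met the program’s goals.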
What do you do with the data?
It is important to know what types of data can be collected in an evaluation and what you can do with that data.
- Quantitative data consists of numbers—for example, the number of students who voted for the “correct” response to a scenario. Quantitative data is often easy to work with and understand, since it is easy to look at fractions or percentages with this type of data. Quantitative data is typically collected using surveys.
- Qualitative data consists of other information—usually words or ideas, such as information taken from writing assignments, interviews, and focus groups. To measure whether a program has met its goals, this data often needs to be converted into numbers. It can also be coded using qualitative analysis software: by analyzing the codes you assign to quotes, you can uncover overarching themes in the data.
- Often, the best way to do this is by “scoring” the data to turn it into quantitative data. “Positive” results or “appropriate” responses can be marked as a “1,” and “negative” results or “inappropriate” responses can be marked as a “0.” These numbers can then be used to generate quantitative information—for example, what proportion of students shared an “appropriate” response to the exercise in question.
- In other situations, however, quantification loses important nuance; in those cases, use narrative descriptions to capture outcomes.
- As a general rule, it is important that any data you collect is useful data. It is disrespectful of people’s time and may even be unethical to collect data if it will not serve a purpose. A good rule of thumb: if you’re asking a question “just because” or “out of curiosity,” without a particular reason or plan for using the data, it is not worth asking that question.
- Before conducting any evaluation, make sure that you outline specific evaluation questions that you want to ask in order to guide the evaluation process. Make sure you’ll be able to act on what you learn, whether through modifications to existing programming or by exploring new programming avenues.
- It is essential to address human subjects considerations when people are participating in an evaluation. This means having participants consent to being included in the evaluation through a written consent form. It is also important to de-identify the data (that is, to remove personal, identifying information such as names). Keeping the collected data in a safe and secure location is critical (a password-protected file is ideal).
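The “1”/“0” scoring approach described above can then be summarized as a simple proportion. A minimal sketch, using invented, already de-identified data:

```python
# Invented, de-identified response scores: 1 = "appropriate", 0 = "inappropriate".
responses = [1, 1, 0, 1, 0, 1, 1, 1]

appropriate = sum(responses)           # count of appropriate responses
proportion = appropriate / len(responses)

print(f"{appropriate} of {len(responses)} responses "
      f"({proportion:.0%}) were scored as appropriate.")
```

A percentage like this is often the clearest way to report a result to funders and community members.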
Checklist for evaluation:
Step One: What are the goals and objectives of your prevention program?
- Make sure to outline your goals and objectives of the sexual assault prevention program.
- Create a list of evaluation questions you want to answer that align with the program’s goals and objectives.
Step Two: How can you determine whether those goals have been achieved?
- Figure out what you are going to measure: What attitudes, behaviors, beliefs, or knowledge are you trying to change and what is possible to measure?
- Are you interested in finding out whether participants have demonstrated a new skill or behavior as a result of the program?
- Are you interested in measuring participants’ knowledge and understanding of a new concept?
Step Three: How do you plan to collect the data?
- Develop a plan for data collection. What is your method for collecting data?
- Are you using pre- and post-test measures? Are you using activity-based assessments? Are you collecting data through surveys, interviews, focus groups, etc.?
- It is important to assess what data collection method is most appropriate for the purposes of your evaluation.
Step Four: How can you quantify that knowledge?
- Plan out how you will analyze the data. Is the data quantitative or qualitative? Did you use a mixed-methods approach (meaning you collected and will use both quantitative and qualitative data)?
- Can you create a scoring system for whether participants’ contributions do or do not meet your expectations?
- Should you also include narrative assessments to most accurately describe outcomes?
Step Five: Choose evaluation methods that fit the sexual assault prevention activities you are presenting and the resources available.
- Ask yourself: What evaluation method is most appropriate for measuring outcomes related to sexual assault prevention activities?
- Are you developing your own survey measures or using existing ones that have been used in the past to evaluate sexual assault prevention activities?
Step Six: How will you use your data?
- Create a plan outlining how you will use your data, who will have access to it, and how results will be shared.
- How do you plan to share the evaluation results with interested stakeholders (such as funders, practitioners, community members, etc.)?
- How do you and your evaluation team plan to use the data to improve program outcomes?
- It is important to use data from the evaluation to make improvements to sexual assault prevention activities; collecting data without using it is a waste of time.
- An important thing to remember: evaluation is an ongoing process, not a one-and-done task. As we revise our sexual assault prevention programs, we must continue to evaluate them and continue to improve them.
Program Evaluation Resources
If you would like to learn more about program evaluation for sexual violence prevention programs, check out the resources listed below:
- Technical Assistance Guide and Resource Kit for Primary Prevention and Evaluation (From the National Sexual Violence Resource Center): http://www.nsvrc.org/sites/default/files/Projects_RPE_PCAR_TA_Guide_for_Evaluation_2009.pdf
- An Interactive Online Course: Evaluating Sexual Violence Prevention Programs: Steps and Strategies for Preventionists (Offered by the National Sexual Violence Resource Center): http://www.nsvrc.org/elearning/20026
- Veto Violence: EvaluACTION-Putting Evaluation to Work (From the CDC): http://vetoviolence.cdc.gov/evaluaction
Preparation of this checklist was supported by the CDC under grant number #PHPA-G2093, awarded by the Center for Injury and Sexual Assault Prevention, Maryland Department of Health and Mental Hygiene. The opinions, findings, and conclusions in this document are those of the author(s) and do not necessarily represent the official position or policies of the Centers for Disease Control and Prevention (CDC).