In April 2015, this NEO Shop Talk post introduced you to something called a “Tearless Logic Model” that was developed by a group of community psychologists at Wichita State University and published in the Global Journal of Community Psychology Practice. I am here to report that I have used this, and it is true. It really is a tearless process. My evidence? Nobody cried during or after the process when I used it!
Let’s start by talking about why logic models might make people cry. Often when evaluators are presenting logic models and talking about evaluation they use terms and phrases such as “outcomes-based planning” and “that is an output, not an outcome.” Using profession-related jargon is like speaking in a special language that prevents everyone else in the room from participating in the conversation. That can be a painful experience and can make some people feel as though they are being excluded on purpose, perhaps even making them produce tears! The Tearless Logic Model process was designed to make sure everyone feels included, and that they understand and can participate in the conversation.
I decided to use the tearless logic model with a group from a small nonprofit organization that was working to start a community kitchen. I received a call from a consultant who was working with them who asked me if I would be able to help them develop a logic model. It was perfect. They were a bunch of people from the community who had absolutely no experience with evaluation. They had heard of logic models somewhere and were expecting someone to show up with a lot of technical forms and the jargon to go with them.
Instead, they were surprised with some simple questions and paper on the wall. After a couple of questions, they let me know that they really needed to get to work on that logic model, and I reassured them that we would. We completed the process, and in the end they were truly surprised that they had been creating the logic model all along. After the meeting I organized the information into a more formal framework and sent it to them, and they were pleased with the results. Moreover, because the group had collaboratively created the logic model and agreed on the activities, outputs, and outcomes, they were ready to buy into the whole evaluation process.
Recently, I was fortunate to hear Dr. Greg Meissen, one of the authors, talk about this tool. He has used it with many community and other types of groups, and it continues to be a useful tool. With large groups, you can break up into smaller groups to answer the questions and then bring all their answers together at the end. He noted that, when the group is varied and includes both professionals with a lot of knowledge about evaluation and individuals who know nothing about it, the Tearless Logic Model levels the playing field by taking the jargon out of the process and introducing the concepts in terms that everyone can understand. He also noted the value of having a good facilitator.
So, the next time you are dreading development of a logic model with a group of people, check out this tool. It really does make the process painless, and thus, “tearless.” If you use it, be careful not to slip into evaluation jargon or technical terms. In the end, after you rearrange the columns into the logic model flow, you can have the group check to see if there are connections among the activities, outputs, and outcomes. You especially want to make sure that every outcome is linked to an activity.
Resource: Lien, A.D., Greenleaf, J.P., Lemke, M.K., Hakim, S.M., Swink, N.P., Wright, R., & Meissen, G. (2011). Tearless Logic Model. Global Journal of Community Psychology Practice, 2(2).
The NEO welcomes our new evaluation specialist, Susan M. Wolfe. Susan will be contributing her evaluation expertise to the National Library of Medicine’s recently announced partnership with the NIH All of Us Research Program, a landmark effort to advance precision medicine. The All of Us program aims to build one of the largest, most diverse datasets of its kind for health research, engaging with one million or more volunteers nationwide who will sign up to share their information over time. NLM and All of Us will work together to raise awareness about the program and improve participant access through community engagement efforts with public libraries across the United States. You can read more about the All of Us partnership here.
Susan is an evaluator and community psychologist who works with local, state, national, and international organizations through her consulting firm, Susan Wolfe and Associates. She formerly served as program analyst for the US Department of Health and Human Services Office of the Inspector General; director of a longitudinal homelessness research study funded by the National Institute of Mental Health; and assistant director of research for a large community college district. A teacher and writer, Susan has been an adjunct lecturer with several universities and published numerous peer-reviewed journal articles, book chapters, and books. She has a PhD in Human Development from the University of Texas at Dallas, an MA in Ecological (Community) Psychology from Michigan State University, a BS in Psychology from the University of Michigan-Flint, and a diploma from the Michigan School of Beauty Culture.
What exactly is a community psychologist?
Most disciplines within psychology are focused on individuals. Community psychologists go beyond the individual to look at the individual in interaction with the environment. Environment includes the social, cultural, economic, political, and physical environmental influences. We work to promote positive change, health, and empowerment at the individual and systemic levels.
How does being a community psychologist affect your evaluation work?
Community psychology provided me with a great foundation for evaluation work. My training included a lot of research and evaluation methods and ecological theories. These theories remind me how interconnected everything is and that when you change something in the world, because of that interconnectedness, something else is likely to be affected. For example, when gentrification occurs in neighborhoods we often think of it as a good thing because it revitalizes the neighborhood and prevents further decline. On the other hand, many people are displaced as rents rise and they can no longer afford to live there, and some become homeless. When I evaluate a program, I automatically start looking at it within its context, including where it fits within a system, how it affects the system, and how the system affects the program. I also add a racial equity and social justice perspective to my work where it is applicable.
What is one of your favorite evaluation experiences?
I’ve had too many favorite experiences, so I will describe my most memorable. I was working for the U.S. Department of Health and Human Services when Hurricane Katrina struck. One of the tragedies of the hurricane was the deaths in nursing homes, which prompted a request for an evaluation of nursing home emergency planning among the Gulf States. I was appointed as co-lead for the study, which had a very tight timeline. We incorporated a lot of context measures into the design. Team members did site visits to all the Gulf States. Data collection was interesting, but also emotionally taxing as we witnessed the devastation to the sites and the people who lived there – especially in Louisiana and Mississippi. We talked with nursing home directors, emergency managers, mayors, police chiefs, nursing home ombudsmen, and many others, and learned a lot about the complexity involved in deciding whether to evacuate or not, and then implementing the plans either way. There are risks if they stay, and other risks if they leave, so it isn’t simple.
What made that experience so special?
The report received a lot of attention and we were left with a feeling that we produced a report that could make a difference. Our team received the Inspector General’s Award for Excellence in Program Evaluation for it.
What attracted you to the All of Us Research Project?
I was excited at the prospect of being involved in a project of such significance for medical practice. For the past several years I have done a substantial amount of work with health disparities. The idea that so much data will be gathered to enable scientists to learn more about individual and group differences across multiple levels (biological, environmental, behavioral) will, hopefully, help to reduce and eliminate the disparities. How could I not be attracted to this!
What bit of personal information would you like to share to help us know you better?
I am really introverted, although most people don’t believe me when they meet me. I love working at home with just the company of my Chihuahua, Chiweenie, and cat. I like to travel a lot, all over the country and world. I crochet mediocre things for my family – like blankets and hats, and I like to hang out at home, cook, clean the kitchen, and watch TV. I am married to Charles, have two grown children, a daughter-in-law, two grandchildren, and another grandchild on the way.
Final note: Susan works remotely for the University of Washington Health Sciences Library from Cedar Hill, Texas, and can be reached at firstname.lastname@example.org.
The NNLM Evaluation Office staff had a rare opportunity in early November. We had our first-ever, in-person staff meeting. Our staff members all work virtually from their offices in Georgia, Texas, and Washington. We traveled to Washington DC in early November to attend the American Evaluation Association conference, so we took a morning for a staff mini-retreat. This is our first all-staff photo, which we took in front of the AEA banner.
Allow me to introduce you to the NEO bloggers. Starting from the left is Kalyna Durbak, Karen Vargas, me (Cindy Olney), and Susan Wolfe. Susan is our newest staff member, who has a special assignment with the NNLM. We will introduce her and her project in the near future. Look for NEO Shop Talk posts from Susan on topics related to participatory evaluation and culturally responsive evaluation.
We are thankful for all of our readers and wish you a wonderful holiday.
As you already know, the whole NEO team attended the Evaluation 2017 conference last week. I learned enough to fill up quite a few blog posts. Today’s is about some free tools I found out about that can help communities get comfortable working with data.
I went to a presentation by the Engagement Lab at Emerson College. The purpose of this Lab is to re-imagine civic engagement in our digital culture. Engagement Lab has created a suite of free online tools to encourage the communities they work with to engage with data, even if they are beginners. The products have super fun examples on each page so you can see if they would work for you.
The one I thought might be best for the NEO (and for this blog) is the one called WTFcsv which stands for what you probably think it stands for (there’s an introductory video that includes a lot of bleeping). The idea is that if you are new at using data and have a ton of data in a CSV file, what the bleep do you do with it?
The web tool has some examples you can look at to understand how the tool works. I like the “UFO Sightings in Massachusetts” example, which shows, among other things, the most commonly reported shapes/descriptions of UFO sightings in MA (“light” is the most common, followed by “circle”). It even comes with an activity guide for educators to help people learn to work with data.
I wanted to see how this would work with National Network of Libraries of Medicine data. A few years ago NNLM had an initiative to increase awareness and use of National Library of Medicine resources, like PubMed and MedlinePlus. I uploaded the CSV file that had the results of that project. This image is a screen shot of the results (you can click on it to make it bigger). I think it did a good job of making charts that would give us something to talk about.
The good news is that it only takes minutes to upload the data and see the results. Also, below the data is a paragraph with some suggestions of conversations you might want to have about the data. WTFcsv is a tool for increasing community interaction with data, so this is very helpful. The results stay up on the website for 60 days, so you can share the link with a group.
Most of the bad news has to do with trying to make an example that would look good in this blog. In order to find data that would make a nice set of images to show you, I went through a lot of our NNLM NEO data. And I did have to reformat the data in the CSV file for it to work nicely. But if you were using the tool as a starting point, it’s okay for the data to not quite work with the WTFcsv resource – the purpose is to give you something to talk about, and it certainly does that (even if the something is that you might need to reconfigure your data a little).
The chart titles allow only a few characters, so I had to shorten the titles of the data’s columns down to something that may only partly represent the data. However, I was making something to show in a screenshot, which is not what this tool is designed for. If I had left the titles long, they would have displayed, along with some additional information, when you click on a chart.
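If you are curious what this kind of automatic column summary looks like under the hood, here is a minimal Python sketch (standard library only) of the idea behind WTFcsv: read a CSV file and report the most common values in each column. The columns and values are invented for illustration; WTFcsv itself works on any CSV you upload.

```python
import csv
import io
from collections import Counter

# A tiny stand-in for an uploaded CSV file (columns invented for illustration).
sample_csv = """shape,city,duration_min
light,Boston,5
circle,Salem,2
light,Worcester,10
triangle,Boston,3
light,Salem,1
"""

reader = csv.DictReader(io.StringIO(sample_csv))
rows = list(reader)

# Summarize each column by its most common values, which is roughly
# what WTFcsv's charts show for categorical columns.
for column in reader.fieldnames:
    counts = Counter(row[column] for row in rows)
    print(f"{column}: {counts.most_common(2)}")
```

The real tool adds the charts, the conversation prompts, and smarter handling of numeric and date columns, but the core idea is just counting.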
The presenter showed us four additional tools that the Emerson College Engagement Lab made, all available for anyone to use for free.
Wordcounter tells you the most common words in a bunch of text, including bigrams (pairs of words) and trigrams (sets of 3 words together). This can be a starting point for textual analysis.
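As a rough sketch of what Wordcounter does, counting unigrams, bigrams, and trigrams takes only a few lines of Python. The sample text below is invented; Wordcounter itself accepts any text you paste or upload.

```python
from collections import Counter

# Invented sample text for illustration.
text = ("the library offers health information classes "
        "and the library offers free health resources")

words = text.split()

# Single words, pairs of words (bigrams), and runs of three words (trigrams).
unigrams = Counter(words)
bigrams = Counter(zip(words, words[1:]))
trigrams = Counter(zip(words, words[1:], words[2:]))

print(unigrams.most_common(3))
print(bigrams.most_common(3))
print(trigrams.most_common(3))
```

Even this simple count surfaces patterns: here the pairs “the library” and “library offers” each appear twice, the kind of repeated phrasing that can seed a textual analysis.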
Same Diff compares two or more text files and tells you how similar or different they are. Examples include comparing the speeches of Hillary Clinton and Donald Trump, or comparing the lyrics of Bob Dylan and Katy Perry, among others.
Connectthedots shows how your data are connected by analyzing them as a network. Its example chart shows each character in Les Misérables as a node, with a link between two characters whenever they appear in a scene together.
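The co-occurrence network behind that example can be sketched in plain Python: each shared scene adds a weighted link between two characters. The scene data below is invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

# Invented scene data: each scene lists the characters who appear in it,
# mimicking the Les Misérables example.
scenes = [
    ["Valjean", "Javert"],
    ["Valjean", "Cosette", "Marius"],
    ["Cosette", "Marius"],
]

# Build an undirected network: a link means two characters share a scene,
# weighted by how many scenes they share.
links = defaultdict(int)
for scene in scenes:
    for a, b in combinations(sorted(scene), 2):
        links[(a, b)] += 1

for (a, b), weight in sorted(links.items()):
    print(f"{a} -- {b} (shared scenes: {weight})")
```

Connectthedots layers an interactive visualization and network metrics on top, but a link-weight table like this is the underlying structure.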
If you want to know more about applying these tools in a real life situation, the Executive Director of the Engagement Lab, Eric Gordon, has an online book called Accelerating Public Engagement: A Roadmap for Local Government.
The whole NEO team attended AEA’s Evaluation 2017 conference last week. I am still processing a lot of what I’ve learned from the conference, and hope to write in more detail about them in the upcoming months. Until then, here are some of my highlights:
I attended the two-day Eval 101 workshop by Donna Mertens, and the half day Logic Model workshop from Thomas Chapel. Both workshops gave me a solid understanding of how evaluators plan, design, and execute their evaluations through hands-on training. I know I’ll be referring to my notes and handouts from these workshops often.
The conference website defines these presentations as “20 PowerPoint slides that automatically advance every 15 seconds for a total presentation time of just 5 minutes.” Just thinking about creating such a presentation makes me nervous! The few that I saw have inspired me to work on my elevator pitch skills.
I attended a delicious lunch with fellow evaluators who are active on Twitter. Though I am not very active on that platform, they welcomed me and even listened to my elevator speech about why public libraries are amazing. The attendees worked in different evaluation environments and came from all over the United States and around the world. It was a fun way to learn more about the evaluation field.
It’s hard to pick a favorite session, but one that stood out was DVR3: No title given. Despite the lack of a title, the multipaper presentation will stay with me for a long time. The first presentation was from Jennifer R. Lyons and Mary O’Brien McAdaragh, who talked about a personal project sending hand-drawn data visualizations on postcards. The second presentation, by Jessica Deslauriers and Courtney Vengrin, shared their experiences using Inkscape in creating data visualizations.
First NEO meeting IRL
This was my favorite part of the conference. I’ve been working with the NEO for over a year, and yet this was the first time we were all in the same room together. It was such a treat to dine with Cindy and Karen, and work in the same time zone. We also welcomed our newest member, Susan Wolfe, to the team. Look for a group photo in our upcoming Thanksgiving post.
I recommend that librarians interested in honing their evaluation skills sign up for the pre-conference workshops and attend AEA’s annual conference at least once. It opened my eyes to all sorts of possibilities in our efforts to evaluate our own trainings and programs.
The promotora’s uncle was sick and decided it was his time to die. She was less convinced, so she researched his symptoms on MedlinePlus and found evidence that his condition probably was treatable. So she gathered the family together to persuade him to seek treatment. Not only did her uncle survive, he began teaching his friends to use MedlinePlus. This promotora (community health worker) was grateful for the class she had taken on MedlinePlus offered by a local health sciences librarian.
This is a true story, but it is one that will sound familiar to many who do health outreach, education, or other forms of community service. Those of us who coach, teach, mentor, or engage in outreach often hear anecdotes of the unexpected ways our participants benefit from engagement in our programs. It’s why many of us chafe at using metrics alone to evaluate our programs. Numbers usually fall short of capturing this inspiring evidence of our programs’ value.
The good news is that it isn’t difficult to turn anecdotes into evaluation data, as long as you approach the story (data) collection and analysis systematically. That usually means use of a standard question guide, particularly for those inexperienced in qualitative methodologies.
For easy story collection methods, check out the NEO tip sheet Qualitative Interview “Story” Methods. While there are many approaches to doing qualitative evaluation, this tip sheet focuses on methods that are ideal for those with limited budgets and experience in qualitative methods. Most of these story methods can be adapted for any phase of evaluation (needs assessment, formative, or outcomes). The interview guides for each method consist of 2-4 questions, so they can be used alone for short one-to-one interviews or incorporated into more involved interviews, such as focus groups. Every team member can be trained to collect and document stories, allowing you to compile a substantial bank of qualitative data in a relatively short period of time. For example, I used the Colonias Project Method for an outreach project in the Lower Rio Grande Valley and collected 150 stories by the end of this 18-month project. That allowed us to do a thematic analysis of how MedlinePlus en Español was used by the community members. Individual stories helped to illustrate our findings in a compelling way.
Do you believe a story is worth a thousand metrics? If so, check out the tip sheet and try your hand at your own qualitative evaluation project.
Note: The story above came from the project described in this article: Olney, Cynthia A. et al. “MedlinePlus and the Challenge of Low Health Literacy: Findings from the Colonias Project.” Journal of the Medical Library Association 95.1 (2007): 31–39. PMC free article.
It’s the spookiest time of the year! To help celebrate, we’re visiting our favorite fictional town, Sunnydale.
If you’re a long-time reader of Shop Talk, you might already be familiar with the posts about librarians reaching out to the vampire population in Sunnydale. The first post about Sunnydale was Developing Program Outcomes using the Kirkpatrick Model – with Vampires, which featured librarians developing an outcomes-based plan for an evening class in MedlinePlus and PubMed. Since then, the librarians of Sunnydale have been busy creating logic models, evaluation proposals, and evaluating their social media engagement.
Whether you’re a new subscriber or have been reading the Shop Talk since its inception, the Sunnydale posts allow us to have a little fun while teaching evaluation skills. We will update this list with new Sunnydale posts, so be sure to bookmark this page for future use.
We hope you enjoy this trip to Sunnydale, and have a fang-tastic Halloween!
Developing Program Outcomes using the Kirkpatrick Model – with Vampires
July 28, 2016 by Karen Vargas
The Kirkpatrick Model (Part 2) — With Humans
August 2, 2016 by Cindy Olney
From Logic Model to Proposal Evaluation – Part 1: Goals and Objectives
August 26, 2016 by Karen Vargas
From Logic Model to Proposal Evaluation – Part 2: The Evaluation Plan
September 2, 2016 by Karen Vargas
Beyond the Memes: Evaluating Your Social Media Strategy – Part 1
January 13, 2017 by Kalyna Durbak
Beyond the Memes: Evaluating Your Social Media Strategy – Part 2
January 20, 2017 by Kalyna Durbak
Finding Evaluator Resources in Surprising Places
April 21, 2017 by Kalyna Durbak
Logic Model Hack: Constructing Proposals
June 2, 2017 by Karen Vargas
Evaluation Questions: GPS for Your Data Analysis
September 8, 2017 by Cindy Olney
Photo Credits: Annie, Cindy’s cat, bares her fangs. Photo courtesy of Petsitter M.
Last week we talked about how to think about questionnaire design in terms of social exchange theory – how to lower perceived cost and raise perceived rewards and trust in order to get people to complete a questionnaire.
But there’s more to getting people to complete a questionnaire than its design. There are the words you use to ask people to complete your questionnaire (often in the form of the content of an email with the questionnaire attached). And there’s the method of distribution itself – will you email? Mail? Hand it to someone? How many times should you remind someone?
As we said in a previous post, Boosting Response Rates with Invitation Letters, we recommend Dillman’s Tailored Design Method (TDM) as a technique for improving response rates. In TDM, in order to get the most responses, you might communicate with your respondents four times. For example: an introduction email before the questionnaire goes out, the email with the questionnaire attached, and two reminders. How does this fit with social exchange theory?
Let’s go back to the three questions we said in the last post that you should always consider, and apply them to communication about and distribution of your questionnaire:
- How can I make this easier and less time-intensive for the respondent? (Lower cost)
- How can I make this a more rewarding experience for the respondent? (Increase reward)
- How can I reassure the participants that it is safe to share information? (Increase trust)
Remember, when you are asking someone to complete a questionnaire for you, you are asking them to take time out of their lives that they cannot get back. Remember too that they have been asked to complete many, many questionnaires in the past that have taken up too much of their time, annoyed them, or have clearly been designed to manipulate them into donating money or in some other way abused their trust. You have to use your communication and distribution strategy to overcome these obstacles. Here are just a few ideas.
Decrease Perceived Cost
- Be sure respondents have multiple ways of contacting someone with questions. This reduces the possibility that someone will put off responding to the questionnaire because they have a question and have to figure out who to ask.
- Add a “reply by” date in all of your invitation emails. Many people find it easier to follow up with a task if there is a clear deadline.
Increase Perceived Reward
- Ask respondents directly for their help and tell them specifically why you are asking for their opinions. This may help your respondents understand that they are uniquely able to answer the questions and feel that they are contributing to something. For many people, this is a reward.
- If you can afford to, consider including a token gift with your first invitation email. A small gift of $1 or $2 as a token of appreciation demonstrates the trust and respect the organization distributing the questionnaire has for its respondents. And research shows that it produces better results than a larger amount of money given only to people who respond.
Increase Perceived Trust
- Have someone your participants trust endorse your project (for example, by signing your pre-letter or posting to a circulated newsletter or blog). This shows that you are trusted among their peers.
- Tell them how you will keep their responses confidential and secure.
- For mail surveys, use first-class postage – this will increase their trust that you take the questionnaire seriously.
Source: Dillman DA, Smyth JD, and Christian LM. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 4th edition. Hoboken, NJ: Wiley; 2014.
Getting a high response rate is an important part of trusting the information you get from a questionnaire. Don Dillman, a guru of questionnaire research, says that to get a good response rate it helps to see questionnaires as part of a social exchange. Social Exchange Theory is the theory that “people are more likely to comply with a request from someone else if they believe and trust that the rewards for complying with that request will eventually exceed the costs of complying.”1 Specifically he says that when designing your questionnaires, distributing them, or communicating about them, you need to think specifically about ways to lower the perceived cost of responding to the questionnaire, and increase perceived rewards and perceived trust.
What do we mean by perceived cost, rewards and trust? A cost might be the amount of time it takes to do the survey, but the perceived cost is how long that survey feels to the person answering it. For example, if the survey is interesting, it could be perceived as taking less time than a shorter, but confusing or poorly worded survey. A reward could be an actual monetary reward, or it could be the reward of knowing that you are participating in something that will make important change happen. Perceived trust could be trusting that the organization will make good use of your responses.
Today I will only focus on questionnaire design — in future blog posts we will write about how social exchange theory can be used in communicating about and distributing your questionnaires.
One of the things I like about social exchange theory in questionnaire design is that normally I would be looking at the questions I’m writing in terms of how to get the information that I want. This is fine of course, but by looking at the questions from a social exchange perspective, I can also be thinking about ways I might write questions to improve someone’s likelihood of completing the survey.
Ask yourself these three questions:
- How can I make this easier and less time-intensive for the respondent? (Lower cost)
- How can I make this a more rewarding experience for the respondent? (Increase reward)
- How can I reassure the participants that it is safe to share information? (Increase trust)
Here are some ideas that might get you started as you think about applying social exchange theory to your questionnaire design.
- Only ask questions in your survey that you really need to know the answers to so you can keep it as short as possible.
- Pilot test your questionnaire and revise to ensure that the questions are as good as possible to minimize annoying your respondents with poorly worded or confusing questions.
- Put open-ended questions near the end.
- Ask interesting questions that respondents want to answer.
- As part of the question, tell the respondent how the answer will be used, so they feel that by answering the question they are being helpful (for example “Your feedback will help our reference librarians know how to provide better service to users like you.”)
- Include a status bar in an online survey; it lets respondents know how much of the survey is left and helps them trust that they won’t be answering questions for too long.
- Assure respondents that you will keep responses confidential and secure. While this may have already been stated in the introduction, it could help to state it again when asking a sensitive question.
For more information, see:
NNLM Evaluation Office: Booklet 3 in Planning and Evaluating Health Information Outreach Projects series: Collecting and Analyzing Evaluation Data https://nnlm.gov/neo/guides/bookletThree508
NEO Shop Talk Blog posts on Questionnaires: https://news.nnlm.gov/neo/category/questionnaires-and-surveys/
NEO Questionnaire Tools and Resources Guide on Data Collecting: https://nnlm.gov/neo/guides/tools-and-resources/data-collection
1 Dillman, Don A., et al. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. John Wiley & Sons, Incorporated, 2014. ProQuest Ebook Central, p. 24.
Our organization has a culture of evaluation.
Oooh, doesn’t that sound impressive? In fact, I confess to using that term, culture of evaluation, in describing the NNLM Evaluation Office’s mission. However, if someone asked me to explain concretely what a culture of evaluation actually looks like, it would take some fast Googling, er, thinking on my part to come up with a response.
Then I discovered the Community Literacy of Ontario’s eight-module series, Developing A Culture of Evaluation. In module 1, Introduction to Evaluation, they ground the concept in seven observable indicators seen in organizations dedicated to using evaluation for learning and change. (You can read their list on page 11 of module 1).
That led me on a hunt for more online resources with suggestions on how to build a culture of evaluation. I located some good ones. Here’s an infographic from Community Solutions Planning and Evaluation with 30 ideas for evaluation culture-building that most nonprofits could adopt. John Mayne’s brief Building an Evaluation Culture for Effective Evaluation and Results Management describes what senior management can do to make (or break) an organization’s culture of evaluation. My investigation inspired me to think of ways we can all foster a culture of evaluation in our own teams and organizations.
Put Evaluation Information on Meeting Agendas
Embrace organizational learning and use evaluation information as your primary resource. Find ways to integrate performance and outcome measures into daily planning and decision making. A good place to start is in staff or team meetings. Usage statistics, social media metrics, and attendance or membership rates are examples of data that many organizations collect routinely and that might generate good discussion about your programs. If you don’t have any formally collected data related to agenda topics, consider asking your team to collect some data informally. Check out module 3, Collecting Data, for examples of both informal and formal data collection guidance. The Taking Action module in the same series has some practical examples of how you can share evaluation data and structure discussions. (I particularly like the template on page 9 of that module.)
Take Calculated Risks Using Evaluation Data
When planning programs, collect and synthesize evaluation data to get an overview of factors that support and challenge your likelihood of success. One of the best tools for doing this is a SWOT analysis (SWOT stand for Strengths, Weaknesses, Opportunities, Threats). This NEO Shop Talk post describes how to extend the traditional SWOT discussion to identify unknowns regarding your program success. The SWOT analysis can both help you synthesize existing information about your customers and environment, as well as identify areas where you need more information. You might want to revisit module 3’s discussion on informal data collection to help when you lack existing evaluation information.
Report Findings Early and Often
Like cockroaches, exhaustive final reports will likely survive until the end of time. If you are truly committed to a culture of evaluation, however, you need to break with this end-of-project tradition and find opportunities to share findings on an ongoing basis. Data dashboards are one example of how to engage a broad audience in your organization’s evaluation data. However, they require time and expertise that may be out of reach for many organizations. One nice tip from the Community Solutions 30-ideas infographic is to make friends with your organization’s communication team. They can help you find opportunities in publications, websites, and social media channels to share evaluation findings. Your job will be to add substance to the numbers. While quick facts can be interesting, it is better to talk about numbers as evidence of success. You also should not be shy about publishing less stellar findings and explaining how your organization is using them to improve programs and services.
Engage Stakeholders in the Evaluation Process
A stakeholder is anyone who has a stake in the success of your program. They should, and usually do, influence program decisions. It’s up to you to make sure they are engaging with evaluation information as they develop informed opinions and advice. Rather than giving them well-synthesized findings in annual reports or presentations, engage them in the actual practice of evaluating programs. NEO Shop Talk has a number of posts that can help you structure meetings and discussion with stakeholders about evaluation findings. Check out these posts on data parties, audience engagement, and Liberating Structures.
Of course, a culture of evaluation requires foundational evaluation activities. I highly recommend all of the modules in Community of Literacy of Ontario’s Developing A Culture of Evaluation. The content is succinct and easy to read, and relatively jargon free. (The jargon they do use is defined.) The NEO’s booklet series “Planning and Evaluating Health Information Outreach Projects” is another how-to resource on the basics of evaluation.
The full citation for John Mayne’s paper is