
Culturally Competent Evaluation: An Ongoing Journey

Joanna Prout, PhD, Lead NCS3 Evaluator

When I learned about cultural competence as a social worker in the early 2000s, I had a textbook that outlined the values of different cultures. There was a picture of a clock next to a statement that while white people valued timeliness, being on time was less important in Hispanic and African American culture, where time was more flexible. I dutifully took notes and passed the multiple-choice test that followed, but something felt off.


Since that time, I have taken a lot of classes and done clinical work with communities including urban African American youth, Hispanic families recently settled in the Midwest, and rural Appalachian children with disabilities. I have worked with many unique and inspiring individuals, both as colleagues and clients. My desire to help communities in need, interest in mental health, and passion for data and analytics have led me to a career as a program evaluator.


I took cultural competence seriously, reading updated guidelines, exploring the values of each culture, and ensuring that I used the right language. I was prepared to read as many textbooks as needed to be a good evaluator. But it wasn't that easy: I couldn't just memorize facts and pass a test. As outlined by the American Evaluation Association in their summary of cultural competence (http://bit.ly/AEA_CulturalCompetence), there is no set of rules to follow to be culturally competent:


Cultural competence is a stance taken toward culture, not a discrete status or simple mastery of particular knowledge and skills. A culturally competent evaluator is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Culturally competent evaluators respect the cultures represented in the evaluation.


Instead of memorizing information, I had to rethink my whole approach to evaluation and determine what could be kept and what would need to change.


Coming from an academic background with an emphasis on behavioral psychology, I was taught that "good" research had a high level of experimental control so we could get "clean" data. The research team had a plan for what we were looking at, and we were sticking to it. The Institutional Review Board (IRB) wouldn't let us change it even if we wanted to. We, as people, did not get involved in the experiment, as that would make it less meaningful. This approach has value, and I am glad I have a foundation in it.


But doing evaluation in a culturally competent manner is different. We must be flexible and involve the people who are receiving the intervention in each step of the process. We must be willing to change and adapt, especially if we learn that our values have led to an evaluation plan that does not align with the values of program participants. An evaluation is "good" not solely because the data allow us to determine what caused what, but also because participants can use the data in a meaningful way. Can we use the information we have gathered to collaborate with participants to help them achieve their goals? If participants see no value in completing the evaluation, we have to ask ourselves why, and whether there is something we could do instead that they would be excited to be part of.


Prioritizing collaborative evaluation is an ongoing struggle. In most of the work we do, evaluation indicators are determined by funders based on their priorities. Those indicators may align with what the people in the program value, or they may not. There are few additional opportunities to collect information, as time is tight and funding is limited. Getting community input is essential, but asking people to give time and energy acting as volunteer cultural consultants is unfair. Paying community members to provide cultural expertise to an evaluation team is not a widely accepted practice and requires planning and creative funding methods. In addition, we want to make sure that their input can be rapidly integrated into the evaluation in a meaningful way.


As the NCS3 continues its work evaluating culturally responsive, trauma-informed school mental health, we are committed to culturally competent evaluation practices. We are eager to learn more from people who have tackled these challenges and adopted culturally competent evaluation practices. Have you integrated community members into evaluation planning and implementation? How do you compensate them and ensure that you can act on their input?

