I am a conservation scientist. Like any other scientist, I develop and test hypotheses, trying to figure out how the world works. Once I learn something, I publish my results in academic journals where other scientists can evaluate and build upon what I’ve learned. Because I’m a conservation scientist, however, I also need to make sure the people who directly impact prairie conservation (ranchers, land managers, policy makers, etc.) get my information and use it to improve the way grasslands are managed and restored. If I fail to influence the actions of others in positive ways, I fail as a conservation scientist.

It doesn’t matter how much we learn about employing prescribed fire effectively if we’re not able to help others use the lessons we learn.
In science, keen observational skills and creativity often spark innovations, but rigorous data collection is required to see whether a great idea actually holds up. While I’ve had some good ideas, I’ve also come up with plenty of grassland management and restoration strategies that turned out to be duds. In each case, I learned a little more about prairie ecology, and our land stewardship improved as a result.
I’m proud of the work I’ve done over the years to develop new and better ways of restoring and managing prairies. I know those strategies are effective because I’ve spent a tremendous amount of time testing them, through both observation and rigorous data collection. My computer is full of spreadsheets and graphs showing how prairie species and communities respond to various treatments.
I’m also proud of the work I’ve done to share what we’ve learned with others, but until recently, I’ve done very little to evaluate the effectiveness of that work. I’m not alone – most of my colleagues in the world of conservation science do a great job of measuring the natural world and its responses to human activities, but do very little to evaluate whether their work is actually influencing conservation. It’s fairly ridiculous when you think about it. We would never commit to a new invasive species control technique without testing its effectiveness, but for some reason we’re satisfied to rely on blind optimism that our outreach strategies are changing the world.
Come on, folks! We’re scientists! We love data, and we’re good at developing and testing ideas. Why do we apply that passion and aptitude to only part of our work? Why aren’t we testing whether our ideas are reaching the intended audience and influencing on-the-ground conservation work? How can we adjust and improve our outreach strategies if we don’t have any data to work from?
To be fair, measuring outreach impacts requires a very different kind of scientific approach than most of us are comfortable with. Instead of counting plants or observing the behavior of birds, bees, or bison, we have to assess the attitudes, motivations, and actions of people. Many of us chose our career paths because we prefer the company of birds, bees, and bison to people, but that doesn’t give us leave to just ignore people altogether – especially when the success or failure of our work hinges upon their actions.
Fortunately, we don’t have to work alone. There are lots of scientists who are already good at studying people, and many of them are happy to work with us. Everyone I’ve asked for advice has responded enthusiastically, and their input has been very helpful.

We should probably take some of the energy we spend studying animals and put it towards studying the way people respond to our outreach efforts.
Whether you’re a scientist who actively shares your results with your target audience, or someone who relies on others to translate and transmit that information, there are some basic questions we should all be trying to address. This is far from a comprehensive list, but it’s a start.
Defining Audience and Message
What lessons and messages from my work are most important?
Who are the audiences for those messages?
What messengers/media will best reach those audiences?
What are the current attitudes/actions of my audience? What are the main drivers of those attitudes and actions?
Who are the credible voices my audience looks to for guidance?
How can I reach those credible voices?
Evaluating Success
Are my messages reaching my target audience?
How many people in that audience am I reaching?
Are my messages changing attitudes and/or actions?
At what scale, and to what degree, am I making a difference?
Which messages, messengers, and media are most effective for reaching each of my audiences?

Many of us host field days, at which we can share what we’re learning with others. How many of us are assessing the effectiveness of those field days and other outreach strategies?
I’ve spent a lot of time thinking about audiences and messages, and it’s really helped me focus both my research and outreach more effectively. Recently, I’ve also started trying to answer some of the questions in the above “Evaluating Success” category. I’m making some progress, but I need to do much more.
I can tell you how many presentations I’ve given over the last two years (40) and how many people were in those audiences (3,447). I’ve also been keeping track of calls and emails asking for advice on prairie restoration and management. Unfortunately, while I have a lot of numbers, I can’t easily translate them into acres of improved management or enhanced habitat quality.
I have, however, made at least some progress toward measuring conservation impact on the ground. Much of that success came from survey work by one of our first Hubbard Fellows, Eliza Perry. Eliza interviewed land managers and private lands biologists who had attended field days at our Platte River Prairies. Among her many findings: almost all respondents said what they learned from us had influenced their work, and they conservatively estimated that over 330,000 acres of land had been restored or managed differently because of that influence. Beyond that, Eliza was able to identify key factors that led to our success and suggest ways to improve our effectiveness.
In addition, Eliza surveyed readers of The Prairie Ecologist Blog and I conducted a follow-up survey three years later. Those surveys helped quantify the demographics of readers (e.g., about 2/3 of respondents have direct influence on prairie management). The surveys also measured the degree of influence the blog has on readers’ understanding of prairies and approach to managing or restoring prairies (when applicable). We even got a rough estimate of the number of acres on which management had been influenced by the blog (over 300,000).
Being able to quantify outreach impact, even when the numbers are fuzzy and incomplete, has been really helpful. It helps me justify my job, for one thing, and assures both me and my supervisor that the time I spend writing, giving presentations, and consulting with others has value. Most importantly, it helps me assess what is and isn’t working and adjust accordingly.
While it’s still not fully within my comfort zone, I’m trying hard to measure the effectiveness of our outreach efforts just as I do our prairie management and restoration work. I would love to hear from people who are trying to do the same thing, especially if you’ve found effective evaluation strategies. As more of us focus on measuring the success of our outreach work, we’ll be able to learn from each other and establish some common metrics. Hopefully, we’ll also become more effective at translating what we’re learning into large-scale and meaningful conservation impact!