Wednesday 5th Feb, 12:00 pm
Research in science communication
Efforts aimed at determining “what activities work when” in science communication typically focus on evidence that is provided in evaluation reports to funders. However, this evidence is heavily influenced by the contexts in which these reports are written.
To examine these contexts, I conducted a series of interviews with science communication “evaluation experts” from Australia and the UK. These interviews represent a range of perspectives in science communication evaluation, including those of policymakers, academics, consultants and funders (including government). Based on these interviews, I will discuss several assumptions about evaluation that influence how science communication evaluation is performed and interpreted. In particular, disagreement about what it means to “evaluate”, and how or whether “evaluation” differs from “research”, may have important implications for establishing an evidence base for science communication.
I will also introduce some of the differing perspectives offered in my interviews and discuss possible ways of overcoming these differences. These possible solutions centre on clarifying and acknowledging multiple, potentially conflicting evaluative perspectives; improving evaluation through research on (as opposed to practice in) evaluation; and changing the evaluation models used in science communication.