Jessie Wilson (Allen and Clarke)

The idea of big data and open data - and the increasingly inevitable incorporation of these approaches into evaluations - is terrifying for some and tantalising for others. For those in the former category, a lack of understanding of, familiarity with, or confidence in approaching big/open data can limit one's evaluative practice. In other contexts, limitations in, or misapplications of, big/open data can also undermine the validity and credibility of the evaluation designs and findings we produce.
The purpose of this interactive AES conference session is two-fold. We will: 1) address these fears, concerns, and limitations about the use of big/open data in evaluations; and 2) begin to learn how to use these approaches in our own evaluative practice. Although I have a strong quantitative research background, I am just beginning my own big/open data journey within an evaluation context. As such, I promise to be encouraging and honest about how we evaluation professionals can become, in the words of Michael Bamberger, more 'sufficiently conversant' with these new approaches and begin building them into our ever-transforming toolkits to enhance how we evaluate policies, programs and interventions.
With the above purposes in mind, the session will use a World Café approach and practical, real-world Australasian examples to discuss and share learnings about:
- what big data and open data are and are not, and the differences between the two approaches;
- evaluative situations in which the use of big/open data is and is not appropriate, framed by various considerations (e.g., evaluand, evaluation methodology, evaluation questions and criteria, stage in the evaluation's project cycle); and
- limitations of big/open data use in evaluations (e.g., data reliability and quality, ethics, consent) and management of these limitations.
Participants will also receive a guide to assessing big/open data quality within an evaluation context.