
A reflective journal as learning process and contribution to quality and validity in Interpretative Phenomenological Analysis

Sarah Vicary, Alys Young and Stephen Hicks

Abstract

Using selected, contemporaneous illustrations from the reflective journal of a doctoral student undertaking data analysis for the first time, this article examines the relationship between journaling as a learning process when undertaking computer assisted qualitative data analysis and establishing quality and validity in Interpretative Phenomenological Analysis (IPA). The writing of the journal is shown both to enact some potential validity criteria (e.g. in producing an audit trail) and to record and reflectively prompt the process of learning, interpretation and bracketing, thus evidencing transparency. By using a journal inside the software package, alongside the stages of the IPA analysis carried out within the same package, it is argued that quality and validity become dynamic, not static, constructs. These constructs are intimately linked to the researcher-learning-process and permit a critical stance to be taken. This is an Author Accepted Version of a manuscript published in Qualitative Social Work (2016). Copyright Sage.

Keywords

Learning process, reflective journal, quality and validity, computer assisted qualitative data analysis, interpretative phenomenological analysis (IPA)

Introduction

Interpretative Phenomenological Analysis (IPA), a qualitative research approach initiated and developed primarily in the field of health psychology (Smith, 1996; Smith et al., 2009), has attracted significant debate about what might constitute quality and validity within its methodological framework (Chamberlain, 2011; Shaw, 2011; Smith, 2011a; Smith, 2011b; Todorova, 2011).
Interest in quality and validity has centred on all stages of the research process: sample identification, data collection, data analysis (interpretation) and representation in print (Brocki et al., 2006; Gee, 2011; Larkin et al., 2006). At issue is whether and how criteria might be developed particular to IPA, or whether the debates about quality and validity remain the same as for all qualitative research methodologies (Guba and Lincoln, 2005; Lincoln and Guba, 1985; Seale and Silverman, 1997; Robson, 2002; Rolfe, 2006). This article focusses on how a student tackled the issue of quality and validity in IPA data analysis while carrying out an IPA project for the first time as part of a PhD. It examines the activity of journaling (i.e. the use of a reflective, researcher-created, regular written log) as a learning process when undertaking data analysis. Specifically, by using a journal inside a data management computer software package (in this case QSR NVivo 10) and alongside the stages of the IPA analysis, also conducted within the software package, we argue that quality and validity become dynamic, not static, constructs intimately linked to the researcher-learning-process. We examine the extent to which journaling enacts criteria by which ‘quality’ might be defined and recognised. We show how its process as a student learning tool meshes with the double hermeneutic essential to the IPA approach, whilst holding it up to critical examination. Excerpts from the researcher’s journal, presented in italics, are used throughout to illustrate.

Quality and validity in data analysis

Numerous authors attempt to produce criteria for assessing the quality and validity of qualitative research, both generically (Hammersley, 2008) and across a range of disciplines such as social work (Barusch et al., 2001), nursing (Rolfe, 2006) and psychology (Yardley, 2000).
Hammersley’s précis defines a contrasting spectrum: at one end, a finite set of observable and universal indicators; at the other, a list of considerations agreed in local circumstances. Hammersley’s own preference for methodological approaches such as IPA is the latter. He argues that the criteria which need to be taken into account come about as part of the judgement process, are used in particular contexts, and are cyclical and living. Studies that come later, he suggests, will judge their own quality and validity against similar previous studies, but will do so by interpreting and sometimes re-interpreting criteria according to the situation (Hammersley, 2008, p. 160). Initially, Smith and colleagues applied to IPA the criteria for measuring quality and validity outlined in Yardley’s four broad principles: sensitivity to context; commitment and rigour; transparency and coherence; and finally impact and importance (Yardley, in Smith, 2009, pp. 180-183). More recently this applicat
