
ARP – Blog Post 4: OBSERVING

Analysing practice: thematic and artefact-based interpretation

The analytical stage of this action research project required a significant shift in my thinking. While data collection felt relational and dialogic, analysis initially introduced a sense of uncertainty. I was unsure how to move from rich qualitative material toward meaningful interpretation without reducing participants’ voices or oversimplifying complexity. This discomfort prompted reflection on my own analytical habits and disciplinary background, and became an important part of the research process itself.

The analysis followed a reflexive thematic analysis approach (Braun and Clarke, 2006; 2021), in which themes are understood not as truths embedded within the data, but as interpretive patterns constructed through sustained engagement, reflexivity, and theoretical positioning. As a practitioner-researcher working within the same institutional structures under investigation, my interpretations were shaped by professional proximity and shared language. Rather than attempting neutrality, the analysis treats subjectivity as an analytic resource, while remaining attentive to its limitations.

My initial sense-making process drew on affinity mapping, a method familiar from my background in branding and design. Martin and Hanington (2012) describe affinity mapping as a way of organising complexity through visual clustering rather than predetermined categories. Using Miro, I worked with digital post-it notes to group interview excerpts and questionnaire responses, allowing patterns, repetitions, and tensions to surface visually. At this stage, the emphasis was not on defining themes, but on orientation – understanding what was present in the data before deciding what it might come to represent. Short analytic memos were written throughout this process to capture emerging interpretations, questions, and moments of tension.

Engaging with Braun and Clarke’s later work (2021) helped reframe my uncertainty. Rather than seeking stability through categorisation, I began to understand ambiguity as intrinsic to qualitative enquiry. Thematic analysis is not linear or cumulative; it is recursive, requiring repeated movement between data, interpretation, and reflection. This perspective allowed me to work with provisional meanings rather than prematurely fixing conclusions.

As analysis developed, codes were reorganised, merged, and at times abandoned entirely. Three interrelated themes became increasingly visible. The first concerned orientation and coherence: not simply navigational clarity, but students’ trust in the learning journey and their confidence that focused tasks sat within a meaningful pedagogical arc. The second addressed workarounds as pedagogical labour, highlighting how educators compensate for platform limitations through additional effort. The third revealed ongoing tensions between pedagogical intention and institutional structure. These themes aligned closely with the project’s research questions, addressing both educator practice and digital design constraints.

Unlike in linear coding models, themes were not repeatedly merged or hierarchically restructured. Instead, emphasis shifted through depth of engagement, as certain patterns became increasingly prominent through recurrence, intensity, and relevance to the research questions. One theme in particular – navigation, structure and cognitive load – emerged as having the strongest bearing on the overall enquiry. It extended beyond usability concerns to encompass educators’ observations of student disorientation, fragmented learning journeys, and the emotional labour required to compensate for unclear systems. Its prominence informed the direction of subsequent analysis and became central to understanding why platform design mattered pedagogically rather than merely technically.

To explore this further, I conducted an artefact review of Moodle alongside alternative platforms referenced by participants, including Miro, Padlet, and Notion. The objective was not to evaluate tools comparatively, but to better understand why colleagues consistently pivoted away from Moodle in their teaching practice. Examining navigation depth, hierarchy, spatial organisation, and visual structure helped trace how pedagogical intentions were either supported or constrained by interface design. This artefact-based analysis allowed participant accounts to be examined materially, revealing how platform structure directly shaped cognitive load and teaching strategies.

Following theme development, a light quantitative weighting was applied to indicate relative emphasis across the dataset. This was not intended to produce statistical claims, but to support reflective judgement by visualising which themes carried the greatest analytic weight within the project. Importantly, this stage also revealed absence. While the project was motivated by concerns around student experience, learner voice remained indirect, mediated through educator accounts. Recognising this limitation became part of the analysis itself, reinforcing the ethical and institutional boundaries shaping the enquiry.
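To make the logic of this step concrete: an indicative weighting of this kind amounts to tallying coded excerpts per theme and per source, then rendering the totals as relative marks rather than statistics. The short Python sketch below illustrates that tallying; the theme labels echo this project’s themes, but the excerpts and counts are entirely invented for illustration – the project’s actual matrix (Figure 7) was produced through reflective dot weighting, not computation.

```python
from collections import defaultdict

# Hypothetical coded excerpts as (source, theme) pairs.
# These are illustrative stand-ins, not the project's data.
coded_excerpts = [
    ("interview_1", "navigation, structure and cognitive load"),
    ("interview_1", "workarounds as pedagogical labour"),
    ("interview_2", "navigation, structure and cognitive load"),
    ("questionnaire", "orientation and coherence"),
    ("questionnaire", "navigation, structure and cognitive load"),
    ("artefact_review", "pedagogical intention vs institutional structure"),
]

# Tally how often each theme is coded within each data source.
matrix = defaultdict(lambda: defaultdict(int))
for source, theme in coded_excerpts:
    matrix[theme][source] += 1

# Render each theme's relative emphasis as dots (most prominent first),
# mirroring a dot-weighting display rather than a statistical claim.
for theme, sources in sorted(matrix.items(), key=lambda kv: -sum(kv[1].values())):
    total = sum(sources.values())
    print(f"{theme}: {'●' * total} ({total} coded excerpts)")
```

The output is deliberately qualitative in character: it shows which themes carry more weight across sources without implying frequency counts are meaningful in themselves.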

Through this process, I came to understand analysis in action research not as the pursuit of certainty, but as the development of confident, situated judgement. Allowing interpretation to remain reflexive and provisional strengthened both the findings and my understanding of how pedagogical meaning is constructed within digital learning environments.

References
Braun, V. and Clarke, V. (2006) ‘Using thematic analysis in psychology’, Qualitative Research in Psychology, 3(2), pp. 77–101.
Braun, V. and Clarke, V. (2019) ‘Reflecting on reflexive thematic analysis’, Qualitative Research in Sport, Exercise and Health, 11(4), pp. 589–597.
Braun, V. and Clarke, V. (2021) Thematic analysis: a practical guide. London: Sage.
Martin, B. and Hanington, B. (2012) Universal methods of design: 100 ways to research complex problems, develop innovative ideas, and design effective solutions. Beverly, MA: Rockport.

Figure 1: Photo of analogue working through anonymised questionnaire responses. Printed survey data were manually annotated using colour coding and handwritten notes to support close reading and early pattern recognition. This process helped surface differences between concise written responses and richer interview narratives, informing subsequent thematic comparison (Huber, 2025).

Figure 2: Photo of printed semi-structured interview transcript 1. Pages were annotated through colour highlighting, underlining, and margin notes to support detailed engagement with participant language. This stage functioned as an initial sense-making process prior to digital thematic clustering (Huber, 2025).

Figure 3: Photo of annotated semi-structured interview transcript 2. Working physically with the transcript supported reflexive interpretation and helped prepare the data for later thematic grouping and indicative weighting (Huber, 2025).

Figure 4: Screenshot of digital affinity mapping conducted in Miro, showing clustered themes derived from interview transcripts and questionnaire responses. Colour-coded post-it notes were used to support visual sense-making and the identification of recurring patterns (Huber, 2025).

Figure 4 (detail): Detail view of the digital affinity map developed in Miro, focusing on the theme navigation, structure and cognitive load. Post-it notes were rearranged to support closer inspection of recurring issues related to platform navigation, visual hierarchy, and students’ ability to orient themselves within the learning journey (Huber, 2025).

Figure 5: Screenshot of artefact analysis comparing Moodle with alternative digital platforms (Miro, Padlet and Notion). The visual mapping highlights how differences in navigation structure, search functionality, and spatial organisation relate to educators’ reported use of supplementary tools (Huber, 2025).

Figure 6: Screenshot of a colour-coded interview transcript supporting the quantification of qualitative themes. Transcript 1 was annotated with the help of AI using colour markers aligned to established thematic categories. This stage functioned as a preparatory step for comparative weighting rather than thematic development (Huber, 2025).

Figure 7: Screenshot of an indicative thematic weighting matrix across interviews, questionnaire responses, and artefact analysis using AI. Dot weighting represents relative prominence of themes and depth of engagement rather than numerical frequency (Huber, 2025).