Video Pre-Processing

Video and audio will need to be synchronized prior to analysis. We separated a continuous video stream of a day of interactions into separate recordings corresponding to each “group” that interacted with the system. Research has shown that tracking and separating groups interacting with exhibits “in-the-wild” is quite complex [1]. We suggest two approaches to alleviate this challenge: 1) establish a fixed definition of what constitutes a group beginning or ending an interaction, and apply it consistently throughout the analysis, and/or 2) if you conduct interviews or questionnaires with certain groups, analyze those same groups in subsequent video analyses.
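Once group start/end boundaries have been logged (e.g., from a timestamped observation sheet), the day-long recording can be split mechanically. The sketch below is one possible implementation, assuming the boundaries are recorded in seconds from the start of the recording; the function and file names are hypothetical, and it emits standard ffmpeg stream-copy commands rather than re-encoding.

```python
from dataclasses import dataclass

@dataclass
class GroupInterval:
    """One group's interaction window, in seconds from the start of the recording."""
    group_id: str
    start_s: float
    end_s: float

def ffmpeg_split_commands(source: str, intervals: list[GroupInterval]) -> list[str]:
    """Build one ffmpeg command per group, copying streams without re-encoding."""
    cmds = []
    for iv in sorted(intervals, key=lambda iv: iv.start_s):
        if iv.end_s <= iv.start_s:
            raise ValueError(f"{iv.group_id}: end time must be after start time")
        cmds.append(
            f"ffmpeg -ss {iv.start_s} -to {iv.end_s} -i {source} "
            f"-c copy {iv.group_id}.mp4"
        )
    return cmds

# Example: two groups logged against a hypothetical day-long recording.
commands = ffmpeg_split_commands(
    "day1_full.mp4",
    [GroupInterval("group_01", 120.0, 415.0),
     GroupInterval("group_02", 430.0, 980.0)],
)
```

Stream-copying (`-c copy`) keeps splitting fast and lossless, which matters when clipping many groups out of hours of footage.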

When analyzing videos, we establish a fixed unit of analysis in order to avoid discrepancies resulting from subjective variability in both the unit of analysis (when the event is taking place) and the code (what is taking place). Each video is divided into a series of 10-second segments, and each code is given a ‘1’ if it occurred during that segment and a ‘0’ if it did not (this is called a “one-zero sampling” approach in the literature [2]). We have used both Excel spreadsheets and the coding software Atlas.ti to break videos down into 10-second segments—you may use your preferred method/software. For intellectual codes, which rely more on the content of verbal utterances (click here for more detail), we transcribe the dialogue in the videos and ascribe one code to each line of dialogue (using an Excel spreadsheet). We adopted this procedure because we have found in prior experience that multiple lines of dialogue often appear in a single 10-second interaction segment. For transcription, we typically have one analyst transcribe the videos and a second analyst check the transcription for mistakes before analyzing the video.
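The one-zero sampling step above can be sketched in a few lines of code. This is a minimal illustration, not the authors' tooling: it assumes coded events have already been logged as (code, timestamp-in-seconds) pairs, and it marks each 10-second segment 1 if the code occurred at least once in that segment and 0 otherwise.

```python
import math

def one_zero_matrix(events, duration_s, segment_s=10):
    """One-zero sampling: events is a list of (code, time_s) pairs.

    Returns {code: [0/1 per segment]}, where a segment is 1 if the
    code occurred at least once within it, regardless of how often.
    """
    n = math.ceil(duration_s / segment_s)
    codes = sorted({code for code, _ in events})
    matrix = {code: [0] * n for code in codes}
    for code, t in events:
        # Clamp events at the exact end of the video into the last segment.
        idx = min(int(t // segment_s), n - 1)
        matrix[code][idx] = 1
    return matrix

# Hypothetical example: a 25-second clip with two codes.
m = one_zero_matrix(
    [("pointing", 3.0), ("pointing", 7.5), ("talking", 12.0)],
    duration_s=25,
)
# "pointing" occurs twice in segment 0 but is still coded as a single 1.
```

The same rows and columns map directly onto a spreadsheet: one row per code, one column per 10-second segment.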

References

  1. Florian Block, James Hammerman, Michael Horn, Amy Spiegel, Jonathan Christiansen, Brenda Phillips, Judy Diamond, E. Margaret Evans, and Chia Shen. 2015. Fluid grouping: Quantifying group engagement around interactive tabletop exhibits in the wild. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 867–876.
  2. Peter K. Smith. 1985. The reliability and validity of one-zero sampling: Misconceived criticisms and unacknowledged assumptions. British Educational Research Journal 11, 3: 215–220.