FULL TITLE: Measuring Digital Competencies for Mobile Learning: Exploring Relationships between Survey and Performance Data
This observational study explores relationships between self-reported digital experience and observed activity on a mobile device. Performance activity is recorded using four streams of synchronized audio-video data and automated facial-expression detection, and analyzed using Noldus FaceReader and The Observer XT. [Keywords: GTCU, digital competency, technology competency and use, self report, experience, confidence, self-efficacy | Project Code: EILAB-S1]
Although real-time observations of human-computer interaction can provide rich data on the processes and outcomes of people's skill sets, they are time-consuming and expensive to conduct (Litt, 2013, p. 624). Researchers therefore need reliable self-report instruments, which are particularly important for data collection "in the field," over lengthy periods of time, and across large samples. A significant challenge, however, is comparing self-report data to observed performance in order to understand the relationship between the two. This is especially important given the uncertain validity of self-reported digital ability claims, owing to overestimation, underestimation and contextual variability relative to observed performance (Hargittai, 2005; Hargittai & Shafer, 2006; Kuhlemeier & Hemker, 2007; Talja, 2005). The Internet Skills Scale (ISS) (Van Deursen, Helsper, & Eynon, 2015), developed on a significant base of observational and statistical data (Van Deursen, Van Dijk, & Peters, 2012; Van Dijk & Van Deursen, 2014), represents one solution to the problem. This study examines the self-report instrument of the General Technology Competency and Use (GTCU) framework, which collects self-reported experience and confidence (not direct ability) claims. It compares these claims to goal-oriented, procedurally ill-defined activities performed on a mobile device, rather than well-defined procedural tasks performed on a desktop computer (Van Deursen & Van Dijk, 2010).
This study examines relationships between two data sets:
1. An individual’s self-reported frequency and confidence of use ratings (FCRs), collected via an abbreviated version of the GTCU self-report instrument and categorized according to three GTCU orders: Social, Information and Epistemological.
2. Assessed performance on a randomized, goal-directed, task-based and procedurally ill-defined scenario, categorized by the same orders, using a mobile device in a lab setting.
The research questions are:
1. How can technology use of high competence and low competence be operationalized in a manner consistent with the GTCU framework? To address this question, we will utilize five conjectured Performance Assessment Indicators (PAIs) as defined in the Assessment Methodology document. (We may identify further indicators as we engage in video analysis of participant activity.)
2. Does the GTCU self-report instrument function as a reliable predictor of technology use of high and low competence? That is, in each of the three GTCU orders, are there significant correlations between an individual’s self-reported FCRs and their measured/assessed performance, such that those with high FCRs achieve high performance scores and those with low FCRs receive low performance scores?
3. Which of the two GTCU survey measures, frequency of use (which relates to experience) or confidence of use (which relates to self-efficacy or ability perception), exhibits the most significant correlations with measured/assessed performance in each of the three GTCU orders?
4. Which specific GTCU self-report items, in each order of competency, function most reliably as predictors for measured/assessed performance in that order?
5. If frequency and confidence ratings show meaningful correlations with measured/assessed performance for an activity on a particular device (e.g., searching for music on a computer), do they also correlate with performance of the same activity on a different device (e.g., searching for music on a gaming console)? (This question relates to the phenomenon of high confidence but low experience ratings reported for certain tasks on certain devices. The assumption is that, at least for certain tasks, abilities an individual acquires on one type of device are transferable to a different type of device.)
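The correlation analysis implied by question 2 can be sketched as a rank correlation (Spearman's rho, appropriate for ordinal Likert-style ratings) between FCRs and performance scores within one GTCU order. The sketch below is illustrative only; all variable names and data values are hypothetical and are not drawn from the study.

```python
# Hypothetical sketch: correlating self-reported frequency/confidence
# ratings (FCRs) with assessed performance scores within one GTCU order.
# All names and values below are illustrative, not from the study.

def ranks(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the ranks."""
    return pearson(ranks(xs), ranks(ys))

# Illustrative data: averaged 1-5 FCRs and performance scores
# for six hypothetical participants, Social order only.
fcr_social = [4.2, 2.1, 3.5, 4.8, 1.9, 3.0]
perf_social = [78, 45, 60, 85, 50, 62]
rho = spearman(fcr_social, perf_social)  # rho ≈ 0.886 for these values
```

In a full analysis, the same computation would be repeated per order (Social, Information, Epistemological) and per measure (frequency vs. confidence), with significance testing added; a statistics package such as SciPy provides an equivalent `spearmanr` with p-values.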
Desjardins, F. J. (2005). Information and communication technology in education: A competency profile of francophone secondary school teachers in Ontario. Canadian Journal of Learning and Technology / La revue canadienne de l’apprentissage et de la technologie, 31(1), 1-14.
Desjardins, F. J., Lacasse, R., & Belair, L. M. (2001). Toward a definition of four orders of competency for the use of information and communication technology (ICT) in education. Paper presented at Computers and Advanced Technology in Education.
Desjardins, F. J., & Peters, M. (2007). Single-course approach versus a program approach to develop technological competencies in pre-service language teaching. In M.-A. Kassen, L. Lavine, K. Murphy-Judy, & M. Peters (Eds.), Preparing and developing technology proficient L2 teachers (pp. 3-21). Texas, USA: Texas State University.
Hargittai, E. (2005). Survey measures of web-oriented digital literacy. Social Science Computer Review, 23(3), 371-379.
Hargittai, E., & Shafer, S. (2006). Differences in actual and perceived online skills: The role of gender. Social Science Quarterly, 87(2), 432-448.
Kuhlemeier, H., & Hemker, B. (2007). The impact of computer use at home on students’ Internet skills. Computers & Education, 49(2), 460-480.
Litt, E. (2013). Measuring users’ internet skills: A review of past assessments and a look toward the future. New Media & Society, 15(4), 612-630.
Talja, S. (2005). The social and discursive construction of computing skills. Journal of the American Society for Information Science and Technology, 56(1), 13-22.
Van Deursen, A. J. A. M., Helsper, E. J., & Eynon, R. (2015). Development and validation of the Internet Skills Scale (ISS). Information, Communication & Society, 1-20. doi:10.1080/1369118X.2015.1078834
Van Deursen, A. J., & Van Dijk, J. A. (2010). Measuring internet skills. International Journal of Human-Computer Interaction, 26(10), 891-916.
Van Deursen, A. J., Van Dijk, J. A., & Peters, O. (2012). Proposing a survey instrument for measuring operational, formal, information, and strategic Internet skills. International Journal of Human-Computer Interaction, 28(12), 827-837.
Van Dijk, J. A., & Van Deursen, A. J. (2014). Digital skills: Unlocking the information society. New York: Palgrave Macmillan.