Creating a New Paradigm for Simulation-Based Training by Combining Augmented Reality, Real-Time Cognitive Evaluation and Artificial Intelligence
On June 3rd, 2022, a multidisciplinary team from Queen’s University presented their research at the IDEaS MarketPlace Conference held at the Infinity Convention Centre in Ottawa. The team, composed of researchers in Engineering, Medicine, AI, Psychology and Education, set out to develop a dynamically adaptive simulation system that modulates an augmented reality trauma simulation in response to the level of cognitive load experienced by the participant. The research has important implications for designing dynamically adaptive simulation-based training and learning environments, as well as for advancing human-computer interfaces more generally.
Korey Thomsom (Programmer), Dr. Dan Howes (Trauma Physician) and Dr. Ali Etemad (ECE) at the Ingenuity Labs booth.
Advancing Human-Computer Interfaces: Cognitive load, which can be characterized as the ratio of the cognitive resources an individual (or team) uses during an activity to the cognitive resources available, can act as a reliable index of expertise. Generally, novices in a given domain experience higher levels of cognitive load during task activity, generated by the need for conscious control, than experts, who have a higher degree of task “automaticity”. The level of cognitive load experienced by a learner matters to learning outcomes: if a learner is cognitively overwhelmed, little to no learning takes place; likewise, if a learner is cognitively underwhelmed, the learner disengages and, again, little to no learning takes place. Capturing and indexing cognitive load in real time gives human-computer interfaces (HCIs) the ability to adapt dynamically to the cognitive and affective state of the user, potentially modulating a broad range of parameters to optimize performance.
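The ratio described above can be written compactly as follows (an illustrative formalization with our own symbols, not notation taken from the team’s work):

```latex
% Illustrative only: R_used and R_available are our labels, not the team's.
\[
  \mathrm{CL} = \frac{R_{\text{used}}}{R_{\text{available}}},
  \qquad \mathrm{CL} \approx 1 \text{ suggests overload},
  \qquad \mathrm{CL} \ll 1 \text{ suggests disengagement}
\]
```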
Capturing and Indexing Cognitive Load: The team captured highly granular physiological data, known to be strongly correlated with cognitive load, from four sources: eye tracking (pupillometry, saccades, blinks and gaze direction), electrocardiography (ECG), electroencephalography (EEG), and electrodermal activity (EDA). The data were captured during the execution of two tasks: the NASA MATB-II, a computer-based multitasking environment, and an immersive driving simulation. Task complexity in both environments was programmatically modulated, and the physiological data were captured in real time alongside the cognitive load levels reported by the participants during task execution. Machine learning models were then developed to index cognitive load from this real-time data, using a range of techniques both within and between domains.
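A pipeline of this kind, physiological feature windows in, discrete cognitive-load levels out, can be sketched as follows. This is a minimal illustration on synthetic data: the feature set, the number of load levels, and the choice of a random-forest classifier are our assumptions, not details reported by the team.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature windows: each row holds summary statistics derived from
# the four signal sources (eye tracking, ECG, EEG, EDA). The real feature set
# is not specified in the article, so these are placeholder Gaussian features.
n_windows, n_features = 600, 12
X = rng.normal(size=(n_windows, n_features))

# Synthetic labels for three load levels (0 = low, 1 = medium, 2 = high),
# made to depend on a few features so the model has structure to learn.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 2] > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Held-out accuracy on windows from the same task domain.
accuracy = clf.score(X_test, y_test)
print(f"within-domain accuracy: {accuracy:.2f}")
```

Cross-domain evaluation (e.g. training on MATB-II windows and testing on driving-simulator windows) would follow the same pattern with the train and test sets drawn from different tasks.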
An example of the augmented reality display used to provide highly realistic simulated patient trauma presentations.
Despite the significant challenges that COVID-19 posed to in-person research, the team generated sufficient data to demonstrate approximately 70% accuracy in within-domain testing and up to 68% accuracy between domains, matching current state-of-the-art results.
Modulating Augmented Reality Trauma Simulations: A second strand of the research was developing a compelling, modulatable augmented reality simulation environment that can be used either stand-alone or as an overlay on traditional medical teaching mannequins. These augmented reality overlays bring the “patient” to life, overcoming the “believability” problem many learners experience with mannequins alone, whose lack of realism can negatively impact engagement and learning outcomes.
Next Steps: The next steps for the research team include refining the cognitive load indexing mechanisms and operationalizing the feedback mechanism to fully automate augmented reality modulation based on cognitive state.
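A feedback mechanism of the kind described might take the shape of the following minimal sketch: estimated cognitive load in, adjusted simulation difficulty out. The target band, step size, and function name are hypothetical illustrations, not the team’s design.

```python
# Hypothetical productive cognitive-load band (normalized 0..1).
TARGET_LOW, TARGET_HIGH = 0.4, 0.7

def adjust_difficulty(current_difficulty: float, estimated_load: float) -> float:
    """Nudge simulation complexity toward the productive cognitive-load band."""
    step = 0.1
    if estimated_load > TARGET_HIGH:   # learner overwhelmed: simplify the scenario
        return max(0.0, round(current_difficulty - step, 2))
    if estimated_load < TARGET_LOW:    # learner underwhelmed: intensify the scenario
        return min(1.0, round(current_difficulty + step, 2))
    return current_difficulty          # inside the band: hold steady

print(adjust_difficulty(0.5, 0.9))  # → 0.4
```

In a fully automated system, `estimated_load` would come from the real-time cognitive load index each cycle, and the returned difficulty would drive the augmented reality scenario parameters.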