Emotion, Cognition, Behavior and the Brain Science of Learning
How Technology Can be Leveraged to Optimize Training
by Todd Maddox, Ph.D. and Charles Nduka, M.D.
Continuous learning is the rule, not the exception, in modern society. It is more common for individuals to hold multiple distinct jobs over their career than to hold the same job from initial hire to retirement. In addition, technology is changing at such a rapid pace that new learning is constant even when one's job does not change. Continuous learning is the norm, whether one is learning a new technical skill, developing effective interpersonal skills, working with new software, or getting a grip on a new version of existing software.
Learning is a complex brain-based process. One’s ability to learn is directly affected by emotional, motivational, cognitive, and behavioral considerations. The best way to optimize learning is to understand the complex interplay among these factors. We believe that emerging technologies, along with a detailed understanding of brain science, provide the mechanism for taking training to the next level.
In this article, we briefly review the brain science behind learning and highlight some of the emerging technologies that we believe hold great potential for enhancing it. This is by no means a comprehensive review; instead, we focus on technologies that we find innovative and have experience with.
Brain Basis of Learning
Learning tasks can be loosely divided into two types: cognitive skills and behavioral skills. Cognitive skills include memorizing the steps required to complete a task, learning a set of rules and regulations, or learning a new software tool. Behavioral skills come in at least two forms. The first is often called soft skills or people skills. These involve the ability to behave in a way that shows empathy, embraces diversity, and minimizes unconscious biases. People skills are about how we treat, and converse effectively with, other people; they involve verbal as well as non-verbal behavior. The second form is technical motor skills. These range from the ability to troubleshoot and fix a piece of equipment, such as a small engine, to the ability to perform complex motor tasks such as flying a fighter jet or performing neurosurgery.
An extensive body of scientific and neuroscientific research (much of it from the first author’s research laboratory) suggests that the human brain contains at least two distinct systems that are recruited during learning.
Cognitive Skills Learning System
The cognitive skills learning system in the brain is comprised of the prefrontal cortex and the medial temporal lobes and, as the name suggests, evolved to learn cognitive skills. Learning within this system requires significant cognitive energy: one must focus attention on the task and use working memory to store and process information in the interest of transferring it into long-term memory. Although cognitively effortful, learning in this system is passive; it involves observing, learning by reading or watching, and mentally rehearsing the information. For example, if you are studying the checklist of steps required to insert a central line into a hospital patient, you would read a description of each step, store and process it in working memory, and continue to mentally rehearse the steps until you can easily recall them.
Behavioral Skills Learning System
The behavioral skills learning system in the brain principally resides in the basal ganglia and, as the name suggests, evolved to learn behavioral skills. Learning in this system involves gradual, incremental change that iterates toward fast and accurate generation of appropriate behaviors. Interestingly, learning in this system is not cognitively demanding; in fact, some of the first author's research suggests that "overthinking" behavioral skills can be detrimental to learning. Learning in this system is active: it involves learning by doing and physical repetition. Without going into the detailed neuroanatomy, behavioral skill learning relies critically on interactivity in the form of real-time, immediate corrective feedback. You generate a behavior and receive feedback (literally within a few hundred milliseconds, no more). If the behavior is rewarded, it becomes more likely to occur the next time you are in the same situation; if it is punished, it becomes less likely. This interactive back-and-forth in real time is what leads to behavior change.
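The reward-and-punishment loop described above is essentially a reinforcement-learning update. As a minimal sketch (the behavior names, learning rate, and reward values here are our own illustrative choices, not from any specific training system):

```python
import random

# Behavior "strengths" that feedback will reshape over repeated trials.
actions = {"make_eye_contact": 0.5, "avert_gaze": 0.5}
LEARNING_RATE = 0.1

def choose_action():
    """Pick a behavior with probability proportional to its current strength."""
    total = sum(actions.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for name, strength in actions.items():
        cumulative += strength
        if r <= cumulative:
            return name
    return name  # fallback for floating-point edge cases

def update(action, reward):
    """Strengthen rewarded behaviors, weaken punished ones (reward is +1 or -1)."""
    actions[action] = max(0.01, actions[action] + LEARNING_RATE * reward)

# Simulated training: eye contact is rewarded, gaze aversion is punished.
for _ in range(100):
    a = choose_action()
    update(a, +1 if a == "make_eye_contact" else -1)

# After repeated feedback, the rewarded behavior dominates.
assert actions["make_eye_contact"] > actions["avert_gaze"]
```

The key property mirrors the text: no explicit reasoning is involved, only trial-by-trial strengthening or weakening of behavior driven by immediate feedback.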
When we are interested in people skills, the behaviors in question might be to make eye contact or to display interest through a head nod, body positioning, and positive facial expressions. When the learner evidences these behaviors, the recipient might do the same, which serves as a reward and increases the likelihood that the learner repeats the behavior. On the other hand, if the learner averts their gaze, shakes their head, or shows indifference or negativity in their facial expressions and body positioning, the recipient might look puzzled or sad, which serves as a punishment and decreases the likelihood that the learner repeats the behavior.
When we are interested in technical skills, the behaviors in question might involve using surgical tools to perform some fine motor task. In this case, effectively performing each step is rewarding and increases the likelihood that the learner uses their muscles in the same way next time. On the other hand, if a step fails because of poor positioning of one of the tools, that failure serves as a punishment and decreases the likelihood that the learner uses their muscles in the same way next time.
Optimized Training Through VR and Other Emerging Technologies
The goal of any training program is to impart the relevant information to the learner effectively and efficiently. Given the critical differences between learning systems, the first step is to determine which system mediates the task to be learned; this determines how training should proceed. Once the appropriate learning system is identified, one can use innovative technologies to optimize and personalize the training. We offer a few examples.
Cognitive Skill Learning: Cognitive skills are often learned quite well with text and slide shows. If you need to learn a set of rules and regulations, this is a good method of delivery. At its core, cognitive learning requires engagement. The term engagement is used loosely to describe many processes, such as website visits, responses to social media posts, and interactions in the physical world. More precisely, engagement is the product of attention and a physiological, emotional, cognitive, or behavioral response, and these parameters are measurable using sensing technologies. Without attention there is no engagement, and the amount of engagement is directly related to the strength of the physiological, cognitive, emotional, or motor response elicited. Cognitive skill learning relies heavily on attention and processes such as working memory, which are measurable with technologies such as functional MRI that reveal changes in brain activity during active processing.
Because acquiring a cognitive skill requires focused attention and effective use of working memory, unobtrusive technology for monitoring engagement and attention would be ideal: training can proceed when engagement and attention are high, and the learner can be given a break when they are low. A number of technologies exist for measuring fatigue. The majority use computer vision to track eye movements, as in driver-monitoring systems. A few companies are using wearable technologies, for example to count blinks; however, a good body of research shows that simply counting the frequency of blinks is unreliable. Emteq is focused on measuring multiple biometric and behavioral signals from the user's face by incorporating sensors into the glasses frame. The advantage of enabling unconstrained fatigue monitoring in the field is obvious, especially for security personnel, those handling hazardous materials, and those in high-stakes professions such as surgery.
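The definition of engagement above (attention multiplied by response) suggests a simple gating rule for adaptive training. The sketch below is purely illustrative; the signal scales and the pause threshold are hypothetical, not taken from any particular monitoring product:

```python
def engagement_score(attention, response):
    """Engagement as the product of attention (0-1) and the magnitude of the
    physiological/cognitive/emotional/motor response (0-1)."""
    return attention * response

def training_should_pause(attention, response, threshold=0.3):
    """Pause training when engagement drops below a (hypothetical) threshold."""
    return engagement_score(attention, response) < threshold

# An attentive learner with a strong response keeps training going.
print(training_should_pause(0.9, 0.8))  # False (engagement 0.72)

# A fatigued, inattentive learner triggers a break.
print(training_should_pause(0.2, 0.5))  # True (engagement 0.10)
```

Note the multiplicative form captures the claim in the text: if attention is zero, engagement is zero no matter how strong the response.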
At other times, cognitive skills are better learned using visual representations rather than text or slide shows. In these cases, virtual reality (VR) or augmented reality (AR) solutions are ideal. There are numerous examples in which a technician learns to identify the knobs, dials, and controls on a piece of machinery by visually inspecting it with a VR or AR device that labels the parts in real time. Virtual anatomy and physiology training tools, in which a learner is presented with a dynamic 3D representation of the body, are another example. From a brain science perspective, this approach facilitates the development of accurate visual mental representations and places a much lighter load on attention and working memory.
Behavioral/Soft Skills: As outlined above, soft or people skills involve effective interpersonal interaction and real-time communication. Critically, these interactions encompass both verbal and non-verbal behavior. How many times have you been in a situation where a person says one thing verbally and something completely different non-verbally? Effective people skills require convergence of the verbal and non-verbal message. One of the authors (CN) is involved in teaching medical students and surgical trainees. A vital skill for any clinician is the ability to inspire confidence in patients and to project empathy. At present, very little emphasis is placed on this, and where attempts are made to formally assess people skills, for example by hiring actors as mock patients, the costs are very high. People skills are expensive to train, and the effectiveness of training is difficult to measure objectively. Training with technologies such as interactive VR or AR provides a scalable solution. A learner can be placed in a variety of scenarios to enhance generalization and transfer, and can even be placed in a situation in which they are someone else. For example, a middle-aged Caucasian male can experience a day in the life of an African American female, as demonstrated by Jeremy Bailenson's lab.
Technologies are emerging for measuring and training the non-verbal element of people skills, in particular facial expressions. The measurement of facial expressions as a way to monitor cognitive performance is a well-recognized research strategy. The best teachers and trainers understand when their students are confused or frustrated and adapt their teaching appropriately. A number of researchers have applied machine learning techniques to make predictions about user experience. Such systems would enable the design of human-centered tools that adapt to a user's psychological state, generate responses, and adjust instruction accordingly. Attempts have also been made to digitize facial expressions to enable human-machine, or human-virtual human, interaction. Advances in sensor technologies and artificial intelligence have enabled Emteq to develop two innovative solutions for VR and AR. The VR system reads surface muscle signals (electromyography) via multiple pads embedded in the foam padding of the headset. This allows facial expressions normally hidden by the HMD to be read, interpreted, and transmitted into a virtual environment. The elements of face-to-face interaction, such as attention, mirroring of expressions, conversational regulators, and gestures, can all be measured directly from the face. Emteq has also demonstrated a prototype glasses-based expression reader for use in AR. Non-contact sensors enable lightweight sensing of upper facial expressions, which contain the most salient information about emotions such as anger, frustration, and confusion, and could support enterprise uses. For example, where an engineer in the field is taking instruction via AR from a remote expert, the system could advise the instructor that the engineer does not fully understand or is getting stressed or frustrated.
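To make the EMG-to-expression idea concrete, here is a toy rule-based classifier. The muscle names and thresholds are invented for illustration and do not reflect Emteq's actual sensor layout or algorithms (real systems use machine learning over many channels):

```python
# Hypothetical mapping from normalized facial-muscle activations (0-1)
# to coarse expressions. Values and rules are illustrative only.
EXPRESSION_RULES = {
    "smile":    {"zygomaticus": 0.6},  # cheek muscle contracted
    "frown":    {"corrugator": 0.6},   # brow muscle contracted
    "surprise": {"frontalis": 0.6},    # forehead raised
}

def classify_expression(emg):
    """Return the first expression whose muscle activations exceed threshold."""
    for expression, rules in EXPRESSION_RULES.items():
        if all(emg.get(muscle, 0.0) >= t for muscle, t in rules.items()):
            return expression
    return "neutral"

sample = {"zygomaticus": 0.8, "corrugator": 0.1, "frontalis": 0.2}
print(classify_expression(sample))  # prints: smile
```

A remote-assistance system like the AR scenario above could, for instance, flag sustained "frown" classifications to the instructor as a sign of confusion or frustration.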
Behavioral/Technical Skills: VR flight simulators have been used for many years to train military pilots. These systems incorporate realistic visual input as well as audio, tactile, and kinesthetic feedback. Commercial applications are growing in a number of sectors: surgical training applications now combine VR with realistic haptic feedback, and numerous examples exist in the vocational skills sector as well. In many ways, VR and AR solutions in this arena are more challenging because visual input alone is not enough; the auditory and, especially, tactile components are critical. Even so, this is a sector that will reap huge benefits from the technology. Training surgeons with cadavers, for example, is expensive and ineffective, and the situation at NASA is similar.
Immersive technologies hold huge promise for improved learning of technical, cognitive, and emotional skills. An understanding of the basis of skill acquisition, and of the importance of real-time feedback, is key to the success of current and future innovations. The internet democratized knowledge by making information easy to access via the web. The promise of immersive technologies is the democratization of skills. The old model of passive learning, accessible to the few, will soon be replaced by active experiences personalized to the individual's learning style, regardless of location.
Todd Maddox, Ph.D. is the CEO and Founder of Cognitive Design and Statistical Consulting, a Contributing Analyst at Amalgam Insights, Inc, and the Science, Sports and Training Correspondent at Tech Trends. His passion is to apply his 25 years of scientific and neuroscientific expertise, gained by managing a large human learning and performance laboratory, to help build better training products in a broad array of sectors. These include soft, hard, and technical skills in the corporate, medical, and educational training sectors. Todd also works with elite and amateur athletes to speed learning and enhance muscle memory. His scientific research shows that the learning of different skills is mediated by different learning systems in the brain, each with distinct optimized training procedures. Todd received his Ph.D. in Quantitative and Cognitive Psychology at the University of California, Santa Barbara, followed by a two-year post-doctoral Research Fellowship at Harvard University. He then embarked on a 25-year academic research career, achieving status as a leader in the fields of human learning and memory, with an emphasis on understanding the computational interplay among motivation, personality, and incentive structures and their effects on optimized learning, memory, and training. Todd is fascinated by the brain and the brain basis of all behavior. He has published nearly 200 peer-reviewed scientific articles and has received a number of federal grants. Todd is especially interested in applying his optimized training expertise to the emerging technologies of VR/AR/MR, as well as eLearning, and he is currently writing a book focused on bringing the science of optimized training into the commercial sector.
Dr. Charles Nduka, M.D. is a practicing surgeon and co-founder and Chief Scientific Officer at Emteq (www.emteq.net). Charles' first foray into VR dates back to 1994, as part of a project on surgical training using first-generation VR. He is a facial surgeon and internationally recognised expert in disorders of facial expression. He graduated from both Oxford University and Imperial College, London with distinctions in surgery. He holds an Honorary Senior Lecturer post at Imperial College and has over 100 peer-reviewed publications. Charles has received a number of awards and R&D grants to develop facial expression technology, including from the National Institute for Health Research (NIHR) and the Wellcome Trust. He co-founded Emteq in 2015 to develop the ability to track and transmit facial expressions in VR and AR. Using an array of patent-pending sensors embedded into the headset, combined with an artificial intelligence analytics platform, emotional responses to VR/AR content are tracked. Potential uses for the technology include modifying or optimising content based on user response, enabling social interaction within AR/VR, and testing user responses to simulated products and services. Emteq's platform can be used with any VR headset and has a number of healthcare and wellness-related uses, including monitoring responses to stress, autism research, and facial rehabilitation.