Are Learning Styles an Illusion?


Do you subscribe to the idea that people have different learning styles, such that learning is supported by the optimal match of instruction with style but hindered by a mismatch?  In other words, to use a common learning-style discriminator, optimal learning requires instructing identified verbal learners with a verbal sensory experience while implementing a visually oriented instructional design for the visual learners.  If you do hold this view, then you must understand that you are swimming upstream against a torrential current of contrary evidence.

Lack of evidence or meaningful theory

Criticism of learning styles as a guide for teaching has existed as long as the many popular instruments that supposedly measure these differences.  A recent decade of loud objection to learning styles began with an important 2008 publication by a team of cognitive psychologists led by Harold Pashler.  Their article was commissioned by Psychological Science in the Public Interest to assess the scientific evidence for applying learning-styles assessments in school contexts.  The authors addressed this task by reviewing the literature to test the “meshing hypothesis.”  Consider students A and B, who are assessed as having opposite styles (e.g., verbal versus visual learners).  According to the meshing hypothesis, student A will outperform student B when instruction matches A’s style, but the reverse will be true if instruction of the same material matches B’s style.  Despite the importance of the meshing hypothesis to decades-long advocacy for teaching to individual learning styles, Pashler and his colleagues found very few studies with research methodologies appropriate for testing the hypothesis.  Furthermore, the relevant existing studies failed to support it.  The Pashler et al. article received great attention in professional publications and the popular media – I remember reading about it in a syndicated education column in my local newspaper.  The results were championed by skeptics of learning styles not only in educational psychology contexts but also in disciplines ranging from culinary training to medical school.  The assertion that learning styles are a myth is now highly visible.

Looking beyond the problematic meshing hypothesis, it is also essential to recognize that most learning-styles advocates pay little attention to the validity and reliability of the instruments that they use to measure their learners’ styles.  In an exhaustive 2004 review of learning-styles instruments, Frank Coffield and colleagues analyzed 13 instruments that are widely applied in higher education.  The instruments subjected to psychometric analysis scored weakly on reliability, construct validity, or both.  These include the family of surveys that focus on verbal and visual (and commonly kinesthetic) learning styles, the very popular Kolb Learning Style Inventory, and the venerable Myers-Briggs Type Indicator, which, although focused on personality type, has been extended into the learning realm.  Even in the business-management world, where Myers-Briggs consultants have earned their keep, the measure has fallen out of favor.

There is also the limitation that learning-style self-assessment is not an objective measurement of learning.  For instance, one study showed no relationship between stated preferences for visual or verbal learning and measures of learning that emphasized one modality over the other.  Therefore, is there any justification for adjusting instruction to match a learner’s style, or to be inclusive of an array of learners’ styles, if there is so little confidence in the assessment of these styles?  Ironically, one could also argue that the conclusions of the Pashler group are weakened by testing a hypothesis with data collected with instruments of questionable validity and reliability.  Nonetheless, at best, learning-styles advocates can only say that the meshing hypothesis remains untested, while acknowledging that they have little more than anecdote to justify continuing practices for which there is no evidence.

Despite the compelling evidence-based argument against learning styles, their popularity and acceptance remain strong.  Nearly all of 109 articles published between 2013 and 2015 on learning styles in higher-education instruction expressed positive intentions toward learning styles, and 89% reached positive conclusions about their use, even though only one of the studies included a test of the fundamental meshing hypothesis.  Recent surveys show that a large majority of teachers at all levels – school to university – continue to accept the validity of accommodating learning styles in their instruction.

The cognitive and educational psychologists who labored to demonstrate the many flaws in popular perceptions of learning styles are frustrated by the persistent advocacy and use of purported assessments of learning styles in education.  Thirty European and North American psychologists and neuroscientists signed a letter to the influential British daily, The Guardian, during the 2017 Brain Awareness Week.  They argued not only that evidence for learning styles is absent but also that continuing to use the idea in education carries potential educational and fiscal harm.  Prominent educational psychologist Paul Kirschner wrote a pointed commentary after a journal’s editors invited him to enlighten their readership; the invitation followed Kirschner’s tweet berating the journal for publishing yet another paper about “learning styles bull!”

Notably, and curiously, missing from recent critical scrutiny of learning-styles instruments is mention of the Felder-Silverman Index of Learning Styles (ILS).  Developed by engineering education expert Richard Felder, the ILS is the second most commonly used learning-styles assessment tool in higher education.  The ILS differs from surveys that have been strongly attacked.  The ILS has been the subject of psychometric analyses that add confidence in the reliability and validity of what is being measured (although we should still debate what “it” really is).  Unlike most surveys that pigeonhole learners into particular categories, the ILS reports varying levels of preference for modalities of information intake and approaches to knowledge processing.  Even more importantly, Felder never advocated for matching learners’ learning style with particular instruction.  Instead, he emphasized placing learning preferences in context with the learning task, and stretching learners to use less desirable learning approaches in order to develop broader learning skills.  He encouraged faculty to use the ILS scores to employ a broad battery of approaches to be inclusive of learning styles. In response to the Pashler study, Felder vigorously defended the ILS and pointed to several studies where the meshing hypothesis was supported by ILS data.

Nonetheless, it is difficult to understand what conceptual or theoretical framework encapsulates the various learning-styles approaches, including the ILS.  The typical learning-styles survey classifies a respondent on several dichotomous scales or categorical end points.  While some, such as the popular visual-verbal distinction, appear in many survey outcomes, most instruments have distinct ways of categorizing a learner.  Examples include: intuitive versus analytic, arousal avoidance versus arousal seeking, simultaneous versus successive planning of learning, concrete versus abstract, sequential versus global, accommodating versus assimilating, convergent versus divergent … and the list goes on.  These many ways of labeling a learner must raise the question of what “learning style” really means, whether these labels can be measured with confidence, and whether they need to be scrutinized in different contexts.  Felder defends this state of affairs by contending that the number of learning styles is unlimited and cannot practically be encompassed in a single theory.  However, the theory that we seek is a rationale for why learning styles (a) should exist, and (b) should impact learning.  That framework remains largely lacking and, as a result, some learning-styles scales may miss the point.

For instance, the visual-verbal scale is featured in many learning-styles instruments, including Felder’s ILS.  Regardless of subjective statements of preference, the human brain depends on a dual coding system that utilizes both phonological and visual inputs.  Therefore, purposeful emphasis on one sensory modality over the other based on a subjective measure of a preference runs the risk of impeding learning processes.  The alternative view is that modality of information input during learning depends on the learning task.  Therefore, the modality does matter, but it matters the same way for everyone who pursues the same learning activity.

The concrete-abstract distinction is also problematic.  This distinction originates with Kolb’s LSI and loads heavily on Felder’s sensing-intuitive scale.  However, rather than being a learning style per se, learning from concrete or abstract approaches has long been regarded – since the work of Jean Piaget in the early twentieth century – as a developmental process.  Learners progress from concrete to abstract thinking over time.  For example, in reviewing my ILS data collected from about 400 college students and 300 faculty members, the students skew strongly toward sensing (concrete learning), whereas the more mature learners who make up the faculty skew strongly toward intuition (abstract learning).  We should expect learners to be at different developmental stages and should promote learning from both concrete examples and abstract generalizations.  However, these are not different learning styles.

But – humans don’t all learn the same way, either

I could end here, and this essay would simply be another effort to wake up educators to the fallacy of adjusting instruction to account for alleged learning-styles differences among their students.  However, I feel that several issues require further scrutiny.

In addition to the Pashler et al. paper, I always encourage those interested in learning styles to read the 2014 paper by Maria Kozhevnikov and colleagues, which places learning styles into the broader and older concept of cognitive styles.  Although difficult to measure, cognitive style refers to individual approaches to cognitive functioning, including acquiring information, processing information, and decision making.  Kozhevnikov and co-authors argue that the falsification of the meshing hypothesis does not preclude the existence of cognitive style.  Even Pashler, Kirschner, and others acknowledge that people have learning preferences, which are less strongly perceived than alleged styles, but these are dismissed as being of minimal importance for educational practice.  However, these authors cite no evidence to justify “minimal importance”.   Needing further attention is the affective consequence of encountering instruction that is contrary to preferences.  Just because the cognitive function of the human brain accommodates wide ranging instructional approaches does not mean that students will engage equally and self-efficaciously with different approaches if there is a mismatch with a cognitive-style preference.

I have had many conversations with scholars who approach learning differences and styles from a critical-theory perspective.  Their work is typically rooted in anthropological and sociological frameworks, rather than psychology, and they rarely or never cite the psychology research mentioned previously.  They usually avoid discussion of quantitative learning-styles surveys but commonly encourage application of Howard Gardner’s multiple-intelligences concept, despite the comparable demolition of that idea by cognitive psychologists.  They view cognitive psychologists as seeking to generalize human behavior and brain function, brushing over the individual and group differences that are so meaningful to anthropologists, sociologists, and scholars seeking social justice for those not flourishing in higher-education constructs founded in western European traditions.  Consistent with these views, the Kozhevnikov review summarizes neuroimaging studies that demonstrate differences in brain activity between Asian and American learners during learning experiences, as well as other research that supports a neurological basis for cultural differences in cognitive style.

A related idea arising from the anthropological perspective is the epistemological difference between learning focused on individual action and learning focused on the collective action of a group.  These differences are referred to as high- versus low-context learning, individuated versus integrated epistemologies, or independent versus interdependent approaches.  Collectively, the research argues that these contrasting views of where knowledge comes from, along with how, and with whom, it is constructed, vary by cultural ethnicity and race, country of origin, and socioeconomic status.  Northwestern University’s Nicole Stephens argues that most colleges and universities favor independent learners to the exclusion of the interdependent approaches more common among first-generation college students from working-class families.  As a result, higher education enlarges, rather than shrinks, socioeconomic inequality in the United States.  Here, again, data from neurological scans reveal differences in cognitive processing by learners of differing social class that correlate with independent versus interdependent self-construal.  These sociologically grounded concepts, supported by psychology and neuroscience research and connected to the idea of cognitive style, are arguably more important than debating the meshing hypothesis when we seek to understand how best to teach diverse learners.

Other authors have investigated the impact of different instructional approaches on the achievement gaps associated with students of varying race/ethnicity, socioeconomic status, or both.  Active-learning pedagogies, which are increasingly replacing the centuries-old tradition of the university lecture, have been shown to decrease these achievement gaps.  My critical-theory colleagues claim that this is evidence of different learning styles that are rooted in culture, family, and community structures during childhood learning.  In this view, the traditional university curriculum favors the individuated, independent learning approaches of those from groups who have been privileged for success in higher education, whereas active learning favors the integrated, collectivist culture of learning in working-class contexts and among African American, African, Hispanic, Indigenous, and Southeast Asian communities.  However, I am also impressed that the data demonstrating quantitative narrowing of achievement gaps show no decrease in the achievement of traditionally high-performing demographic groups.  Instead, all boats floated higher on the tide of active learning, and the gap closed because of preferentially greater learning improvement for members of those populations traditionally less successful in higher education.  To me, this suggests an alternative hypothesis: Active learning is more natural to neurological function that evolved for hundreds of millennia before the first college classroom and is more closely aligned with the psychological needs of autonomy, competence, and relatedness that, according to self-determination theory, motivate learning.  Therefore, rather than active learning being uniquely suited to high-context/integrated/interdependent learners, it may be that low-context/individuated/independent students come from more privileged experiences that prepared them to succeed in college and to adapt to a system of learning that is, frankly, unnatural, yet they too flourish when engaging instead with active learning.

Time for a reboot

So, are learning styles an illusion?  In our desire to honor perceived differences among learners, have we gone too far?  All educators should accept that the meshing hypothesis stands unsupported and could be resurrected, however unlikely that may be, only by focused study – not inference – using valid and reliable instruments with a sound theoretical basis, instruments that do not currently exist.  In the meantime, I feel that the term “learning style” is too burdened by a history of confusing measurements of dubious quality linked to the unsubstantiated meshing hypothesis.  A possible way forward is to abandon the learning-style concept, and stop using the phrase, in favor of exploring new approaches to understanding individual and group differences that engage cognitive psychologists and neuroscientists alongside social psychologists, anthropologists, and sociologists.  Nonetheless, the ongoing learning-styles debate is a battlefield of opposing protagonists who hold strong biases toward their own point of view and struggle to understand the other side.  As with political partisanship, it is unclear whether confirmation biases will permit those on opposite sides of the debate to seek common ground for the reboot needed to understand the educational significance of learner differences.

By Gary A. Smith, Assistant Dean of Faculty Development in Education and Director, Office for Medical Educator Development, School of Medicine, University of New Mexico.  This work is licensed under a Creative Commons Attribution-NonCommercial-Share Alike 4.0 International License.



Feedback: It’s more than just “Good Job!”


I have the opportunity to regularly meet with colleagues and discuss one of my favorite topics: feedback.  Providing effective, constructive feedback in a way that avoids a visceral sense of fear or admonition is one of the greatest challenges we face as academicians.  Just the word “feedback” can incite palpitations, diaphoresis, breathlessness, and nausea – not that dissimilar from the heralds of a heart attack, right?  Additionally, learners can feel vulnerable to feedback, and when feedback is given inappropriately, it can induce a frightening fugue-like state in which the feedback neither improves the learner’s performance in a lasting fashion nor engenders trust that the institution is invested in watching for that improvement.  So, what can we do to embrace the best practices of feedback?

Continue reading “Feedback: It’s more than just “Good Job!””

Research-based PowerPoint design … are you using best practice?

 


When I ask faculty or students to list their most ineffective learning experiences in college or professional life, some aspect of the use of PowerPoint is always on the list.  “Death by PowerPoint,” they commonly say.  In fact, if you search for “death by PowerPoint” in Google, you will find more than 3.5 million hits.  So it’s not surprising that it seems like everyone has an opinion on how to make a better PowerPoint presentation; search for “PowerPoint tips” and you’ll find more than 26.7 million links.  What is astounding, however, is how much of this web-shared wisdom recycles myths and ignores the research on the design of visual aids for learning during an oral presentation, or even the legibility of the slides.  What does the evidence actually say is the maximum number of bullets to include on a slide?  Zero.  Isn’t it true that we should use a blue background because blue is calming and increases a sense of pleasantness?  Try again.

When criticizing “PowerPoint-enhanced” presentations, it is important to separate the fault of the presenter from the fault of the program.  The program can actually accomplish a great deal to support effective learning during a presentation, primarily by supporting visual learning.  The problem rests with how it is used, especially when the presenter applies default templates that encourage meaningless titles, bullet lists of text, and distracting color-graphic backgrounds that have nothing to do with the presentation content.  Bullet lists are arguably most useful for the presenter, giving him or her something to read or to prompt memory.  The development of bullets in the software was a response to the original buyers of the product – the business world.  Catchy marketing phrases and lists of a few take-home productivity points were key aspects of the 1980s business meeting.  Slide templates that met the needs of this original PowerPoint market were then adopted, because they were easy to access, by all PowerPoint consumers.  Thereafter, bullet lists became the socialized norm for visual presentation in everything from elementary-school book reports, to battlefield preparation, to the uninformative technical briefing that is partly implicated in the Space Shuttle Columbia disaster.  There is no indication that such brief outlines actually facilitate learning, so why should educators use these templates?  Even one of the inventors of PowerPoint, Robert Gaskins, is baffled: “The real mystery to me is why PowerPoint, including its default presentation style based on traditional business presentations, has been adopted so widely in other contexts.”

Continue reading “Research-based PowerPoint design … are you using best practice?”

Flipping Your Classroom May Be Easier Than You Think


Take a moment to complete this true-or-false quiz:

  1. The flipped classroom means that direct instruction, such as lecture, is moved online for pre-class completion, and homework is moved into the scheduled face-to-face instructional time.
  2. The flipped classroom model is new, originating only about a decade ago.
  3. Effective flipped classroom instruction involves the production of videos for students to watch before class.

If you answered “true” to most or all of these questions, then you likely see the flipped classroom as something that is radically different from college teaching as it was practiced more than a decade ago, requires the use of technology, and expects considerable commitment by faculty (and perhaps support staff) to generate videos, animations, or other technology tools for online delivery.  Does that view motivate you to flip your instruction, or does it seem intimidating – a teaching approach that you feel you lack the time to implement or the confidence to commit to?

I will argue for answering “false” to each of these three statements in order to show that “flipping” has a track record longer than the popularity of the term and can be implemented without dedicating extensive time or money to technology.

Continue reading “Flipping Your Classroom May Be Easier Than You Think”

Are small groups for learning or career preparation?


Do you assign small-group exercises because you understand that interactivity fosters greater learning?  Do you assign team projects as preparation for workplace expectations? Do you discourage students’ resistance to group work by pointing out that their future employers expect the development of teamwork skills?  If you answered “yes” to two or more of these questions, are you confident that your execution of cooperative/collaborative learning is consistent with your responses?

A case in point: A pharmacy faculty member dropped by my office to explain pushback from colleagues in response to a best-practices list for implementing cooperative-learning strategies.  Of particular concern was the recommendation that student teams should be permanent for the duration of the course.  This practice is supported by the robustness of research on Bruce Tuckman’s team-development stages – forming, storming, norming, performing – in group settings ranging from psychotherapy to classrooms to diverse workplaces.  The evidence points strongly to the benefit of group members working together for months before reaching high levels of performance, and to recognizing intra-team tensions as expected storming rather than as a symptom of group dysfunction.  So what was the problem? “Interprofessional healthcare teams consist of different people on almost every shift,” my colleague appropriately pointed out. “So the pharmacy faculty feel that changing team membership frequently, perhaps every day, would be more effective preparation for the students’ upcoming clinical training and future careers.”  Indeed, an apparent paradox.

What is our goal, as educators, for assigning work to small groups or teams?  For meeting course learning objectives, we likely focus on the learning achievement of the individual student while leveraging the benefit of interactive learning, perhaps in a cooperative-learning approach.  When we consider the workplace expectations for our learners in the future – and, perhaps for us at the moment – the focus is usually on the performance of the team that takes advantage of specific, differentiated expertise and roles within the team that require collaboration.  The first approach favors meeting the learning objectives of the course whereas the second favors workforce preparation.

Continue reading “Are small groups for learning or career preparation?”

Is there a right way to teach with clickers?


2016 is the 25th anniversary of audience response systems in higher education classrooms.  Commonly implemented with “clickers,” responses are now increasingly collected with personal mobile devices, and early approaches simply used lettered flashcards.  With such a long track record, it is worth asking: “Is there a right way, and hence also wrong ways, to teach with clickers?”

An initial answer might be that there are many ways to use any technology tool and that whatever feels comfortable to the teacher, fits with the instructor’s existing pedagogy, and is compatible with course objectives must be right.  Perhaps there is a better question: “Is there a best way to learn from using clickers?”  The unambiguous answer to that question is, “Yes; peer instruction.”  The remaining paradox is that very few faculty members know or practice this research-based strategy.  Metaphorically, many directions may seem right, but they may not lead you to the intended destination, and running off the road into the sand may unnecessarily taint your view of the technology.

Although many instructors use clickers to assess student learning, peer instruction (PI) uses clickers as a tool to activate learning through peer discussion while determining the course of subsequent instruction through instantaneous formative assessment.  As originally developed and researched by Harvard physics professor Eric Mazur, the PI process (1) begins by having students respond individually with their clickers to a conceptually challenging multiple-choice question, followed by (2) discussion and debate among nearby peers to resolve differing answers, proceeding to (3) a second collection of individual responses, and (4) a concluding debrief led by the instructor, who is now aware of the challenges to student comprehension.  The whole process requires about 5–7 minutes per question and leads to improved understanding, as indicated by typical increases in correct responses from 50% or fewer students before peer discussion to 80% or more afterward.  The peer discussion is critical.  A recent meta-analysis of clicker use in higher education shows that the effect size for learning when peer discussion is included is three times greater than when clicker questioning does not provide the discussion opportunity.

Continue reading “Is there a right way to teach with clickers?”