Measuring Flipped Classrooms

New ideas are great! That is…until the next new idea comes around. Hmmm? Do we need another update?! Think for a moment, lest we forget: the Flipped Classroom is not so much about replacing the old with the new. By understanding the various bases for learning, one can apply them pragmatically to enhance the overall learning experience so learners can better comprehend, retain, and apply information. So how does one “grade” flipped classrooms compared to other models? Numerous metrics exist, and organizations should selectively draw upon them to best understand the impact of their trainings.

Impact of the Flipped Classroom

When trying to determine the impact of any training, information should be drawn from many sources. We should welcome training feedback not just afterward but also before and during the classroom session. Flipped classrooms have three distinct stages: first, delivery of training content, often online, prior to any classroom engagement; second, discussion of content, the homework as it were, in the classroom setting with the “instructor” assuming the role of a mentor; and, third, post-classroom review, integration, and completion. Obtaining feedback at each stage is essential.

A. Informal Training Commentary

There are two types of informal training commentary:

  • In-Training Input. Online and in-class commentary can be extremely valuable. Even before the classroom meeting, students should be encouraged to question the training content and suggest improvements. I thought experts were creating the content? Yes, trainers should create eLearning content that is accurate, complete, logical, and thoughtful. Yet nobody is perfect, and even award-winning writers benefit from editors. Think crowdsourcing: the use of ostensibly disconnected individuals with the common goal of gathering information and/or completing tasks. Here, staff are not only valued but empowered, giving them a sense of ownership. Suggestions can be as simple as pointing out typographical errors, adding useful examples, et cetera. The goal, remember, is creating the best training. Indeed, the mentor can reinforce such suggestions during in-class time, thereby affirming specific staff members and building an educational community. Soliciting comments prior to the classroom helps mentors use that time more efficiently. In-class comments generally take place on a one-to-one or group basis; remember, the classroom is a site of mentoring, not the site of the traditional top-down didactic.
  • Post-Training Opinions. Staff comments during trainings frequently differ from those collected afterwards. Managers can glean such attitudes at the proverbial water cooler or during individual and team meetings. The trainer, too, should forward to the administrator a self-assessment of each specific session. Each source should be cross-referenced with the others, and any striking difference should be addressed. Special attention should be paid to the role of technology, the structure and delivery of the training, and the overall efficacy of the teaching model.

B. Formal Learning Assessments

To assess the utility of Flipped Classrooms as an educational model, focus should be placed upon two learner areas: satisfaction, and content understanding, retention, and application.

  1. Training Questionnaires: Training administrators need to assess subject matter (content), evaluate educational style (both training format and mentor performance), determine cognitive understanding, and reflect upon the training’s professional and organizational value. Traditionally, administrators and teachers concerned themselves only with cognitive understanding and measured it through testing and grading. However, if organizations are truly concerned with actual learning, then they need to appreciate how learners receive trainings. Staff members and students who do not care generally do not learn. So training questionnaires are essential and should not be treated as pro forma. Can’t the “learners” just get in line? An anecdote: while working as a waiter in graduate school, my colleagues and I would become frustrated with patrons over a given tip. We would regale one another with tales of the customer’s bizarre behavior, but all of those conversations were just therapy. The customer’s reality is the reality, and the same goes for education. This is not to say that anything goes, but administrators need to refocus and appreciate that the traditional top-down pedagogy is not the sole solution. Hence, understanding the perspective of learners is key to improving the learning process.
    • Types of Questions: When composing questionnaires, the type of question is just as important as the subject matter. Should the questions be Yes/No, multiple choice, or free expression? Generally it is best to have a combination of types. Binary questions (Y/N) should query subjects that have defined answers (e.g., was the instructor on time?). Alternatively, Likert scale questions (multiple choice) are best used for opinion-based answers (e.g., the training emphasized ABC: not at all, not enough, a fair amount, too much). With Likert scales, consider whether to offer an even or odd number of options. An even number forces learners to affirm one side of the spectrum, whereas an odd number leaves open a “neutral” response. Finally, open-ended questions (is there anything else you would like to add? et cetera) are essential to capture information otherwise missed by the preceding question types.
    • Timing of the Questionnaire: We do this at the end of the training, right? At the end of the training, yes, but not at the end of the classroom session. Frequently, learners (and instructors) are impatient to leave the classroom to attend to other matters. In fact, many participants do not even fill out the questionnaire or do so only hastily. So what is the remedy? Administrators increasingly are requiring training participants to fill out questionnaires online after the training, often several days after the classroom session. Why? The intervening days give participants the proper perspective to more fully appreciate the training and to separate it from the emotions of unrelated matters. Moreover, administrators can make a response, a complete response, a requirement for credit or certification. While participants might try to skip out early on the class (and the questionnaire), none of them will be able to complete the training without their online response.
    • Answer Consistency: Don’t staff members just respond based on what the administrator wants to read? That is a tough issue. Sometimes learners are overly effusive (excellent, excellent, excellent . . .), sometimes they are overly negative, and sometimes they just want to finish the questionnaire as quickly as possible! First, by increasing training effectiveness and value (through innovations like the Flipped Classroom), learners are more likely to take the time to offer constructive feedback. Second, administrators should design questionnaires to query smaller, constituent parts of training value. Answers about “micro” value assist administrators when compared against a final Likert scale question about the “general” value of the training.
  2. Testing Diagnostics and Efficacy:
    1. Pre-Online, Post-Online: Think of this . . . training as diagnostics. What do you mean? It is exactly what you think. At differing stages of the training, users are queried about the subject and presented with differing storyboards based on their responses. This serves not only to gather information, but also to tailor the delivery of information to each user. If a user’s response suggests a level of understanding demonstrating mastery, then the subsection is concluded and the user progresses to the next stage. If, however, the user’s response suggests he or she did not fully grasp essential parts of the stage, then that user is redirected back through the training in a fashion that further explicates the content and its connection to the larger subject of the training. All of this serves to assist the learner in a stage-based approach using a micro-learning algorithm and to provide the basis for improvements in the training itself (a minimal sketch of such branching logic follows this list). This information should be available to the mentor prior to the classroom session.
    2. In-Classroom Learning and Mentoring: Again, learners in the Flipped Classroom are not engaged using the traditional teacher-led didactic model; rather, mentors help learners better understand material and appreciate why they might have misinterpreted the training content. If the mentor chooses to utilize group learning, then he or she should take care that the groups do not devolve into unrelated social interactions, which, on one hand, is good for bonding amongst learners but not so good for learning. One way to handle this is to assign individual members of a group different roles (for example, one person takes notes, et cetera). Diagnostics during this period, as one might suspect, tend to be more qualitative than quantitative; however, when assessed in longitudinal fashion over numerous trainings, such data can be useful for streamlining the content and structure of the training as well as for gaining a better understanding of how to facilitate classroom mentoring for that subject matter.
    3. Periodic E-mail/Text Reminders and Queries: After the in-classroom learner-mentor sessions, learners should receive periodic e-mail refreshers about key content and subject overviews. On occasion, they can be prompted to follow links to an online portal that tests them on the training, which, in turn, conditions the content and timing of any subsequent e-mails. The training administrator, of course, keeps abreast of learner responses to this process, which also serves to inform the interactions described in the next part.
    4. Individual/Group Post-Training Review: Individual learners should discuss trainings with their supervisors to reaffirm content; supervisors, in turn, should be able to gauge whether staff members have operationalized the training content in their work.
    5. Testing/Grading/Passing: Do Flipped Classrooms have traditional testing? How are learners graded? Or is there simply a Pass/Fail approach? All good questions, and there is no single, easy answer. Part of this issue is based on the nature of the material; part is based on whether or not the training was required; and part is based on organizational culture. For example, if the training is required in the context of a contract or certification, then, most likely, some type of direct testing and grading will be necessary in order to document proficiency and the like. OK, are you telling me that we didn’t really flip the classroom? No, that is not it at all! Testing to determine levels of mastery is not a defining element of the flipped classroom per se. Rather, the Flipped Classroom is based on inverting educational dynamics and transforming educational roles. Whether or not learners take a “final” test is unrelated to the flipped learning model.
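To make the pre-classroom diagnostic in item 1 above concrete, here is a minimal sketch of that kind of branching logic. It is illustrative only: the stage name, the questions, the 80% mastery threshold, and the `Stage`/`route_learner` helpers are hypothetical assumptions, not features of any particular authoring tool or LMS.

```python
from dataclasses import dataclass, field

# The 80% mastery threshold is an illustrative assumption, not a fixed value.
MASTERY_THRESHOLD = 0.8

@dataclass
class Stage:
    name: str
    questions: dict[str, str]   # question id -> correct answer
    remediation: str            # storyboard shown when mastery is not reached

@dataclass
class LearnerRecord:
    learner_id: str
    results: dict[str, float] = field(default_factory=dict)  # stage name -> score

def score_stage(stage: Stage, answers: dict[str, str]) -> float:
    """Fraction of this stage's diagnostic questions answered correctly."""
    correct = sum(1 for qid, right in stage.questions.items()
                  if answers.get(qid) == right)
    return correct / len(stage.questions)

def route_learner(stage: Stage, answers: dict[str, str], record: LearnerRecord) -> str:
    """Advance on mastery; otherwise redirect through the remediation storyboard."""
    score = score_stage(stage, answers)
    record.results[stage.name] = score   # visible to the mentor before class
    if score >= MASTERY_THRESHOLD:
        return "advance"                 # progress to the next stage
    return stage.remediation             # re-explain the content, then re-query

# Example: one learner answers one question incorrectly and is redirected.
stage = Stage(
    name="flipped-classroom-basics",
    questions={"q1": "b", "q2": "d"},
    remediation="storyboard-basics-expanded",
)
record = LearnerRecord(learner_id="staff-042")
print(route_learner(stage, {"q1": "b", "q2": "a"}, record))  # storyboard-basics-expanded
print(record.results)  # {'flipped-classroom-basics': 0.5} -- the mentor's pre-class view
```

In practice this logic would live inside the eLearning platform itself; the point is that every response both personalizes the storyboard a learner sees next and feeds the record the mentor reviews before the classroom session.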

C. Discerning Training Efficacy

Keep in mind, discerning training efficacy is equivalent to program evaluation, which is an extremely large and nuanced field of study. The subsections below offer the structure of an approach, not a listing of specific techniques. Each focuses on a different dimension and on how it relates to measuring the Flipped Classroom.

  1. Individual Performance: Determining whether or not a given training “worked,” that is, helped the learner do her or his job better, can be tricky. First, obtaining any quantitative comparison regarding output or external client feedback is only possible if such information is available from before the training. Second, individual performance can sometimes be difficult to operationalize. Third, qualitative comparisons are useful but limited. Still, if prior quantitative data is available, then comparisons can be made; similarly, administrators and program directors can collaborate by assessing training and performance in longitudinal fashion.
  2. Organizational Goals: Discerning training efficacy with respect to organizational goals is, perhaps, even trickier than assessing individual performance. As we know, many factors, internal and external, affect whether or not an organization has been more or less successful in executing its mission. Again, it is sometimes possible to utilize quantitative metrics, and, ultimately, this is key, especially if an organization is investing greater capital in its system of training. So be sure to develop mechanisms and metrics when designing trainings in order to operationalize their value.
  3. Cybernetic Feedback: Cybernetic? Yeah, now we’re talking! Finally, the cool stuff! Cybernetics sounds very futuristic, if not like science fiction. (So, ehh, what is cybernetics?) If we think of organizational training as a system, that is, the relationships amongst course content, educational style, and the learners, then learning is a process. Certain systems, and organizations, can be incredibly efficient; others can be terribly dysfunctional. Most, as we know, are somewhere in between. Being creative (remember, welcome to the Flipped Classroom) is a start, a big start. But simply importing learning models is not the solution per se. The solution comes from using tools to verify that the system, here organizational training, is effective and from adjusting the approach to make it even better. How do we do that? Through informal training commentary, formal learning assessments, and discerning training efficacy (a minimal sketch of such a feedback loop follows below). Ultimately, organizations will refine their trainings while gaining an even better appreciation of the value of those trainings vis-à-vis organizational goals.
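As promised, here is a minimal sketch of what one turn of that feedback loop might look like, assuming the questionnaire design from Part B: several “micro” value questions plus one general-value question. The item names, the Likert values, and the 0.5-point gap threshold are all hypothetical.

```python
from statistics import mean

# Hypothetical Likert responses (1-5) from a post-training questionnaire:
# three "micro" value items plus the final, general-value question.
responses = [
    {"content_clarity": 4, "mentor_session": 5, "online_module": 3, "overall": 4},
    {"content_clarity": 4, "mentor_session": 4, "online_module": 2, "overall": 4},
    {"content_clarity": 4, "mentor_session": 5, "online_module": 3, "overall": 5},
]
MICRO_ITEMS = ["content_clarity", "mentor_session", "online_module"]

def feedback_report(responses: list[dict], gap_threshold: float = 0.5) -> dict:
    """Compare 'micro' value scores with the general-value score and flag
    the constituent parts of the training that lag far enough to adjust."""
    micro_means = {item: mean(r[item] for r in responses) for item in MICRO_ITEMS}
    overall_mean = mean(r["overall"] for r in responses)
    flags = [item for item, m in micro_means.items()
             if overall_mean - m >= gap_threshold]
    return {"micro": micro_means, "overall": overall_mean, "adjust": flags}

print(feedback_report(responses))
# -> only 'online_module' is flagged for adjustment in this made-up data
```

In a real deployment these scores would be joined with the diagnostic and performance data discussed above, and the loop would be run after every training cycle so that the adjustments themselves are measured on the next pass.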

These past three blogs have provided readers with an intellectual orientation and pragmatic tools for Flipped Classrooms. From this point forward we will discuss select topics about the Flipped Classroom, taking the occasional detour to cover emerging ideas and issues in the field of eLearning. See you next week!

Contact us to learn how we can help your company or organization utilize these tools.

Craig Lee Keller, Ph.D., Learning Strategist
