Photo-Realistic Animation (Part Two): The Uncanny Valley

In the earlier blog on photo-realistic animation, we reviewed basic terminology and the types of animation, a central storytelling tool in training. Four types of animation were identified: stick figures, cartoon characters, realistic characters, and photo-realistic characters. The last type was reserved for this blog. Photo-realistic animation injects an interesting and often unsettling element into the field of animation. To appreciate this element, one needs to investigate the nature of photo-realistic reproduction and the central issue it raises: the emergence of the uncanny valley. Moreover, when assessing the negative features associated with photo-realistic animation, one needs to place it in a broader cultural context.

The Work of Art in the Digital Age of Photo-Realistic Reproduction

The title chosen for this subsection is a conscious allusion to Walter Benjamin's famous essay, "The Work of Art in the Age of Mechanical Reproduction." Writing in 1935, Benjamin discussed how artistic works, such as paintings and even theatrical plays, were devalued by the process of mechanical reproduction. He based this conclusion on his argument that the true artistic value, the "aura," of a work was lost in its reproduction. In short, art lost its cultural authenticity when it became a product of consumerism instead of aesthetic reflection. The culprits in this cultural descent included printmaking, photography, and cinema. This neo-Marxist critique, more importantly for Benjamin, explained how capitalist societies politically deluded and pacified the masses.

Walter Benjamin, c. 1924 (Wikimedia Commons)

Setting the issue of political analysis aside for the moment, the "artistic" goal for some engaged in "mechanical reproduction" was to create duplicates as close to the original as possible. In many ways, the photograph was the model for this endeavor. We are now at a very different stage regarding the types of technology used in reproduction.

In the digital age, as noted in a previous blog, we see a comparable cultural lament: new forms of reproduction are replacing the older "mechanical" forms, the rise of the digital at the expense of the analog. While the photograph was viewed as the epitome of accurate reproduction, digital photographic formats have created the means for expanded content manipulation as well as the ability to generate new content devoid of, and separate from, any external reality. This is where the world of photo-realistic animation comes into the picture.

Photo-Realistic Representation

The previous blog differentiated among four forms of animation. Excluding photo-realistic representation, the other forms harbor no pretension of trying to mimic reality in a manner that would fool an observer. Photo-realistic animation, however, does seek, as a conscious goal, to replicate scenes as one would see them in a photograph or live-action cinema. For some, this is where a problem emerges. As in the era of mechanical reproduction, concerned parties view authenticity as the key element. For Benjamin, the loss of authenticity eclipsed the political identity of citizens; for those concerned with photo-realistic representation, the gap in authenticity generates an unsettling sensibility regarding the nature of human identity itself.

Masahiro Mori

The Uncanny Valley. The concept of the uncanny valley was coined by Masahiro Mori, a Japanese professor of robotics. Born in 1927, Mori published the seminal article titled "The Uncanny Valley" in the journal Energy in 1970. Simply stated, Mori's thesis is the following:

"I have noticed that, in climbing toward the goal of making robots appear like a human, our affinity for them increases until we come to a valley (Figure 1), which I call the uncanny valley. . . . One might say that the prosthetic hand has achieved a degree of resemblance to the human form, perhaps on par with false teeth. However, once we realize that the hand that looked real at first sight is actually artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny." (For an English translation, see:

Mori argues that the genesis of the uncanny valley lies in human anxiety over death and the drive for self-preservation. Others have linked its genesis to the drive for mate selection, cognitive dissonance, and religious sensibilities, among other explanations.

Mori posits a strong relationship between movement and the uncanny valley. If robotic movement begins to mimic human motion, then its level of affinity increases. However, simple replication of certain types of motion is not enough, as there is a large range of human movements that might not, or could not, be programmed into a robot. As a result, Mori suggests that scientists consciously pursue a nonhuman design in order to generate a safe level of affinity.
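Mori's figure is easier to grasp when sketched. Below is a minimal, purely illustrative plot of the curve he describes; the numbers are invented to show the shape only and are not Mori's measurements.

```python
# A purely schematic sketch (not Mori's data): affinity rises with human likeness,
# plunges into a "valley" just short of full realism, then recovers; movement,
# per Mori, exaggerates both the peaks and the dip.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)                    # 0 = industrial robot, 1 = healthy person
rise = np.sin(likeness * np.pi / 2)                      # affinity generally grows with likeness
dip = -1.2 * np.exp(-((likeness - 0.85) ** 2) / 0.004)   # eerie drop near (but not at) full realism
still = rise + dip                                       # e.g., a lifelike prosthetic hand at rest
moving = 1.5 * rise + 2.0 * dip                          # movement amplifies the effect

plt.plot(likeness, still, label="still")
plt.plot(likeness, moving, linestyle="--", label="moving")
plt.axhline(0, color="gray", linewidth=0.5)
plt.xlabel("Human likeness")
plt.ylabel("Affinity")
plt.title("Schematic uncanny valley")
plt.legend()
plt.show()
```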

While Mori's concept dealt specifically with human-like qualities in robots, it is relevant to the entire range of projects that seek to replicate human qualities, including appearance, movement, and voice.

The Range of the Uncanny Valley

To best appreciate the visual sense of the uncanny valley, one needs to be able to see and interact with the range of images clearly, so a selection of large images is included below:

Android: Hiroshi Ishiguro and Double (Hiroshi Ishiguro Laboratory), 2013


Movie: The Polar Express (2004)

Video Game: Medal of Honor: Warfighter

In comparison, let's look at a spectrum of created images, from the most robot-like to the most human:

Maya B. Mathur & David B. Reichling

Let's look at other ways the uncanny valley phenomenon has been observed, both across evolution and in other technologies:

  1. Monkeys: Princeton researchers presented evidence arguing that monkeys also respond to the uncanny valley.

  2. Chat bots: Computer chat bots have also been placed within the realm of the uncanny valley.

For many years if not decades, the ability to generate life-like creations has been limited by restrictions in technology. If we had better technology, wouldn't we have better representations and reproductions? Well, that's true. Ironically, the issue of hyperrealism has impacted not only the digital world of computers but also the "authentic" art world of canvas-based painting.

Phillip Weber, Bless 3, Antonia (2012)

Countervailing Factors

There are a number of factors that force one to reconsider the operative concept of the uncanny valley:

  1. Human/Robot Interaction: In a significant recent research article in the field, Jakub A. Zlotowski and his colleagues posit that the perception of the uncanny valley can be affected by human/robot interactions. That is, many studies of the uncanny valley rest on experiments measuring subject responses to digital images and/or videos. Zlotowski determined that "repeated interactions [with the robot] was sufficient to reduce eeriness irrespective of the robot's embodiment." This factor, of course, does not apply in the same way when the uncanny valley is investigated in video-based scenarios, for example, in the field of training.

  2. Japan and Robot Culture: While the notion of the uncanny valley originated in Japan, it is extremely useful to highlight the cultural differences between East and West. It should come as no surprise, based simply on popular media coverage, that the Japanese have a strong affinity for modern robot culture. This sensibility, though, is rooted in the Karakuri tradition, which shielded the technology in order to promote "feelings and emotions;" a similar sense existed in Eastern attitudes toward marionettes. In comparison, the Western automata tradition was based on mimicry and scientific knowledge.

  3. Improving Technology: In a recent article in the magazine Wired, Sandra Upton argues that advances in artificial intelligence bring into question the continuing validity of the uncanny valley. Upton argues that the algorithms in various digital tools, such as Adobe's Sensei, are so powerful as to create products that are almost indistinguishable from reality. She states bluntly that this could, in fact, become the technological basis for Fake News.

Closing Thoughts

When reflecting upon these discussions, we are directed back to some of our original questions. Benjamin investigated how mechanical reproduction impacted our ability to maintain authenticity and class insight. Mori broached the same question in the context of the fabrication of robots; for Mori, though, the matter dealt more with authenticity and human consanguinity with robots.

That notwithstanding, one is forced to deal with the 800-pound gorilla in the room: the limits and range of artificial intelligence. Mori dealt with this issue in his 1974 book The Buddha in the Robot: A Robot Engineer's Thoughts on Science and Religion. In this work, he claimed that robots can give us insight into our own ethics and concluded that robots are capable of cultivating a Buddha-like nature. So our initial investigation looked at the experiential pitfalls of using photo-realistic animation and ended up asking existential questions about the nature of humanity. But how does this factor into the more discrete issue of training? In short, as suggested by Mori himself, one should tread carefully in using animation that is too accurate, as technological limitations may lead you to slip into the uncanny valley.

As always, let us know what you think!

Craig Lee Keller, Ph.D., Learning Strategist


Photo-Realistic Animation (Part One): A Review of Terminology and Types

Last week, we finished up a three-part series comparing books with tablets, which served, among other things, to delve into the experiential dimension of eLearning. This two-part blog will continue in the experiential vein by reviewing user preferences regarding the issue of photo-realistic animation.

Before diving into photo-realistic animation and the peculiar world of the uncanny valley, let's review some basic terminology and types.

Animation Terminology

The word "animation" conjures up images of Saturday morning cartoons and major motion pictures. However, animation simply is the means of creating the illusion of motion by using images (pictures and/or photographs). The mechanisms for creating modern animation, as one might surmise, draw upon the history of optical discovery and innovation; witness the kineograph (the flipbook), the praxinoscope, the thaumatrope, and the zoetrope. Given childhood memories, my favorite is the flipbook: I remember going to the shoe store and putting a nickel in a machine with a handle that I'd crank to "flip" through a series of different photographs, creating the illusion of motion and, importantly for our purposes, a narrative. The flipbook also is the direct precursor of cinematography, the motion picture.

In the world of motion pictures, there are a variety of styles of animation: traditional, stop-motion, and computer-generated. Traditional animation employed cinema frames (cels) that were manually drawn by artists to literally trace the motion of objects and characters, which was, in a manner of thinking, analogical; stop-motion animation entailed minuscule movements of actual objects that were photographed and compiled into a motion picture. Both forms are extremely time-consuming in general, and even more so with increased levels of fluidity. Computer-generated animation, of course, has transformed the type of artisanship and increased the range of possibilities.

Let’s shift to the world of e-Learning . . .

Animation and eLearning

There are two primary ways of using the term animation in eLearning, both of which are employed in the context of presentations. The first deals with textual presentation techniques; here, think of PowerPoint presentations.

All of us have sat through countless PowerPoint presentations and noticed there are two schools of thought: those who use animation and those who don't. Among the ones who do, there is a subspecies of presenters who go way overboard, as it were. What are we talking about? Let's look at some examples:

Texts that “appear,” “fly,” “blink,” “fade,” and the like—all with a click of the projector remote. Used judiciously, such features may enhance a presentation; used indiscriminately, they are maddening. For example . . . [though imagine it moving …]

  1. Animation Terminology

Ok. That was the first type of animation used in eLearning. That's not what we are talking about. The form of animation increasingly used in presentations and online modules is more closely akin to the animation in the previous section: video content that employs a scene, characters, and context—in other words, a narrative. Another way of thinking about narrative in the context of eLearning is storytelling. When we think about storytelling and eLearning, we think about training.

So how does storytelling fit into the scheme of eLearning?

The heretofore-traditional mode of training was an in-person expert imparting information and moderating discussion. Obviously, this has continuing currency and value. But . . . for the training that is repeated, and repeated, for class after class, for new employee after new employee, there is value—not to mention savings—in animation. Moreover, animation often can demonstrate scenarios that cannot easily be replicated in in-person trainings.

Animation and Storytelling

As one can imagine, utilizing live-action “actors” to create training scenarios and storylines can be a painstaking if not expensive enterprise. That’s one of the reasons why training administrators rely upon animation, which generally speaking draws upon computer software programs that generate animated scenes.

Trainers can employ animation in a number of fashions: traditional animated frames with captions and/or voice-over narration and, more intensively, actual video presentations. In the following discussion, we are not using the term "animation" in the conventional sense of creating the illusion of movement. Rather, we are using the term to connote a style of representation associated with traditional notions of cartoon-style drawing. There are different styles of animated storytelling: stick figures, cartoon characters, realistic characters, and photo-realistic characters.

Stick Figures:

Given our level of digital sophistication, and contrary to what one might suspect, storytelling can be compelling and pedagogically valuable when using stick figures in training. Such figures have been used since the dawn of cinema; witness Émile Cohl’s short, Fantasmagorie (1908). See:

While having no pretension of realism or ostensible sophistication, stick figures can be rather compelling in their simplicity, generating a direct narrative without distracting backdrops. For example, R.J. Miller presented on its value at a recent iConference in "Draw My Life: Creative Reflection Through Stick Figure Storytelling."

The Draw My Life video notion has a wide following and has been used on social media by celebrities such as Taylor Swift.

While "stick figure" animation may feature characters, it may just as likely consist of other images drawn together in sequence; here, think of whiteboard brainstorming sessions. What stick figure animation lacks visually can be regained when its concepts are accented by compelling and affirmative audio storytelling. In addition, the appeal of stick figures is evident in the popularity of applications such as Pivot Animator and Stykz.

Cartoon Characters:

Cartoon characters can be highly useful in training sessions for a number of reasons. First, familiarity can create an initial positive disposition for younger learners. Second, cartoons can present characters that are non-threatening; this obviously can be of value with younger learners, but it also can be useful for adults when dealing with difficult and emotionally sensitive subjects. A non-pictorial analogy is the use of dolls by therapists when working with child victims of trauma. Third, cartoons can assist educators in presenting abstract, complex concepts.

Image credits: Rees J (2005), "The Problem with Academic Medicine: Engineering Our Way into and out of the Mess," PLoS Med 2(4): e111; Wikimedia Commons; and JAG Global Learning

There is, moreover, research suggesting that nonsensical figures can be understood and recalled better when accompanied by interpretive, contextual commentary. See Bower GH, Karlin MB, Dueck A, "Comprehension and memory for pictures," Memory & Cognition, March 1975, 3(2): 216-220.

Separately, in economics education, research has revealed value in the use of cartoon characters in constructive and collaborative learning, with the potential for impacting critical thinking. See van Wyk MM, "The Use of Cartoons as a Teaching Tool to Enhance Student Learning in Economics Education," Journal of Social Sciences, 26(2): 117-130.

Alternatively, however, a more recent study found that while cartoon characters might be thought to capture interest more quickly, participants liked human spokespersons better. Bhutada NS, Rollins BL, and Perri M, "Impact of Animated Spokes-Characters in Print Direct-to-Consumer Prescription Drug Advertising: An Elaboration Likelihood Model Approach," Health Communication, 2017, Volume 32, Issue 4, 391-400.

In short, it is likely that the value of cartoon characters in storytelling is dependent upon content and context.

Realistic Characters:

Up until relatively recently—I'm talking about a decade or two—realistic-style characters have been the animation of choice for many educational administrators. Why? First, many administrators believe realism facilitates translation into the workplace. Realistic backdrops, scenarios, and characters—all of these create a sense of resonance for workers when dealing with a subject. Second, for certain subjects, the style of realism is commensurate with the gravity of the educational content. So, for example, one might not want to employ stick or cartoon characters when training staff in non-violent restraint techniques for use in a psychiatric hospital. Third, realistic characters can be created that more readily reflect the demographics of the employees and/or students. This latter factor can hardly be stressed enough in the context of students embodying and understanding the ideas and narrative of the story.

When comparing the "cartoon" animation in the previous section with the images presented above, the difference in animation style is patently clear. In searching for a description to compare the two, one might use the analogy of traditional comics versus graphic novels.

The final form of animation deals with the photo-realistic style. Let’s reserve that and discussions about related issues for next week.

Craig Lee Keller, Ph.D., Learning Strategist

Books vs. Tablets (Part Three): Literature Review

To finish up this series comparing the experiential dimension of books versus tablets, let's look at some articles and research that touch upon how these different mediums impact learning—understanding, cognition, and retention.

Literature Review

As one might suspect, putting matters of scientific comparison aside, there are two schools of thought regarding the value of books versus tablets. The first argues that books can and will never be replaced due to their fundamental sense of familiarity, feel, and natural relationship with the reader; the second argues that books are destined to join the dust pile of history and that tablets will naturally supersede them in a matter of time. Research and studies on this matter seek to differentiate the two less on such visceral reactions and more on issues that can be better qualified and quantified. Let's look at two reviews.

Twenty-five years ago, Andrew Dillon published a review of the empirical literature (Dillon, A. (1992). Reading from paper versus screens: a critical review of the empirical literature. Ergonomics, 35(10), 1297-1326.) (Cut-and-paste below)

Dillon breaks down his review into several areas, but notes, in any case, that many of the studies are flawed in their research design:

  1. Reading Speed: while noting design flaws that existed in previous studies, Dillon affirms that reading speed is slower from computer screens.
  2. Accuracy: Dillon notes that judging accuracy is more difficult than reading speed per se, as accuracy deals with a number of different issues. If, for example, one is measuring accuracy in terms of “proofreading” texts, then the studies reviewed conclude that screens obtain poorer results than paper. But, the aforementioned studies simply included errors such as extra/missed spaces or double letters instead of common proofreading errors such as misspellings and errors of context and grammar.
  3. Fatigue: Dillon notes that different studies have produced differing conclusions depending upon the subject. While a number have concluded that screens created more eye fatigue, others have found that the difference from fatigue between paper and screens depended upon screen quality.
  4. Comprehension: Post-reading questions were used in many studies to assess comprehension. The studies Dillon reviewed found no discernible difference between paper and screens, though they did find a difference between faster and slower readers, with slower readers having greater comprehension. Dillon concluded that comprehension is not adversely impacted when reading from screens versus books and, actually, may be improved in certain scenarios, for example, when writing essay-type answers for an open-book test using a hyperlinked statistics book.
  5. Preference: Dillon notes that the studies under review in this context did not offer much assistance, as most of the users were relatively "new" users, which may have inadvertently created a negative disposition toward screen reading. Similarly, the preference for books versus screens was heavily influenced by the quality of the paper/books versus the quality of the screens. As such, Dillon concludes that preferences were not well understood at that time.

A more recent review, which was widely reported upon in the media, was written by Ferris Jabr in Scientific American: “The Reading Brain in the Digital Age: The Science of Paper Versus Screens,” April 11, 2013. (Cut-and-paste below)

Jabr notes that digital users previously were reported to have read "slower, less accurately, and less comprehensively on screens than on paper. [Though s]tudies published since the early 1990s have produced more inconsistent results: a slight majority have confirmed earlier conclusions, but almost as many have found few significant differences in reading speed or comprehension between paper and screens."

Despite this leveling of differences, Jabr does note continuing differences, which include the tactile dimension of reading in addition to factors that contribute to one’s ability to intuitively navigate a text. This latter dimension also factors into issues regarding content recollection. In this regard, some researchers have found that memory often is linked to the experience of reading, which serves cognitive reflection when attempting to recall the information. For example, one may remember a given fact being in the last paragraph at the bottom of a page at the end of the chapter. Such experiential elements are often lost when using digital texts. Similar dynamics obtain regarding the “rhythm” of flipping through pages and developing a mental map of information in a text.

While participants in recent studies have shown a preference for paper over screens, as Jabr notes, there is information, and there are experiences, that cannot be duplicated on paper and can only be relayed via digital means, for example, the Scale of the Universe tool (

That notwithstanding, a recent CNN review on this subject has presented some alternative findings, which, in many ways, are not surprising:

  • “Students overwhelmingly prefer screen to print.
  • “Reading was significantly faster online than print.
  • “Students judged their comprehension as better online than in print.
  • “Paradoxically, overall comprehension was better for print versus digital reading.
  • “The medium didn’t matter for general questions (like understanding the main idea of the text).
  • “But when it came to specific questions, comprehension was significantly better when participants read printed texts.”


The review above demonstrates increasing viability for digital reading in a world previously dominated by books and paper texts. However, it is clear, especially with the rise of digital natives, that preferences and, perhaps, learning realities may be changing with the increasing tide of digital content.

Next week, let's look at a different experiential dimension of eLearning. In that blog, we'll look at the issues associated with user preferences when comparing photo-realistic animation versus traditional animation.

Craig Lee Keller, Ph.D., Learning Strategist

Books vs. Tablets (Part Two): Cultural Context and Experiential Comparison

Last time, we broached the topic of comparing the value of traditional paperbound texts—books—versus computer tablets. Before making an experiential comparison, let’s look at the cultural context as to why this might even be an issue . . .

Cultural Context

The use of computerized tablets in education is quite new. Let's take a quick look back: prior to the iPad—generally considered the first mass-market computer tablet—there were other portable electronic devices that used screens, some used for play and others for work. The first widely accessible handheld devices were used for gaming; such devices, of course, were the next step from desktop computer gaming, which supplanted the rows of pinball machines and the newer arcade games utilizing screens, such as Pac-Man. By design, handheld devices promoted mobility and, for their youthful users, an opportunity to fixate on their gaming challenges.

Handhelds used for work were designed for a different target population, an older user juggling a range of responsibilities: witness the birth of the computerized personal assistant. While the Apple Newton is a progenitor in this field, the major player was the Palm Pilot, which stored colleagues' contact information, generated To Do lists, and plotted activities in a calendar capable of providing audible reminders. The Palm was directly comparable to traditional weekly paper organizers, but with an added function: global search capacity. Soon, as expected, competitors arose from Microsoft, such as the Palm PC, which I actually used for Palm-like functions but also for mobile data entry. Here, as with desktops, we see the burgeoning field of software companies utilizing host operating systems to create applications, apps, to tailor a handheld device to the user's needs.

During this time period, a cultural divide began to take shape between those who embraced the new devices and technologies and those who did not or, at the very least, those who were grumbling if not fumbling, ambivalent users. The advocates saw themselves carrying the technological flag of the future, while others criticized the use of handheld devices due to cost, durability, and their departure from basic sensibility. As with the rise of most impactful technologies, this divide was temporal, pitting advocates of "progress" against those of "tradition."

This technology advanced, though, and generated what many viewed as a new type of social anxiety: Information Overload. How could one keep up with the explosion of information facilitated by the Internet? Was it any easier that all of this information was funneled and/or compacted into new handheld devices, that is, tablets? While the felt anxiety was real, the claim of it being a new type of anxiety was not. Remember the printing press, for example? There are scholarly articles that discuss the issue of information anxiety back in that period too. In addition to information overload, an associated anxiety was generated due to matters of technological competency.

As a result of this division, the cultural divide fed into a cycle of nostalgia and an appreciation of various retro markets. Think of the appeal of Moleskine notepads. This dynamic, while in part temporal, did not and does not fall along generational lines. Tradition, and in the minds of some, the analog, is striking back against the pace of technological progress; this has fed into the slow living movement, which seeks in part to define one's sense of sensory appreciation and personal value against the "speed" of modern living.

Regardless of the cultural divide, computer tablets are here to stay. We use them when swiping our credit cards and penning our signature at the new coffee shop. Indeed, it is most telling, perhaps, that Moleskine, originally founded in 1997 as Modo & Modo, began marketing the means of syncing paper and digital planners.

During the consumer ascendency of computerized personal assistants, there was similar growth in the worlds of cell phones and laptop computers. Cell phones left their clamshell avatar and began to adopt the features of the computerized personal assistant; similarly, laptops sought to claim the computing power of desktops while maintaining their portable, less hefty profiles. It's not surprising, then, that tablets sought to incorporate elements of the new cell phones and laptops.

Experiential Comparison

The discussion below is neither intended as a scientific study nor a technical analysis; rather, it is solely intended to introduce the types of issues one might consider when actually performing such a study or analysis.

Types of Tablets: To begin our comparison, let's look at the different types of tablets. The focus will not be on proprietary brands or operating systems; rather, one should look at capabilities. Tablets can be broken down into two primary categories: software-application-driven tablets with computing power versus tablets that function primarily as "readers," that is, E-readers. While initially not the case, both categories have now incorporated Internet capacity as one of their primary features.

  • Function
    • Content Delivery: Both books and tablets deliver content, though the former are more closely akin to E-readers. Both may have indices; only tablets have a search capacity (restricted by information format).
    • Bibliographic Sources: Both books and tablets have bibliographic guides or references. Tablets with Internet capacity can access certain sources immediately, and with the rise of Internet features such as Google Books, tablets can even access distant content not otherwise available in digital format.
    • Referencing: Tablets with Internet capacity can offer content cross-referencing using hyperlink functions.
  • Physical Character
    • Portability and weight: Books and tablets are comparable in portability. Tablets, however, on the whole will be lighter, especially given their storage capacity. Even a single large academic textbook is likely to be heavy and more of a burden to transport.
    • Durability: Books are more durable than tablets in most ways, including the likelihood of significant breakage. While users may be able to retrieve content from a storage area, tablet damage is more financially impactful than damage to books.
    • Storability: Given their digital content format, tablets have far, far more storage capacity than books. Problems exist in storing digital content for long periods of time, but comparable, albeit different, problems exist in storing paper book content for long periods of time.
  • Tactile Character: For individuals who grew up reading paper books, magazines, and newspapers, the tactile character of the text can be extremely important. This is the ineffable feel and sense of the reading material, an element of the book versus tablet comparison that may very well be generational.
    • Engagement of composite text: What is it like to hold the text? One- or two-handed? At a table, in your chair, or on your lap? In many ways, tablets are easier to use, since one need not worry about the movement of turning pages; on the other hand, one cannot be as cavalier, as it were, when holding a tablet as with a book, given the high cost of dropping the tablet.
    • Dexterity and style of manipulation: Manipulating a tablet is as easy as flipping through the pages of a book. Again, the preference is likely generational; however, one need not "dog-ear" a book page if one can simply create a digital bookmark.
    • Use of writing implements: Underlining/highlighting passages and marginalia are the set tools of the trade for any student. For some, it might be easier to do this with paper texts—the ability to cradle the book and write with precision. But it's probably only a matter of time until tablets can mimic that sense too. On the other hand, an E-reader can "mark up" pages without devaluing the text the way such marks would a paper text. Digital writing implements have gone through an evolution concerning their precision; tablet screens are going through a comparable evolution from resistive touch-screens to capacitive touch-screens. In addition, tablet software has made incredible strides toward handwriting and voice recognition.
    • Sensory tactile sensibility: This, for many, is the deal breaker when looking at the entire comparison. Generally speaking, tablets don't and have never attempted to mimic the tactile feel of using paper. For those who grew up reading and using print, it is really hard to get over this (though all of them would be happy to give up the paper cuts).
  • Visual Character: Visual character is the most significant element to consider when analyzing books versus tablets in the context of education. The other elements dealt largely with matters of preference, sensibility, and the like. However, visual character—the optical nature of the text—impacts cognition and retention the most. Research on this subject will be dealt with in the final blog. The comparison below concerns paper texts and tablets utilizing LCD screens. E-reader tablets utilizing E-ink technology, though, have proven to be more "paper-like" and less computer-like than LCD tablets.
    • Textual character integrity: To the naked eye, the textual character of tablets is comparable to paper texts. But if we look at it in perspective, we understand why the issue exists: the character integrity of early dot matrix printers and/or computer screens was highly flawed by contemporary standards. Depending upon tablet choice, the digital resolution can vary, which can translate into varying ease of reading.
    • Textual contrast: In terms of color composition, tablets and digital imagery can mimic the textual contrast present in paper texts. The residual issue is related to the next element: how the illuminated nature of the majority of tablets impacts the sensory act of reading.
    • Impact of lighting: Without recounting childhood imperatives about not reading in the dark, lighting is a significant factor for both paper texts and tablets. The choice of lighting—incandescent versus fluorescent—is relevant when considering paper texts; the nature of the tablet's illumination is equally significant. Early computer screens dealt with issues of flickering; while that phenomenon is not an issue with tablets per se, there remains an issue regarding the light they emit. As noted above, this issue does not apply in the same way to certain readers, which utilize E-ink screens.
  • Cost/$$$$: Prior to tablets, this factor primarily centered on choices of hardbound versus paperback and new versus used. However, digital publishing and the Internet have ushered in a range of new choices. While cost is not really a factor when considering content cognition and retention, it is a factor faced by educational institutions when deciding how to use their scarce financial resources.
    • New, used, and rent: The issue of new versus used only obtains for paper texts, not for tablets. The costs for digital content, though, initially were far below those of paper texts. The shipment of paper texts, naturally, added costs to this equation, whereas digital texts can simply be downloaded. Colleges made it exceptionally easy to sell back college texts (generally at a substantially lower rate); students can also rent content for tablets, though with time restrictions on use for non-downloadable, Internet-based reading.
    • Editions, subscriptions, and updates: Choices among textual editions have always plagued educators and students. Every several years, a textbook might be updated, which prompted many to obtain the new edition at a slightly elevated price point. This, however, created a countervailing force against the used-book and "rental" market for paper texts. Digital publishing made creating new editions and updates easier, which made the "newer" content easier to access via the Internet. In this new consumer market, some educational institutions are considering maintaining subscriptions to digital content, which enable users to access updated material without making discrete expenditures for individual digital works.

We’ve covered a lot of ground in this blog. With this information as a point of reference, let’s look next at the research and studies that assess the value of traditional books versus tablets.

Craig Lee Keller, Ph.D., Learning Strategist


Books vs. Tablets (Part One): Analog Versus Digital

Over the past year, we have touched upon a range of topics in the field of eLearning. Such topics included flipping the classroom, self-paced learning, gamification, mLearning, and the like. Most of these topics are dependent on the ever-changing world of communications and digital technology. In the context of eLearning, it is almost a given that students, trainees, and others will be utilizing a tablet of one kind or another. One takes it for granted that eLearning practice will follow eLearning theory.  But if we simply follow the technological wave, then the actual learning value of any educational theory or approach is lost. For this series of blogs, the particular issue is the experiential dimension of using various technologies, in particular, the digital tablet. Remember, an important element of most thoughtful approaches to eLearning includes evaluation and improvement.

Content Delivery

Regardless of pedagogical philosophy, educators seek to instruct their charges with ideas. There are different means to communicate this information: from the oral tradition to radio waves, from handwriting to the Internet. Similarly, there are different formats for storing information: from scrolls to typescripts, from clay tablets to electronic tablets. Commentators may highlight digital information while intimating that analog is everything else. Such an understanding, however, detracts from the meaning of the word analog and clouds our ability to better compare different ways to employ and improve different technologies for eLearning.

The word analog is derived from the Greek word ἀνάλογος (analogos), meaning proportionate (the word analogy also is derived from this same source, meaning similar or comparable). Using this definition, data—information—is represented through an analog device using continuous, albeit variable, quantities. The clock is the most frequent example of an analogical device. The hands of the clock progressively move in relation to the passing of time; a ruler, similarly, is an analogical device showing demarcations of distance. For eLearning, that is, "electronic" learning, the use of analog draws upon its meaning in electricity. For example, sound is propagated through waves that create vibrations in devices that etch grooves into, say, a wax cylinder or vinyl record based on varying degrees of voltage and resistance. However, much of the data/information communicated in the context of eLearning is textual.

Contrary to what one might suspect, textual data is never analogical; it is always symbolic. That is, textual data uses alphabetic characters to represent letters that represent sounds and, ultimately, words to represent ideas. There is no relationship between the symbol(s) and the vocalized letter/word in non-ideographic text. Tools such as a stylus, an ink pen, or a printing press can create and store texts in various formats; texts can also be stored using binary codes that represent different letters, numbers, and words—that is, ideas. In short, textual information is solely symbolic; the key factor is the means of storing textual information. For eLearning, it is useful to determine the value of using one form of textual storage versus another: the book versus the tablet (or computer screen).
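To make the point concrete, here is a tiny sketch of how a word is reduced to binary codes. The mapping shown is standard Unicode/ASCII; nothing here is specific to a particular tablet or e-reader.

```python
# A minimal illustration of the point above: text is stored symbolically, as binary
# codes that stand for characters, not as any analog of the sounds or ideas themselves.
for ch in "Book":
    code = ord(ch)                          # the character's numeric code point
    print(ch, code, format(code, "08b"))    # e.g., B 66 01000010
```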

Information Format

As noted, the format of storing information has dramatically changed over the years. Revolutions in textual creation have occurred only three times: mechanical papermaking, the printing press, and digital publishing. Each revolution impacted a different element in the production and storage of information: paper—the substrate for imprints; printing press—the mechanism for imprinting texts; and, digital publishing—the means of distributing textual products. Of course, there have been a variety of formats and technologies throughout this entire period as well: linotype and newspapers, paperback books, et cetera. The important factor for us revolves around format ease of use and efficacy.

Let’s look at the various factors that determine whether or not a given format for information is easy to use and, importantly, whether or not the format of information hinders or promotes cognitive comprehension and/or retention. The following categories should be considered when analyzing a composite text:

  • Physical Character
    • Portability and weight
    • Durability
  • Visual Character
    • Textual character integrity
    • Textual contrast
    • Impact of lighting
  • Tactile Character
    • Engagement of composite text
    • Dexterity and style of manipulation
    • Use of writing implements
    • Sensory tactile sensibility

Next, let’s look at some of these factors as we begin to assess the value of traditional books versus tablets.

Craig Lee Keller, Ph.D., Learning Strategist

Experience API (Part 6)

The Future of ADL and Experience API

The ADL (Advanced Distributed Learning Initiative) continues to be the sole authority coordinating and directing activities associated with Experience API, otherwise known as xAPI. But what does the future hold? To look at the future, let's recall ADL's current roles.

While being the "Thought Leader," ADL stimulates major advances by proffering its Broad Agency Announcements (BAA); in fact, about half of the research and development for ADL takes place outside the confines of government, conducted by private businesses and universities. Such coordination includes organizing a variety of communities of practice, International Defense Coordination, the Defense ADL Advisory Committee, and the ADL Global Partnership Network.

For FY 17, the ADL focused on the following topics of interest in its BAA:

  1. xAPI integration with simulation, teams
  2. Persistent Learning Profiles for Lifelong Learner Data
  3. Implementing and Testing xAPI Profiles
  4. TLA Ontologies for Semantic Interoperability
  5. Infrastructure Security
  6. Other Innovations

An extremely important facet of ADL's work is facilitating Communities of Practice (CoPs). A CoP is a "group of practitioners connected by a common cause, role or purpose, which operates in a common modality." The CoPs create common rules and documentation (profiles), vocabularies, and "recipes" (the syntactic format).


For the variety of different CoPs: For incredibly interesting work in the field of mobile computing, look at the CoP for the Actionable Data Book.

The ADL’s International Collaboration spans international military organizations:

  1. NATO Training Group (North Atlantic Treaty Organization)
  2. Partnership for Peace Consortium (Over 800 institutions in 60 countries that focus on issues of defense and international security)
  3. Technical Cooperation Program (Australia, Canada, New Zealand, the United Kingdom, and the United States)

Similarly, its Global Partnerships include countries as diverse as Canada, Finland, Korea, and Romania.

While xAPI is being developed and nurtured on a daily basis under the auspices of the partnerships noted above, much of the attention and work is focused on expanding xAPI to a variety of new educational and training applications for an increasing number of professions, for example, emergency medical technician training. Not too snazzy, but that’s the way a lot of science progresses—expansion and refinement of a given paradigm. For the most recent DoD update (DoD Instruction 1322.26 on Distributed Learning, October 4, 2017), cut and paste the following link to your web browser:

For new ideas, one should look at the cutting edge issues addressed in a recent ADL conference in collaboration with the National Training and Simulation Association: iFest 17. To see its agenda, cut and paste the following link to your web browser:

Current Fears, Science Fiction, and xAPI

The area of xAPI advancement I find most interesting and provocative is artificial intelligence (AI). Generally speaking, xAPI facilitates improvements in training along with various advancements in educational analytics, et cetera. As one might imagine, AI already is being used in training applications that utilize xAPI—think of various simulations and virtual reality trainings. At this time, concern over AI does not so much focus on xAPI but rather on its military uses. In a recent blog published by Saffron Interactive, Priyanka Kadam broaches the issue of a "ban on automated deathbots." Kadam continues the blog by discussing useful AI applications in the world of education and training, witness xAPI. (

Founder of Tesla and the startup OpenAI, Elon Musk and other AI/robotics visionaries and/or founders have called upon the United Nations to ban autonomous weapons. Musk and a group of 116 tech leaders are worried about a burgeoning arms race of automated drones, tanks, and the like. One initial concern focuses on the beginnings of a new arms race, problematic in and of itself. Another concern relates to the changing character and speed of conflict not to mention the potential for black hat “hacking” into these military systems. But why would this even be an issue regarding xAPI?


Let’s think of a common example, drones. The current military utilization of drones is monitored by humans and is not autonomous. However, there’s already an aspect of drone functioning that is autonomous, that is, through various AI programs in the software and/or simply algorithms. Humans monitor drones and make decisions of whether or not to engage a potential target based on data screened through AI/algorithms. Given the problematic character of human attention, reaction, and other issues, generally speaking, this can serve to reduce human error.

Trends and Upcoming eLearning Conferences

When looking toward the future and trends, conferences are a great place for investigation. These days most academic and higher education conferences generally have sessions that touch upon eLearning. For example, a big topic over the years at the American Historical Association and the American Studies Association is the concept of the “Digital Humanities.” This topic is just like it sounds: how to utilize various computer and web-based technologies in the teaching and promotion of the humanities in academic and public settings. You guessed it—the material covered in these sessions generally is not “cutting edge” and simply conveys the application of mainstream ideas and technology for use in the “trenches.”

The conferences that really focus on new technological trends are those specifically geared toward the professionals tasked with the job of setting up eLearning at their respective institutions, be it within academia, government, or the private sector. While there are general, broad-based conferences in the field of eLearning, there are also more specialized conferences in sub-fields, for example, professional training or for Chief Information Officers.

Let's list some of the conferences remaining during 2017 and in early 2018—including one we just missed: October 30–November 1: mLearn 2017: 16th World Conference on Mobile and Contextual Learning (Larnaca, Cyprus).

November 2017

  • November 16–18: 10th Annual International Conference on Education, Research & Innovation: 10 Years Building the Future of Learning (Seville, SP)

December 2017

  • December 6–8: OEB Global 2017: Learning Uncertainty (Berlin, GER)

January 2018

  • January 24–26: Association for Talent Development TechKnowledge Conference (San Jose, CA)
  • January 31–February 2: Human Capital Management Excellence Conference (Palm Beach Gardens, FL)

February 2018

  • February 11–14: The Instructional Technology Council eLearning Conference (Tucson, AZ); Keynote Speaker: John Landis, Apple Learning

March 2018

  • March 2–3: 11th International Conference on eLearning & Innovative Pedagogies: Digital Pedagogies for Social Justice (New York, NY)
  • March 5–7: 12th Annual International Education, Technology, & Development Conference: Rethinking Learning in a Connected Age (Valencia, SP)

The following conferences are sponsored by the eLearning Guild, an eLearning organization for information, networking, and community:

March 27–29, 2018: Learning Solutions 2018 Conference & Expo (Orlando, FL)

June 26–28: 2018 Realities360 Conference (San Jose, CA)

October 24–26: DevLearn 2018 Conference & Expo (Las Vegas, NV)

The eLearning Guild conferences are major events, but each one is directed toward a different target population. The Learning Solutions conference focuses on developing knowledge and skill sets for addressing “real life” problems for individuals working in the “trenches.” There are sessions on all of the basic arenas of eLearning, e.g., games/gamification, instructional design, mobile learning, etc. An important part of these sessions is to present “best practices” in various sub-fields.

Per the eLearning Guild, the Realities360 conference focuses on “opportunities presented by virtual reality, augmented reality, and other alternate reality technologies.” This conference is “hands on,” and its Technology Showcase offers participants time to work with the new technologies and engage others as to how it might fit into their own learning needs. An interesting session during the 2017 conference was titled “Wayfinding, Storytelling, and Structuring Interaction in VR.”

Many consider the DevLearn conference to be one of the major events in eLearning each year. DevLearn offers a window into the “cutting-edge” technologies in a wide range of sub-fields and prides itself in showcasing an array of “thought leaders” in the field. Looking back at its 2017 Keynote Speakers:

  • Amy Webb, "Sci-Fi Meets Reality: The Future, Today"
  • LeVar Burton (Actor/Director), "Technology and Storytelling: Making a Difference in a Digital Age"
  • Jane McGonigal, “How to Think Like a Futurist”
  • Glen Keane (Disney Animator/Legend), “Embracing Technology-Based Creativity”

DevLearn has sessions focusing on emerging technology, innovation, and management, among others. It also touched upon the following familiar subject: “Going Beyond SCORM: Using xAPI and WordPress as an LMS.” As you might imagine, a big draw for any of the conferences by the eLearning Guild or any other entity is the vendor showroom, which displays all of the latest strategies and technologies.

Craig Lee Keller, Ph.D., Learning Strategist

Experience API (Part 5)

Tin Can API/Experience API Concept

So, issues of nomenclature notwithstanding, what were some of the key elements Rustici introduced with its Tin Can API? (For a copy of a slightly modified version of the Rustici deliverable to ADL [Tin Can API], cut and paste this link to your web browser: )

Rustici addressed the reality and the administrative needs that exist in our increasingly complex, disaggregated, and decentralized technological world. As noted, yes, there are so many different types of technologies, and, yes, there are so many different types of platforms, and, yes, there are so many different sources of information. Moreover, not all of these learning experiences take place online. So how to capture the range of these "experiences" for the modern learner . . .

The key concept and innovation for Tin Can API and Experience API is as follows.

Whenever a learning moment has to be recorded, documentation of this experience is sent to a Learning Record Store (LRS) using the following format:

The basic notion is “I did this.” This format permits administrators to track when learners begin educational courses/modules, review a given page, answer a question, and/or finish (or fail) a given course of study. While the information might have originated within a proprietary Learning Management System (LMS), the data ultimately is routed to an independent LRS, which then, in theory, could be accessed by other parties and software applications.
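To make the "I did this" idea concrete, here is a minimal sketch of a statement in the actor–verb–object shape the specification uses. The learner's name, e-mail, and course URL are hypothetical placeholders; the verb IRI is drawn from the ADL verb vocabulary.

```python
# A hedged sketch of one xAPI statement: an actor ("I"), a verb ("did"), and an
# object ("this"). Identifiers below are placeholders, not a real course or learner.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",           # hypothetical identifier
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/safety-101",  # hypothetical activity ID
        "definition": {"name": {"en-US": "Safety 101"}},
    },
}
```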

Since an LRS is the ultimate destination for learning data, individuals can learn offline and simply upload their learning data once given an Internet connection. Now, this does not mean that a learner could be reading a hardback book and an article on a PDF reader and magically that information is transmitted to the LRS. Rather, the learning still needs to take place through a digital format that tracks steps taken by the learner. (The reading of a hardback book, in fact, could be added to the LRS, but this would need to be documented and inputted by an administrator.)

Let’s look at some examples:

  1. Peter began the intermediate course for sommeliers
  2. Peter read module 1
  3. Peter scored 50% on module 1 questions
    1. Peter scored 100% on module 1 questions about white wine
    2. Peter scored 0% on module 1 questions about red wine
  4. Peter read a refresher on red wine for module 1
  5. Peter scored 95% on module 1 questions

     .  .  .  .  .  .

     27. Peter achieved competency in Burgundy style wines

This information could have originated from a cell phone, a tablet app, a desktop computer at home, or a school-based workstation. Imagine Peter began the class as an outside student at the US Department of Agriculture and then received a job working at the U.S. Food and Drug Administration. While at FDA, he continued his study as a sommelier, though using a different LMS. His old learning records are still accessible even though the FDA is using a new LMS, since the records are stored in an LRS that is universally accessible using protocols developed by Tin Can API/Experience API.
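As a concrete sketch, Peter's 95% result on module 1 might be posted to an LRS roughly as follows. The /statements resource and the version header come from the xAPI specification; the LRS address, credentials, and course identifier are hypothetical.

```python
# A sketch of sending one of Peter's learning moments to a hypothetical LRS endpoint.
import json
import urllib.request

statement = {
    "actor": {"objectType": "Agent", "name": "Peter", "mbox": "mailto:peter@example.gov"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/passed", "display": {"en-US": "passed"}},
    "object": {"id": "http://example.gov/sommelier/intermediate/module-1"},
    "result": {"score": {"scaled": 0.95}, "success": True},
}

request = urllib.request.Request(
    "https://lrs.example.gov/xapi/statements",   # hypothetical LRS endpoint
    data=json.dumps(statement).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",
        "Authorization": "Basic ...",            # credentials elided
    },
    method="POST",
)
# urllib.request.urlopen(request)  # on success, the LRS returns the new statement's ID
```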

Another important feature of Experience API is that it can record learning data derived from simulations and virtual reality environments. This data, of course, is qualitatively different from other data given its dynamic nature. In this regard, too, Experience API can record data from “groups,” as distinct from individuals, that participate in a learning process. For example, ADL highlights one related element in its portfolio: Hyper-Personalized Intelligent Tutor (HPIT), which “is able to detect non-cognitive factors (e.g., determination, boredom, motivation) in a learner . . .”


Similarly, SAVE (Semantically Automated Assessment in Virtual Environments) "provides a framework for learning procedural skills (e.g., repairing a car, flying an airplane, or shooting/maintaining a weapon system) through simulation."


Apart from the sleek, sexy uses of xAPI [note the devolution into an abbreviation], there are basic, fundamental uses of value regardless of whether or not an organization employs novel gaming training or the like. Welcome to the ADL/DOE Learning Registry (LR) Project. ( There is a huge need for a tool like this—especially within the government or other large and multi-faceted organizations. Imagine an organization having a simple need, say, developing an emergency building evacuation training. Divisions on the east coast may have completely different missions and operations from divisions on the west coast; however, the character of their building evacuation plans will likely be fairly similar, discounting local elements. A training that one division develops can then be used and, perhaps, improved upon by another division. Maintaining a central LR is valuable for leveraging corporate expertise and intellect and for minimizing waste in expenses and time. In fact, many corporations have developed positions specifically for this function: Chief Knowledge Curators.


Credentialing increasingly is becoming an important element that is facilitated through xAPI, especially in government service. Witness the birth of MIL-CRED (Military Micro-Credentials), which is designed to create “a fully vetted, fully automated, personally controlled digital resume.” This project was developed to ease the transition from military to “civilian careers and educational opportunities.”


Administrators using xAPI can generate meta-data drawn from different groups of students over periods of time. This can be valuable in terms of fine-tuning elements of educational content and course focus. Ultimately, xAPI was built to document the relationship between training and job performance, which, for administrators, managers, and supervisors, is a key if not the key element in any program of workplace development.
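
As a sketch of what that kind of roll-up might involve (the helper function and its parameters are hypothetical, though the since filter is part of the xAPI specification), an administrator could pull a period’s worth of statements and average the scaled scores per activity:

```typescript
// Average the scaled score for each activity across all statements
// recorded since a given ISO 8601 timestamp.
async function averageScores(endpoint: string, auth: string, since: string) {
  const res = await fetch(`${endpoint}/statements?since=${since}`, {
    headers: { "X-Experience-API-Version": "1.0.3", Authorization: auth },
  });
  // (A real LRS response is paged via its "more" link; paging is omitted here.)
  const { statements } = await res.json();

  const totals: Record<string, { sum: number; count: number }> = {};
  for (const s of statements) {
    const scaled = s.result?.score?.scaled;
    if (scaled === undefined) continue;
    const id =;
    totals[id] ??= { sum: 0, count: 0 };
    totals[id].sum += scaled;
    totals[id].count += 1;
  }

  // activity ID -> average scaled score (0.0 to 1.0)
  return Object.fromEntries(
    Object.entries(totals).map(([id, t]) => [id, t.sum / t.count])
  );
}
```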

Next step: Actually, the next and final step is to look at the future of Experience API (xAPI) and the current collaborations and research initiatives of the ADL.

Experience API (Part 4)

O.K., last week we finished up with SCORM, which paved the way for a discussion about Tin Can API and, yes, Experience API.  Let’s get right into it . . .

SCORM had peaked in its level of development and value, and the ADL (Advanced Distributed Learning Initiative) decided that a newer version of SCORM would not meet its continuing needs. As such, in 2011, ADL issued a contract to investigate, research, and basically re-think SCORM in order to advance its mission and goals. The Nashville-based business Rustici Software won this contract, and the firm initiated its work by starting a conversation, a conversation that became Project Tin Can.

Project Tin Can

Rustici termed the research phase of the contract Project Tin Can. They embraced the image and notion of tin-can communication to convey the two-way exchange between Rustici and the eLearning community.

Per Rustici, this process included seeking information through five different avenues:

  • Input from hundreds of xAPI stakeholders;
  • Interviews with key industry leaders;
  • LETSI SCORM 2.0 White Papers (this effort was, in many ways, a precursor of Project Tin Can; for an archive of these papers, see the Rustici site);
  • Interactions with then-current Rustici customers; and,
  • The ADL contract specifications.

A Rose By Any Other Name . . . Tin Can API/Experience API/xAPI

The Project Tin Can research produced Tin Can API, which was a qualitative successor to SCORM and an early version of the continually evolving Experience API. xAPI, then, is simply an acronym for Experience [eXperience] API, neither a successor to nor a different version of Experience API.

It really seems confusion arose, and still arises, from the period when Tin Can and Experience API were virtually synonymous. This was the period of, and the immediate years following, Rustici’s submission of its deliverables to the ADL. At that time, perhaps understandably, Rustici stated:

“ADL will be transferring ownership of the spec to a public standards body after v1.0 is complete this spring. After that transfer, we don’t expect the official government name ‘Experience API’ to last much longer [emphasis added].”


They had branded their process and deliverable with the “Tin Can” name, and their work was widely known by many in the industry as the Tin Can API. Yet the ADL used the name Experience API in its contract specifications and in its continuing usage. Experience API is now the pervasive name, and “Tin Can” is only formally used in reference to Rustici’s original contract work. Indeed, Rustici later called its response to the ADL contract “Project xAPI.”


Ownership versus Web Domains

The ADL awarded the contract—the BAA [Broad Agency Announcement, which, in general, is for basic and applied research and development]—to Rustici and, as such, the work derived from that contract was and is the property of the United States Government. The issue of name “ownership” publicly arose in a May 2012 Google Group discussion:!topic/tincanapi-info/q87uy3XJXX8

The concern centered on the Rustici trademark petition for the names “Tin Can” and “Project Tin Can.” No less a figure than Rustici President Mike Rustici weighed in to assure writers that the company had no proprietary claim on the use of “Tin Can” and sought trademark status only to prevent the name from being “pirated” by others who might be less community-minded.

The Google Group discussion continued on the topic of whether or not Rustici would use the “Tin Can” moniker in any of its future commercial enterprises; to wit, Rustici replied that it would, though the company would not prevent others from doing so.

Toward this end, while Tin Can, Experience API, and, for that matter, SCORM are names under government “contract,” as it were, Rustici owns the web domains for, which is redirected to another one of their web domains,; they also own the web domain of. On those domains, they clearly attribute the administration, ownership, and stewardship of the respective names to the ADL, while also offering services for companies seeking to utilize the SCORM and/or Experience API specifications. For a response by Rustici Software on this subject, please see: and

Note: To be clear, the above comments are not intended to take away from any of Rustici Software’s groundbreaking work in the field of eLearning; rather, they are included simply to clarify distinctions amongst terms and the like.

Next step: Finally, a focused discussion on Tin Can API/Experience API innovations and their evolution.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Experience API (Part 3)

In our last blog, we further detailed the foundation for ADL and its areas of research. One of these areas of research is the Total Learning Architecture, which provides the basis for interoperability between different systems. One of the results of this work was the Sharable Content Object Reference Model (SCORM). The initial edition of SCORM was released in January 2000, with a couple of further SCORM iterations produced in 2001. A new version of SCORM was introduced in January 2004, and DOD made SCORM use mandatory in 2006. In total, there have been four editions of SCORM 2004. The next generation of SCORM arose in 2010 with Project Tin Can, but we’re getting a little ahead of ourselves.

SCORM or What do you mean by a Sharable Content Object Reference Model?

To understand SCORM, let’s break it down into its constituent elements.

  • Sharable Content Object (SCO)—an object is the means of relating various pieces of data and their values. For us, this refers to an element within a learning system, for example, a question or an image. Each “object” is a part of the larger educational program. The desire and demand to make objects “sharable” is linked back to our original quest for interoperability. In one sense, think of it as a specific lesson or module in an on-line course.
  • Reference Model—by reference, SCORM is referring to a computer term of art, that is, the means of finding specific data located on a computer hard drive or, increasingly, on a cloud-based server. In short, a reference provides the basis for discerning a physical location for information. Yes, there are all of these 0000s and 1111s out there in the digital world, so wouldn’t it be nice to be able to keep track of them? By reference model, SCORM is creating rules and protocols for references in the context of sharing that information with other Learning Management Systems (LMSs).

To better understand, let’s look at how software designers create their programs. I remember writing programs in the defense industry. I already knew BASIC and easily learned FORTRAN in addition to LOCUS (an early proprietary spreadsheet program). My work was a mess, truly. LOL! I knew how to program, but insisted on writing my programs without a flowchart—breaking rule number 1. Anyway, you can imagine all of the problems I faced.

There are other practices in computer science that make it easier to write, track, and modify code. One of these approaches is object- or class-based programming. Instead of lumping all the data together, in object-based programming the programmer defines a group of fields or attributes, which then provides the basis for relating actual data values and their associated operations and/or methods. This type of organization provides the basis for a commonality that can be shared amongst different programs. That is, if a data value, its characterization, and its associated operations can be made uniform, then different programs are capable of utilizing that same digital information. SCORM is about creating the basis for doing just that.
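
As a toy illustration of this idea (the class and its fields are invented for the example, not part of SCORM itself), the fields and operations are defined once, and any program that agrees on this shape can exchange and use instances of it:

```typescript
// A tiny class: the fields (prompt, choices, correctIndex) and the operation
// (isCorrect) are defined together, so any program sharing this definition
// can interpret the data the same way.
class QuizQuestion {
  constructor(
    public prompt: string,
    public choices: string[],
    public correctIndex: number
  ) {}

  isCorrect(answerIndex: number): boolean {
    return answerIndex === this.correctIndex;
  }
}

const q = new QuizQuestion(
  "Which grape is used in white Burgundy?",
  ["Chardonnay", "Merlot", "Riesling"],
  0
);
console.log(q.isCorrect(0)); // true
```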

Now imagine trying to perform all of these functions while transmitting the information through the Internet in the context of a client-server relationship. Sharing information through a cycle of request and response from the client (you) to the server (the repository of data and, generally, the program) gets complicated enough. Imagine trying to force-feed your information into a different LMS. Whhhew! You get the picture. Yes, the horror, as it were. So, again, that’s the basis for creating SCORM.

SCORM Protocols

Let’s be clear. SCORM is neither a software program nor a programming language. Rather, SCORM provides standards for data and programming that make it possible to have data sets that are interchangeable amongst differing LMSs. So, software designers are extremely mindful to utilize SCORM when designing and coding their proprietary LMSs. There have been numerous limitations to SCORM. Why? It’s simple: trial and error. Software designers within and without the government have found flaws or limitations in the SCORM protocols, which, of course, gave rise to successive iterations of SCORM. The SCORM protocols are the rules utilized by different Application Programming Interfaces (APIs). An API is the part of a program that facilitates communication between different computer systems. API and software developers use SCORM to create the standard of interoperability for eLearning systems. Now there exists a SCORM API, but that is just one of many forms of an API.
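
To give a flavor of what those protocols look like from the content side, here is a rough sketch of the kind of exchange a SCORM 1.2 course makes with its LMS at run time. The call names and data-model elements come from the SCORM 1.2 specification; error handling and the code that locates the API object are omitted.

```typescript
// The LMS exposes a runtime object (declared here as an ambient global);
// the content communicates with it through a small set of standardized calls.
declare const API: {
  LMSInitialize(arg: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: string): string;
  LMSFinish(arg: string): string;
};

API.LMSInitialize("");                               // begin the session
API.LMSSetValue("cmi.core.score.raw", "95");         // report a score
API.LMSSetValue("cmi.core.lesson_status", "passed"); // report completion status
API.LMSCommit("");                                   // ask the LMS to persist the data
API.LMSFinish("");                                   // end the session
```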

A major SCORM component was adopted with the 2004 version. Researchers with the ADL created the notion of “sequencing.” The sequencing protocol lets course designers specify that learners may only experience content objects in a given order. This can be valuable, but it also can be a limitation.

Next step: The movement away from SCORM toward Tin Can API and Experience API.

Craig Lee Keller, Ph.D., JAG Learning Strategist