
Books vs. Tablets (Part Two): Cultural Context and Experiential Comparison

Last time, we broached the topic of comparing the value of traditional paperbound texts—books—versus computer tablets. Before making an experiential comparison, let’s look at the cultural context as to why this might even be an issue . . .

Cultural Context

The use of computerized tablets in education is quite new. Let’s take a quick look back: prior to the iPad—generally considered the first mass-market computer tablet—there were other portable electronic devices that used screens, some used for play and others for work. The first widely accessible handheld devices were used for gaming; such devices, of course, were the next step from desktop computer gaming, which supplanted the rows of pinball machines and the newer arcade games utilizing screens, such as Pac-Man. By design, handheld devices promoted mobility and, for their youthful users, an opportunity to fixate on their gaming challenges.

Handhelds used for work were designed for a different target population, an older user juggling a range of responsibilities: witness the birth of the computerized personal assistant. While the Apple Newton is a progenitor in this field, the major player was the Palm Pilot, which stored colleagues’ contact information, generated to-do lists, and plotted activities in a calendar capable of providing audible reminders. The Palm was directly comparable to traditional weekly paper organizers, but with an added function: global search capacity. Soon, as expected, competitors arose, such as Microsoft’s Palm PC, which I actually used for Palm-like functions as well as mobile data entry. Here, as with desktops, we see the burgeoning field of software companies utilizing host operating systems to create applications, or apps, to tailor a handheld device to the user’s needs.

During this time period, a cultural divide began to take shape between those who embraced the new devices and technologies and those who did not or, at the very least, remained grumbling if not fumbling, ambivalent users. The advocates saw themselves carrying the technological flag of the future, while others criticized handheld devices on grounds of cost, durability, and their departure from basic sensibility. As with the rise of most impactful technologies, this divide was temporal, pitting advocates of “progress” against those of “tradition.”

This technology advanced, though, and generated what many viewed as a new type of social anxiety: Information Overload. How could one keep up with the explosion of information facilitated by the Internet? Was it any easier that all of this information was funneled and/or compacted into new handheld devices, that is, tablets? While the felt anxiety was real, the claim of it being a new type of anxiety was not. Remember the printing press, for example? There are scholarly articles that discuss the issue of information anxiety back in that period too. In addition to information overload, an associated anxiety was generated due to matters of technological competency.

As a result of this division, the cultural divide fed into a cycle of nostalgia and an appreciation of various retro markets. Think of the appeal of Moleskine notepads. This dynamic, while in part temporal, did not and does not fall along generational lines. Tradition, and in the minds of some the analog, is striking back against the pace of technological progress; this, in turn, has fed the slow living movement, which seeks in part to reclaim one’s sensory appreciation and personal values in the face of the “speed” of modern living.

Regardless of the cultural divide, computer tablets are here to stay. We use them when swiping our credit cards and penning our signature at the new coffee shop. Indeed, it is most telling, perhaps, that Moleskine, originally founded in 1997 as Modo & Modo, began marketing the means of syncing paper and digital planners.

During the consumer ascendency of computerized personal assistants, there was similar growth in the worlds of cell phones and laptop computers. Cell phones left their clamshell avatar and began to adopt the features of the computerized personal assistant; similarly, laptops sought to claim the computing power of desktops while maintaining their portable, less hefty profile. It’s not surprising, then, that tablets sought to incorporate elements of the new cell phones and laptops.

Experiential Comparison

The discussion below is neither intended as a scientific study nor a technical analysis; rather, it is solely intended to introduce the types of issues one might consider when actually performing such a study or analysis.

Types of Tablets: To begin our comparison, let’s look at the different types of tablets. The focus will not be on proprietary brands or operating systems; rather, one should look at capabilities. Tablets can be broken down into two primary categories: software-application-driven tablets with general computing power versus tablets that function primarily as “readers,” that is, E-readers. While initially not the case, both categories have now incorporated Internet capacity as one of their primary features.

  • Function
    • Content Delivery: Both books and tablets deliver content, though books are more closely akin to E-readers. Both may have indices; only tablets have a search capacity (restricted by information format).
    • Bibliographic Sources: Both books and tablets have bibliographic guides or references. Tablets with Internet capacity can access certain sources immediately, and with the rise of Internet features such as Google Books, tablets can even access otherwise distant content not available in digital format.
    • Referencing: Tablets with Internet capacity can offer content cross-referencing using hyperlink functions.
  • Physical Character
    • Portability and weight: Books and tablets are comparable in portability. Tablets, however, on the whole will be lighter, especially given their storage capacity. Even a single large academic textbook is likely to be heavy and more of a burden to transport.
    • Durability: Books are more durable than tablets in most ways, including their lower likelihood of significant breakage. While users may be able to retrieve content from backup storage, damage to a tablet is more financially impactful than damage to a book.
    • Storability: Given their digital content format, tablets have far, far more storage capacity than books. Problems exist in storing digital content for long periods of time, but similar, albeit different, problems exist in storing paper book content for long periods of time.
  • Tactile Character: For individuals who grew up reading paper books, magazines, and newspapers, the tactile character of the text can be extremely important. This is the ineffable feel and sense of the reading material, an element of the book versus tablet comparison that may very well be generational.
    • Engagement of composite text: What is it like to hold the text? One- or two-handed? At a table, in your chair, or on your lap? In many ways, tablets are easier to use, since one need not worry about folding pages; on the other hand, one cannot be as cavalier, as it were, when holding a tablet as when holding a book, given the high cost of dropping the tablet.
    • Dexterity and style of manipulation: Manipulating a tablet is as easy as flipping through the pages of a book. Again, the preference is likely generational; however, one need not “dog-ear” a book page if one can simply create a digital bookmark.
    • Use of writing implements: Underlining/highlighting passages and marginalia are standard tools of the trade for any student. For some it might be easier to mark up paper texts, given the ability to cradle the book and write with precision, but it’s probably only a matter of time until tablets can mimic that sense too. On the other hand, an E-reader can “mark up” pages without devaluing the text the way such markings devalue a paper text. Digital writing implements have gone through an evolution concerning their precision; tablet screens are going through a comparable evolution from resistive to capacitive touch-screens. In addition, tablet software has made incredible strides in handwriting and voice recognition.
    • Sensory tactile sensibility: This, for many, is the deal breaker when looking at the entire comparison. Generally speaking, tablets don’t, and never have attempted to, mimic the tactile feel of using paper. For those who grew up reading and using print, this is really hard to get over (though most would happily give up the paper cuts).
  • Visual Character: Visual character is the most significant element to consider when analyzing books versus tablets in the context of education. The other elements dealt largely with matters of preference, sensibility, and the like. However, visual character—the optical nature of the text—impacts cognition and retention the most. Research on this subject will be covered in the final blog. The comparison below concerns paper texts and tablets utilizing LCD screens. E-reader tablets utilizing E-ink technology, though, have proven to be more “paper-like” and less computer-like than LCD tablets.
    • Textual character integrity: To the naked eye, the textual character of tablets is comparable to paper texts. But viewed in historical perspective, we understand why the issue exists: the character integrity of early dot matrix printers and computer screens was highly flawed by contemporary standards. Depending upon tablet choice, the display resolution can vary, which translates into varying ease of reading.
    • Textual contrast: In terms of color composition, tablets and digital imagery can mimic the textual contrast present in paper texts. The residual issue relates to the next element: how the illuminated nature of the majority of tablets impacts the sensory act of reading.
    • Impact of lighting: Without recounting childhood imperatives about not reading in the dark, lighting is a significant factor for both paper texts and tablets. The choice of lighting—incandescent versus fluorescent—is relevant for paper texts; the nature of the tablet’s illumination is equally significant. Early computer screens dealt with issues of flickering; while that phenomenon is not an issue with tablets per se, there remains an issue regarding the light they emit. As noted above, this issue does not obtain in the same way for certain E-readers, which utilize E-ink screens.
  • Cost/$$$$: Prior to tablets, this factor primarily centered on choices of hardbound versus paperback and new versus used. However, digital publishing and the Internet have ushered in a range of new choices. While cost is not really a factor when considering content cognition and retention, it is a factor faced by educational institutions when deciding how to use their scarce financial resources.
    • New, used, and rent: The issue of new versus used only obtains for paper texts, not for digital content. The cost of digital content, however, initially was far below that of paper texts. The shipment of paper texts, naturally, adds costs to this equation, whereas digital texts can simply be downloaded. Colleges made it exceptionally easy to sell back college texts (generally at a substantially lower rate); students can likewise rent content for tablets, subject to time restrictions on non-downloadable, Internet-based reading.
    • Editions, subscriptions, and updates: Choices among textual editions have always plagued educators and students. Every several years, a textbook might be updated, which prompted many to obtain the new edition at a slightly elevated price point. This, however, created a countervailing force against the used-book and “rental” market for paper texts. Digital publishing has made creating new editions and updates easier, making “newer” content easier to access via the Internet. In this new consumer market, some educational institutions are considering subscriptions to digital content, which enable users to access updated material without making discrete expenditures for individual digital works.

We’ve covered a lot of ground in this blog. With this information as a point of reference, let’s look next at the research and studies that assess the value of traditional books versus tablets.

Craig Lee Keller, Ph.D., Learning Strategist

 

Books vs. Tablets (Part One): Analog Versus Digital

Over the past year, we have touched upon a range of topics in the field of eLearning. Such topics included flipping the classroom, self-paced learning, gamification, mLearning, and the like. Most of these topics are dependent on the ever-changing world of communications and digital technology. In the context of eLearning, it is almost a given that students, trainees, and others will be utilizing a tablet of one kind or another. One takes it for granted that eLearning practice will follow eLearning theory.  But if we simply follow the technological wave, then the actual learning value of any educational theory or approach is lost. For this series of blogs, the particular issue is the experiential dimension of using various technologies, in particular, the digital tablet. Remember, an important element of most thoughtful approaches to eLearning includes evaluation and improvement.

Content Delivery

Regardless of pedagogical philosophy, educators seek to instruct their charges with ideas. There are different means to communicate this information: from the oral tradition to radio waves, from handwriting to the Internet. Similarly, there are different formats for storing information: from scrolls to typescripts, from clay tablets to electronic tablets. Commentators may highlight digital information, while intimating that analog is everything else. Such an understanding, however, detracts from the meaning of the word analog and clouds our ability to better compare different ways to employ and improve different technologies for eLearning.

The word analog is derived from the Greek word ἀνάλογος (analogos), meaning proportionate (the word analogy also is derived from this same source, meaning similar or comparable). Using this definition, data—information—is represented through an analog device using continuous, albeit variable quantities. The clock is the most frequent example of an analogical device. The hands of the clock progressively move in relation to the passing of time; a ruler, similarly, is an analogical device in showing demarcations of distance. For eLearning, that is “electronic” learning, the use of analog draws upon its understanding in electricity. For example, sound is propagated through waves that create vibrations in devices that etch grooves into, say, a wax cylinder or vinyl record based on varying degrees of voltage and resistance. However, much of the data/information communicated in the context of eLearning is textual.

Contrary to what one might suspect, textual data is never analogical; it is always symbolic. That is, textual data uses alphabetic characters to represent sounds and, ultimately, words that represent ideas. There is no relationship between the symbol(s) and the vocalized letter/word in non-ideographic text. Tools such as a stylus, an ink pen, or a printing press can create and store texts in various formats; texts can also be stored using binary code that represents different letters, numbers, and words, that is, ideas. In short, textual information is solely symbolic; the key factor is the means of storing textual information. For eLearning, it is useful to determine the value of using one form of textual storage versus another: the book versus the tablet (or computer screen).
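To make the symbolic point concrete, here is a minimal sketch (in Python, purely illustrative) of how a letter is stored as a conventional numeric code rather than as anything proportionate to its sound or shape:

```python
# The letter "A" is stored as a pure symbol: code point 65 in ASCII/Unicode,
# i.e., the bit pattern 01000001. Nothing about those bits is "proportionate"
# to the letter; the mapping is entirely a matter of convention.
ch = "A"
print(ord(ch))                  # 65
print(format(ord(ch), "08b"))   # 01000001
```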

Information Format

As noted, the format of storing information has dramatically changed over the years. Revolutions in textual creation have occurred only three times: mechanical papermaking, the printing press, and digital publishing. Each revolution impacted a different element in the production and storage of information: paper—the substrate for imprints; printing press—the mechanism for imprinting texts; and, digital publishing—the means of distributing textual products. Of course, there have been a variety of formats and technologies throughout this entire period as well: linotype and newspapers, paperback books, et cetera. The important factor for us revolves around format ease of use and efficacy.

Let’s look at the various factors that determine whether or not a given format for information is easy to use and, importantly, whether or not the format of information hinders or promotes cognitive comprehension and/or retention. The following categories should be considered when analyzing a composite text:

  • Physical Character
    • Portability and weight
    • Durability
  • Visual Character
    • Textual character integrity
    • Textual contrast
    • Impact of lighting
  • Tactile Character
    • Engagement of composite text
    • Dexterity and style of manipulation
    • Use of writing implements
    • Sensory tactile sensibility

Next, let’s look at some of these factors as we begin to assess the value of traditional books versus tablets.

Craig Lee Keller, Ph.D., Learning Strategist

Experience API (Part 6)

The Future of ADL and Experience API

The ADL (Advanced Distributed Learning Initiative) continues to be the sole authority coordinating and directing activities associated with Experience API, otherwise known as xAPI. But what does the future hold? To look at the future, let’s remember its current roles.

As the “Thought Leader,” ADL stimulates major advances by proffering its Broad Agency Announcements (BAA); in fact, about half of the research and development for ADL takes place outside of the confines of government, by private businesses and universities. Its coordination includes organizing a variety of communities of practice, International Defense Coordination, the Defense ADL Advisory Committee, and the ADL Global Partnership Network.

For FY 17, the ADL focused on the following topics of interest in its BAA:

  1. xAPI integration with simulation, teams
  2. Persistent Learning Profiles for Lifelong Learner Data
  3. Implementing and Testing xAPI Profiles
  4. TLA Ontologies for Semantic Interoperability
  5. Infrastructure Security
  6. Other Innovations

An extremely important facet of ADL’s work is facilitating Communities of Practice (CoP). A CoP is a “group of practitioners connected by a common cause, role or purpose, which operates in a common modality.” CoPs create common rules and documentation (profiles), vocabularies, and “recipes” (the syntactic format).

(https://adl.gitbooks.io/companion-specification-for-xapi-vocabularies/content/relationship_between_vocabularies,_profiles,_and_r.html)

For the variety of different CoPs, see: https://www.adlnet.gov/adl-collaboration/xapi-community-of-practice. For incredibly interesting work in the field of mobile computing, look at the CoP for the Actionable Data Book.

The ADL’s International Collaboration spans international military organizations:

  1. NATO Training Group (North Atlantic Treaty Organization)
  2. Partnership for Peace Consortium (Over 800 institutions in 60 countries that focus on issues of defense and international security)
  3. Technical Cooperation Program (Australia, Canada, New Zealand, the United Kingdom, and the United States)

Similarly, its Global Partnerships include countries as diverse as Canada, Finland, Korea, and Romania.

While xAPI is being developed and nurtured on a daily basis under the auspices of the partnerships noted above, much of the attention and work is focused on expanding xAPI to a variety of new educational and training applications for an increasing number of professions, for example, emergency medical technician training. Not too snazzy, but that’s the way a lot of science progresses—expansion and refinement of a given paradigm. For the most recent DoD update (DoD Instruction 1322.26 on Distributed Learning, October 4, 2017), cut and paste the following link to your web browser:

http://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodi/132226_dodi_2017.pdf?ver=2017-10-05-073235-400

For new ideas, one should look at the cutting edge issues addressed in a recent ADL conference in collaboration with the National Training and Simulation Association: iFest 17. To see its agenda, cut and paste the following link to your web browser:

https://www.adlnet.gov/public/uploads/iFestAgenda_7-17-2017_IG_WSFlyer.pdf

Current Fears, Science Fiction, and xAPI

The area of xAPI advancement I find most interesting and provocative is artificial intelligence (AI). Generally speaking, xAPI facilitates improvements in training along with various advancements in educational analytics, et cetera. As one might imagine, AI already is being used in training applications that utilize xAPI; think of various simulations and virtual reality trainings. At this time, concern over AI does not so much focus on xAPI but rather on AI’s military uses. In a recent blog published by Saffron Interactive, Priyanka Kadam broaches the issue of a “ban on automated deathbots.” Kadam continues by discussing useful AI applications in the world of education and training, witness xAPI. (http://saffroninteractive.com/ai-in-learning/)

Elon Musk, founder of Tesla and the startup OpenAI, and other AI/robotics visionaries and founders have called upon the United Nations to ban autonomous weapons. Musk and a group of 116 tech leaders are worried about a burgeoning arms race of automated drones, tanks, and the like. One initial concern focuses on the beginnings of a new arms race, problematic in and of itself. Another concern relates to the changing character and speed of conflict, not to mention the potential for black hat “hacking” of these military systems. But why would this even be an issue regarding xAPI?

(https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war)

Let’s think of a common example, drones. The current military utilization of drones is monitored by humans and is not autonomous. However, there’s already an aspect of drone functioning that is autonomous, that is, through various AI programs in the software and/or simply algorithms. Humans monitor drones and make decisions of whether or not to engage a potential target based on data screened through AI/algorithms. Given the problematic character of human attention, reaction, and other issues, generally speaking, this can serve to reduce human error.

Trends and Upcoming eLearning Conferences

When looking toward the future and trends, conferences are a great place for investigation. These days most academic and higher education conferences generally have sessions that touch upon eLearning. For example, a big topic over the years at the American Historical Association and the American Studies Association is the concept of the “Digital Humanities.” This topic is just like it sounds: how to utilize various computer and web-based technologies in the teaching and promotion of the humanities in academic and public settings. You guessed it—the material covered in these sessions generally is not “cutting edge” and simply conveys the application of mainstream ideas and technology for use in the “trenches.”

The conferences that really focus on new technological trends are those specifically geared toward the professionals tasked with the job of setting up eLearning at their respective institutions, be it within academia, government, or the private sector. While there are general, broad-based conferences in the field of eLearning, there are also more specialized conferences in sub-fields, for example, professional training or for Chief Information Officers.

Let’s list some of the conferences remaining during 2017 and in early 2018— including one we just missed: October 30–November 1: mLearn 2017: 16th World Conference on Mobile and Contextual Learning (Larnaca, Cyprus).

November 2017

  • November 16–18: 10th Annual International Conference on Education, Research & Innovation: 10 Years Building the Future of Learning (Seville, SP)

December 2017

  • December 6–8: OEB Global 2017: Learning Uncertainty (Berlin, GER)

January 2018

  • January 24–26: Association for Talent Development TechKnowledge Conference (San Jose, CA)
  • January 31–February 2: Human Capital Management Excellence Conference (Palm Beach Gardens, FL)

February 2018

  • February 11–14: The Instructional Technology Council eLearning Conference (Tucson, AZ); Keynote Speaker: John Landis, Apple Learning

March 2018

  • March 2–3: 11th International Conference on eLearning & Innovative Pedagogies: Digital Pedagogies for Social Justice (New York, NY)
  • March 5–7: 12th Annual International Education, Technology, & Development Conference: Rethinking Learning in a Connected Age (Valencia, SP)

The following conferences are sponsored by the eLearning Guild, an eLearning organization for information, networking, and community:

March 27–29, 2018: Learning Solutions 2018 Conference & Expo (Orlando, FL)

June 26–28: 2018 Realities360 Conference (San Jose, CA)

October 24–26: DevLearn 2018 Conference & Expo (Las Vegas, NV)

The eLearning Guild conferences are major events, but each one is directed toward a different target population. The Learning Solutions conference focuses on developing knowledge and skill sets for addressing “real life” problems for individuals working in the “trenches.” There are sessions on all of the basic arenas of eLearning, e.g., games/gamification, instructional design, mobile learning, etc. An important part of these sessions is to present “best practices” in various sub-fields.

Per the eLearning Guild, the Realities360 conference focuses on “opportunities presented by virtual reality, augmented reality, and other alternate reality technologies.” This conference is “hands on,” and its Technology Showcase offers participants time to work with the new technologies and engage others as to how it might fit into their own learning needs. An interesting session during the 2017 conference was titled “Wayfinding, Storytelling, and Structuring Interaction in VR.”

Many consider the DevLearn conference to be one of the major events in eLearning each year. DevLearn offers a window into the “cutting-edge” technologies in a wide range of sub-fields and prides itself in showcasing an array of “thought leaders” in the field. Looking back at its 2017 Keynote Speakers:

  • Amy Webb, “Sci-Fi Meets Reality: The Future, Today”
  • LeVar Burton (Actor/Director), “Technology and Storytelling: Making a Difference in a Digital Age”
  • Jane McGonigal, “How to Think Like a Futurist”
  • Glen Keane (Disney Animator/Legend), “Embracing Technology-Based Creativity”

DevLearn has sessions focusing on emerging technology, innovation, and management, among others. It also touched upon the following familiar subject: “Going Beyond SCORM: Using xAPI and WordPress as an LMS.” As you might imagine, a big draw for any of the conferences by the eLearning Guild or any other entity is the vendor showroom, which displays all of the latest strategies and technologies.

Craig Lee Keller, Ph.D., Learning Strategist

Experience API (Part 5)

Tin Can API/Experience API Concept

So, issues of nomenclature notwithstanding, what were some of the key elements Rustici introduced with its Tin Can API? (For a copy of a slightly modified version of the Rustici deliverable to ADL [Tin Can API], cut and paste this link to your web browser: https://www.adlnet.gov/public/uploads/Experience-API-Release-v0.95.pdf )

Rustici addressed the reality and the administrative needs that exist in our increasingly complex, disaggregated, and de-centralized technological world. As noted, yes, there are many different types of technologies; yes, there are many different types of platforms; and, yes, there are many different sources of information. Moreover, not all of these learning experiences take place online. So how to capture the range of these “experiences” for the modern learner . . .

The key concept and innovation for Tin Can API and Experience API is as follows.

Whenever a learning moment has to be recorded, documentation of this experience is sent to a Learning Record Store (LRS) using a simple, standardized format.

The basic notion is “I did this.” This format permits administrators to track when learners begin educational courses/modules, review a given page, answer a question, and/or finish (or fail) a given course of study. While the information might have originated within a proprietary Learning Management System (LMS), the data ultimately is routed to an independent LRS, which then, in theory, could be accessed by other parties and software applications.
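As a rough sketch of what such a statement looks like in practice, consider the Python snippet below. The LRS address and credentials are hypothetical placeholders; the actor/verb/object shape and the version header follow the public xAPI specification:

```python
import requests

# An "I did this" statement: actor (who), verb (did what), object (to what).
statement = {
    "actor": {"name": "Peter", "mbox": "mailto:peter@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/sommelier/module-1",
        "definition": {"name": {"en-US": "Intermediate Sommelier, Module 1"}},
    },
}

# POST the statement to a (hypothetical) Learning Record Store.
response = requests.post(
    "https://lrs.example.com/xapi/statements",
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("username", "password"),
)
print(response.status_code)  # 200 means the LRS stored the statement
```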

Since an LRS is the ultimate destination for learning data, individuals can learn off-line and simply upload their learning data once given an Internet connection. Now this does not mean that a learner could be reading a hardback book and an article on a PDF reader and magically that information is transmitted to the LRS. Rather, the learning still needs to take place through a digital format that tracks steps taken by the learner. (The reading of a hardback book, in fact, could be added to the LRS, but this would simply need to be documented and inputted by an administrator.)

Let’s look at some examples:

  1. Peter began the intermediate course for sommeliers
  2. Peter read module 1
  3. Peter scored 50% on module 1 questions
    1. Peter scored 100% on module 1 questions about white wine
    2. Peter scored 0% on module 1 questions about red wine
  4. Peter read a refresher on red wine for module 1
  5. Peter scored 95% on module 1 questions

     .  .  .  .  .  .

     27. Peter achieved competency in Burgundy style wines

This information could have originated from a cell phone, a tablet app, a desktop computer at home, or a school-based workstation. Imagine Peter began the class as an outside student at the U.S. Department of Agriculture, and then received a job working at the U.S. Food and Drug Administration. While at FDA, he continued his study as a sommelier, though using a different LMS. His old learning records are still accessible even though the FDA is using a new LMS, since the records are stored in an LRS that is universally accessible using protocols developed by Tin Can API/Experience API.

Another important feature of Experience API is that it can record learning data derived from simulations and virtual reality environments. This data, of course, is qualitatively different from other data given its dynamic nature. In this regard, too, Experience API can record data from “groups,” as distinct from individuals, that participate in a learning process. For example, ADL highlights one related element in its portfolio: Hyper-Personalized Intelligent Tutor (HPIT), which “is able to detect non-cognitive factors (e.g., determination, boredom, motivation) in a learner . . .”

(https://www.adlnet.gov/hpit).

Similarly, SAVE (Semantically Automated Assessment in Virtual Environments) “provides a framework for learning procedural skills (e.g., repairing a car, flying an airplane, or shooting/maintaining a weapon system) through simulation.”

(https://www.adlnet.gov/save)

Apart from the sleek, sexy uses of xAPI [note the devolution into an abbreviation], there are basic, fundamental uses of value regardless of whether or not an organization employs novel gaming training or the like. Welcome to the ADL/DOE Learning Registry (LR) Project. (https://www.adlnet.gov/learning-registry) There is a huge need for a tool like this—especially within the government or other large and multi-faceted organizations. Imagine an organization having a simple need, say, developing an emergency building evacuation training. Divisions on the east coast may have completely different missions and operations from divisions on the west coast; however, the character of their building evacuation plans will likely be fairly similar, discounting local elements. A training that one division develops can then be used and, perhaps, improved upon by another division. Maintaining a central LR is valuable for leveraging corporate expertise and intellect and for minimizing waste in expense and time. In fact, many corporations have developed positions specifically for this function: Chief Knowledge Curators.

(http://www.clomedia.com/2017/05/22/organizations-need-chief-knowledge-curators/)

Credentialing increasingly is becoming an important element that is facilitated through xAPI, especially in government service. Witness the birth of MIL-CRED (Military Micro-Credentials), which is designed to create “a fully vetted, fully automated, personally controlled digital resume.” This project was developed to ease the transition from military to “civilian careers and educational opportunities.”

(https://www.adlnet.gov/mil-cred)

Administrators using xAPI can generate meta-data drawn across different groups of students over periods of time. This can be valuable in terms of fine-tuning elements of educational content and course focus. Ultimately, xAPI was built to document a relationship between training and job performance, which for administrators, managers, and supervisors is a key if not the key element in any program of workplace development.
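As a hedged sketch of what such meta-analysis might look like, the snippet below pulls a year’s worth of “completed” statements from a hypothetical LRS; the query parameters (verb, since, until, limit) are standard xAPI statement filters:

```python
import requests

# Query a (hypothetical) LRS for all "completed" statements recorded in 2017.
response = requests.get(
    "https://lrs.example.com/xapi/statements",
    params={
        "verb": "http://adlnet.gov/expapi/verbs/completed",
        "since": "2017-01-01T00:00:00Z",
        "until": "2017-12-31T23:59:59Z",
        "limit": 500,
    },
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("username", "password"),
)

statements = response.json()["statements"]
print(len(statements), "completions retrieved for analysis")
```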

Next Step: Actually, the next and final step is to look at the future of Experience API (xAPI) and the current collaborations and research initiatives of the ADL.

Experience API (Part 4)

O.K., last week we finished up with SCORM, which paved the way for a discussion about Tin Can API and, yes, Experience API.  Let’s get right into it . . .

SCORM had peaked in its level of development and value, and the ADL (Advanced Distributed Learning Initiative) decided a newer version of SCORM would not meet its continuing needs. As such, in 2011, ADL issued a contract to investigate, research, and basically re-think SCORM in order to advance its mission and goals. The Nashville-based business Rustici Software won this contract, and the firm initiated its work by starting a conversation, a conversation that became Project Tin Can.

Project Tin Can

Rustici termed the research phase of the contract Project Tin Can. They embraced the image and notion of tin-can communication to convey the two-way communication between Rustici and the eLearning community.

Per Rustici, this process included seeking information through five different avenues:

  • Input from hundreds of xAPI stakeholders;
  • Interviews with key industry leaders;
  • LETSI SCORM 2.0 White Papers (this was, in many ways, a precursor of Project Tin Can; for an archive of these papers, see the Rustici site: https://scorm.com/tincanoverview/the-letsi-scorm-2-0-white-papers/);
  • Interactions with then-current Rustici customers; and,
  • The ADL contract specifications.

A Rose By Any Other Name . . . Tin Can API/Experience API/xAPI

The Project Tin Can research produced the Tin Can API, which was a qualitative successor to SCORM and an early version of the continually evolving Experience API. xAPI, then, is simply an acronym for Experience [eXperience] API, neither a successor to nor a different version of Experience API.

It really seems confusion arose and still arises from the period when Tin Can and Experience API virtually were synonymous. This was the period of and the immediate years following Rustici’s submission of its deliverables to the ADL. At that time, perhaps understandably, Rustici stated:

ADL will be transferring ownership of the spec to a public standards body after v1.0 is complete this spring. After that transfer, we don’t expect the official government name “Experience API” to last much longer [emphasis added].

(https://experienceapi.com/we-call-it-tin-can/)

They had branded their process and deliverable with the “Tin Can” name, and their work was widely known by many in the industry as the Tin Can API.  Yet, the ADL used the name Experience API in their contract specifications and in their continuing usage. Experience API is the pervasive name that is used, and the name “Tin Can” is only formally used in reference to Rustici’s original contract work. Indeed, Rustici later called its response to the ADL contract “Project xAPI.”

(https://experienceapi.com/overview/)

Ownership versus Web Domains

The ADL awarded the contract—the BAA [Broad Agency Announcement, which, in general, is for basic and applied research and development]—to Rustici and, as such, the work derived from that contract was and is the property of the United States Government. The issue of name “ownership” publicly arose in a May 2012 Google Group discussion:

https://groups.google.com/a/adlnet.gov/forum/?hl=en&fromgroups=#!topic/tincanapi-info/q87uy3XJXX8

The concern centered on the Rustici trademark petition for the names “Tin Can” and “Project Tin Can.” No less a figure than Rustici President Mike Rustici weighed in to assure writers that the company had no proprietary claim on the use of “Tin Can” and had sought trademark status only to prevent the name from being “pirated” by others who might be less community-minded.

The Google Group discussion continued on the topic of whether or not Rustici would use the “Tin Can” moniker in any of its future commercial enterprises; to wit, Rustici replied that it would, but that the company would not prevent others from doing so.

Toward this end, while Tin Can, Experience API, and, for that matter, SCORM are names under government “contract,” as it were, Rustici owns the web domain www.tincanapi.com, which is redirected to another of their web domains, www.experienceapi.com; they also own the web domain www.scorm.com. In those domains, they clearly attribute the administration, ownership, and stewardship of the respective names to the ADL, while noting that they also offer services for companies seeking to utilize the SCORM and/or Experience API specifications. For a response by Rustici Software on this subject, please see:

https://experienceapi.com/we-call-it-tin-can/ and

https://experienceapi.com/tin-can-experience-api-xapi/

Note: To be clear, the above comments are neither intended to, nor do they, take away from any of Rustici Software’s groundbreaking work in the field of eLearning; rather, they are included simply to clarify distinctions amongst terms and the like.

Next step: Finally, a focused discussion of the Tin Can API/Experience API innovations and their evolution.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Experience API (Part 3)

In our last blog, we further detailed the foundation for ADL and its areas of research. One of these areas is the Total Learning Architecture infrastructure (TLA), which provided the basis for interoperability between different systems. One of the results of this work was the Sharable Content Object Reference Model (SCORM). The initial edition of SCORM was released in January 2000, with a couple of SCORM iterations produced the following year. However, a new version of SCORM was introduced in January 2004, and DoD made SCORM use mandatory in 2006. In total, there have been four versions of SCORM 2004. The next generation of SCORM arose in 2010 with Project Tin Can, but we’re getting a little ahead of ourselves.

SCORM or What do you mean by a Sharable Content Object Reference Model?

To understand SCORM, let’s break it down into its constituent elements.

  • Sharable Content Object (SCO)—an object is the means of relating various pieces of data and their values. For us, this refers to an element within a learning system, for example, a question or image. Each “object” is a part of the larger educational program. The desire and demand to make objects “sharable” is linked back to our original quest for interoperability. In one sense, think of it as a specific lesson or module in an on-line course.
  • Reference Model—by reference, SCORM is referring to a computer term of art, that is, the means of finding specific data or datum located on a computer hard drive or, increasingly, on a cloud-based server. In short, a reference provides the basis for discerning a physical location for information. Yes, there are all of these 0000s and 1111s out there in the digital world, so wouldn’t it be nice to be able to keep track of them? By reference model, SCORM is creating rules and protocols for references in the context of sharing that information with other Learning Management Systems (LMS).

To better understand, let’s look at how software designers create their programs. I remember writing programs in the defense industry. I already knew BASIC and easily learned FORTRAN in addition to LOCUS (an early proprietary spreadsheet program). My work was a mess, truly. LOL! I knew how to program but insisted on writing my programs without a flowchart—breaking rule number 1. Anyway, you can imagine all of the problems I faced.

There are other rules in computer science that make it easier to write, track, and modify code. One of these approaches is object- or class-based programming. Instead of lumping all the data together, in object-based programming the programmer defines a group of fields or attributes, which then provides the basis for relating actual data values and associated operations and/or methods. This type of organization then provides the basis for generating a commonality that can be shared amongst different programs. That is, if a data value, its characterization, and its associated operations can be made uniform, then different programs are capable of utilizing that same digital information. SCORM is about creating the basis for doing just that.
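Here is a minimal sketch of that idea in Python. This is not SCORM itself, just an illustration of how agreeing on an object’s fields and operations lets different programs consume the same data:

```python
from dataclasses import dataclass, asdict

@dataclass
class QuizResult:
    """A shared 'object': fields that every cooperating program agrees upon."""
    learner_id: str
    module: str
    score: float  # scaled 0.0 to 1.0

    def passed(self, threshold: float = 0.8) -> bool:
        # An operation bundled with the data it describes.
        return self.score >= threshold

# Any program that knows the QuizResult structure can consume this record.
record = QuizResult(learner_id="peter", module="module-1", score=0.95)
print(asdict(record), record.passed())
```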

Now imagine trying to perform all of these functions while transmitting the information through the Internet in the context of a client-server relationship. Sharing information through a cycle of request and response from the client (you) to the server (the repository of data and, generally, the program) gets complicated enough. Imagine trying to force-feed your information into a different LMS. Whhhew! You get the picture. Yes, the horror, as it were. So, again, that’s the basis for creating SCORM.

SCORM Protocols

Let’s be clear. SCORM is neither a software program nor a programming language. Rather, SCORM provides standards for data and programming that make it possible to have data sets that are interchangeable amongst differing LMSs. So, software designers are extremely mindful to utilize SCORM when designing and coding their proprietary LMSs. There have been numerous iterations of SCORM. Why? It’s simple: trial and error. Software designers within and without the government have found flaws or limitations in the SCORM protocols, which, of course, gave rise to successive iterations of SCORM. The SCORM protocols are the rules utilized by different Application Programming Interfaces (API). An API is the part of the programming language that facilitates communication between different computer systems. API and software developers use SCORM to create the standard of interoperability for eLearning systems. Now there exists a SCORM API, but that is just one of many forms of an API.
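For a feel of what this standardization buys, here is a toy sketch in Python. The real SCORM runtime API is exposed to course content as JavaScript, so this is only an analogy; the call names (Initialize, SetValue, GetValue, Terminate) and the data-model key cmi.score.scaled are drawn from SCORM 2004:

```python
class ToyScormApi:
    """A toy stand-in for an LMS's SCORM runtime, for illustration only."""

    def __init__(self):
        self.cmi = {}  # the shared "cmi" data model, keyed by standard names

    def initialize(self) -> bool:    # cf. SCORM 2004 Initialize("")
        return True

    def set_value(self, key: str, value: str) -> None:  # cf. SetValue(...)
        self.cmi[key] = value

    def get_value(self, key: str) -> str:  # cf. GetValue(...)
        return self.cmi.get(key, "")

    def terminate(self) -> bool:     # cf. Terminate("")
        return True

# Because the call names and data-model keys are standardized, any conformant
# content can report to any conformant LMS without custom glue code.
api = ToyScormApi()
api.initialize()
api.set_value("cmi.score.scaled", "0.95")
print(api.get_value("cmi.score.scaled"))  # 0.95
api.terminate()
```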

A major SCORM component was adopted with the 2004 version. Researchers with the ADL created the notion of “sequencing.” The sequencing protocol specified that learners could only experience content objects in a specified order. Such can be valuable, but it also can be a limitation.
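As a toy illustration of the behavioral idea (real SCORM sequencing rules are declared in an XML manifest, not in code like this), consider:

```python
# Learners may only open content objects in the specified order.
modules = ["intro", "module-1", "module-2", "exam"]
completed = {"intro"}

def may_open(target: str) -> bool:
    # A module is available only when everything before it is complete.
    idx = modules.index(target)
    return all(m in completed for m in modules[:idx])

print(may_open("module-1"))  # True: "intro" is complete
print(may_open("exam"))      # False: module-1 and module-2 remain
```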

Next step: The movement away from SCORM toward Tin Can API and Experience API.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Experience API (Part 2)

In our last blog we discussed the foundation for Experience API, Advanced Distributed Learning (ADL). Let’s reiterate the truly pressing need for ADL, because it’s so, so easy to get lost in the rapid pace of technological innovation and, moreover, system transformation.

Prior to ADL, and prior to eLearning, education that utilized computers was, in many ways, a solitary, alienating experience for both the user and the providers of educational content. How’s that? Education and training took place at a single computer station, prior to the days of networking. Looking backwards, that form of education has been termed Computer Based Training (CBT). Think of a painfully sad image of a bureaucrat in a cubicle toiling away, straight out of Dilbert.

In CBT, administrators purchased software packages—generally expensive software packages—that could be utilized by single or multiple users based on purchased licensing privileges. Proprietary packages, unlike today, were not cloud based, but utilized compact discs (CDs) to access programs. I remember an actor on television extolling the value and permanency of CDs, pronouncing, “they can even be dropped in your goldfish bowl and nothing happens!” Information input through the software would be saved on the resident computer (or, back in my day, on floppy disks ☺) and coded into a file structure that could only be accessed via the proprietary software. When software was updated to fix glitches and add functions, the educational administrator generally had to purchase the next iteration of proprietary software in order to access old data and/or use it with the new functions. Software companies might offer mechanisms for translating the files of a competitor, but frequently the results were disappointing. As stated before, this was the problem that ADL set out to address.

ADL and the Need for SCORM Protocols

As noted in last week’s blog, the ADL was developed by the U.S. Department of Defense in the mid-1990s to streamline its technological approach to education and training. As one might suspect, though, other agencies within the federal government simultaneously were engaged in similar projects for their own programs in education and technology. In order to avoid duplication and inevitable conflicts in integration, the array of federal ADL programs was consolidated within the DoD ADL Initiative. It would not be surprising for private industry to fall in line with this program, as a large portion of its revenue is generated from government contracting.

Based on Congressional defense authorization and President William Clinton’s Executive Order 13111, DoD created a strategic plan for ADL with the following areas of research:

  • eLearning (web-based learning)—Research technical components and techniques to develop and support electronic-based education and training . . . consistent and interoperable . . . best practices . . . learning management systems, content registries, and Massive Open Online Courses
  • Mobile learning and mobile performance support—Research focused on the use of commercially-available handheld computing devices to provide access to learning content and information systems . . .
  • Learning analytics and performance modeling—Research in collection, measurement, analysis and reporting of data, which may include “big data,” about learners and their contexts, for purposes of understanding, optimizing, and predicting learning success . . . competencies, credentialing, learner profiles, data visualization . . . associated privacy and information security concerns.
  • Learning Theory—Research focused on the application, evaluation, and embedding of efficient and effective, current, new, and emerging theories of learning, instructional technology . . .
  • Total Learning Architecture infrastructure (TLA)—Research focused on modernizing the platforms used for education and training, to enable interoperability of disparate systems so they can be used together as a Service Oriented Architecture (SOA) to securely share relevant learning data including, but not limited to, granular learning experience . . .
  • Web-based Virtual Worlds and simulations (VWs)—Research into the emerging fields of serious games, simulations, and virtual reality (within a distributed learning context) . . . [https://adlnet.gov/research]

Given the research mandates noted above, it became necessary to develop a language that facilitated the goals of accessibility, reusability, and interoperability. The Sharable Content Object Reference Model (SCORM) was the first solution.

Next step: the discussion of references in computer science and their relationship to and the development of SCORM and ultimately Experience API.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Experience API

What is Experience API? There’s a bunch of names swirling around that sound similar—Tin Can API, Experience API, and xAPI. What are they and what do they mean in relation to eLearning? They are a set of names sequentially adopted regarding the development of software specifications (rules) that govern the communication and relationship between learning content (educational information) and learning systems in order to record and track a wide range of learning activities on a wide range of technological platforms.

To best understand Experience API, the reader should appreciate its relationship to any number of interrelated terms and concepts. Here’s a short list to lend the reader a heads-up:

  • Advanced Distributed Learning (ADL)
  • API (Application Programming Interface)
  • Learning Management Systems (LMS)
  • Learning Record Store (LRS)
  • SCORM (Sharable Content Object Reference Model)

The Foundation and History of Experience API

It is a truism that with the advent of eLearning, educators increasingly shifted their focus from “hard copies,” that is, printed material, toward information stored in digital format. With the explosion of digital information platforms and the wide range of proprietary software, educators were faced with the herculean task of analyzing and integrating digital information stored on various platforms, in various divergent software programs and formats. That was and still is the challenge.

Let’s look at the different actors in this framework. First, there are individual users who use educational content and input responses accessed from a variety of technological platforms—think smart phones, tablets, desktop computers, and online portals. Second, there are educational and training administrators who utilize, third, proprietary software to convey, collect, and organize educational information (input and output). Fourth, and this is the key part, others work toward developing protocols for integrating digital information collected from different software packages or programs. This is the basis for ADL: Advanced Distributed Learning.

The ADL Initiative is a government-based program that, as per its mission:

“bridges across Defense and other Federal agencies, as well as coalition partners and industry and academia, to encourage collaboration, facilitate interoperability, and promote best practices for using distributed learning to provide the highest-quality education, training, informal learning, and just-in-time support, tailored to individual needs and delivered cost-effectively, anytime and anywhere” (http://www.adlnet.gov/about).

As an original program of the U.S. Department of Defense, the initiative was created from early-1990s Congressional funding for electronic classrooms and learning networks. After a few years of work, the Quadrennial Defense Review recommended the creation of a centralized strategy, which ultimately became the original ADL Initiative. The initiative now has three main activities: thought leadership, R&D innovation, and outreach and transition.

All of this sounds vaguely familiar, yes? The government mounts a massive program to streamline defense and national security operations? Sounds a lot like the creation of ARPANET in the 1960s, which, of course, led to the creation of the World Wide Web and the explosion of commercialization and private use on a widespread basis. During that entire time, interested parties in government, academia, and industry collaborated to create operational protocols. Move forward in time . . . The Defense Department created a related program for education and training for its personnel—witness the birth of and need for the ADL Initiative.

Next step: the creation of ADL Initiative SCORM protocols and the rise of Experience API.

Craig Lee Keller, Ph.D., Learning Strategist

The Kirkpatrick Model: Principles

O.K.! With this blog, we’re finishing our description of the Kirkpatrick Model by detailing its Principles. Before that part, however, we really need to recap the previous blogs in this series. Why? It’s so easy to forget or simply get trapped by details. In short, we need to be able to see the forest for the trees (with the Kirkpatrick Business Partnership Model [KBPM] being the forest). So, quickly . . .


The KBPM obviously has many similarities with the levels, though the order seems to have been reversed. Why is that? Let’s look at the first Kirkpatrick principle.

 

KIRKPATRICK PRINCIPLES

Let’s remember that the Kirkpatrick Partners argue that the chain model is the best way to appreciate the interrelated nature of assessing training programs. And, of course, the reason for the training program is a business need that has been identified.

  1. The end is the beginning. This principle reminds us that any training program—really any business decision—should be directly linked to a business need established at the onset. Inventor Don Kirkpatrick realized that assessing a training program necessitates understanding the organizational framework. This conditions data collection, surveying learning, and monitoring subsequent work behavior, in other words, a chain of understanding and evidence. Administrators will be forced to rely on anecdotal comments and impressions if they don’t keep the end (the business need) in mind.
  2. Return on Expectations (ROE) is the ultimate indicator of value. In short, administrators need to understand that the money spent on training and assessment should translate into a positive organizational net gain. This part is quantitative, but it’s not necessarily simple. Program managers need to be able to envision what “success” would look like to them. In so doing, those designing training will come to understand business desires and needs while helping administrators and managers refine their business goals and expectations.
  3. Business partnership is essential to bring about positive ROE. When the Kirkpatrick Partners speak of a business partnership, they are redirecting training away from its traditional focus on course content and employee knowledge. Course content is extremely important; however, it’s not an end in itself; the end is the ROE. Bringing about a positive ROE will be impossible if employees fail to apply their learning, especially if it is forgotten after a period. That’s why the business partnership is key. The partnership is amongst employees, managers, and the administrator: managers must be able to coach and encourage employees, and the administrator and managers must be able to create and offer incentives for success. This is one of the reasons why it’s important to be able to visualize success during the phase of training design.
  4. Value must be created before it can be demonstrated. In the aforementioned Kirkpatrick “A Fresh Look,” they call upon an industry study that identifies the sources of training failure; the largest area of failure by far was the application of the training in the work environment (70%). Principle 4 is a direct correlate of Principle 3. What do Principles 3 and 4 mean when taken together? Simply that training professionals need to radically adjust their understanding of their role. Instead of solely being the traditional, knowledgeable, empathic instructor, they need to guide organizations (administrators, managers, and employees) in a plan that includes operational execution and oversight.
  5. A compelling chain of evidence demonstrates your bottom line value. Principle 5 brings us back to the beginning: being able to demonstrate the ROE for the specified business need. The sequential nature of the levels and principles is based on the requirement to document value through the associated causation of the training and its follow-up. With this principle, the results are related to the business need, and organizations can begin the process of refining goals and modifying training practices.

Next week, we’ll finish up the series by comparing the KBPM with other models and placing this in the context of the modern business environment.

Craig Lee Keller, Ph.D., Learning Strategist