Experience API (Part 6)
The Future of ADL and Experience API
The ADL (Advanced Distributed Learning Initiative) continues to be the sole authority coordinating and directing activities associated with the Experience API, otherwise known as xAPI. But what does the future hold? To look ahead, let's first review ADL's current roles.
As a "thought leader," ADL stimulates major advances through its Broad Agency Announcements (BAAs); in fact, about half of ADL's research and development takes place outside the confines of government, at private businesses and universities. Its coordination work includes organizing a variety of communities of practice, international defense coordination, the Defense ADL Advisory Committee, and the ADL Global Partnership Network.
For FY17, ADL's BAA focused on the following topics of interest:
- xAPI Integration with Simulations and Teams
- Persistent Learning Profiles for Lifelong Learner Data
- Implementing and Testing xAPI Profiles
- TLA Ontologies for Semantic Interoperability
- Infrastructure Security
- Other Innovations
An extremely important facet of ADL's work is facilitating Communities of Practice (CoPs). A CoP is a "group of practitioners connected by a common cause, role or purpose, which operates in a common modality." CoPs create common rules and documentation (profiles), vocabularies, and "recipes" (the syntactic format).
For a list of the various CoPs, see https://www.adlnet.gov/adl-collaboration/xapi-community-of-practice. For particularly interesting work in the field of mobile computing, look at the CoP for the Actionable Data Book.
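To make "profiles, vocabularies, and recipes" concrete, here is a minimal sketch (in Python) of an xAPI statement using the standard actor-verb-object structure from the xAPI specification. The learner's e-mail address and the activity URL are hypothetical placeholders; the verb IRI is the standard ADL "completed" verb.

```python
import json

# A minimal xAPI statement: "actor verb object", per the xAPI spec.
# The mbox address and activity id below are hypothetical placeholders.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        # Verb ids are IRIs drawn from a shared vocabulary; a CoP's
        # "recipe" fixes which verbs and activity types to use.
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/emt-triage-sim",
        "definition": {
            "name": {"en-US": "EMT Triage Simulation"},
        },
    },
}

# Serialize for sending to a Learning Record Store (LRS).
payload = json.dumps(statement, indent=2)
print(payload)
```

A CoP's profile constrains exactly these fields: which verb IRIs are allowed, what activity types mean, and how extensions are named, so that statements from different tools remain comparable.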
The ADL’s International Collaboration spans international military organizations:
- NATO Training Group (North Atlantic Treaty Organization)
- Partnership for Peace Consortium (Over 800 institutions in 60 countries that focus on issues of defense and international security)
- Technical Cooperation Program (Australia, Canada, New Zealand, the United Kingdom, and the United States)
Similarly, its Global Partnerships include countries as diverse as Canada, Finland, Korea, and Romania.
While xAPI is being developed and nurtured daily under the auspices of the partnerships noted above, much of the work focuses on expanding xAPI to new educational and training applications for a growing number of professions, for example, emergency medical technician training. Not too snazzy, but that's the way a lot of science progresses: expansion and refinement of a given paradigm. For the most recent DoD update (DoD Instruction 1322.26 on Distributed Learning, October 4, 2017), cut and paste the following link into your web browser:
For new ideas, one should look at the cutting-edge issues addressed at a recent ADL conference held in collaboration with the National Training and Simulation Association: iFest 17. To see its agenda, cut and paste the following link into your web browser:
Current Fears, Science Fiction, and xAPI
The area of xAPI advancement I find most interesting and provocative is artificial intelligence (AI). Generally speaking, xAPI facilitates improvements in training along with various advancements in educational analytics. As one might imagine, AI is already being used in training applications that utilize xAPI; think of various simulations and virtual reality trainings. At this time, concern over AI does not so much focus on xAPI but rather on its military uses. In a recent blog post published by Saffron Interactive, Priyanka Kadam broaches the issue of a "ban on automated deathbots." The post goes on to discuss useful AI applications in the world of education and training, including xAPI. (http://saffroninteractive.com/ai-in-learning/)
Elon Musk, head of Tesla and cofounder of the startup OpenAI, along with other AI/robotics visionaries, has called upon the United Nations to ban autonomous weapons. Musk and a group of 116 tech leaders are worried about a burgeoning arms race of automated drones, tanks, and the like, problematic in and of itself. A further concern relates to the changing character and speed of conflict, not to mention the potential for black hat "hacking" into these military systems. But why would this even be an issue regarding xAPI?
Let's consider a common example: drones. Current military drones are monitored by humans and are not autonomous. However, some aspects of drone functioning are already autonomous, handled by AI programs in the software or by simpler algorithms. Humans monitor drones and decide whether or not to engage a potential target based on data screened through that AI or those algorithms. Given the limits of human attention, reaction time, and related factors, such screening can, generally speaking, serve to reduce human error.