How Tin Can Can Help: Save Out Experience Data


This is the second in a series of posts (and somewhat belated, we’re pretty busy here at Saltbox) answering designer questions about how the Experience API, Tin Can, can help in different scenarios. For more introduction, take a look at the first post.

In this post I’m tackling a question posed by Julie Dirksen. Due to the nature of the question and answer, this post will be less about data analysis and more about Tin Can technology.

"I’d like a learner to be able to save the data out of a learning experience that they think is relevant – to a virtual notebook, or a file they can print, or whatever. How can Tin Can help do that?"

A couple of aspects of Tin Can are especially important here. One, Tin Can centers around statements, which use a document data model. Instead of needing to invent a way to represent the use of a programming interface as data, or to turn a series of low-level commands into a higher-level summary, Tin Can data is already in a usable form at a useful level of granularity.

Two, Tin Can has fairly strong semantics. Instead of arbitrary properties where meaning needs to be completely imposed from without, a well-crafted Tin Can statement is highly interpretable by itself, and can be made even more interpretable given a small amount of additional knowledge about the more flexible identifiers (such as verb and activity id).
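To make that concrete, here is a rough sketch of a single statement, written as a Python dict mirroring the JSON document an activity would send. The actor and activity identifiers are made up for illustration; only the verb id is one of the commonly used ADL verbs.

```python
# Hypothetical Tin Can statement, written as a Python dict mirroring the JSON
# document an activity would send. The actor and activity are made-up
# examples; the verb id is one of the commonly used ADL verbs.
statement = {
    "actor": {
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/fire-safety-simulation",
        "definition": {
            "name": {"en-US": "Fire Safety Simulation"},
            "description": {"en-US": "A branching fire safety scenario."},
        },
    },
}
```

Even without any other context, the actor-verb-object shape and the shared verb id make the statement readable on its own, which is exactly the property that matters for moving data around.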

Because of these aspects, groups of Tin Can statements can be understood remarkably well on their own, and can be moved about quite freely.

So, returning to the question, there are several ways we could answer it, and I’m going to touch on three options. One, people could capture the data they generate as they engage in learning activities, right at the point of engagement. Two, a system sitting in front of a Learning Record Store (LRS) could present curated personal learning portfolios to people whose data is in the system. Or three, people could extract portable, personal learning-related data from a central LRS (this might also be tied in with option two).

Start with the first and easiest option: capturing data at the point of engagement. A learning activity could easily request personal LRS credentials from the learner right now and record statements directly to that LRS. I suspect that in the future the LRS connection will become even slicker, requiring people to do very little to have the learning activity data they generate recorded. Not every learning activity will enable this behavior, though I hope it will be widely adopted.
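As a sketch of what that point-of-engagement recording might look like, here is a hypothetical Python snippet in which an activity posts a statement to the learner’s personal LRS over the standard statements resource. The endpoint URL and credentials are placeholders the learner would supply.

```python
import requests  # third-party HTTP library

# Hypothetical sketch: an activity recording a statement to a learner's
# personal LRS. The endpoint and credentials are placeholders the learner
# would supply; LRSs expose a statements resource and expect a version header.
LRS_STATEMENTS_URL = "https://lrs.example.com/xapi/statements"
LRS_CREDENTIALS = ("learner-provided-key", "learner-provided-secret")

response = requests.post(
    LRS_STATEMENTS_URL,
    json=statement,  # the statement dict sketched above
    auth=LRS_CREDENTIALS,  # HTTP Basic auth
    headers={"X-Experience-API-Version": "1.0.1"},
)
response.raise_for_status()  # on success the LRS returns the stored statement id(s)
```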

Moving on to the second option, personal portfolios, we’re talking about a layer above an organization’s central LRS. This treats the LRS as it is commonly used today: as a specialized database (NoSQL and document-oriented, if you’re curious) for learning-related data. The portfolio application would be a summarizing layer, showing important and relevant data while hiding or de-emphasizing detailed data, such as the many choices made in a lengthy simulation.
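Here is a hypothetical sketch of that summarizing layer: the portfolio application pulls one person’s statements from the central LRS using the standard agent filter, then surfaces the higher-level statements. The URL, credentials, and the particular filtering rule are illustrative assumptions.

```python
import json
import requests

# Hypothetical sketch of the portfolio layer pulling one person's statements
# out of the central LRS with the standard agent filter. The URL, credentials,
# and the "keep completions and passes" rule are illustrative assumptions.
learner = {"mbox": "mailto:learner@example.com"}

response = requests.get(
    "https://lrs.example.com/xapi/statements",
    params={"agent": json.dumps(learner), "limit": 50},
    auth=("portfolio-key", "portfolio-secret"),
    headers={"X-Experience-API-Version": "1.0.1"},
)
statements = response.json()["statements"]

# Summarizing pass: surface the higher-level statements and de-emphasize the
# many fine-grained ones (e.g. every choice made inside a lengthy simulation).
highlights = [
    s for s in statements
    if s["verb"]["id"].endswith(("/completed", "/passed", "/mastered"))
]
```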

Specific pieces of data could be included, excluded, commented upon, or otherwise manipulated using the built-in structuring capabilities of the standard, such as Statement References and Context Activities.
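For instance, a learner commenting on one of their earlier statements might produce something like the sketch below: the Statement Reference points at the original statement by its id, and a grouping Context Activity ties the note to a made-up "my portfolio" activity. Only the ADL "commented" verb id is a real, shared identifier here.

```python
import uuid

# Hypothetical sketch: a learner commenting on one of their earlier statements
# for their portfolio. The Statement Reference points at the original
# statement's id, and a grouping Context Activity ties the note to a made-up
# "my portfolio" activity.
original_statement_id = "id-of-a-previously-stored-statement"  # placeholder

annotation = {
    "id": str(uuid.uuid4()),
    "actor": {"mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/commented",
        "display": {"en-US": "commented"},
    },
    "object": {
        "objectType": "StatementRef",
        "id": original_statement_id,
    },
    "result": {
        "response": "This simulation run is the one I want to highlight.",
    },
    "context": {
        "contextActivities": {
            "grouping": [{"id": "http://example.com/activities/my-portfolio"}],
        },
    },
}
```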

In the third option, an LRS could package a selection of statements related to a given person, along with human-readable descriptions and a cryptographic signature over the package contents, into a single archive. These could be statements curated by a portfolio application like the one above, or they could be chosen by a simpler system.

The cryptographic signature makes it possible for a recipient (such as a company the person was just hired at) to verify the origin of the statements. Including the human-readable descriptions makes them accessible in systems that don’t understand Tin Can. Using separate packaging, instead of passing statements between LRSs, avoids many of the potential coordination problems and encourages human review (since the receiving LRS might not be interested in all the data).
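There is no single prescribed format for such a package, but as a rough sketch of the idea, the snippet below bundles a selection of statements and a plain-text summary into a zip archive and records a digest of the statement payload. A real LRS would sign that digest (or the statements themselves) with its private key so a recipient can verify the package’s origin; the archive layout and file names here are purely illustrative.

```python
import hashlib
import json
import zipfile

# Hypothetical sketch of the packaging idea; there is no single prescribed
# format. Bundle the selected statements and a plain-text summary into an
# archive, and record a digest of the statement payload. A real LRS would
# sign that digest (or the statements themselves) with its private key so a
# recipient can verify the package's origin.
package_statements = highlights  # e.g. the statements selected by the portfolio layer

readable_summary = "\n".join(
    f"- {s['verb']['id']} -> {s['object']['id']}" for s in package_statements
)

payload = json.dumps(package_statements, sort_keys=True).encode("utf-8")
digest = hashlib.sha256(payload).hexdigest()  # the value the issuing LRS would sign

with zipfile.ZipFile("learning-record-package.zip", "w") as archive:
    archive.writestr("statements.json", payload)
    archive.writestr("summary.txt", readable_summary)
    archive.writestr("digest.sha256", digest)
```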

I’m hoping to see a lot of innovation in those three (and more!) directions for personal learning-related data. If you’re interested in talking more, I can be reached in the comments or at russell.duhon@saltbox.com. What do you want to enable people to do with the learning-related data they generate?
