Question Analysis in WaxLRS

We’ve had Question Analysis in WaxLRS for a while now, but haven’t talked much about it. I won’t go into too much detail here, but I thought I’d give a brief overview of what it is and why it’s useful.

In short, Question Analysis helps find questions that do not measure the same things as the overall score on the assessment they are part of. The calculation is independent of question difficulty, except that excessively easy or hard questions will sometimes be flagged. There are many possible explanations for a flagged question: trick questions, infrequently answered questions, badly phrased questions, questions with typos. All we’re trying to do is point out questions that are worth a harder look, and that are better candidates for reworking or elimination than the other questions in the assessment.

So, how do we find them? There’s a simple model behind the indicator. Assume that the overall score roughly measures something, and that each question is intended to measure the same thing as the overall assessment. Then, to find likely bad questions (that is, ones that do not measure the same thing as the overall assessment), all we need to do is compute the correlation between correct answers on each question and scores on the test; low and negative correlations indicate bad questions.
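To make that concrete, here’s a minimal sketch of the calculation in Python. This isn’t the WaxLRS implementation; the array layout and function name are just for illustration.

```python
import numpy as np

def question_correlations(answers: np.ndarray) -> np.ndarray:
    """Correlate each question's correctness with the overall score.

    `answers` is a (learners x questions) array of 0/1 values:
    1 if that learner got that question right, 0 otherwise.
    Returns one correlation per question; low or negative values
    mark questions worth a closer look.
    """
    scores = answers.sum(axis=1)  # overall score per learner
    n_questions = answers.shape[1]
    corrs = np.empty(n_questions)
    for q in range(n_questions):
        col = answers[:, q]
        if col.std() == 0 or scores.std() == 0:
            corrs[q] = np.nan  # undefined if everyone answered the same way
        else:
            # Pearson correlation of a 0/1 column with the total score
            # (equivalently, the point-biserial correlation).
            corrs[q] = np.corrcoef(col, scores)[0, 1]
    return corrs
```

One common refinement, which I won’t claim WaxLRS uses, is to correlate each question against the score on the rest of the assessment instead, since including a question in its own total nudges its correlation upward.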

Correlations fall between -1 and 1, and near-zero and negative values are the bad ones. The WaxLRS interface flags “possibly zero” correlations in red by checking whether zero falls within about two standard deviations of the measured correlation, though for values much above zero this flag is usually more an indicator of insufficient data than of a problematic question.
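The exact check WaxLRS performs isn’t spelled out here, but a sketch of one standard way to do it, using the large-sample standard error approximation for a Pearson correlation, looks like this:

```python
import math

def possibly_zero(r: float, n: int, k: float = 2.0) -> bool:
    """Return True if zero lies within k standard errors of the
    measured correlation r over n learners.

    Uses the large-sample approximation se = sqrt((1 - r^2) / (n - 2));
    k defaults to 2 to match "about two standard deviations".
    """
    if n <= 2:
        return True  # too few learners to conclude anything
    se = math.sqrt((1.0 - r * r) / (n - 2))
    return abs(r) < k * se
```

Notice how the band shrinks as n grows: with only a handful of learners almost any positive correlation is “possibly zero”, which is why the red flag so often means “not enough data” rather than “bad question”.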

Why does this work? It works because a near-zero correlation tells us that people who get a high score on the overall assessment are no more likely to get the question right than people who get a low score. A negative correlation tells us that people who score worse on the assessment are more likely to get the question right!

(By the way, I keep saying “questions” and “assessment”, but all of this works for anything where there’s an overall statement about an activity with a score, plus statements that have that activity as their parent and success marked in the result.)
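In xAPI terms, the two statement shapes involved look roughly like this, sketched as Python dicts with made-up activity IDs and trimmed to the fields the analysis cares about:

```python
# Overall result: a score on the assessment activity itself.
assessment_statement = {
    "actor": {"mbox": "mailto:learner@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
    "object": {"id": "http://example.com/activities/quiz-1"},
    "result": {"score": {"raw": 7, "max": 10}},
}

# Per-question result: success marked in result, with the assessment
# as the parent activity in contextActivities.
question_statement = {
    "actor": {"mbox": "mailto:learner@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/answered"},
    "object": {"id": "http://example.com/activities/quiz-1/q3"},
    "context": {
        "contextActivities": {
            "parent": [{"id": "http://example.com/activities/quiz-1"}]
        }
    },
    "result": {"success": True},
}
```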

That’s Question Analysis, and it’s only the start of what we’re working on: a serious toolbox for analyzing learning-related data that’s broadly useful and helps practitioners answer big questions. Question Analysis alone can lead to leaner assessments that do a better job of measuring desired results.

Is Question Analysis something you’re interested in? What supporting analysis would help answer the questions you care about?
