Test Conversations: How I Wrote the Latest Alexa Course


Curriculum Developers at Codecademy write courses for the browser, accepting all of its constraints for the sake of accessibility. To write the new Dialog Management course in the Amazon Alexa series, I hit all kinds of constraints. But constraints breed creativity.

What is Dialog Management?

Dialog Management is used to make Alexa engage in back-and-forth conversations. When a user makes a request, Alexa should be able to respond with questions, asking for and confirming important details. In the example below, a user is asking Alexa to recommend something to watch. To provide a good recommendation, Alexa asks the user for the desired type of video, genre, and decade.

In the Dialog Management course, students learn how to code skills—or apps—to make Alexa perform these conversations.

When a learner runs their code in our learning environment, a test is executed. If the code is written correctly, the test passes and the learner can move on. If the test fails, it outputs an error message that directs the learner to the solution.

Unlike a teacher in a classroom, we can’t assess and cater to each student’s situation. Tests are the medium of our feedback. We have to create tests strict enough to accept only correct code, but general enough to provide useful feedback for all possible mistakes.

When this learner ran their code, it failed the test. At the bottom in red, you can see the test output that provides feedback.

Running code for Alexa skills requires the alexa-sdk module, a collection of functions and properties that make it easier for developers to use Alexa. So when I tried to run some learner’s code on our platform, I got this error message:

Error: Cannot find module 'alexa-sdk'

We don’t have the alexa-sdk module.

To maintain performance and save storage space, I couldn't add the full alexa-sdk to our platform. But learners should practice with the same functions they'd use in the real world, so I made a fake alexa-sdk containing the bare minimum: empty functions and properties. Now learners can use real Alexa functions and properties (though they won't do much), and I can run tests on their code.

For curious readers, the fake module contains:

const handlers = {};
const event = require('../request.json');

handlers.event = event;

const listen = function (arg) { return arg; };
const speak = function (arg) { return { listen }; };
handlers.response = { speak };

const calledEmitWithArgs = [];
handlers.emit = function (a, b, c, d, e) {
  calledEmitWithArgs[0] = a;
  calledEmitWithArgs[1] = b;
  calledEmitWithArgs[2] = c;
  calledEmitWithArgs[3] = d;
  calledEmitWithArgs[4] = e;
};

The request.json file is a set of properties that I copied from the internet, and the listen and speak functions do nothing but return the input passed to them. (I'll explain what I did with this.emit soon.)
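Because speak returns an object containing listen, learner code can chain the two calls just as the real SDK allows. A minimal sketch of the stubs in action (standalone here, outside the handlers object):

```javascript
// Stubs mirroring the fake module: speak returns an object
// exposing listen, so the two can be chained without errors.
const listen = function (arg) { return arg; };
const speak = function (arg) { return { listen }; };
const response = { speak };

// Learner code written against the real SDK still runs:
response.speak('Here is a comedy from the 90s.')
        .listen('Want another recommendation?');
```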

How I wrote a new testing method

Testing code is like checking that your calculator works. You define specific input and expect certain output: if you are testing addition, inputting 1 plus 2 should output 3. Similarly, I can check how learners’ programs behave when given a certain input.

In the case of Dialog Management, the input is the state of a made-up conversation, and the output is a command. Ideally, Alexa responds dynamically to users, prompting them for missing information. For each state of the conversation, the learner’s code should emit a certain command to Alexa.

Imagine that an Alexa user is interacting with a skill built by a learner: an app that recommends videos based on genre, video type, and decade values provided by the user. Early in the dialog, the user has given the video type and decade, but not the genre value. I expect the learner’s code to command Alexa to prompt, or elicit, the user for a genre. The command is called :elicitSlot and it is emitted with the emit function.
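In a learner's handler, that might look like the sketch below. The handler name, slot structure, and prompt strings are illustrative, and the stubbed event and emit stand in for the fake module:

```javascript
// Hypothetical intent handler: if the genre slot is still empty,
// command Alexa to elicit it from the user.
const handlers = {
  RecommendationIntent: function () {
    const slots = this.event.request.intent.slots;
    if (!slots.genre.value) {
      this.emit(':elicitSlot', 'genre',
        'What genre would you like to watch?',   // speech output
        'Please tell me a genre, like comedy.'); // reprompt
    }
  },
};

// Exercise it with a stubbed event and emit, the same trick the tests use.
handlers.event = { request: { intent: { slots: { genre: { value: null } } } } };
handlers.emit = (...args) => console.log('emitted:', args[0], args[1]);
handlers.RecommendationIntent();
// emitted: :elicitSlot genre
```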

How did I test that the code is correct? In every case, I knew that the learner's code would emit a command using emit, one of the functions provided by the real alexa-sdk:

emit(arg1, arg2, arg3, arg4, arg5);

I just needed to make sure those args were correct. To capture them, I wrote my own version of the emit function among the functions in the fake alexa-sdk package. Instead of actually emitting a command to Alexa, this version stores its args in an array, calledEmitWithArgs:

const calledEmitWithArgs = [];
handlers.emit = function (a, b, c, d, e) {
  calledEmitWithArgs[0] = a;
  calledEmitWithArgs[1] = b;
  calledEmitWithArgs[2] = c;
  calledEmitWithArgs[3] = d;
  calledEmitWithArgs[4] = e;
};

Now I can check the args, asserting that they match my expected output:

assert.equal(calledEmitWithArgs[0], ':elicitSlot', `Expected :elicitSlot when dialog not completed and genre not collected`);
assert.equal(calledEmitWithArgs[1], 'genre', `Expected second arg to this.emit to be genre`);

If the learner didn't emit the :elicitSlot command, the test outputs the error message Expected :elicitSlot when dialog not completed and genre not collected, directing the learner toward the correct code.

My work, realized!

When I got this working, I danced in celebration (in our office, it’s okay to dance in moderation).

In conclusion

Out of constraints came a new testing methodology that is used every time a learner runs code in the Dialog Management course. I couldn’t upload the entire alexa-sdk to our platform, so I made a mock alexa-sdk module. I had no easy way to test learners’ code and explain errors, so I wrote an adaptable and explicit test format. Learners will get all the benefits of self-paced education, with actionable feedback that our interactive platform is known for.

This testing method works for Codecademy, but I also use it myself when developing Alexa skills. Try it in action with our latest launch in the Alexa series: Dialog Management.

Although my name is at the bottom of this article, it takes a village to raise a Curriculum Developer. Thank you to the test-driven development master Sean Doyle, the Alexa extraordinaire Amit Jotwani, the test whiz Ian Munro, and the wordsmith Alexus Strong.
