Bot Builder Community – Alexa Adapter Update Preview

Since its launch, the Alexa Adapter for the Bot Framework, part of the Bot Builder Community project on GitHub, has received great feedback. The adapter allows you to surface a Microsoft Bot Framework bot via an Amazon Alexa skill. Today, I am excited to announce a preview of the next major iteration of the Alexa adapter, one which I hope provides additional benefits for developers, ensures full compatibility with the latest developments in the Bot Framework, and puts us in the best position to support the Alexa platform moving forward. This post provides an insight into what is happening and why, along with details of how you can try the preview update yourself and provide feedback.

So, here are the key details this post will cover.

  1. Obtaining / installing the preview and providing feedback
  2. Key Changes
    Adoption of Alexa.NET
    Adding support for Bot Builder Skills and the Virtual Assistant
    New Activity Mapping Middleware
    Integration Changes
  3. Backwards Compatibility / Migration
  4. Documentation and Updated Sample

Obtaining / installing the preview and providing feedback

The preview Alexa Adapter package is now available on MyGet. You can install it from the Package Manager console in Visual Studio using the following command:
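The command below is a sketch rather than the definitive instruction: the package id Bot.Builder.Community.Adapters.Alexa matches the adapter's published package, but the feed URL and prerelease flag shown here are assumptions, so check the Bot Builder Community MyGet feed details for your setup.

```powershell
# Assumed feed URL and prerelease flag; the package id is the adapter's published id
Install-Package Bot.Builder.Community.Adapters.Alexa -IncludePrerelease -Source https://botbuilder.myget.org/F/botbuilder-community-dotnet/api/v3/index.json
```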

We would really appreciate any and all feedback on the update, raised through GitHub issues in the Bot Builder Community .NET repository. This could be specific technical issues you hit when trying the preview package (including the new sample), or more general feedback about our approach, documentation, etc. Every bit of feedback helps!

Key Changes

Adoption of Alexa.NET

As part of the work to develop the adapter, we have striven not only to allow basic voice-based conversations on Alexa, but also to provide rich support for other platform capabilities, such as display / video directives for devices with screens or access to user profile data. Providing this support required us to maintain models for various Alexa schema items (directives, etc.) and functionality for calling the various Alexa APIs available. This has worked well, but the Alexa platform is expanding and keeping pace with it is a challenge. Thankfully, we found the fantastic Alexa Skills Kit SDK for .NET open source project, developed by Tim Heuer, which provides a library for managing requests / responses for Alexa skills with a strongly typed model, very similar to the one we used internally in the adapter. This open source project is well maintained in terms of parity with the Alexa SDK and has good adoption too. Therefore, from this release forward, we have decided to adopt the Alexa.NET library, allowing us to focus on the Bot Framework adapter specifics and take advantage of the great work happening over there.

Even better, Alexa.NET has some great supporting extensions in another project from Steven Pears, providing broad support for the wider platform, such as reminders, pro-active events and the customer profile API.

Adding support for Bot Builder Skills and the Virtual Assistant

In the time since the initial development of the adapter, there have been many changes within the Bot Framework SDK and related projects, all for the better, such as the Virtual Assistant templates and the recent announcement of skills (which allow you to better ‘componentize’ your application and enable parent / child bot scenarios). Currently, the Alexa Adapter works well with skills and the Virtual Assistant for basic voice-only scenarios, where you don’t require the use of cards or display / video / audio directives. However, if you are using capabilities such as these, things break down when using skills (and hence the Virtual Assistant too). This is because the adapter uses TurnState (essentially a property bag that can be used to hold data during each turn) to hold the directive / card data that should be added to outgoing Alexa responses, but TurnState isn’t passed between bots and skills, meaning there is no way to set directives from a skill.

As part of this update, we have completely refactored the way we think about managing directives. Now, instead of using an extension method on the TurnContext to set outgoing directives (which would add them to TurnState for use later), you can simply add directives and cards as attachments to an outgoing activity. The example below shows adding a simple Alexa card to a response.
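Here is a minimal sketch of the new approach, assuming Alexa.NET’s SimpleCard type and that the new CardAttachment wrapper accepts the card via its constructor (the exact namespace and constructor shape may differ in the final package):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Alexa.NET.Response;
using Bot.Builder.Community.Adapters.Alexa.Attachments; // assumed namespace for the new attachment types
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class CardBot : ActivityHandler
{
    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext,
        CancellationToken cancellationToken)
    {
        // Build a standard Alexa.NET card...
        var card = new SimpleCard
        {
            Title = "Hello from the Bot Framework",
            Content = "This card was attached to an outgoing activity."
        };

        // ...and attach it to the outgoing activity, rather than stashing it in TurnState.
        var reply = MessageFactory.Text("I have attached a card to this response.");
        reply.Attachments = new List<Attachment> { new CardAttachment(card) };

        await turnContext.SendActivityAsync(reply, cancellationToken);
    }
}
```

Because the attachments travel on the activity itself, they flow across the bot-to-skill boundary like any other activity content, which is what makes the skills scenario work.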

Another reason we think this is the right decision, beyond adding support for skills, is that it makes use of Attachments, a concept that most developers will already be familiar with.

New Activity Mapping Middleware

By default, the Alexa adapter will transform incoming Alexa requests into Bot Framework activity objects, where the Type of the activity is set to match the incoming Alexa request type, with the full request being stored within the Channel Data on the activity. This allows a developer to build a bot that targets just the Alexa platform. However, it is more likely that you will require the incoming Alexa requests to be transformed into Message activities, allowing you to surface a single bot on multiple channels without requiring huge amounts of conditional logic.
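For example, an Alexa-only bot could branch directly on the raw request type; a minimal sketch, assuming the default mapping described above:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;

public class AlexaOnlyBot : IBot
{
    public async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken = default)
    {
        // With the default mapping, the activity Type mirrors the Alexa request type,
        // and the full Alexa request is available via turnContext.Activity.ChannelData.
        switch (turnContext.Activity.Type)
        {
            case "LaunchRequest":
                await turnContext.SendActivityAsync("Welcome! What would you like to do?", cancellationToken: cancellationToken);
                break;

            case "IntentRequest":
                // Inspect turnContext.Activity.ChannelData for the intent and slot values.
                await turnContext.SendActivityAsync("I received an intent request.", cancellationToken: cancellationToken);
                break;
        }
    }
}
```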

The current implementation of the Alexa adapter provides middleware to perform this transformation for incoming intent requests (providing you have configured your skill correctly), creating a message activity and setting the Text property to everything heard by Alexa.

As part of this update, the current middleware (AlexaIntentRequestToMessageActivityMiddleware) will be deprecated – although it is retained for backwards compatibility purposes – and is superseded by a new middleware implementation, AlexaRequestToMessageEventActivitiesMiddleware. This new piece of middleware will still transform incoming Alexa intent requests into message activities in the same way, but will now additionally transform all other incoming Alexa requests into Event activities (another first-party activity type within the Bot Framework). The resulting Event activity will have its Value set to a strongly typed object specific to the incoming request type. For example, if Alexa sends an AccountLinkSkillEventRequest (sent when a user links / unlinks their account with your skill in the Alexa app, if you have this feature enabled), your bot will receive an activity of type ‘Event’ with its Value populated with an AccountLinkSkillEventRequest object, containing the specific details for that type of request. Ultimately, this change means that you no longer need to parse the incoming request body for request types other than IntentRequest.
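Assuming the middleware has been registered on the adapter (e.g. adapter.Use(new AlexaRequestToMessageEventActivitiesMiddleware())), handling one of these Event activities might look like the following sketch; the namespace for AccountLinkSkillEventRequest is assumed to be Alexa.NET’s request model:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Alexa.NET.Request.Type; // assumed home of AccountLinkSkillEventRequest
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class EventAwareBot : ActivityHandler
{
    protected override async Task OnEventActivityAsync(
        ITurnContext<IEventActivity> turnContext,
        CancellationToken cancellationToken)
    {
        // The middleware populates Value with a strongly typed Alexa.NET request object,
        // so a simple type check replaces manual parsing of the request body.
        if (turnContext.Activity.Value is AccountLinkSkillEventRequest accountLinkEvent)
        {
            await turnContext.SendActivityAsync(
                "Thanks, your account link event was received.",
                cancellationToken: cancellationToken);
        }
    }
}
```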

Integration Changes: Implementing IBotFrameworkHttpAdapter and Dropping ASP.NET Web API Support

Until now, integration support for both ASP.NET Web API and .NET Core apps has been available for the Alexa Adapter. With this update, the current plan is to drop support for ASP.NET Web API. We made this decision because we don’t believe the Web API integration is widely used, and this is a good opportunity to reduce our surface area. However, this decision is a great example of why we are releasing this preview and writing this post: if you are currently using Web API with the Alexa adapter, we would very much like to hear from you so that we can consider options other than dropping Web API support entirely.

We have also made a tweak to the integration with .NET Core apps. To date, we have maintained specific integration classes (IAlexaHttpAdapter / AlexaHttpAdapter) for use when creating a controller / endpoint for your bot to receive Alexa requests. Whilst these classes still exist for backwards compatibility purposes, we have now aligned with the new adapters within the Bot Framework SDK (in preview at the time of writing): the AlexaAdapter class now implements IBotFrameworkHttpAdapter, removing the need for additional integration classes.
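In practice, this means the Alexa endpoint can be exposed with a standard Bot Framework style controller; a sketch, where the route and class names are illustrative:

```csharp
using System.Threading.Tasks;
using Bot.Builder.Community.Adapters.Alexa;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Bot.Builder;

[Route("api/alexa")] // illustrative route
[ApiController]
public class AlexaController : ControllerBase
{
    private readonly AlexaAdapter _adapter;
    private readonly IBot _bot;

    public AlexaController(AlexaAdapter adapter, IBot bot)
    {
        _adapter = adapter;
        _bot = bot;
    }

    [HttpPost]
    public async Task PostAsync()
    {
        // AlexaAdapter now implements IBotFrameworkHttpAdapter, so it can process
        // the HTTP request / response pair directly, just like the core SDK adapters.
        await _adapter.ProcessAsync(Request, Response, _bot);
    }
}
```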

Backwards Compatibility / Migration

As part of the update, we have tried to maintain backwards compatibility wherever possible. If you are currently using the adapter for basic voice conversations, you will likely find that things just continue working after upgrading to the preview package. You may need to update some namespaces, where classes that used to be internal to the adapter have been replaced with classes from the Alexa.NET package. Then, at your convenience, you can choose to update your adapter / controller integration and potentially move to the new request mapping middleware.

If you are using Alexa cards or any audio / hint / display / video directives, then you may have a little more work to do, but this should still be straightforward. Anywhere you are using a TurnContext extension method to set a card or directive (e.g. context.AlexaSetCard), you will need to replace the call by adding an attachment to your outgoing activity using one of the three new attachment types (CardAttachment, DirectiveAttachment or PermissionConsentRequestAttachment). A sketch of this migration is shown below.
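For example, migrating a hint directive might look like the following; this sketch assumes Alexa.NET’s HintDirective / Hint types and that DirectiveAttachment wraps a directive via its constructor:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Alexa.NET.Response.Directive;
using Bot.Builder.Community.Adapters.Alexa.Attachments; // assumed namespace for the new attachment types
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class DirectiveBot : ActivityHandler
{
    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext,
        CancellationToken cancellationToken)
    {
        // Before: an extension method stored the directive in TurnState for the adapter
        // to pick up later. After: the directive travels on the outgoing activity itself.
        var hintDirective = new HintDirective
        {
            Hint = new Hint { Text = "try saying 'card'", Type = "PlainText" }
        };

        var reply = MessageFactory.Text("Here is a hint.");
        reply.Attachments = new List<Attachment> { new DirectiveAttachment(hintDirective) };

        await turnContext.SendActivityAsync(reply, cancellationToken);
    }
}
```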

As we move through the preview period for this adapter, we are planning to provide documentation with some key before / after examples, showing practically how migration can be achieved.

Documentation and Updated Sample

Documentation for the adapter will be updated during the preview period, ahead of the GA package being made available. Once a draft of this documentation is available this post will be updated.

In the meantime, a completely refreshed sample has been produced to show the usage of the updated adapter. You can currently find the sample on the in-flight development branch along with the updated readme.

The sample demonstrates a bot that has adapters / controllers for both the default Bot Framework channels (using the BotFrameworkAdapter) and for Alexa (using the AlexaAdapter). It uses the updated integration method and takes advantage of the new AlexaRequestToMessageEventActivitiesMiddleware for transforming incoming requests.

At its core, the sample is an echo bot: anything you say to an Alexa skill connected to the sample will be repeated back to you. You can also say specific commands to show other capabilities.

  • “finish” – by default the sample leaves requests open and waits for more speech from the user (multi-turn conversation), but saying “finish” will force the bot to send a final message and use the IgnoringInput InputHint on the outgoing activity to end the conversation (see the sketch after this list).
  • “card” – sends a response back to the user with a simple Alexa card attached.
  • “display” or “hint” – if you are using a device with a display (such as an Echo Spot / Show) or the Alexa simulator, then saying “display” / “hint” will show a response which includes a display or hint directive depending on the command given.

All of the above capabilities are shown in a single bot file, allowing you to easily see how each response type is achieved.
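As a minimal sketch of how the “finish” behaviour works, using the standard InputHints values from the Bot Framework schema:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class EchoBot : ActivityHandler
{
    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext,
        CancellationToken cancellationToken)
    {
        if (turnContext.Activity.Text?.Trim().ToLowerInvariant() == "finish")
        {
            var reply = MessageFactory.Text("Goodbye!");

            // IgnoringInput tells the adapter not to wait for further speech,
            // ending the Alexa session after this response.
            reply.InputHint = InputHints.IgnoringInput;

            await turnContext.SendActivityAsync(reply, cancellationToken);
            return;
        }

        // Default: echo back and leave the session open for more speech.
        await turnContext.SendActivityAsync(
            MessageFactory.Text($"You said: {turnContext.Activity.Text}"),
            cancellationToken);
    }
}
```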
