Optimising your Bot Framework bot for Facebook Part 1 – The Get Started button

When building chat bots with the Bot Framework, we are in a great position of being able to target many channels with a single bot.  However, that doesn’t mean you should simply enable your bot on multiple channels without considering how you can optimise the experience for users of individual channels.

In this and the next couple of posts I am going to be looking at a few ways you can optimise your bot for use with the Facebook channel.

In this post I am going to start off with looking at enabling a better on-boarding experience for your users using the ‘Get Started’ button.

Why?

Before we talk about the how, let's quickly touch on the why. On-boarding is a super important thing to consider when building a bot. The worst possible experience in most cases is a bot that doesn't set any expectations for the user and simply starts with a blank message window. This can lead to confusion and even the user thinking that they are going to be messaging a human, which is usually not what you want. Think about it: if your users are talking to a bot (and they know it) and it gets things right 9 times out of 10, they are going to be pretty impressed because they are being serviced instantly, and in return they are likely to be more forgiving of a few mistakes. However, if they think they are talking to a human and the 'human' gets simple things wrong 1 time out of 10, they are likely to get frustrated.

On-boarding is also a great way to set users' expectations about what a bot can actually do. By providing a welcome message that greets your users, lists a few features and maybe gives some examples of how they can get started with your bot, you reduce the chance of the user asking the bot to do something it cannot do, and therefore reduce the chance of errors and a poor experience.

Can I not use conversation updates and the Facebook greeting message?

Under normal circumstances, on most channels, we can use the Conversation Update activity, a special type of activity sent to your bot when events such as a user starting a conversation occur. Unfortunately these do not work consistently across all channels: whereas you can use them to send a welcome message in web chat, the same doesn't work for Facebook – so we need to find another way of getting our welcome message to the user.

You also have the option of using the Facebook greeting message, which is shown to users before they send their first message to your bot.  As you can see in the image below from the Walt Disney World bot I released recently (more on that in another post), I have started to set the user's expectations of what my bot can do using this message.  This is absolutely something you should use and is a useful tool in and of itself, but it has its limitations. Namely, you don't get many characters to work with, and users may simply ignore it and start talking to your bot, at which point it's gone.

[Image: the Facebook greeting message on the Walt Disney World bot]


Enter the Facebook Get Started button

Thankfully, Facebook have thought about this and have given us a way of knowing when a user is starting a conversation with our bot. The Get Started button can be enabled on your Facebook Messenger bot so that a user must click it before they are able to send a message to your bot.  When they do click it, we receive specific Facebook channel data on the Activity that is sent to our bot, meaning we can look out for it and respond appropriately.

Before you can handle the Get Started button in your bot, you need to enable it. To do this you need to send a Curl request like the one below, replacing the page access token with your own (you will have generated this when you registered your bot with the Facebook channel).
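
A sketch of what that request might look like is shown below. This assumes the Messenger Platform messenger_profile endpoint and uses GET_STARTED_PAYLOAD as an arbitrary payload value (you can use whatever string you like, as long as you look for the same value in your bot) – check the Messenger Platform documentation for the current endpoint version.

```bash
curl -X POST -H "Content-Type: application/json" -d '{
  "get_started": { "payload": "GET_STARTED_PAYLOAD" }
}' "https://graph.facebook.com/v2.6/me/messenger_profile?access_token=YOUR_PAGE_ACCESS_TOKEN"
```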

Once enabled, when a user first chooses to message your bot, they will see the Get Started button. Here it is in action on my bot. Hint: If you have already started a conversation with your bot through Facebook, you can click the settings icon in Messenger and delete the conversation. Next time you start a conversation, you will get the Get Started button as if it was the first time.

[Image: the Get Started button shown at the start of a new Messenger conversation]


Handling the Get Started button in your Bot Framework code

When a user clicks the Get Started button and the resulting Activity reaches your bot, you need to do two things:

  • Check if the incoming message is from the Facebook channel
  • If it is, check for the Get Started button postback payload (you can read more about channel specific data over at the excellent Bot Framework docs site)

Below is an example of how you can achieve this by altering the Post method on your bot's MessagesController.
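
This is a minimal sketch based on the standard MessagesController template. The payload value GET_STARTED_PAYLOAD is whatever you configured when enabling the button, RootDialog stands in for your own root dialog, and the channel data is read here as a Newtonsoft JObject – the shape of the Facebook channel data is an assumption you should verify against an incoming activity from your own bot.

```csharp
public async Task<HttpResponseMessage> Post([FromBody]Activity activity)
{
    if (activity.Type == ActivityTypes.Message)
    {
        // Only check for the Get Started postback on the Facebook channel
        if (activity.ChannelId == "facebook")
        {
            // Facebook channel data arrives as a JSON payload on the activity
            var channelData = activity.ChannelData as JObject;
            var postbackPayload = channelData?["postback"]?["payload"]?.ToString();

            if (postbackPayload == "GET_STARTED_PAYLOAD")
            {
                await SendWelcomeMessage(activity);
                return Request.CreateResponse(HttpStatusCode.OK);
            }
        }

        await Conversation.SendAsync(activity, () => new RootDialog());
    }
    else
    {
        HandleSystemMessage(activity);
    }

    return Request.CreateResponse(HttpStatusCode.OK);
}
```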

If the Get Started payload is found, you can then call another method to send a welcome message to the user, which could look something like this.
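
A sketch of such a method is below – the welcome text is purely illustrative, and we reply via the ConnectorClient because at this point we are outside of any dialog.

```csharp
private async Task SendWelcomeMessage(Activity incomingActivity)
{
    var connector = new ConnectorClient(new Uri(incomingActivity.ServiceUrl));

    var reply = incomingActivity.CreateReply(
        "Hi, I'm the Walt Disney World bot! You can ask me things like ride wait times, " +
        "park opening hours or where to find your favourite character.");

    await connector.Conversations.ReplyToActivityAsync(reply);
}
```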

You could even send the user a choice prompt so that they can simply choose an option to get started with your bot!
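
For example, instead of sending a plain welcome message you could hand the user straight to a dialog that shows a choice prompt. A hypothetical sketch (the dialog name and options are illustrative):

```csharp
[Serializable]
public class WelcomeDialog : IDialog<object>
{
    public async Task StartAsync(IDialogContext context)
    {
        // Greet the user and present a small set of starting options
        PromptDialog.Choice(
            context,
            this.AfterChoiceSelected,
            new[] { "Ride wait times", "Park hours", "Character locations" },
            "Hi, I'm the Walt Disney World bot! What would you like to do first?");
    }

    private async Task AfterChoiceSelected(IDialogContext context, IAwaitable<string> result)
    {
        var selection = await result;
        await context.PostAsync($"Great, let's look at {selection.ToLower()}.");
        context.Done<object>(null);
    }
}
```

You would then launch this dialog from the Post method (for example with Conversation.SendAsync(activity, () => new WelcomeDialog())) rather than sending a one-off welcome reply.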

Summary and what’s next for Facebook optimisation for your bot?

Hopefully this post has given you a good tool to add to your armoury when building bots with great user experience in mind. I encourage you to always consider what channel-specific capabilities you can take advantage of.  Don't forget, in many cases users will already be used to such channel capabilities, like the Get Started button, from their interactions with other people and bots on the platform.

In the next post I will be discussing the Persistent Menu and how you can use it to give your users access to quick actions anytime during their conversation.  Here’s a peek of what it looks like.

[Image: the Facebook persistent menu in Messenger]

Creating and testing a Cortana Skill with Microsoft Bot Framework

With the release of the new Cortana Skills Kit, with Bot Framework integration, we are now able to create skills which are backed by bots! This is a really exciting prospect, as we can now potentially have a single bot that serves both text-based channels and speech-enabled channels like Cortana.

Even if you are familiar with the Bot Framework, there are quite a few new pieces to consider when creating a bot on the Cortana channel. In this post I am going to go through how you can create a basic Cortana Skill using a Bot Framework bot and then test it by talking with your skill in Cortana.

Setup your developer environment

First things first.  Right now the Cortana Skills Kit is only available for the US (United States) language and region, so before you do anything you will need to set your environment's language and region settings to US English.  You will also need to alter the region on your Android / iOS device if you plan to use the Cortana app to test your skill, which is what I will be doing as part of this post.

Register your bot and create your skill in the Cortana Developer Dashboard

Now, before we start building our Cortana-enabled bot, we need to create our Cortana Skill from the developer dashboard.  You can do this either by creating your skill in the Cortana Developer Dashboard at https://developer.microsoft.com/en-us/cortana/dashboard, from where you will be redirected to the Bot Framework dashboard to register a bot, or by registering your bot in the Bot Framework dashboard first and then enabling the Cortana channel – which, incidentally, is what you will want to do if you want to enable Cortana for an existing bot.

For this example, I have registered a bot in the Bot Framework portal first. My demo bot is called ‘HR bot’ and can help the user book meetings and let their employer know if they won’t make it into work.  I won’t go over how to register a bot in this post, but if you need a refresher or this whole Bot Framework game is new to you then head over to some of my earlier posts where you can find out about getting started and some of the basics.

Once you have registered your bot in the Bot Framework portal, you should notice that Cortana is now an available channel (along with the new Skype for Business and Bing channels – but that's for another post!).

Selecting the new channel will open a new window and take you to the Cortana Skills Dashboard so that you can set up the new skill that will be connected to your bot.  Here we need to provide a few pieces of information, the two most important being:

  • Display name – the name shown in the Cortana canvas when a user uses your skill
  • Invocation name – the name a user speaks to talk to your skill, e.g. "Hey Cortana, ask HR Bot to book a meeting for me".  It is really important to pick an invocation name that is easy for a user to say and equally easy for Cortana to understand. I will be posting another blog at some point soon with some best practice information.

There are additional options available to you on this page, such as the ability to surface user profile information through your skill to your bot, and I will explore these in future posts; for now, just enter the basic information and save your changes.  Once you have done this, you should see Cortana registered as a channel for your bot!

Teach your bot to speak!

Now that we have registered our bot and enabled the Cortana channel, it's time to build the bot itself with the Cortana channel in mind.

For my example, I have something that will be very familiar to anyone who has been developing with the Bot Framework to date: a bot that is integrated with the LUIS Cognitive Service. One of the intents I have wired up is ReportAbsense, where a text-based conversation in a channel like web chat might look similar to the one shown below.
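
For illustration, the exchange might run something like this (the wording is illustrative):

```
User: I'm not feeling well and won't be in the office today
Bot:  Sorry to hear you won't be in today. I can let your manager know for you.
Bot:  Shall I email your manager now?
User: Yes please
Bot:  OK, I have let them know. Get well soon!
```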

Up until now, to achieve the above conversation we would use the PostAsync method on our conversation context object to send messages to the user, supplying the text that we want to post into the channel. The code for the conversation above would look something like the below.
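
A sketch of that intent handler, using PostAsync and a confirm prompt (the method names and message wording are illustrative, and the handler is assumed to live inside a LuisDialog):

```csharp
[LuisIntent("ReportAbsense")]
public async Task ReportAbsence(IDialogContext context, LuisResult result)
{
    await context.PostAsync("Sorry to hear you won't be in today. I can let your manager know for you.");

    PromptDialog.Confirm(
        context,
        this.AfterConfirmAbsence,
        "Shall I email your manager now?");
}

private async Task AfterConfirmAbsence(IDialogContext context, IAwaitable<bool> confirmed)
{
    if (await confirmed)
    {
        await context.PostAsync("OK, I have let them know. Get well soon!");
    }

    context.Wait(this.MessageReceived);
}
```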

With the latest release of the Bot Framework SDK, a new method is now available to us which we can use when we are dealing with speech enabled channels – context.SayAsync.

Using the SayAsync extension method, we can specify both the text that we would like to display in text-based channels and the speech that should be output in speech-enabled channels like Cortana. In fact, Cortana will show the text on the screen (providing the Cortana device has a screen – like a PC or phone) and also say the speech you define. The built-in prompt dialogs now also support speech-enabled channels, with additional properties speak and retrySpeak.  This means we can tailor our messages depending on the method of communication and also ensure our bot can support both speech and non-speech enabled channels.

After updating the above code to use the new SayAsync method and updating our prompt to include the options for speak / retrySpeak, it now looks like the below.
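
A sketch of the same handler reworked for speech is below. The AfterConfirmAbsence callback is unchanged from the earlier example, and the exact PromptOptions constructor parameters may differ slightly between SDK versions.

```csharp
[LuisIntent("ReportAbsense")]
public async Task ReportAbsence(IDialogContext context, LuisResult result)
{
    // Specify both the text to display and the speech to be spoken
    await context.SayAsync(
        text: "Sorry to hear you won't be in today. I can let your manager know for you.",
        speak: "Sorry to hear you won't be in today. I can let your manager know for you.");

    PromptDialog.Confirm(
        context,
        this.AfterConfirmAbsence,
        new PromptOptions<string>(
            prompt: "Shall I email your manager now?",
            speak: "Shall I email your manager now?",
            retrySpeak: "Sorry, I didn't catch that. Shall I email your manager now?"));
}
```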

You can now use these methods to build a bot, or update an existing bot.  Then you can deploy it and ensure that the bot’s endpoint is updated in the Bot Framework dashboard as you would with any other bot.  Once you have done this you are almost ready to test your bot on Cortana.

Before moving on to testing our bot though, there are a few things worth pointing out:

  • The SayAsync method also allows you to set options, including an InputHint (ExpectingInput / AcceptingInput / IgnoringInput), which tells the Cortana channel whether you are expecting input and whether the microphone should be activated after your message is posted. For example, you might send multiple separate messages all at once, each setting its InputHint to IgnoringInput apart from the last; this helps ensure that no input is accepted until all messages have been sent.
  • You can specify the message to be spoken directly on an activity object as well as using the SayAsync extension method. Both approaches are sketched below.
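
A quick sketch of both points – sending two messages where only the last one opens the microphone, and setting the Speak property directly on the activity (the message text is illustrative):

```csharp
// First message: keep the microphone closed while we are still talking
var firstMessage = context.MakeMessage();
firstMessage.Text = "I've checked your calendar.";
firstMessage.Speak = "I've checked your calendar.";
firstMessage.InputHint = InputHints.IgnoringInput;
await context.PostAsync(firstMessage);

// Final message: we now expect a response, so allow the microphone to open
var secondMessage = context.MakeMessage();
secondMessage.Text = "You are free at 2pm. Shall I book the meeting then?";
secondMessage.Speak = "You are free at 2pm. Shall I book the meeting then?";
secondMessage.InputHint = InputHints.ExpectingInput;
await context.PostAsync(secondMessage);
```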

I plan to go into more detail about all of the various aspects of building bots for Cortana in future posts, with this post simply designed to be an introduction.

Enable debugging and testing our new skill

Now for the really exciting bit, testing our new Cortana enabled bot!

First we need to enable debugging within the Cortana dashboard.  This will make your skills available on Cortana devices where you are signed in with the same Microsoft Account under which you registered the skills.  It will also enable additional debug experiences, such as the ability to see additional information sent between your bot and the device.

Now that we have enabled debugging, we can use a Cortana enabled device, such as a Windows 10 PC or an Android / iOS device.

For my example, I launched the Cortana app on my Android phone and said "Hey Cortana, tell HR Bot that I am not well and will not be in the office today".  At this point Cortana correctly identified my skill, connected to my bot and, because it was the first time I had used the skill, presented me with a permission dialog where I could agree to continue using the skill.

Once I have accepted, I can continue to converse with my skill using my voice, just as I would with my keyboard in another channel.

As mentioned earlier, you can see that Cortana is displaying accompanying text messages on the canvas as well as outputting speech and we can also continue to use other elements that we already utilise on other channels today, such as cards, to display other appropriate information.

Hopefully this post has helped you get up and running with a Cortana Skill backed by the Bot Framework. Watch out for future posts soon about other Cortana Skill features!

Using Scorables for global message handling and interrupting dialogs in Bot Framework

If you have been using Bot Framework for any length of time, you will likely be familiar with dialogs and the dialog stack – the idea that you have a root dialog and you can pass control to child dialogs which, when finished, will pass control back up to their parent dialog.  This method of dialog management gives us a lot of flexibility when designing our conversational flow.  For example, using the LUIS service to determine a user’s intent, but then falling back to a QnA Maker dialog if no intent can be recognised.

However, there are times when we might want to be able to interrupt our current dialog stack to handle an incoming request, for example, responding to common messages, such as “hi”, “thanks”, “how are you” etc.  Scorables are a special type of dialog that we can use within Bot Framework to do just this – global message handlers if you will!

Scorable dialogs monitor all incoming messages to a bot and decide whether they should try to handle a message.  If they should, they set a score between 0 and 1 indicating the priority they should be given – this allows you to have multiple scorable dialogs, with the one that has the highest score handling the message.  If a scorable matches an incoming message (and has the highest score, if there are multiple matches), it handles the response to the user rather than the message being picked up by the current dialog in the stack.

Simple example

Below is an example simple scorable dialog that is designed to respond to some common requests as described above.
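
The sketch below follows the pattern used in the Bot Builder global message handler samples. The class names and canned responses are illustrative, and the dialog it hands off to, CommonResponseDialog, is shown later in the post.

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Dialogs.Internals;
using Microsoft.Bot.Builder.Scorables.Internals;
using Microsoft.Bot.Connector;

public class CommonResponsesScorable : ScorableBase<IActivity, string, double>
{
    private readonly IDialogTask task;

    public CommonResponsesScorable(IDialogTask task)
    {
        this.task = task;
    }

    // Inspect the incoming activity and return the matched phrase (our 'state'), or null
    protected override Task<string> PrepareAsync(IActivity activity, CancellationToken token)
    {
        var message = activity as IMessageActivity;

        if (message != null && !string.IsNullOrWhiteSpace(message.Text))
        {
            var text = message.Text.ToLowerInvariant().Trim();

            if (text == "hi" || text == "thanks" || text == "how are you")
            {
                return Task.FromResult(text);
            }
        }

        return Task.FromResult<string>(null);
    }

    // Only flag a score if PrepareAsync matched one of our phrases
    protected override bool HasScore(IActivity item, string state)
    {
        return state != null;
    }

    // Return the highest possible score so this scorable wins whenever it matches
    protected override double GetScore(IActivity item, string state)
    {
        return 1.0;
    }

    // Hand the response off to a basic dialog, making it the active dialog on the stack
    protected override async Task PostAsync(IActivity item, string state, CancellationToken token)
    {
        string response;

        switch (state)
        {
            case "hi":
                response = "Hello there! How can I help?";
                break;
            case "thanks":
                response = "No problem at all!";
                break;
            default:
                response = "I'm doing great, thanks for asking!";
                break;
        }

        var dialog = new CommonResponseDialog(response);
        var interruption = dialog.Void<object, IMessageActivity>();

        this.task.Call(interruption, null);
        await this.task.PollAsync(token);
    }

    // Called once the interruption has completed - nothing to clean up here
    protected override Task DoneAsync(IActivity item, string state, CancellationToken token)
    {
        return Task.CompletedTask;
    }
}
```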

Let’s discuss what’s happening in the code above.

In the PrepareAsync method, our scorable dialog accepts the incoming message activity and checks the incoming message text to see if it matches one of the phrases that we want to respond to.  If the incoming message is found to be a match then we return that message, otherwise we return null.  This sets the state of our dialog, which is then passed to some of the other methods within the dialog to decide what to do next.

Next, the HasScore method checks the state property in order to determine whether the dialog should provide a score and flag that it wants to handle the incoming message.  In this instance the dialog is simply checking to see if the PrepareAsync method set our state to a string.  If it did then HasScore returns true, but if not (in which case state would be null) it returns false.  If the dialog returns false at this point then the message will not be responded to by this dialog.

If HasScore returns true then the GetScore method kicks in to determine the score that the dialog should post, so that it can be prioritised against other scorables that have also returned a score.  In this case, to keep things simple, we are returning a value of 1.0 (the highest possible score) to ensure that the dialog handles the response to the message.  There are other scenarios where we might wish to return an actual score; for example, you might have several scorables, each sending the incoming message to a different QnA Maker service, and if an answer is found the score could be determined based on the response from the QnA Maker service.  In this scenario the dialog that receives the highest confidence answer from its service would win and respond to the message.

At this point, if the dialog has returned a score and it has the highest score amongst any other competing scorables, the PostAsync method is called. Within the PostAsync method we can then hand off the task of responding to another dialog by adding it to the dialog stack, so that it becomes the active dialog.  In the example we are checking to see which phrase the incoming message matches and returning an appropriate response to the user by passing the response to a very basic dialog shown below (hint: it’s a very basic dialog to illustrate the point, but you could add any dialog here).
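
Here is that basic dialog – a minimal sketch that simply posts the response it was given and completes:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;

[Serializable]
public class CommonResponseDialog : IDialog<object>
{
    private readonly string messageToSend;

    public CommonResponseDialog(string messageToSend)
    {
        this.messageToSend = messageToSend;
    }

    public async Task StartAsync(IDialogContext context)
    {
        // Post the canned response and immediately complete, handing control back
        await context.PostAsync(this.messageToSend);
        context.Done<object>(null);
    }
}
```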

Once the dialog above has completed and calls context.Done, we are passed back to our scorable, the DoneAsync method is called, and the process is complete.

The next message received by the bot, providing it doesn't match a scorable dialog again, will pick up exactly where the conversation left off.

Registering a scorable

In order for scorables to respond to incoming messages, we need to register them.  To register the scorable in the example above we first create a module that registers the scorable.
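
A sketch of the module, assuming the scorable from the earlier example:

```csharp
using Autofac;
using Microsoft.Bot.Builder.Dialogs.Internals;
using Microsoft.Bot.Builder.Scorables;
using Microsoft.Bot.Connector;

public class CommonResponsesScorableModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        base.Load(builder);

        // Register the scorable so it is evaluated against every incoming activity
        builder.Register(c => new CommonResponsesScorable(c.Resolve<IDialogTask>()))
            .As<IScorable<IActivity, double>>()
            .InstancePerLifetimeScope();
    }
}
```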

Then register the new module with the conversation container in Global.asax.cs.
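
Something along these lines in Application_Start should do it (a sketch, assuming the standard Bot Framework project template):

```csharp
protected void Application_Start()
{
    Conversation.UpdateContainer(builder =>
    {
        builder.RegisterModule(new CommonResponsesScorableModule());
    });

    GlobalConfiguration.Configure(WebApiConfig.Register);
}
```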

Summary

In this post we have seen how you can use scorable dialogs to perform global message handling.  There are many potential use cases for scorables, including implementing things like settings dialogs or having some form of global cancel operation for a user to call, both of which can be seen in one of the samples over at the Bot Builder Samples GitHub repo.

Personally, I love scorables and I think you will too.

Forwarding activities / messages to other dialogs in Microsoft Bot Framework

I have been asked a question a lot recently – is it possible to pass messages / activities between dialogs in Microsoft Bot Framework?  By doing this you could have a root dialog handling your conversation, but then hand off the message activity to another dialog.  One common example of this is using the LUIS service to recognise a user’s intent, but handing off to a dialog powered by the QnA Maker service if no intent is triggered.

Thankfully this is very simple to do.

Normally, to add a new dialog to the stack, we would use context.Call, which adds a dialog to the top of the stack. However, there is another, less widely known method, context.Forward, which not only calls a child dialog and adds it to the stack, but also lets us pass an item to that dialog, just as if it were the root dialog receiving a message activity.

The example code below shows how to fall back to a dialog that uses the QnA Maker service when no intent is identified within a LUIS dialog.
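
This is a sketch of the relevant intent handler inside a LuisDialog. FaqDialog is assumed to be a QnA Maker powered dialog that calls context.Done with a bool indicating whether an answer was found.

```csharp
[LuisIntent("None")]
[LuisIntent("")]
public async Task NoneIntent(IDialogContext context, IAwaitable<IMessageActivity> message, LuisResult result)
{
    // Forward the original message activity to the FAQ dialog, just as if it had
    // received the message itself, and resume in AfterFAQDialog when it completes
    await context.Forward(new FaqDialog(), this.AfterFAQDialog, await message, CancellationToken.None);
}

private async Task AfterFAQDialog(IDialogContext context, IAwaitable<bool> answerFound)
{
    if (!await answerFound)
    {
        await context.PostAsync("Sorry, I wasn't able to find an answer to that one.");
    }

    context.Wait(this.MessageReceived);
}
```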

In the example above, a new instance of the FaqDialog class is created and the Forward method takes the incoming message (which you can get as a parameter from the LUIS intent handler), passes it to the new dialog and also specifies a callback for when the new child dialog has completed – in this case AfterFAQDialog.

Once it has finished, the FAQ dialog will call context.Done, in this example passing a Boolean to indicate whether an FAQ answer was found – if the dialog returns false then we can provide an appropriate message to the user from AfterFAQDialog.

That's it – it is super simple and unlocks the much-requested scenario of using LUIS and QnA Maker together, falling back from one to the other.

TechDays Online 2017 Bot Framework / Cognitive Services now available

This February saw the return of TechDays Online here in the UK, along with other sessions from across the pond in the U.S.  I co-presented two sessions on Bot Framework development along with Simon Michael from Microsoft and fellow MVP James Mann.  The sessions covered some great advice about bot development and dug a little deeper into subjects including FormFlow and the QnA Maker / LUIS cognitive services.

Both sessions are now available to watch online, along with tons of other great content from the rest of the 3 days.

Conversational UI using the Microsoft Bot Framework

Microsoft Bot Framework and Cognitive Services: Make your bot smarter!

Another fellow MVP, Robin Osborne, also recorded some short videos about his experience in building a real world bot for a leading brand, JustEat, so check them out over on his blog too.

Adding rich attachments to your QnAMaker bot responses

Recently I released a dialog, available via NuGet, called the QnAMaker dialog. This dialog allows you to integrate with the QnA Maker service from Microsoft, part of the Cognitive Services suite, which allows you to quickly build, train and publish a question and answer bot service based on FAQ URLs or structured lists of questions and answers.

Today I am releasing an update to this dialog which allows you to add rich attachments to your QnAMaker responses to be served up by your bot.  For example, you might want to provide the user with a useful video to go along with an FAQ answer.

QnA Maker Dialog for Bot Framework

The QnA Maker service from Microsoft, part of the Cognitive Services suite, allows you to quickly build, train and publish a question and answer bot service based on FAQ URLs or structured lists of questions and answers. Once published you can call a QnA Maker service using simple HTTP calls and integrate it with applications, including bots built on the Bot Framework.

Right now, out of the box, you will need to roll your own code / dialog within your bot to call the QnA Maker service. The new QnAMakerDialog, which is now available via NuGet, aims to make this integration even easier, allowing you to integrate with the service in just a couple of minutes with virtually no code.

Update: I have now released an update to the QnAMakerDialog which supports adding rich media attachments to your Q&A responses.

The QnAMakerDialog takes the incoming message text received by the bot, sends it to your published QnA Maker service, and sends the answer returned by the service back to the user as a reply. You can add the new QnAMakerDialog to your project using the NuGet Package Manager Console with the following command, or by searching for it using the NuGet Package Manager in Visual Studio.
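
Assuming the package ID is QnAMakerDialog, the Package Manager Console command would be:

```
Install-Package QnAMakerDialog
```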

Below is an example of a class inheriting from QnAMakerDialog and the minimal implementation.
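
A sketch of that minimal implementation is below. The attribute and type names follow the package's documented pattern, but check the README of the version you install for the exact signatures; the subscription key and knowledge base ID placeholders come from your published QnA Maker service.

```csharp
using System;
using Microsoft.Bot.Builder.Dialogs;
using QnAMakerDialog;

[Serializable]
[QnAMakerService("YOUR_SUBSCRIPTION_KEY", "YOUR_KNOWLEDGE_BASE_ID")]
public class FaqDialog : QnAMakerDialog<object>
{
}
```

With just this in place, incoming messages are sent to the service and the answer that comes back is posted to the user.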

When no matching answer is returned from the QnA Maker service, a default message, "Sorry, I cannot find an answer to your question.", is sent to the user. You can override the NoMatchHandler method to send a customised response.

For many people the default implementation will be enough, but you can also provide more granular responses for when the QnA Maker returns an answer but is not confident in it (indicated by the score returned in the response, between 0 and 100, with a higher score indicating higher confidence). To do this you define a custom handler in your dialog and decorate it with a QnAMakerResponseHandler attribute, specifying the maximum score that the handler should respond to.

Below is an example with a customised method for when a match is not found, and also a handler for when the QnA Maker service indicates lower confidence in the match (using the score sent back in the QnA Maker service response). In this case the custom handler will respond to answers where the confidence score is below 50, with anything above 50 being handled in the default way. You can add as many custom handlers as you want and get as granular as you need.
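
A sketch of what those handlers could look like (again, the exact method signatures and result properties may differ slightly between versions of the dialog):

```csharp
[Serializable]
[QnAMakerService("YOUR_SUBSCRIPTION_KEY", "YOUR_KNOWLEDGE_BASE_ID")]
public class FaqDialog : QnAMakerDialog<object>
{
    // Customised response when the QnA Maker service finds no matching answer
    public override async Task NoMatchHandler(IDialogContext context, string originalQueryText)
    {
        await context.PostAsync($"Sorry, I couldn't find an answer for '{originalQueryText}'.");
        context.Wait(MessageReceived);
    }

    // Handles answers where the confidence score returned by QnA Maker is below 50
    [QnAMakerResponseHandler(50)]
    public async Task LowScoreHandler(IDialogContext context, string originalQueryText, QnAMakerResult result)
    {
        await context.PostAsync($"I found an answer that might help: {result.Answer}");
        context.Wait(MessageReceived);
    }
}
```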

Hopefully you will find the new QnAMakerDialog useful when building your bots and I would love to hear your feedback. The dialog is open source and available in my GitHub repo, alongside the other additional dialog I have created for the Bot Framework, BestMatchDialog (also available on NuGet).

I will be publishing a walk-through of creating a service with the QnA Maker in a separate post in the near future, but if you are having trouble with that, or indeed the QnAMakerDialog, in the meantime then please feel free to reach out.

Building conversational forms with FormFlow and Microsoft Bot Framework – Part 2 – Customising your form

In my last post I gave an introduction to FormFlow (Building conversational forms with FormFlow and Microsoft Bot Framework – Part 1), the part of the Bot Framework which allows you to create conversational forms automatically based on a model, taking information from a user with many of the complexities – such as validation, moving between fields and confirmation steps – handled for you. If you have not read that post, I encourage you to give it a quick read now, as this post follows on directly from it.

As promised, in this post we will dig further into FormFlow and how you can customise the form process, including how you can change prompt text, the order in which fields are requested from the user, and concepts like conditional fields.


Building conversational forms with FormFlow and Microsoft Bot Framework – Part 1

Forms are common. Forms are everywhere. Forms on web sites and forms in apps. Forms can be complicated – even the simple ones. For example, when a user completes a contact form they might provide their name, address, contact details such as email and telephone, and their actual contact message.  We have multiple ways that we might take that information, such as drop-down lists or simply free text boxes. Then there is the small matter of handling validation as well: required fields, fields where the value needs to come from a pre-defined set of choices, and even conditional fields, where whether they are required is determined by the user's previous answers.

So, what about when we need to get this type of information from a user within the context of a bot? We could build the whole conversational flow ourselves using traditional Bot Framework dialogs, but handling a conversation like this can be really complex – e.g. what if the user wants to go back and change a value they previously entered?  The good news is that the Bot Framework has a fantastic way of handling this sort of guided conversation – FormFlow.  With FormFlow we can define our form fields and have the user complete them, whilst getting help along the way.

In this post I will walk through what is needed to get a basic form using FormFlow working.
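
As a taster, a minimal FormFlow model for the contact form scenario described above might look something like this (the property names and prompt text are illustrative):

```csharp
using System;
using Microsoft.Bot.Builder.FormFlow;

public enum ContactReason { Sales, Support, Feedback }

[Serializable]
public class ContactForm
{
    [Prompt("What is your name?")]
    public string Name { get; set; }

    [Prompt("What is your email address?")]
    public string Email { get; set; }

    // {||} is replaced by the list of enum choices when the prompt is shown
    [Prompt("What is your enquiry about? {||}")]
    public ContactReason? Reason { get; set; }

    [Prompt("What would you like to say to us?")]
    public string Message { get; set; }

    public static IForm<ContactForm> BuildForm()
    {
        return new FormBuilder<ContactForm>()
            .Message("I just need a few details to pass on your message.")
            .Build();
    }
}
```

You would then launch the form from a dialog or your MessagesController with something like FormDialog.FromForm(ContactForm.BuildForm, FormOptions.PromptInStart).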
