Introducing the long-awaited CanFulfillIntentRequest type!

Until now, Alexa users have typically had to use the skill name when invoking a third-party skill. For instance, “Alexa, ask 21 Dayz what my goals are for today.” In this example, “21 Dayz” is the skill name. If a user asked “Alexa, what are my goals for today?” without it, they would typically get “Sorry, I don’t know that one.”

Now, with the new CanFulfillIntentRequest type, you can include your skill in a pool of possible skills that can handle name-free skill interactions. This new request type is backward-compatible and works with your existing skill with some minor tweaks; specifically, you handle CanFulfillIntentRequest much like you handle the other request types. For example, say you have a simple switch statement like…

switch (req.type) {
  case "LaunchRequest":
    processLaunchRequest(req); break;
  case "IntentRequest":
    processIntentRequest(req); break;
  case "SessionEndedRequest":
    processSessionEndedRequest(req); break;
}

In this case (pun intended), you add another case for “CanFulfillIntentRequest”, for example…

switch (req.type) {
  case "LaunchRequest":
    launchRequest(req); break;
  case "CanFulfillIntentRequest":
    canFulfillIntentRequest(req); break;
  case "IntentRequest":
    intentRequest(req); break;
  case "SessionEndedRequest":
    sessionEndedRequest(req); break;
}

Or, if you are using the Alexa Skills Kit SDK, you can implement the “onCanFulfillIntent” interface. Whichever route you take, your function, in our case here, does something like the following…

function canFulfillIntentRequest (req) {
  var intentName = req.intent.name;
  var slots = req.intent.slots;

  // TODO: validate whether your skill can handle
  // this intent name and any provided slots here

  return {
    version: "1.0",
    sessionAttributes: {...},
    response: {
      canFulfillIntent: {
        canFulfill: "<YES, NO or MAYBE>",
        canFulfillSlotsResponse: {
          "<SLOT-NAME>": {
            canUnderstand: "<YES, NO or MAYBE>",
            canFulfill: "<YES or NO>"
          }
        }
      }
    }
  };
}

Basically, here’s what’s happening in the code sample above. First, Alexa sends an HTTP POST request to your endpoint with the request type set to “CanFulfillIntentRequest”. This request includes an intent name and possibly some slots as well. In your canFulfillIntentRequest function, you want to validate whether you can indeed handle the intent name and any slots provided, but you do not, and this is very important, do not execute the actual intent! All you want to do is confirm you can handle it, then respond back to Alexa with YES or NO for the “canFulfill” property, as well as YES or NO for each slot under “canFulfillSlotsResponse”.

The reason you do not want to execute the actual intent is that Alexa sends “CanFulfillIntentRequest” to multiple third-party skills (currently two skills at a time) and picks one skill after it has collected the canFulfillIntent responses. So imagine if all of those skills kicked off a pizza-ordering process or turned off the lights; not good. Only the chosen skill should execute the actual intent.
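To make the validation step concrete, here’s a minimal sketch of what the inside of canFulfillIntentRequest might look like. The “GetGoalsIntent” name, the “Date” slot, and the SUPPORTED_INTENTS lookup are all hypothetical, just to illustrate checking intent and slot support without executing anything:

```javascript
// Hypothetical lookup of intents this skill supports and their slots.
var SUPPORTED_INTENTS = { GetGoalsIntent: ["Date"] };

function canFulfillIntentRequest(req) {
  var intentName = req.intent.name;
  var slots = req.intent.slots || {};
  var supportedSlots = SUPPORTED_INTENTS[intentName];

  // Unknown intent: answer NO without doing any work.
  if (!supportedSlots) {
    return buildCanFulfillResponse("NO", {});
  }

  // Mark each provided slot YES only if this intent supports it.
  var slotResults = {};
  var allSlotsOk = true;
  Object.keys(slots).forEach(function (slotName) {
    var ok = supportedSlots.indexOf(slotName) !== -1;
    if (!ok) { allSlotsOk = false; }
    slotResults[slotName] = {
      canUnderstand: ok ? "YES" : "NO",
      canFulfill: ok ? "YES" : "NO"
    };
  });

  return buildCanFulfillResponse(allSlotsOk ? "YES" : "NO", slotResults);
}

function buildCanFulfillResponse(canFulfill, slotResults) {
  return {
    version: "1.0",
    response: {
      canFulfillIntent: {
        canFulfill: canFulfill,
        canFulfillSlotsResponse: slotResults
      }
    }
  };
}
```

Note that nothing in there orders a pizza or flips a light; it only reports whether the skill could.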

As for the other property in “canFulfillSlotsResponse” called “canUnderstand”: set it to YES if your skill has an exact or high-confidence match, and NO if it can’t match the value. Pretty straightforward.

Call me… MAYBE?

Here’s an interesting one: instead of YES or NO on some of those “canFulfill” and “canUnderstand” properties (I bet you were wondering why not true or false, right?), you can also specify “MAYBE”.

Setting MAYBE on “canUnderstand” basically lets Alexa know that your skill has a partial match instead of an exact one, as in the case of partial or fuzzy search results. As for “canFulfill” at the “canFulfillIntent” level, MAYBE tells Alexa that your skill might be able to process the request. This could be because some slots are set to YES and others to MAYBE, or because you need to prompt for account linking, fill required slots, or handle some other multi-turn conversation requirement.
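For instance, a MAYBE response might look something like this sketch. The “SongTitle” slot and the fuzzy-search scenario are made up for illustration:

```javascript
// Hypothetical MAYBE response: the skill only fuzzy-matched the
// "SongTitle" slot value, so it reports a partial match on the slot
// and MAYBE at the canFulfillIntent level.
var maybeResponse = {
  version: "1.0",
  response: {
    canFulfillIntent: {
      canFulfill: "MAYBE", // might handle it, pending the fuzzy match
      canFulfillSlotsResponse: {
        SongTitle: {
          canUnderstand: "MAYBE", // partial/fuzzy match on the value
          canFulfill: "YES"       // could act on it if chosen
        }
      }
    }
  }
};
```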

What Else?

If you are using the latest ASK CLI tools for your skills, check out the Quick Start Guide to get started.

To learn more about the specification, check out the “Name-Free Interaction for Alexa Skills” page.

Lastly, if you haven’t already, read the official Beta announcement here.

Once the feature is out of public beta, I’ll circle back and update this post with any relevant updates.

If you have any questions, feel free to reach out to me anytime or any one of the Alexa Champions or Evangelists in Slack.

Happy coding!


Programming Voice Interfaces with Jibo

Glad to hear that Jibo is finally shipping; I can’t wait to get my hands on one! You can read more on that here:

Bob and I mention Jibo in our book “Programming Voice Interfaces, Giving Connected Devices a Voice”. While we only touch on Jibo for a moment, the book gives you a high-level understanding of the current voice landscape and how to get started playing in the field.

Here’s an excerpt…

“In addition to and API.AI you will want to check out IBM Watson and Watson Virtual Agent as well as tools such as Jasper, PocketSphinx, Houndify, and Festival. You should also check out the latest offerings from Nuance and be on the lookout for startups such as Jibo, for example. Jibo is an interesting offering in that it’s an actual physical robot that moves, blinks, and reacts physically to voice input and output.

While at the time of this writing Jibo isn’t publicly available, there are tools developers can download such as the Jibo SDK, which has Atom IDE integration, as well as a Jibo Simulator (shown in Figure 2-5), which is great for visualizing how your code would affect Jibo and how users can engage with the robot.”

For more on the book, go to

For more information on developing for the Jibo platform, check out


Alexa Champion Walter Quesada Says with Voice, ‘the Opportunities Are Really Exciting’

Walter Quesada says he has been “obsessed with building for voice” ever since the Amazon Echo was first released.

“Ever since Amazon Echo came out, I’ve been learning about ways it can fit into different scenarios, both in my professional life and in my free time,” says Quesada.

The Alexa Champion is an artist turned software engineer with a rich history of imagining what’s possible with new technology. When Quesada’s passion for painting and sculpting turned to dabbling with code, he quickly found his niche.

Combining his creative eye with an eagerness to experiment with new technologies, Quesada blazed his career path doing the work that other companies or agencies had not yet learned how to do.

[ Read More ]


Adding VoiceLabs to your Amazon Alexa Skills in C#

If you’re a C# programmer and have created Amazon Alexa Skills, you already know it’s tough to find C# code samples, SDKs, and just an overall clear path to satisfy your curiosity about creating Alexa Skills. These days, it’s tough to get C# support on a lot of the new services out there. We are just now seeing the big fish like Google and Amazon support C# in their cloud offerings, which is great, but startups like VoiceLabs, for instance, come out of the gate with SDKs for Node.js, Python, Java and Ruby… no C#.

If you’re not familiar, VoiceLabs is a free analytics platform for voice that supports Amazon Alexa, Google Home, Cortana and Siri. Actually, it supports just about any platform. I have my chatbots logging to VoiceLabs right now, mainly just to see if I could. I just set my VoiceLabs project to Google Home and made a note in the metadata about what it’s really for; works for me!

So, back to my SDK rant. As I navigated their “Install SDK” section and realized there was no C# support at the time of this writing, I figured, OK, nothing new, I’ll just write my own… again… I just need to find the HTTP API documentation. After clicking through page after page, and a couple of Google searches later, I could not find any information on any kind of HTTP API. At first I was upset; I mean, who doesn’t post their HTTP API docs?! Then I took it as a challenge. Yes, I could have called or emailed, but no, that’s all too easy. I had to break down their Node.js SDK and figure it out for myself.

To make a long story short, here’s what I came up with. Works great on my machine…

var payload = new
{
    app_token = "XXX...", // your VoiceLabs app token goes here
    user_hashed_id = MD5Hash(request.Session.User.UserId),
    session_id = request.Session.SessionId,
    intent = request.Request.Intent.Name,
    data = new
    {
        metadata = request.Request.Intent.Slots,
        speech = response.response.outputSpeech.text,
        event_type = "SPEECH"
    }
};

var data = JsonConvert.SerializeObject(payload);

using (var client = new WebClient())
{
    client.Headers[HttpRequestHeader.ContentType] = "application/json";
    client.UploadString("" + payload.app_token, data);
}

A couple of notes to consider when implementing the above code. First, get your “app_token” from and replace the “XXX…” value. Then make sure that “request” is set to your AlexaRequest object, and resolve all usings for JsonConvert (Newtonsoft), WebClient and so on. Lastly, MD5Hash is a function I found online; I can’t remember who or where I stole it from, but this is what it looks like, so if you just want to steal it from here, have at it…

public static string MD5Hash(string text)
{
    MD5 md5 = new MD5CryptoServiceProvider();
    md5.ComputeHash(Encoding.ASCII.GetBytes(text));
    byte[] result = md5.Hash;

    StringBuilder strBuilder = new StringBuilder();
    for (int i = 0; i < result.Length; i++)
        strBuilder.Append(result[i].ToString("x2"));

    return strBuilder.ToString();
}

So there you have it; drop that into your C# Alexa Skill, Google Assistant Action, chatbot or whatever else you have that handles logic for voice or chat intents.

Overall, despite its HTTP API and C# SDK shortcomings, VoiceLabs is a promising analytics platform for voice. It works fast, and my intent requests are visualized instantly in the Voice Insights interface. I definitely recommend checking them out for yourself at

EDIT: I just chatted with Adam from VoiceLabs, cool dude! And yes, they do have documentation for their HTTP API; you just need to contact them to get it. I say, if you’re up for the challenge, let’s get some community-supported open source SDKs going on GitHub for C#, Unity, C++ and whatever other language you want to support!

If you run into any problems, hit me up on and if you haven’t done so yet, check out my Pluralsight course on creating Alexa Skills in C#. It’s getting a bit dated, but it still has plenty of relevant information. Enjoy!


You’re invited to Intoxicating VR @ the MIC!

Intoxicating VR at the Microsoft Innovation Center

Please join us for an evening of virtual reality immersion with industry professionals. Experience VR for yourself, up close and personal. Learn about development opportunities with some of the most talented Pixel Pathfinders in the area.

This event includes: Hands-on Demos • Food & Libations • VR Hardware

When: Thursday June 16 • 6:00 – 9:00 PM
Where: Microsoft Innovation Center
@Venture Hive
1010 NE 2nd Ave
Miami, FL 33132

Register Now to save your seat!


South Florida Code Camp is almost here!

The South Florida Code Camp agenda has just been released. There are 15 tracks with over 90 sessions. I will be speaking, along with David Isbitski, on Amazon Echo and Alexa in the IoT track. In addition, there will be some great speakers touching on Cortana, Raspberry Pi, OpenBCI, C#, .NET, Node, Angular and many other topics, so make sure to check out the agenda and register today. Over 800 people have already registered, so get your free ticket before it’s too late!

Also! We will be hosting a mini-hackathon and hands-on labs with Raspberry Pis, Windows 10 and Amazon Echo, so make sure to come through and hack away! Yes, there will be prizes like Pis and Echos!




Pluralsight Offers Free Courses to Those Out of Work

Pluralsight, a global leader in online technology training, has opened up 50 free courses to those who are unemployed and seeking to pivot into a new career in technology.

“Pluralsight is thrilled to partner with the White House to help unemployed Americans land a career in technology,” said Aaron Skonnard, founder and CEO of Pluralsight. “Online training is a great resource for people to learn at their own pace, making it a real possibility for those looking to master key technology skills in a rapidly growing job sector.”

Currently, more than half a million information technology jobs go unfilled in the United States alone, making IT the largest available jobs sector in the U.S.

The 50 courses are broken down into five categories: job-hunting skills, general technology basics, data skills, front-end web application development and IT operations.

To learn more about Pluralsight’s TechHire program visit

Additional information about Pluralsight can be found at


Enter the Matrix… IoT for Everyone!

Now here’s a Kickstarter campaign after my own heart. First, it’s called the “Matrix” (I know, right!?). Second, it’s an IoT product that could really make the Internet of Things accessible to everyone, as its slogan suggests. It’s basically a single unit packed with 15 sensors, vision and voice interfaces, a plug-and-play architecture and an open framework for third-party developers to enhance it even further.

This thing needs to happen! Check it out…

For details go to
