On Saturday 8th July, fellow Umbracians interested in IoT and I got together to spend the day learning Alexa skill development.
** This post is over a year old, so the tech is likely out of date **
You can read more about the day in the Skrift IO newsletter, but here I'll chat about my learnings. I focused my time on seeing what data can be included in Alexa skill cards and how we can content-manage it using Umbraco.
After my initial investigations into getting Alexa integrated with CMS data, I wanted to explore what user interaction is available with Alexa beyond the obvious voice-controlled messages and spoken responses. Within the Alexa mobile app there are “Cards” that can show users details of their interaction with the skill. See more info on including card data in your response here.
The Umbraco-y bit
As per the APIs I built in a previous blog post, the Umbraco fields for text, images and links can be added to the doctype and returned via our API; we just need to put this data into the relevant Alexa response fields. So I got started by adding these fields to Umbraco and edited my model to be returned in JSON. See the new Umbraco doctype below… yes, all my content is based around my cat!
As you can see in the screenshot below, I have updated my model and this content is now returned in the JSON. You’ll notice I have a field for a link. It turns out links can’t be shown in cards on custom skills, which sucks! But it seems flash briefing skills can have links… I’ve had a play with this and will blog about it soon!
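To give a feel for the shape of things, here’s a small sketch of the kind of JSON a card-enabled content item might come back as from the Umbraco API. The field names here are my own illustrative choices, not the actual doctype properties:

```javascript
// Sketch: mapping a content item into the JSON our Alexa skill will consume.
// Field names (speechText, cardTitle, etc.) are assumptions for illustration.
function toCardJson(content) {
  return {
    speechText: content.speechText,     // read aloud by Alexa
    cardTitle: content.cardTitle,       // title shown on the card in the app
    cardText: content.cardText,         // body text shown on the card
    cardImageUrl: content.cardImageUrl  // must be served over HTTPS
  };
}

const example = toCardJson({
  speechText: "Mitten is a very fluffy cat.",
  cardTitle: "Mitten the cat",
  cardText: "Mitten is a very fluffy cat.",
  cardImageUrl: "https://example.com/mitten.jpg" // hypothetical URL
});

console.log(JSON.stringify(example, null, 2));
```

The point is just that everything the card needs (text, title, image URL) is plain content-managed data, so it slots naturally into an Umbraco doctype.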
The Alexa-y bit
Within the Lambda function, we can edit the card info part of the response rather than just the text read aloud by Alexa, as I had done previously.
See my original Lambda function code here, and below you can see how I edited it. Basically, I now change the ‘card’ object being returned. You’ll see I ended up hardcoding my image URL to an image on AWS; I had some trouble getting my own image to show due to restrictions on card images (SSL and CORS). Since this was just a proof of concept, I hardcoded it rather than get stuck dealing with a server config issue all day!
Now, here is what we see in the app when we run our skill:
As you can see, it hasn’t taken much to go from CMS-controlled text for Alexa’s speech to CMS-controlled content for the app as well!
Again, I know this hasn’t been an overly technical blog… I guess that’s my point. It’s not too difficult, just have a wee try at hacking around and you can build something quite easily!
As usual give feedback or say hello on Twitter :)