I am creating an Alexa skill that will let Alexa users interact with a consumer-facing e-commerce site. The site already has a "call a representative" feature; as a side project, I want to offer the same capability through conversation. I'll need slots such as location, call category, etc. This is basically a transactional bot.
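For context, the kind of slot handling I have in mind looks something like the following sketch of a bare Lambda handler for the skill (no SDK, just the raw Alexa request/response JSON). The intent and slot names (`CallRepresentativeIntent`, `CallCategory`, `Location`) are placeholders I made up for illustration:

```python
def build_response(speech, end_session=False):
    """Wrap plain-text speech in the Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }


def handle_request(event, context=None):
    """Lambda entry point: route Alexa requests by type and intent name."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Welcome. How can I help you today?")
    if request["type"] == "IntentRequest":
        intent = request["intent"]
        if intent["name"] == "CallRepresentativeIntent":
            # Slot values arrive nested under the intent in the request JSON.
            slots = intent.get("slots", {})
            category = slots.get("CallCategory", {}).get("value")
            location = slots.get("Location", {}).get("value")
            speech = f"Connecting you to a {category} representative near {location}."
            return build_response(speech, end_session=True)
    return build_response("Sorry, I didn't get that.")
```

This is only a minimal sketch of the flow I'm after; a real skill would also validate slots and confirm before initiating the call.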
In the future, if this is successful, I would like the same general application to be available on other voice/IoT devices (for example, Google Home with the Google Assistant). So I'd like to abstract the voice interactions behind a common layer, with a single shared API for the business logic to talk to.
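Roughly, the abstraction I'm picturing is one adapter per platform that normalizes the incoming request into a common (intent, slots) shape, so the core logic stays platform-independent. A sketch under those assumptions (all names here are hypothetical, and the Lex adapter assumes the Lex V1 Lambda event format):

```python
from dataclasses import dataclass, field


@dataclass
class VoiceRequest:
    """Platform-neutral representation of one user turn."""
    intent: str
    slots: dict = field(default_factory=dict)


def from_alexa(event):
    """Adapter: normalize an Alexa IntentRequest into the common shape."""
    intent = event["request"]["intent"]
    slots = {name: s.get("value") for name, s in intent.get("slots", {}).items()}
    return VoiceRequest(intent=intent["name"], slots=slots)


def from_lex(event):
    """Adapter: normalize a Lex (V1) Lambda event into the common shape."""
    current = event["currentIntent"]
    return VoiceRequest(intent=current["name"], slots=dict(current.get("slots") or {}))


def fulfill(req: VoiceRequest) -> str:
    """Core business logic: written once, shared by every adapter."""
    if req.intent == "CallRepresentativeIntent":
        return f"Calling a {req.slots.get('CallCategory')} representative."
    return "Sorry, I can't help with that."
```

The design choice is that only the thin adapters know about each platform's JSON; adding Google Assistant later would mean one more adapter, not a second copy of the application.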
This leaves me researching some technologies like api.ai, wit.ai, Lex, etc.
But since this is an application for Alexa, and I already rely on AWS and Amazon in general, I think I would rather use Lex or just write a native Alexa skill.
It's hard for me to understand the differences between them. I understand that Lex is built on the same technology that powers Alexa, and I see that they share concepts such as intents, slots, etc.
But I'm looking for any differences between the two services:
Would using Lex make it easier to integrate with other devices later? Is there any other benefit?
Would Lex be more flexible when designing or modifying the conversation flow? It seems that Lex is a bit more complex and may therefore provide more functionality.
Can Lex be integrated with Alexa? For example, can a Lex bot power an Alexa skill, or do the Alexa skill/interaction model, intents, slots, etc. have to be defined separately?
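To make the flexibility question concrete, here's my current understanding of what the extra control looks like in practice: a Lex dialog code hook can steer the conversation turn by turn, e.g. by telling Lex which slot to ask for next. A sketch assuming the Lex V1 Lambda event/response format, with made-up slot names:

```python
def elicit_slot(intent_name, slots, slot_to_elicit, message):
    """Build a Lex V1 dialogAction that re-prompts the user for one slot."""
    return {
        "dialogAction": {
            "type": "ElicitSlot",
            "intentName": intent_name,
            "slots": slots,
            "slotToElicit": slot_to_elicit,
            "message": {"contentType": "PlainText", "content": message},
        }
    }


def dialog_hook(event, context=None):
    """Dialog code hook: ask for whichever required slot is still missing."""
    intent = event["currentIntent"]
    slots = intent["slots"]
    for name, prompt in [
        ("CallCategory", "What is the call about?"),
        ("Location", "Which location are you in?"),
    ]:
        if not slots.get(name):
            return elicit_slot(intent["name"], slots, name, prompt)
    # All required slots filled: hand control back to Lex's configured flow.
    return {"dialogAction": {"type": "Delegate", "slots": slots}}
```

If I understand correctly, an Alexa skill can achieve something similar with its own dialog management, so part of my question is whether Lex actually adds flexibility here or just a second way to do the same thing.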