I am currently trying to write a demo for Actions on Google using a leisure web service.
At the moment, the user opens the action ("talk to testaction") and receives a welcome message (through the main intent). This initial intent expects a user response, and also sets the next expected intent through the possible_intents field in the JSON response.
According to the documentation, I should be able to specify the expected user intent in the possible_intents field of my HTTP JSON response.
However, if I use any intent other than "assistant.intent.action.TEXT", then as soon as I respond to the initial intent / invocation I get the following error:
I'm sorry, I did not understand.
And the user's answer to the initial welcome intent is not sent to my service correctly.
This does not work:
{
  "response": "...",
  "expectUserResponse": true,
  "conversationToken": "...",
  "audioResponse": "...",
  "debugInfo": {
    "agentToAssistantDebug": {
      "agentToAssistantJson": {
        "conversation_token": "...",
        "expect_user_response": true,
        "expected_inputs": [
          {
            "input_prompt": {
              [...]
            },
            "possible_intents": [
              {
                "intent": "testintent"
              }
            ]
          }
        ]
      }
    }
  }
}
This works:
{
  "response": "...",
  "expectUserResponse": true,
  "conversationToken": "...",
  "audioResponse": "...",
  "debugInfo": {
    "agentToAssistantDebug": {
      "agentToAssistantJson": {
        "conversation_token": "...",
        "expect_user_response": true,
        "expected_inputs": [
          {
            "input_prompt": {
              [...]
            },
            "possible_intents": [
              {
                "intent": "assistant.intent.action.TEXT"
              }
            ]
          }
        ]
      }
    }
  }
}
My testintent intent is correctly defined in the action package and works fine if I invoke it directly; a simplified sketch of the declaration follows below.
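For context, the relevant part of my action package looks roughly like this (simplified, with placeholder values and an illustrative query pattern rather than my exact file):

{
  "versionLabel": "1.0",
  "agentInfo": {
    "languageCode": "en-US",
    "projectId": "..."
  },
  "actions": [
    {
      "initialTrigger": {
        "intent": "assistant.intent.action.MAIN"
      },
      "httpExecution": {
        "url": "..."
      }
    },
    {
      "initialTrigger": {
        "intent": "testintent",
        "queryPatterns": [
          { "queryPattern": "run the test" }
        ]
      },
      "httpExecution": {
        "url": "..."
      }
    }
  ]
}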
Is it only possible to use the generic TEXT intent at this point, meaning I then have to perform all of the text parsing and intent recognition myself in my code?