Adding an event listener to respond with text without speaking #38
hughwilliams94 wants to merge 2 commits into MycroftAI:20.08 from
Conversation
@hughwilliams94 Looks good 👍
Hey Hugh, thanks for the contribution. It's a great idea and would be a good use case for the Skill API; however, that isn't yet included in core, so this seems like a valid workaround until then. I would suggest we prepend

However, it's also not immediately apparent to me why one is "spoken" vs "unspoken". Perhaps the "spoken" method was intended to emulate the API endpoint. It seems that the differentiation is really localization, is that right? I.e.

Given this will be an externally facing interface (from this Skill's perspective), I want to make sure we get the naming right. I haven't done anything in this Skill before; does it even use that existing
Sorry for the slow response; I've updated the message name, adding 'skill'. On the 'unspoken', I had used this to indicate that Mycroft does not speak the response returned by Wolfram, but just sends it as a message. I also couldn't see any usage of the
I wanted to use information from WolframAlpha in another skill without Mycroft speaking the results, so I added a listener for `wolfram.unspoken.request` messages which triggers the `emit_text_result` method; a `wolfram.unspoken.response` message is then sent with the result.

I had to add the `query_to_string` method because the `query` method kept failing: `wolfram.Result()` did not like the output of `BytesIO(data.content)`.

This is my first time contributing, and I was a little unsure of how the `Api` inheritance was working, so I would love some feedback on how it can be improved.
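The request/response flow described above can be sketched without the real Mycroft messagebus. This is a minimal illustration, not the PR's actual code: `FakeBus` stands in for Mycroft's bus, and the handler signature, payload keys (`query`, `answer`), and the `query_to_string` body are all assumptions made for the sketch.

```python
from io import BytesIO


class FakeBus:
    """Stand-in for Mycroft's messagebus: maps message types to handlers."""

    def __init__(self):
        self.handlers = {}

    def on(self, msg_type, handler):
        self.handlers[msg_type] = handler

    def emit(self, msg_type, data):
        handler = self.handlers.get(msg_type)
        if handler:
            handler(data)


def query_to_string(data):
    """Decode raw response bytes (e.g. BytesIO(data.content)) to text,
    sidestepping the wolfram.Result() parsing that failed in the PR.
    (Hypothetical helper body; the real method lives in the skill.)"""
    if isinstance(data, BytesIO):
        data = data.read()
    return data.decode("utf-8")


class WolframSkillSketch:
    def __init__(self, bus):
        self.bus = bus
        # Listen for unspoken requests, mirroring the PR's new listener.
        bus.on("wolfram.unspoken.request", self.emit_text_result)

    def emit_text_result(self, data):
        # The real skill would query WolframAlpha here; this sketch
        # just decodes a canned byte stream instead.
        answer = query_to_string(BytesIO(b"about 384,400 km"))
        self.bus.emit("wolfram.unspoken.response",
                      {"query": data["query"], "answer": answer})


bus = FakeBus()
skill = WolframSkillSketch(bus)

results = []
bus.on("wolfram.unspoken.response", results.append)
bus.emit("wolfram.unspoken.request", {"query": "distance to the moon"})
print(results[0]["answer"])  # prints "about 384,400 km"
```

The point of the pattern is that a second skill can get Wolfram's answer over the bus as plain text, with nothing ever reaching the TTS layer.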