Alexa Skills - Developer Voice And Vote

Welcome to the Alexa Skills Feature Request site! This site enables Alexa Skills Developers to request and vote on features you’d like to see in the developer toolset and services for Alexa.

To keep this site purpose-driven and actionable, we moderate requests. Here’s some guidance on how to create great feature requests that can be evaluated by our development teams. For conversation, dialogue or help, you should visit our Alexa forums. We appreciate your input.

-Alexa Skills team

  1. I believe a great addition to the ASK Toolkit for Visual Studio Code would be to bring more of the visual tools and editors into the IDE. It would be great to be able to work through the interaction model and add intents, utterances, slots, and so on, as well as the general skill properties, and to validate them before deployment, or to build in the developer console and have it update my local files. Then, when I deploy my skill using ask-cli, everything is in sync with my changes.

    1 vote

    Received  ·  0 comments  ·  Interaction Model

  2. Allow an intent to receive information on whether the user invoked it as a polite request or as a command.

    Especially for kid skills, the ability to tailor the response based on the manners used to invoke the intent could make a big difference and, if surfaced as part of the skill information in the marketplace, could influence parents' decisions to enable it for their children.

    1 vote

    Received  ·  0 comments  ·  Interaction Model

  3. I'm not a native English speaker, and it takes me longer to think about the command I'm going to say. As such, I often find Follow-Up Mode unusable because Alexa stops listening before I manage to formulate the next command in my head. It's also often a bit unclear when Alexa starts listening (in Follow-Up Mode) and when it stops, so I easily end up in ridiculous command-repeating situations.

    Making this listening time configurable would allow me to extend this duration while keeping it the same for quicker speakers.

    1 vote

    Merged  ·  0 comments  ·  Interaction Model

  4. As the title states. I am not sure whether this is a technical limitation, or why it only applies to public skills. CanFulfillIntentRequest is completely ignored for development/private skills, and I think that limits its use both in practice and during testing.
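
    For context, here is a minimal sketch (not the author's code) of how such a request is answered, assuming the ask-sdk-core Node.js SDK and a hypothetical OrderStatusIntent with an orderNumber slot; the point of this request is that the handler below is only ever exercised for live, public skills:

    import * as Alexa from 'ask-sdk-core';

    const CanFulfillIntentRequestHandler: Alexa.RequestHandler = {
      canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'CanFulfillIntentRequest';
      },
      handle(handlerInput) {
        // Declare that the skill can handle the hypothetical OrderStatusIntent
        // and that it understands the orderNumber slot it was probed with.
        return handlerInput.responseBuilder
          .withCanFulfillIntent({
            canFulfill: 'YES',
            slots: {
              orderNumber: { canUnderstand: 'YES', canFulfill: 'YES' },
            },
          })
          .getResponse();
      },
    };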

    2 votes

    Received  ·  0 comments  ·  Interaction Model

  5. Session attributes should be included in all requests and responses between Alexa and a skill. Why would SessionEndedRequest be missing the session attributes? It is like talking to a person with no short-term memory.
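
    A minimal sketch of where the missing data would be used, assuming ask-sdk-core for Node.js: a SessionEndedRequest handler that wants to do final bookkeeping with the attributes accumulated during the session (per this request, they are not reliably available at that point):

    import * as Alexa from 'ask-sdk-core';

    const SessionEndedRequestHandler: Alexa.RequestHandler = {
      canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'SessionEndedRequest';
      },
      handle(handlerInput) {
        // Per this request, the attributes set earlier in the session may be
        // missing here, which prevents last-moment logging or persistence.
        const attributes = handlerInput.attributesManager.getSessionAttributes();
        console.log('Attributes at session end:', JSON.stringify(attributes));
        // A SessionEndedRequest cannot return speech; an empty response suffices.
        return handlerInput.responseBuilder.getResponse();
      },
    };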

    1 vote

    Received  ·  0 comments  ·  Interaction Model

  6. Allow creating dynamic intents at run time, similar to the way dynamic entities work. This would create a lot of flexibility to adapt skills without re-deploying. Thanks for your consideration.
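
    For comparison, a minimal sketch of the existing dynamic-entities mechanism this request takes as its model (the slot type and values are hypothetical); the ask is for an equivalent run-time directive that adds or replaces intents:

    // Built in a handler and attached to a response at run time, with no
    // interaction-model redeployment, e.g.
    // handlerInput.responseBuilder.addDirective(replaceEntitiesDirective).getResponse();
    const replaceEntitiesDirective = {
      type: 'Dialog.UpdateDynamicEntities' as const,
      updateBehavior: 'REPLACE' as const,
      types: [
        {
          name: 'DrinkSlotType', // hypothetical custom slot type
          values: [
            { id: 'flat_white', name: { value: 'flat white', synonyms: ['flatty'] } },
          ],
        },
      ],
    };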

    1 vote

    Received  ·  0 comments  ·  Interaction Model

  7. Currently, in Smart Home skills it is not possible to retrieve the PowerState info by voice; the only state I have noticed that can be retrieved is the status of smart locks and the temperature/setpoint of thermostats...
    The power state is probably one of the most used interfaces for smart devices, and retrieving its status would be very useful for the user: in my opinion it's very limiting to interact with a device by voice without knowing whether it is powered on or off.
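
    A minimal sketch of the value in question as a Smart Home skill already reports it in the context of a StateReport (timestamps and values here are illustrative); the request is for Alexa to answer "is the lamp on?" from this property, as it already does for lock status and thermostat setpoints:

    // Excerpt of StateReport context.properties reporting Alexa.PowerController state.
    const reportedProperties = [
      {
        namespace: 'Alexa.PowerController',
        name: 'powerState',
        value: 'ON', // or 'OFF'
        timeOfSample: '2019-06-01T12:00:00Z',
        uncertaintyInMilliseconds: 500,
      },
    ];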

    17 votes

    Received  ·  2 comments  ·  Interaction Model

  8. When playing a game like backgammon, I would like to have Alexa store the game score. The score would have to be given to Alexa, for example "Alexa, update score: Steve 3, John 5." On "Alexa, score," Alexa would then say "Steve 3, John 5." The user can update the score at any time using the command "Alexa, update score," or clear it with "Alexa, clear score"; after a clear-score command the Alexa response would be "No score stored." There should be the ability to give as many names, each followed by a score, as needed. Example: "Alexa, update score: John…
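
    Within a custom skill, something close to this can be built today with persistent attributes; a rough sketch, assuming ask-sdk-core with a persistence adapter (for example ask-sdk-dynamodb-persistence-adapter) already configured, and a hypothetical UpdateScoreIntent with player and points slots:

    import * as Alexa from 'ask-sdk-core';

    const UpdateScoreIntentHandler: Alexa.RequestHandler = {
      canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
          && Alexa.getIntentName(handlerInput.requestEnvelope) === 'UpdateScoreIntent';
      },
      async handle(handlerInput) {
        const attributesManager = handlerInput.attributesManager;
        const saved = await attributesManager.getPersistentAttributes();
        // Hypothetical slots filled from an utterance like "update score: Steve 3".
        const player = Alexa.getSlotValue(handlerInput.requestEnvelope, 'player');
        const points = Alexa.getSlotValue(handlerInput.requestEnvelope, 'points');
        saved.scores = { ...(saved.scores || {}), [player]: points };
        attributesManager.setPersistentAttributes(saved);
        await attributesManager.savePersistentAttributes();
        return handlerInput.responseBuilder
          .speak(`${player} now has ${points}.`)
          .getResponse();
      },
    };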

    1 vote

    0 comments  ·  Interaction Model

  9. I'm pretty sure this has been requested a lot, but time slot filling needs a lot of work. Slot confirmation should automatically be able to derive a.m./p.m. and know when to ask. For example, if you ask Alexa (outside a skill) to set an alarm for 12, she says "midnight or midday?"; ask for 6 and she says "morning or afternoon?"; ask for 6 p.m. and she accepts it as 18:00. Apparently this is handled in Amazon Lex, but not here. Having to code for this manually in the skill is cumbersome. Thanks.
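
    A minimal sketch of the manual disambiguation this request wants to avoid, assuming ask-sdk-core and a hypothetical AlarmIntent with an AMAZON.TIME slot named "time" (the slot can arrive as a bare hour such as "06:00" with no indication of whether the user meant morning or evening):

    import * as Alexa from 'ask-sdk-core';

    const AlarmIntentHandler: Alexa.RequestHandler = {
      canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
          && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AlarmIntent';
      },
      handle(handlerInput) {
        const time = Alexa.getSlotValue(handlerInput.requestEnvelope, 'time'); // e.g. "06:00"
        const hour = time ? parseInt(time.split(':')[0], 10) : NaN;
        if (!Number.isNaN(hour) && hour >= 1 && hour <= 11) {
          // Ambiguous hour: hand-rolled re-prompt instead of a built-in confirmation.
          return handlerInput.responseBuilder
            .speak(`Did you mean ${hour} in the morning or in the evening?`)
            .reprompt('Morning or evening?')
            .addElicitSlotDirective('time')
            .getResponse();
        }
        return handlerInput.responseBuilder.speak(`Alarm set for ${time}.`).getResponse();
      },
    };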

    1 vote

    Received  ·  0 comments  ·  Interaction Model

  10. Let's say my user is ordering a burger:
    user: "I want a cheeseburger"
    alexa: "Alright. What toppings would you like?"
    user: "mayo, pickles and no onions"

    In this scenario I've got to create some number of custom topping slots ({topping1}, {topping2}, etc.) and then every permutation of sample utterances for however many toppings might possibly exist on a burger (a sketch of this workaround appears below).

    There has to be a better way. I don't want to ask the user for toppings one at a time until they say "no more, please." Collecting multiple values into the same slot would fix this.

    Next, in the same…
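
    A minimal sketch of the slot-collection side of that workaround, assuming ask-sdk-core and hypothetical slots topping1 through topping5 on an OrderBurgerIntent; every slot has to be declared in the model and then merged back into one list in code:

    import { HandlerInput } from 'ask-sdk-core';
    import { IntentRequest } from 'ask-sdk-model';

    // Gathers whatever subset of the numbered topping slots the user happened to fill.
    function collectToppings(handlerInput: HandlerInput): string[] {
      const request = handlerInput.requestEnvelope.request as IntentRequest;
      const slots = request.intent.slots || {};
      const toppings: string[] = [];
      for (let i = 1; i <= 5; i += 1) {
        const slot = slots[`topping${i}`];
        if (slot && slot.value) {
          toppings.push(slot.value);
        }
      }
      return toppings;
    }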

    8 votes

    Received  ·  0 comments  ·  Interaction Model

  11. Since you can use IPA and SSML in Alexa responses, allow IPA in requests for intent utterances and slot values/synonyms.

    SSML that uses the International Phonetic Alphabet (IPA) in a response:

    <speak>
    I am from <phoneme alphabet="ipa" ph="ʃɨˈkɑːɡoʊ">Chicago</phoneme>.
    </speak>
    Proposed: allow IPA in requests, in either intent utterances or slot values:

    {
      "name": "SAMPLE_City",
      "values": [
        {
          "name": {
            "value": "/ˈpɪtsbɜːrɡ/"
          }
        },
        {
          "name": {
            "value": "Chicago",
            "synonyms": [
              "/ʃɪˈkɑːɡoʊ/"
            ]
          }
        },
        {
          "name": {
            "value": "Phoenix",
            "synonyms": [
              "/ˈfiːnɪks/"
            ]
          }
        }
      ]
    }

    7 votes

    Received  ·  0 comments  ·  Interaction Model

  12. This feature request comes off the back of https://forums.developer.amazon.com/questions/200304/phonic-recognition.html

    The most recognised and most widely adopted method of teaching kids and grown-ups to read English is phonics.

    Could Alexa also recognise phonics as well as words? This would allow a whole new type of interactive skill that could teach children to read by getting them to practice phonics.

    Reference from the UK government for teaching kids to read with sounds:

    https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/190599/Letters_and_Sounds_-_DFES-00281-2007.pdf

    34 votes

    Received  ·  1 comment  ·  Interaction Model

  13. The WakeUp event is a great addition, but it could be ten times better. Right now, you can only send it asynchronously to the Event Gateway, which requires authentication with the user's Amazon account. This means more work to get it functional and also more cost for developers.

    If we could just send the WakeUp event in response to the TurnOn/TurnOff directive, directly back to the device that initiated the directive, it would make things much more straightforward.

    2 votes

    Received  ·  0 comments  ·  Interaction Model

  14. Allow a skill, when activated, to be interactive with Yes or No commands.

    Example:

    Routine is activated by spoken command

    User: "Alexa, I'm home."
    Alexa: "Hello, would you like to review your schedule and reminders for today?"
    User: "Yes"
    Alexa: "You have three events remaining... Would you like to hear your reminders?"
    User: "No"

    8 votes

    Received  ·  0 comments  ·  Interaction Model

  15. Similar to Lex, it would be helpful to know whether a slot should be elicited before the request enters the fulfillment service. I've seen that even when a slot is required, the presence of a slot value must be checked in the Lambda function and the ElicitSlot directive returned, rather than the interaction model determining that ahead of time.
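
    A minimal sketch of that manual check, assuming ask-sdk-core and a hypothetical BookFlightIntent with a required "destination" slot:

    import * as Alexa from 'ask-sdk-core';

    const BookFlightIntentHandler: Alexa.RequestHandler = {
      canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
          && Alexa.getIntentName(handlerInput.requestEnvelope) === 'BookFlightIntent';
      },
      handle(handlerInput) {
        const destination = Alexa.getSlotValue(handlerInput.requestEnvelope, 'destination');
        if (!destination) {
          // The fulfillment code, not the interaction model, has to notice the
          // missing required slot and ask for it.
          return handlerInput.responseBuilder
            .speak('Where would you like to fly to?')
            .reprompt('Which city?')
            .addElicitSlotDirective('destination')
            .getResponse();
        }
        return handlerInput.responseBuilder
          .speak(`Booking a flight to ${destination}.`)
          .getResponse();
      },
    };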

    2 votes

    Received  ·  0 comments  ·  Interaction Model

  16. When attempting to build the skill, the operation fails because the skill exceeds 2 MB in size. Vote to increase the hard limit on the size of the interaction model!

    15 votes

    Received  ·  5 comments  ·  Interaction Model

  17. I'm getting intent failures with no corresponding error messages in the logs (using alexa-skills-kit-sdk-for-nodejs and Lambda).

    In troubleshooting, it looks like I may be exceeding the maximum allowed size of the JSON response object. I see from the docs that the total size of your response cannot exceed 24 kilobytes.

    I'm using session attributes to store data I need, and in some circumstances that data is larger than 24 KB. The same documentation referenced above says: When returning your response, you can include data you need to persist during the session in the sessionAttributes property. The attributes you provide are then…
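
    A minimal diagnostic sketch, assuming ask-sdk-core: a hypothetical response interceptor that logs the serialized size of the session attributes, so an oversized payload shows up in the logs instead of failing silently against the documented 24 KB response limit:

    import * as Alexa from 'ask-sdk-core';

    const AttributeSizeLoggingInterceptor: Alexa.ResponseInterceptor = {
      process(handlerInput) {
        const attributes = handlerInput.attributesManager.getSessionAttributes();
        const sizeInBytes = Buffer.byteLength(JSON.stringify(attributes), 'utf8');
        if (sizeInBytes > 20 * 1024) {
          console.warn(`Session attributes are ${sizeInBytes} bytes; the whole response must stay under 24 KB.`);
        }
      },
    };

    // Registered alongside the request handlers, e.g.
    // Alexa.SkillBuilders.custom().addResponseInterceptors(AttributeSizeLoggingInterceptor)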

    23 votes

    Received  ·  4 comments  ·  Interaction Model

  18. Right now, it's completely random how AMAZON.DATE decides whether a day of the week is in the past or the future. ("What's the weather Sunday?")

    Half the time AMAZON.DATE returns last Sunday, the other half the next Sunday.

    This should be consistent. Past days should only be returned when the user says "last Sunday" or "this past Sunday," etc.

    I really shouldn't need to argue for this. It's completely crazy that this hasn't been dealt with by now!
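
    A minimal sketch of the normalization many skills do today as a workaround, no SDK required: if AMAZON.DATE resolves to a date in the past and the user did not explicitly say "last ...", push the value forward by a week:

    // Both arguments are YYYY-MM-DD strings; arithmetic stays in UTC.
    function normalizeToUpcoming(resolvedIsoDate: string, todayIsoDate: string): string {
      const resolved = new Date(`${resolvedIsoDate}T00:00:00Z`);
      const today = new Date(`${todayIsoDate}T00:00:00Z`);
      if (resolved < today) {
        resolved.setUTCDate(resolved.getUTCDate() + 7); // assume the upcoming occurrence was meant
      }
      return resolved.toISOString().slice(0, 10);
    }

    // normalizeToUpcoming('2019-03-03', '2019-03-06') === '2019-03-10'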

    1 vote

    Received  ·  0 comments  ·  Interaction Model

  19. Imagine a situation: I have a simple game that uses states. One state is the initial state, another is the game state, and the last is the end-game state.

    I have a number of intents: launch, answer, pass a question, player count, stop, yes, and no. I know these are not all of the required intents, but I want to keep this example short.

    Let's say the player launches the game. In this state I want the user to say how many players they want to play with. But because the game's answer intent has a lot of answers…
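
    A minimal sketch of the usual workaround, assuming ask-sdk-core: the skill tracks its own state in session attributes and each handler's canHandle gates on it, since the interaction model cannot switch intents on and off per state (intent, slot, and state names here are hypothetical):

    import * as Alexa from 'ask-sdk-core';

    const PlayerCountIntentHandler: Alexa.RequestHandler = {
      canHandle(handlerInput) {
        const { state } = handlerInput.attributesManager.getSessionAttributes();
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
          && Alexa.getIntentName(handlerInput.requestEnvelope) === 'PlayerCountIntent'
          && state === 'WAITING_FOR_PLAYER_COUNT';
      },
      handle(handlerInput) {
        const count = Alexa.getSlotValue(handlerInput.requestEnvelope, 'count');
        const attributes = handlerInput.attributesManager.getSessionAttributes();
        attributes.state = 'IN_GAME';
        handlerInput.attributesManager.setSessionAttributes(attributes);
        return handlerInput.responseBuilder
          .speak(`Starting a game for ${count} players. First question coming up.`)
          .reprompt('Ready?')
          .getResponse();
      },
    };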

    12 votes

    Received  ·  1 comment  ·  Interaction Model

  20. I would like Alexa to accommodate non-English words used in English invocation names. An example would be "Déjà vu." While Alexa is able to interpret this exact pronunciation and spelling when transcribing (speech to text), it cannot interpret them in invocation names because these non-standard characters (é, à) are not allowed.

    1 vote

    Received  ·  0 comments  ·  Interaction Model
