Add keyboard shortcuts to your MVVM UWP App

by Ishai Hachlili 28. January 2016 07:22

 

Adding shortcuts to your app will make life easier for your users.

On a PC, users expect keyboard shortcuts, and with Windows 10/UWP these shortcuts work in mobile apps as well when a keyboard is connected.

When used on a phone with Continuum, an app that supports full screen and keyboard shortcuts gives the user a great experience.

I use MVVM for all my apps and I try to avoid code-behind in pages (I aim for one line of code in my *.xaml.cs files: InitializeComponent();), so I wanted a nice way to define keyboard interactions in XAML.

 

Enter Template10

Template10 is an open source project from Jerry Nixon and other Microsoft Developer Evangelists.

There's a lot of good stuff in Template10, but one of the things that caught my eye was the TextBoxEnterKeyBehavior. I quickly used it to add Enter key support to login and search forms.

It's very simple to use:

 
<TextBox Header="Search:" Text="{Binding SearchText, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}">
    <interactivity:Interaction.Behaviors>
        <behaviors:TextBoxEnterKeyBehavior>
            <core:InvokeCommandAction Command="{Binding SearchCommand}"/>
        </behaviors:TextBoxEnterKeyBehavior>
    </interactivity:Interaction.Behaviors>
</TextBox>


I'm using InvokeCommandAction to execute a command in my ViewModel, but you could use other actions as well (to change focus, for example).

Notice the added UpdateSourceTrigger=PropertyChanged in the TextBox binding; without it, the typed value won't be updated before the command executes (thanks janabimustafa).
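For completeness, the ViewModel side is just a regular ICommand. Here's a minimal sketch (the names and the DelegateCommand wrapper are illustrative; Template10 ships its own DelegateCommand you can use instead):

using System;
using System.Windows.Input;

//a bare-bones ICommand wrapper, standing in for Template10's DelegateCommand
public class DelegateCommand : ICommand
{
    private readonly Action _execute;
    public DelegateCommand(Action execute) { _execute = execute; }
    public event EventHandler CanExecuteChanged;
    public bool CanExecute(object parameter) => true;
    public void Execute(object parameter) => _execute();
}

public class SearchViewModel
{
    //bound from the TextBox via {Binding SearchText, Mode=TwoWay, ...}
    public string SearchText { get; set; }

    //invoked by the InvokeCommandAction when Enter is pressed
    public ICommand SearchCommand { get; }

    public SearchViewModel()
    {
        SearchCommand = new DelegateCommand(() => Search(SearchText));
    }

    private void Search(string text)
    {
        //run the actual search here
    }
}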

 

Keyboard Shortcuts

Once I saw how simple it is to add keyboard support with a behavior, I decided to add a bunch of shortcuts. Since I now wanted to use other keys, not just Enter, I created a new behavior and added a Key property to it. I also changed the behavior target to Control so you can set it up on a whole page or grid.

Here's a gist of the new behavior:
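In case the gist link doesn't survive, here's a minimal sketch of what such a behavior can look like. This is my reconstruction, not the exact gist: it assumes the Behaviors SDK's IBehavior, and it attaches at the UIElement level (where the KeyUp event is defined), which covers both a page and a grid.

//a minimal reconstruction, not the exact gist
//assumes the Behaviors SDK (Microsoft.Xaml.Interactivity)
using Windows.System;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Markup;
using Microsoft.Xaml.Interactivity;

[ContentProperty(Name = nameof(Actions))]
public class KeyUpBehavior : DependencyObject, IBehavior
{
    public DependencyObject AssociatedObject { get; private set; }

    //the key to listen for, set from XAML (e.g. Key="Left")
    public VirtualKey Key { get; set; }

    public static readonly DependencyProperty ActionsProperty =
        DependencyProperty.Register(nameof(Actions), typeof(ActionCollection),
            typeof(KeyUpBehavior), new PropertyMetadata(null));

    //the actions (e.g. InvokeCommandAction) to run when the key is released
    public ActionCollection Actions
    {
        get
        {
            if (GetValue(ActionsProperty) == null)
                SetValue(ActionsProperty, new ActionCollection());
            return (ActionCollection)GetValue(ActionsProperty);
        }
    }

    public void Attach(DependencyObject associatedObject)
    {
        AssociatedObject = associatedObject;
        var element = associatedObject as UIElement;
        if (element != null)
            element.KeyUp += OnKeyUp;
    }

    public void Detach()
    {
        var element = AssociatedObject as UIElement;
        if (element != null)
            element.KeyUp -= OnKeyUp;
        AssociatedObject = null;
    }

    private void OnKeyUp(object sender, KeyRoutedEventArgs e)
    {
        //only execute the attached actions for the configured key
        if (e.Key == Key)
            Interaction.ExecuteActions(AssociatedObject, Actions, e);
    }
}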

To use it, add the behavior and define the Key you want to listen to:

<interactivity:Interaction.Behaviors>
    <behaviors:KeyUpBehavior Key="Left">
        <core:InvokeCommandAction Command="{Binding ShowPreviousItemCommand}"/>
    </behaviors:KeyUpBehavior>
    <behaviors:KeyUpBehavior Key="Right">
        <core:InvokeCommandAction Command="{Binding ShowNextItemCommand}"/>
    </behaviors:KeyUpBehavior>
</interactivity:Interaction.Behaviors>


Key Modifiers (Control, Shift)

I was using that last sample to control paging; it's much nicer to be able to use the keyboard to move between items. But what about saving an item or adding a new one?

I needed to add support for Control and Shift modifiers, so I added another property for the modifier and updated the code to check whether modifiers were set.

Here's the diff:
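The diff itself isn't reproduced here, but the change amounts to roughly the following sketch. KeyModifier is assumed to be a VirtualKeyModifiers flags property added to the behavior, with the live key state read from CoreWindow:

//a sketch of the change, not the actual diff (needs using Windows.UI.Core)
//VirtualKeyModifiers is a flags enum, so KeyModifier="Control,Shift" parses naturally in XAML
public VirtualKeyModifiers KeyModifier { get; set; } = VirtualKeyModifiers.None;

private static bool IsDown(VirtualKey key)
{
    //CoreWindow reports the current state of the modifier keys
    return CoreWindow.GetForCurrentThread().GetKeyState(key)
        .HasFlag(CoreVirtualKeyStates.Down);
}

private void OnKeyUp(object sender, KeyRoutedEventArgs e)
{
    if (e.Key != Key) return;

    //every modifier requested in XAML must be held down when the key is released
    if (KeyModifier.HasFlag(VirtualKeyModifiers.Control) && !IsDown(VirtualKey.Control)) return;
    if (KeyModifier.HasFlag(VirtualKeyModifiers.Shift) && !IsDown(VirtualKey.Shift)) return;

    Interaction.ExecuteActions(AssociatedObject, Actions, e);
}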

To use it in your XAML:

<interactivity:Interaction.Behaviors>
    <behaviors:KeyUpBehavior Key="S" KeyModifier="Control,Shift">
        <core:InvokeCommandAction Command="{Binding SaveAndPublishCommand}"/>
    </behaviors:KeyUpBehavior>
    <behaviors:KeyUpBehavior Key="S" KeyModifier="Control">
        <core:InvokeCommandAction Command="{Binding SaveDraftCommand}"/>
    </behaviors:KeyUpBehavior>
</interactivity:Interaction.Behaviors>

As you can see, you can use one or more modifiers. In this sample, Ctrl+S saves a draft and Ctrl+Shift+S saves and publishes.

Now just add tooltips so users can discover that your app supports shortcuts (add ToolTipService.ToolTip="Ctrl+S" to the save button, for example) and you're all set.

You can actually use this KeyUpBehavior for the Enter key functionality as well.

 

I hope you find it useful in your apps. 


A Quick Tip for UI design in Windows Phone 8

by Ishai Hachlili 30. October 2012 07:45

If you’ve worked on WP7 apps, you’ve probably come across the MetroGridHelper from Jeff Wilcox. If not, this little helper adds a grid of squares that helps you align the controls on your app’s pages.

If you’re one of those who ignore the commented sections in newly created pages, you might have missed that this feature is now included in the default WP8 project. To use it, all you have to do is uncomment the Image tag at the end of the page:

<!--Uncomment to see an alignment grid to help ensure your controls are
aligned on common boundaries. The image has a top margin of -32px to
account for the System Tray. Set this to 0 (or remove the margin altogether)
if the System Tray is hidden.

Before shipping remove this XAML and the image itself.-->
<!--<Image Source="/Assets/AlignmentGrid.png" VerticalAlignment="Top" Height="800" Width="480" Margin="0,-32,0,0" Grid.Row="0" Grid.RowSpan="2" IsHitTestVisible="False" />-->


Using Text To Speech and Speech Recognition in Windows Phone 8

by Ishai Hachlili 29. October 2012 12:55

With Windows Phone 8, Microsoft added APIs for speech recognition and synthesis (TTS).
The combination of these APIs lets you create conversations with the user, asking them for input with TTS and listening for their replies.

To use these features you need to add the following capabilities to your app (in the WMAppManifest.xml file):

ID_CAP_SPEECH_RECOGNITION
ID_CAP_MICROPHONE
ID_CAP_NETWORKING

(As with most speech recognition solutions, the actual processing is done on the server side; the phone just streams the audio to the server and gets the text result back. That’s why networking must be enabled for the app.)

Recognizing Speech

Here’s a very simple piece of code that will show the speech recognition UI to the user:

private SpeechRecognizerUI _recoWithUI;

private async void SimpleRecognition()
{
    //initialize the recognizer
    _recoWithUI = new SpeechRecognizerUI();

    //show the recognizer UI (and prompt the user for speech input)
    var recoResult = await _recoWithUI.RecognizeWithUIAsync();
}

And here’s what a result looks like when using the default grammar:

ResultStatus: Succeeded
RecognitionResult: {
    RuleName: ""
    Semantics: null
    Text: "Recognize this text."
    TextConfidence: High
    Details: {
        ConfidenceScore: 0.8237646
        RuleStack: COM Object
    }
}

(This is the object hierarchy; I’m showing it as a JSON-like object for clarity.)

TextConfidence can have the following values: Rejected, Low, Medium, and High. You use this value to figure out how close the returned text is to what the user actually said. If you want the actual score, you can use ConfidenceScore.

Text is the recognized text. It will be empty if TextConfidence is Rejected.

RuleName is the name of the custom grammar used for this recognition (since I didn’t use one, the value is empty here).

Semantics relates to SRGS grammars (an XML format that defines a more complex grammar you might want to use). I won’t get into this more advanced option in this post.
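Putting those properties together, a minimal way to consume the result could look like this (a sketch using the API names above):

//only act on the text when recognition succeeded and wasn't rejected
var recoResult = await _recoWithUI.RecognizeWithUIAsync();
if (recoResult.ResultStatus == SpeechRecognitionUIStatus.Succeeded &&
    recoResult.RecognitionResult.TextConfidence != SpeechRecognitionConfidence.Rejected)
{
    var recognizedText = recoResult.RecognitionResult.Text;
    //do something with recognizedText
}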

Prompting the user and adding a custom grammar

The longer the user’s input, the harder it is to get accurate recognition, so it makes sense to design your application so that the user only needs to give short answers.

In the following code snippet, I show the user a text prompt asking a specific question, along with the possible answers.
I also add the possible answers as a programmatic list grammar. This is the simplest way to add a grammar, and it increases accuracy by forcing the API to match only the words I added to the grammar.
This means that if the user says "Lace" or "Ace", it will still be recognized as "Race".

I’m also disabling the readout. If you’re creating a speech conversation with your user, the readout can get tedious and slow the user down; repeating every recognition with "Heard you say:" and the recognized text takes too long and gets annoying. You might want to enable it in certain situations, but I would leave it off by default.
You can also hide the confirmation prompt, which displays the same text ("Heard you say..."), by setting Settings.ShowConfirmation to false.

private SpeechRecognizerUI _recoWithUI;

private async void SpeechRecognition()
{
    //initialize the recognizer
    _recoWithUI = new SpeechRecognizerUI();

    var huntTypes = new[] { "Race", "Explore" };
    _recoWithUI.Recognizer.Grammars.AddGrammarFromList("huntTypes", huntTypes);

    //prompt the user
    _recoWithUI.Settings.ListenText = "What type of hunt do you want to play?";

    //show the possible answers in the example text
    _recoWithUI.Settings.ExampleText = @"Say 'Race' or 'Explore'";

    //disable the readout of recognized text
    _recoWithUI.Settings.ReadoutEnabled = false;

    //show the recognizer UI (and prompt the user for speech input)
    var recoResult = await _recoWithUI.RecognizeWithUIAsync();
}

 

Adding a spoken prompt

To create a conversation, you might want to play some spoken text before prompting the user for their response.
You can do that by simply adding the following two lines before the call to RecognizeWithUIAsync:

var synth = new SpeechSynthesizer();
await synth.SpeakTextAsync("What type of hunt do you want to play?");

Notice that I’m awaiting the text to be spoken before showing the recognizer UI. This means the current screen is still visible while the text is spoken; as soon as it ends, the recognizer UI shows up and the prompt sound plays. If you don’t await that line, the text will still be playing while the recognizer is already listening.

A better solution would have been a TTS option for the prompt built into the recognizer, but I couldn’t find such an option.
Another way to solve this is to create your own UI and use SpeechRecognizer and RecognizeAsync.

Here’s a quick code sample:

private async void RecognizeWithNoUI()
{
    var recognizer = new SpeechRecognizer();
    var huntTypes = new[] { "Race", "Explore" };
    recognizer.Grammars.AddGrammarFromList("huntTypes", huntTypes);

    var synth = new SpeechSynthesizer();
    await synth.SpeakTextAsync("Do you want to play a Race, or, Explore, hunt?");

    var result = await recognizer.RecognizeAsync();
}

The main difference is that there’s no automatic handling of failed recognitions. RecognizeWithUIAsync tells the user "Sorry, didn’t get that" and asks them to speak again; with the no-UI option, you need to handle that yourself using the TextConfidence value.
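For example, a simple retry loop based on TextConfidence might look like this (a sketch; a real app should also cap the number of retries):

SpeechRecognitionResult result;
do
{
    //re-prompt and listen again until the recognizer returns a usable result
    await synth.SpeakTextAsync("Do you want to play a Race, or, Explore, hunt?");
    result = await recognizer.RecognizeAsync();
} while (result.TextConfidence == SpeechRecognitionConfidence.Rejected);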

 

As you can see, it’s very easy to add speech recognition and synthesis to your app. Combined with Voice Commands, you can create an experience that lets the user launch and control your app without touching their phone. If you’re using voice commands, you can start this experience on the voice-command target page only: when the user launches the app with a voice command, you prompt them with TTS and get their replies with speech; when they launch the app from the apps list or a tile, it shows a normal user interface.


About Me

Ishai Hachlili is a web and mobile application developer.

Currently working on Play The Hunt and The Next Line

