I was recently fortunate enough to gain some Microsoft Surface development experience. Overall, it was a very positive experience. Diving into any new technology is always an adventure. While the adventurous path itself is often full of dead ends, false starts, and disappointing losses of time, the lessons learned are usually worth the effort in the end. In that spirit, I decided to take some time to reflect on what I learned from developing on Surface and what advice I might offer to those who come after me.
First, Surface development is currently only available to Surface Partners. However, I was working with a Surface Partner and found the roadblocks to getting the Surface SDK were not significant. If you want to work with the Surface emulator, work with your regional Microsoft office, or go through the process of becoming a Surface Partner.
You don’t need a Microsoft Surface computer to build Surface applications. The price tag for a developer version of a Microsoft Surface exceeds that of the economy commuter car I purchased new last year. Yes, it costs more than a new car. In addition, if you do buy this really cool computer, you will need a friend’s help to carry the nearly 200 lb beast into your office. If you aren’t quite ready for the big cash outlay, don’t despair: the emulator is quite good. But check the emulator requirements. You will need a lot of RAM; I finally had to break down and get 4 GB, because 2 GB just wasn’t cutting it. A high-end video card is a big plus. The biggest roadblock to working on a laptop is going to be ensuring you have the necessary resolution. You need a minimum of 1280 x 960. If you are unfortunate enough to have a widescreen laptop that maxes out at 1280 x 800, I’m sorry, but you will need to find an external monitor and be tethered to your desk, my unfortunate circumstance for this project.
Here are some interesting things about the emulator. The emulator runs at 1024 x 768 resolution, because this is the resolution all Surface machines run at. Surprising, I know. This may explain the need for the resolution mentioned above. Here is the biggest question that should be on everyone’s mind. “How do I simulate two fingers touching my Surface application in the emulator when running on a non-multi-touch laptop?” The answer, two mice. That’s right. You can hook up several mice to your laptop, and when running the Surface emulator, they each show up as separate fingers. The emulator also allows you to select a virtual Tag to place on the surface, but more about tags later.
Probably the most shocking discovery I made very early on is the near-complete lack of a developer community on the internet. I suppose we as .NET developers have become spoiled in our comfortable position of almost always being able to find some sample code on the internet if we are proficient in our search criteria. There is almost always a blog entry, technical article, or MSDN reference available for just about any .NET technology. I experienced a bit of a struggle a year ago when I started Windows Mobile development and needed to search for sample code. However, that was more an issue of separating the desktop framework from the compact framework when searching on objects that existed in both places but had different feature sets.
While that was a taste of what it took to find samples in a limited developer community, it didn’t prepare me for what I found when searching for Surface samples. There is an event in Surface development that fires when the user “taps” certain controls. It is called “ContactTapGesture”. This is the name of an event, part of the Surface API. Now enter this keyword into Bing and Google and be prepared to be amazed at the skimpy results. I have never seen this keyword return more than three results.
I hope this conveys the information environment that exists for Surface. It reminds me of the days of VB4 development, before the internet had really become the information super highway we take for granted today.
So how do you find resources on Microsoft Surface development? The same way I did for VB4 (sort of): look for other people who have done it. My best contact was a “cold call” to someone I had once met for five minutes in a parking lot, while we both walked to our cars after a technical seminar: Richard Monson-Haefel. Richard was more than accommodating in offering suggestions for work-arounds to the problem I was having. While he didn’t have the exact solution to my issue, his shared thoughts put me on the path to a solution. The point here is to use your network, search the internet and Twitter for generic “Microsoft Surface” terms, and don’t hesitate to reach out for help. Someone knows your pain and will probably be willing to lend you a hand.
It isn’t really that Surface is painful, it is just new. Microsoft offers some sample programs with the installation of the SDK, and there are some “getting started” documents and videos that are often very high level. But all in all, the resources are very limited. MSDN does not list the API documentation, so much needs to be figured out via trial and error. For example, the tap gesture event I mentioned before can be captured by any Surface control. However, my experience showed that not every Surface control can generate the event. A ScatterViewItem and a SurfaceButton can both generate a Tap event and subscribe to it with an event handler. I also believe all the Surface container controls, such as the SurfaceScrollViewer, can subscribe to the event, but their event handlers will only ever fire if you put a control capable of firing the event inside them. If you put a basic WPF control (which is incapable of firing the tap event) inside a SurfaceScrollViewer, you will never fire the Tap event handler. However, be prepared to have your mind blown: you will still get ContactDown events. I still haven’t found a good reference for how all the contact events work.
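As a rough sketch of what that wiring looks like in code-behind, consider the following. The `Contacts` attached-event helper names are from my recollection of the SDK 1.0 API, so treat them as assumptions to verify against the SDK documentation rather than gospel:

```csharp
using Microsoft.Surface.Presentation;          // Contacts attached events
using Microsoft.Surface.Presentation.Controls; // SurfaceButton, SurfaceScrollViewer, SurfaceWindow

public partial class MyWindow : SurfaceWindow
{
    public MyWindow()
    {
        InitializeComponent();

        // A SurfaceButton can both generate the tap gesture and handle it.
        SurfaceButton button = new SurfaceButton { Content = "Tap me" };

        // The scroll viewer's handler only fires because the tap-capable
        // button inside it raises the event; put a plain WPF control in
        // instead and OnTap will never run for the scroller.
        SurfaceScrollViewer scroller = new SurfaceScrollViewer { Content = button };

        Contacts.AddContactTapGestureHandler(button, OnTap);
        Contacts.AddContactTapGestureHandler(scroller, OnTap);
    }

    private void OnTap(object sender, ContactEventArgs e)
    {
        // Respond to the tap here.
    }
}
```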
This leads into an interesting segue: understanding gesture events is by far the greatest learning curve I encountered when learning Surface development. Richard gave me another good tip when I spoke with him, related to the “Preview” events. For every contact event, there exists a preview event, such as “PreviewContactTapGesture”. (By the way, search for that event in the API documentation. No results.) The contact gestures bubble up the element tree, as standard WPF routed events do, and can be consumed at some level along the way. The preview events, however, tunnel down from the root and traverse every level, so if you find that something is consuming your events before they reach the event handler you want, try subscribing to the “preview” event.
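A minimal sketch of that advice, again assuming the SDK 1.0 `Contacts` helper naming and a `rootPanel` element of your own:

```csharp
// If a container consumes the bubbling tap before your handler sees it,
// hook the tunneling preview event on a parent element instead.
Contacts.AddPreviewContactTapGestureHandler(rootPanel, OnPreviewTap);

private void OnPreviewTap(object sender, ContactEventArgs e)
{
    // Runs on the way down the element tree, before any child
    // gets a chance to mark the gesture as handled.
}
```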
Introducing the TagVisualizer to my application created an interesting tap gesture problem, but first, let’s describe the TagVisualizer before we get to that. If you have seen a Surface demo, you have probably seen the user place a physical object, like a business card, mobile phone, or drinking glass, on the Surface and watched the application respond to that object. Usually, this is because the object has a byte tag or identity tag on the bottom of it. A tag is a square barcode-like image, but unlike a barcode that uses lines, a tag uses dots in a specific pattern that represents a 1-byte or 4-byte number. The TagVisualizer is the application object (in code) that defines the area where the tag can be placed so your application can respond to the tag. For example, I recently saw an application where the bottom right corner of the application said, “Place your member card here”. When a customer put their membership card in the specified rectangle, the UI would change for that member. The cards had the user’s member number encoded in an identity tag on the back. The TagVisualizer object was the rectangle that was watching for Tags to be placed on the Surface.
You have the option to register a TagVisualization to go with a specific Tag number. The TagVisualization is effectively a XAML user control that will be loaded and rendered when the Tag matching the number used in the registration is placed on the Surface (on the TagVisualizer control). This was surprisingly easy to implement. Given my experience with the other contact events, I was expecting this feature to be a difficult one. Much to my surprise it was a snap. The TagVisualizer understands the orientation of the tag as well, so as you spin the business card, you will see the TagVisualization spin around the tag. There is a very clear step-by-step tutorial on how to implement TagVisualization on the Surface Partner site.
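For illustration, a registration might look something like this in XAML. The `s:` namespace and property names reflect my reading of the SDK 1.0 samples, and `HelpBubble.xaml` is a hypothetical user control (one that derives from TagVisualization), so double-check both against the tutorial on the partner site:

```xml
<!-- xmlns:s="http://schemas.microsoft.com/surface/2008" -->
<s:TagVisualizer>
  <s:TagVisualizer.Definitions>
    <!-- When byte tag value 1 touches the visualizer, load HelpBubble.xaml
         and keep it oriented to the physical tag as it rotates. -->
    <s:ByteTagVisualizationDefinition Value="1"
                                      Source="HelpBubble.xaml"
                                      MaxCount="1"
                                      LostTagTimeout="500" />
  </s:TagVisualizer.Definitions>
</s:TagVisualizer>
```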
If you want to do things based on a Tag, but not automatically display an associated XAML user control, you can catch the Tag programmatically and do whatever you want with the display.
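Sketched out, that might look like the following; `IsTagRecognized` and the `Tag.Byte.Value` path are my recollection of the SDK 1.0 `Contact` members, so verify them before relying on this:

```csharp
// Handle tags yourself instead of auto-loading a visualization.
Contacts.AddContactDownHandler(drawingCanvas, OnContactDown);

private void OnContactDown(object sender, ContactEventArgs e)
{
    if (e.Contact.IsTagRecognized)
    {
        byte value = e.Contact.Tag.Byte.Value; // the byte-tag payload
        // Change the display however you like based on the tag value.
    }
}
```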
So now that I’ve discussed the TagVisualizer, let’s go back to the issue it created for me. I wanted my application to let you place a Tag anywhere on the Surface and have a TagVisualization pop up. For example, I had a tag that represented Help. When that tag was placed anywhere on the application, I wanted to show a Help bubble with some basic tips on what the user could do. Seems reasonable, right? So I placed a TagVisualizer over all the other controls in the entire application. The TagVisualizer had a transparent background, and I expected that any contact that wasn’t a Tag contact would go right through to the controls under it. I was wrong. The TagVisualizer seemed to block all my Tap gestures, so I had to move it to the bottom/back of all controls (in the Z-order), meaning Help only popped up if you placed the Tag on the background (desktop) of the application. I also had to clear away all other controls that had been dragged onto the Surface so the TagVisualization could be seen, because it was under everything else. It was a fine workaround for the task at hand, and I likely could have found a better solution given enough time, but it was yet another example of the necessity of understanding gesture events.
What external issues can play havoc with your application? Sunlight isn’t your friend. Sunlight can cause your application both to not respond to users’ fingers and to register phantom touches, as if something had touched it. Bottom line: keep your Surface out of the sun (and other bright light) or you will see odd behavior. Cold fingers can also cause the application to miss touch events. So if you are in Minnesota nine months of the year, as I am, or plan to have your users sipping a cold beer while ordering another one via the Surface, be aware of the possible issues with cold fingers.
UI considerations are many, but let me just touch on a few. Surface applications are generally meant to be approachable from all sides, so building a UI that doesn’t have a defined “top” is key. If you do need a particular user orientation, be prepared to allow for a quick switch so a user approaching from a different side can quickly flip the application to face them.
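One low-tech way to offer that quick switch is simply rotating the root container. This is plain WPF rather than anything Surface-specific, with `rootGrid` standing in for your application's top-level panel:

```csharp
using System.Windows;
using System.Windows.Media;

// In the window's code-behind: flip the whole UI 180 degrees
// for a user standing on the opposite side of the table.
private void FlipForOppositeSide()
{
    rootGrid.RenderTransformOrigin = new Point(0.5, 0.5);
    rootGrid.RenderTransform = new RotateTransform(180);
}
```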
Shadows are also an interesting issue. The designer I was working with had sent me a PNG file to use as the background for ScatterViewItems. ScatterViewItems are seen in many Surface demos as snapshot photos scattered on the table, which the user can rotate, stretch, and flick across the table. When you consider that an item can have any rotation, you quickly realize the concept of a shadow on one side doesn’t make any sense. If you have directional shadows on these items, the shadows could be pointing in all different directions.
Surface has a very unique context menu. It is more like a “mind-map” diagram. If you are not familiar with them, do some research before you design the way the user will issue commands. You may find the flexibility of this menu style opens up a variety of new possibilities.
In the end, a Surface application is just a WPF or XNA application with some extra controls. Most of the Surface controls are extensions of WPF controls that offer extra gesturing and easing animations. Deploying, obfuscating, and architecting the application is really no different than what you would do in WPF. The greatest hurdle is going to be gaining a strong understanding of gestures. Once you’ve gained some comfort there, I think you’ll be surprised how fast the rest falls into place.