At home, I have a cool Raspberry Pi 2 and a USB LifeCam VX-6000 webcam that, for now, I don’t use at all. So I thought I could probably use both to create a little application that will allow me to spy on anyone (and view the results from a simple website).

As it’s now possible to use Windows 10 IoT Core on the Raspberry Pi, the best way for me to create that application would be a UWP application, developed in C#.

Basically, the code of this app will be simple: I’ll first look for the webcam and configure it:

var videodevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);

var camera = videodevices.FirstOrDefault(d => d.EnclosureLocation != null);
if (camera != null)
{
    await InitializeCameraAsync(camera);
    await InitializeAzureStuffAsync();
}

Unfortunately, after trying this code on my device, it did not work: I was unable to get a reference to the camera. After some research, the reason is simple: for now, there are no supported drivers for my webcam (check here for the hardware compatibility list).

Well, I could have given up on the project, but why would I? It’s a cool idea and we have the Universal Windows Platform: the app running on the Raspberry Pi can also run on any Windows 10 device!

So I just changed the target (using my laptop instead of the Pi) and, well, the code works fine: the camera is found and is correctly initialized:

private async Task InitializeCameraAsync(DeviceInformation camera)
{
    await Task.Factory.StartNew(async () =>
    {
        _mediaCapture = new MediaCapture();

        await _mediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
        {
            PhotoCaptureSource = PhotoCaptureSource.VideoPreview,
            StreamingCaptureMode = StreamingCaptureMode.Video,
            VideoDeviceId = camera.Id
        });

        // Find the highest resolution available
        VideoEncodingProperties maxResolution = null;
        var max = 0;
        var resolutions = _mediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties(MediaStreamType.Photo);
        foreach (var props in resolutions)
        {
            var res = props as VideoEncodingProperties;

            if (res?.Width * res?.Height > max)
            {
                max = (int)(res.Width * res.Height);
                maxResolution = res;
            }
        }

        await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.Photo, maxResolution);

        await Dispatcher.RunIdleAsync(async args =>
        {
            // Display camera preview
            CaptureElement.Source = _mediaCapture;
            await _mediaCapture.StartPreviewAsync();
        });

        _imageEncodingProperties = ImageEncodingProperties.CreateJpeg();
    });
}

The goal of the application is to take a photo from the Webcam and upload it to an Azure blob storage so we have some Azure stuff to initialize too:

private async Task InitializeAzureStuffAsync()
{
    await Task.Factory.StartNew(async () =>
    {
        var storageCredentials = new StorageCredentials(BLOB_ACCOUNT_NAME, BLOB_ACCOUNT_KEY);
        var storageAccount = new CloudStorageAccount(storageCredentials, true);

        var blobClient = storageAccount.CreateCloudBlobClient();
        _imagesContainer = blobClient.GetContainerReference("images");
        if (!await _imagesContainer.ExistsAsync())
        {
            await _imagesContainer.CreateIfNotExistsAsync();
        }

        var serviceProperties = await blobClient.GetServicePropertiesAsync();
        serviceProperties.Cors.CorsRules.Add(new CorsRule
        {
            AllowedHeaders = new List<string> { "*" },
            AllowedMethods = CorsHttpMethods.Get | CorsHttpMethods.Head,
            AllowedOrigins = new List<string> { "*" },
            ExposedHeaders = new List<string> { "*" }
        });

        await blobClient.SetServicePropertiesAsync(serviceProperties);
    });
}

The code itself is pretty simple: we check if the blob container “images” exists and, if not, we create it. Then, we update the CORS settings of the blob service to allow any origin to access it (otherwise, we’d get a cross-origin error in the browser).

Once this part is done, we just need to take pictures from the webcam. As we want to create a spy system, we need more than one picture, so we’ll use a timer:

private void OnStartWatchingButtonClick(object sender, RoutedEventArgs e)
{
    if (_timer == null)
    {
        _timer = new DispatcherTimer();
        _timer.Interval = new TimeSpan(0, 0, 1);
        _timer.Tick += OnTimerTick;
    }

    // Start taking a picture every second
    _timer.Start();

    StartWatchingButton.IsEnabled = false;
    StopWatchingButton.IsEnabled = true;
}

private async void OnTimerTick(object sender, object e)
{
    if (_mediaCapture == null)
        return;

    using (var memoryStream = new InMemoryRandomAccessStream())
    {
        try
        {
            await _mediaCapture.CapturePhotoToStreamAsync(_imageEncodingProperties, memoryStream);
            await memoryStream.FlushAsync();

            // Rewind the stream before reading it back
            memoryStream.Seek(0);

            var array = new byte[memoryStream.Size];
            await memoryStream.ReadAsync(array.AsBuffer(), (uint)memoryStream.Size, InputStreamOptions.None);

            if (array.Length <= 0)
                return;

            var blockBlob = _imagesContainer.GetBlockBlobReference("Image.jpg");
            await blockBlob.UploadFromByteArrayAsync(array, 0, array.Length);
        }
        catch (Exception ex)
        {
            Debug.WriteLine("Exception: " + ex.Message);
        }
    }
}

Every second, the Tick event of the timer is raised: using the CapturePhotoToStreamAsync method, we grab a picture from the webcam and use the WindowsAzure.Storage library to upload it to the blob storage.

So our application is up and running, pushing a picture to the blob storage every second. We now need a way to view that picture. Let’s go for a simple AngularJS application that performs a GET request to… the URL of our image in the blob container:

"use strict";

app.controller("indexCtrl", ["$scope", function ($scope) {

    var watchIntervalId;

    $scope.isRunning = false;

    $scope.startWatching = function () {

        $scope.isRunning = true;

        watchIntervalId = setInterval(function () {
            var xhr = new XMLHttpRequest();
            xhr.onreadystatechange = function () {
                if (this.readyState === 4 && this.status === 200) {
                    var url = window.URL || window.webkitURL;
                    $scope.ImagePath = url.createObjectURL(this.response);
                    $scope.$apply(); // Notify Angular: the XHR callback runs outside the digest cycle
                }
            };

            // URL of the image in the blob container
  "GET", "");
            xhr.responseType = "blob";
            xhr.send();
        }, 1000);
    };

    $scope.stopWatching = function () {

        $scope.isRunning = false;

        clearInterval(watchIntervalId);
    };
}]);

Let’s add some user interface components to get a nice app:

<!DOCTYPE html>
<html ng-app="CameraWatcherApp">
<head>
    <title>Camera Watcher</title>
    <meta charset="utf-8"/>

    <meta http-equiv="cache-control" content="no-cache">
    <meta http-equiv="expires" content="0">
    <meta http-equiv="pragma" content="no-cache">

    <link rel="stylesheet" type="text/css" href="Content/bootstrap.css" />

    <script src="js/vendors/jquery/jquery-2.1.4.min.js"></script>
    <script src="js/vendors/angular/angular.min.js"></script>
    <script src="js/vendors/angular/angular-route.js"></script>

    <script src="js/app.js"></script>
    <script src="js/controllers/indexCtrl.js"></script>
</head>
<body>
<div class="container" ng-controller="indexCtrl">
    <h1>Camera Watcher <small>Your personal spy</small></h1>
    <div class="row">
        <div class="col-md-12">
            <div class="center-block">
                <img ng-hide="!isRunning" ng-src="{{ImagePath}}" class="img-rounded" style="width: 800px; height: 600px; display: block; margin: 0 auto;" alt="Camera Watcher"/>
                <div class="text-center">
                    <div class="btn-group">
                        <button class="btn btn-default btn-lg" ng-click="startWatching()" ng-disabled="isRunning"><span class="glyphicon glyphicon-play" aria-hidden="true"></span> Start Watching</button>
                        <button class="btn btn-default btn-lg" ng-click="stopWatching()" ng-disabled="!isRunning"><span class="glyphicon glyphicon-stop" aria-hidden="true"></span> Stop Watching</button>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>
</body>
</html>

And you’re done! Now, if you run the application, a picture will be sent every second and, to view it, just launch the website:

Here is the direct link to the video:

As you can see, it’s pretty straightforward to implement your own spy system. Of course, when Windows 10 IoT Core supports more devices, you’ll be able to run this code on the Raspberry Pi. But, for now, you need to find another way to access your webcam and upload pictures from it. For that, I suggest you take a look at the great article from Laurent, which explains how to do the same upload to a blob storage from a Node.js app:


Happy coding!

For a small side project I’m working on, I wanted to easily send push notifications to my UWP application. So I’ve decided to use a Notification Hub (which is, in fact, a wrapper around the Microsoft Service Bus) to send the notifications.

Basically, you first need to create an application in the Store so you can access its notification settings:


With the “Package SID” and “Client Secret” settings copied, you can go to the Azure Portal to create/edit your Notification Hub (under the Service Bus section) and set the properties:


Now, associate your UWP application with the Store application, so the manifest file can be correctly updated with the correct values:


Your Notification Hub is up and running and your app is correctly associated, so now it’s time to add some code!

The first step is to register your application so it can receive the notifications. For that, we need to get a push notification channel for the application and use it to register the app with the Notification Hub (after adding the WindowsAzure.Messaging.Managed NuGet package):

var channel = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();

var hub = new NotificationHub(Constants.NOTIFICATION_HUB_PATH, Constants.NOTIFICATION_HUB_LISTEN_ENDPOINT);

await hub.RegisterNativeAsync(channel.Uri);

The notification hub path is, basically, your hub name. For the endpoint, you need to go to the Azure portal and get the connection information from the “Dashboard” tab:


By default, a notification hub has two different endpoints configured, each of them with different permissions:


As the application will only receive notifications, the “Listen” permission is the only one needed. So, it’s the DefaultListenSharedAccessSignature that we need to use as the endpoint, along with the hub path.
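For reference, the listen connection string copied from the portal follows the standard Service Bus shared access signature format; the namespace and key below are placeholders to replace with your own values:

```
Endpoint=sb://<your-namespace>.servicebus.windows.net/;SharedAccessKeyName=DefaultListenSharedAccessSignature;SharedAccessKey=<your-listen-key>
```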

The application is now configured, so it’s time to see how to send the notifications (in my case, I’m using an Azure WebJob executing a console app). Add the NuGet package Microsoft.Azure.NotificationHubs and you’re ready to create the NotificationHubClient object used to send the notifications:

var hub = NotificationHubClient.CreateClientFromConnectionString(Constants.NOTIFICATION_HUB_FULL_ENDPOINT, Constants.NOTIFICATION_HUB_PATH);

var windowToastTemplate = "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                            "<toast launch=\"fromNotification\">" +
                                "<visual>" +
                                    "<binding template=\"ToastText01\">" +
                                        "<text id=\"1\">My super notification</text>" +
                                    "</binding>" +
                                "</visual>" +
                            "</toast>";

await hub.SendWindowsNativeNotificationAsync(windowToastTemplate);

Here, I’m using SendWindowsNativeNotificationAsync to send the notification to my PC (running Windows 10) but, as you can see, a lot of different notification types are available (if you have correctly configured their settings in the Azure portal):


Also, the XML payload used for the notification is the part that is specific to your project: feel free to use another template or, even better, the new adaptive templates of Windows 10 if needed:
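As an illustration (not taken from the project above), a Windows 10 adaptive toast payload uses the ToastGeneric binding instead of the fixed ToastText templates; the text content here is made up:

```xml
<toast launch="fromNotification">
  <visual>
    <binding template="ToastGeneric">
      <text>My super notification</text>
      <text>An optional second line, supported by the adaptive schema</text>
    </binding>
  </visual>
</toast>
```

The adaptive schema is more flexible: you can add several lines of text, images, and even interactive buttons via an actions element.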

Now, you just need to schedule your Azure WebJob (thanks to Visual Studio, it’s just a right-click on your project) and define the schedule parameters: name, frequency, etc.:


Now, just wait for your WebJob to run and you should receive a nice notification (toast, tile, etc.) on your device, depending on the notification type you’ve decided to use.

Bonus tip: on your Notification Hub, you can use the “DEBUG” tab to send test notifications to your devices!



As you can see, using a Notification Hub is really easy and can be implemented in a few minutes in any kind of applications!


Happy coding!

As I’m working on a small ASP.NET MVC 5 application, I wanted to deploy it on Windows Azure to test it. Thanks to Visual Studio, the deployment was really easy but, as soon as I tried to browse the website, I received the following error message:

Could not load file or assembly ‘Microsoft.Web.Infrastructure, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35’ or one of its dependencies. The system cannot find the file specified.


After searching a bit, it appears that the error is due to the Microsoft.Web.Infrastructure.dll reference, which is not deployed automatically with your app. To resolve the issue, just select the reference, go to its properties and set “Copy Local” to true:


Once this is done, deploy your website again and now it should work fine!
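Under the hood, the “Copy Local” flag maps to the `Private` element on the reference in your .csproj file; after the change, the entry should look roughly like this (the HintPath below is illustrative and depends on your package layout):

```xml
<!-- Reference entry in the .csproj; HintPath is a hypothetical example -->
<Reference Include="Microsoft.Web.Infrastructure, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
  <HintPath>..\packages\Microsoft.Web.Infrastructure.1.0.0.0\lib\net40\Microsoft.Web.Infrastructure.dll</HintPath>
  <Private>True</Private>
</Reference>
```

With `Private` set to `True`, MSBuild copies the assembly to the output folder, so it gets included in the deployment package.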


Happy coding!