Cognitive Services Computer Vision API powers Teamstream Video Search

Teamstream and its clients have a problem understanding the content inside their video libraries—a problem not just for them, but for everyone who uses traditional methods of archiving and retrieving videos. A video is typically created by a team, indifferently tagged, and saved out to a file in some repository—a file server or a cloud-based sharing service like YouTube. In both cases, the videos sit in a folder structure organized by client and by project. That methodology is extremely limiting when searching for particular videos. What is the job number? Which client was that? Did we do similar videos? Is this the right version? The difficulty multiplies when you want to search for specific content within videos: if a video is not tagged or named correctly—which is often the case—it can be impossible to find what you’re looking for. Teamstream Productions was looking for a “fix” that offered deep search capabilities across a set of videos, accessible through multiple channels, so the team can run the search from web or mobile whether they are on the go or giving a presentation.

Technology Services Used

  • Cognitive Services Computer Vision API
  • Azure App Service (Web Apps)
  • Azure Functions
  • Azure Blob Storage
  • Azure SQL Database
  • Azure Search
  • Microsoft Bot Framework

Customer profile


Teamstream Productions is a content production and marketing firm that serves some of the biggest consumer brands in the world. They work to understand each client’s business and communication goals, then use their expertise to create, publish, and promote content that serves those goals across all channels, including videos, web and mobile content, social promotion, traditional print, events, and more.

Solution, Development, and Delivery

Solution

We believe the Cognitive Services API is a great choice for us on this initiative for two reasons. First, it offers an already robust set of search parameters that can be utilized to sort videos. Second, it’s our understanding that the capability set is constantly expanding, which is a powerful differentiating factor as we offer this solution as an ongoing SaaS offering to clients—they’re signing on for a service that will only grow richer and better with time. – Keith Blanchard, CEO, Teamstream Productions

Microsoft Azure Search sits on top of this data and returns the results both to a traditional web search and, more importantly, to a Bot Framework search that can be used via Web Chat, text messaging, Slack, and Microsoft Teams.

Admin Architecture

There are many parts to this project, but in essence it breaks down into two pieces: the administrative side, which handles tasks such as uploading the videos, and the client side, which is used to search them.

[Figure: Admin architecture diagram]

The administration side is where most of the heavy lifting takes place.

  • An ASP.NET MVC web application on an Azure Web App Service to handle uploading the videos to Azure Storage.
  • An Azure Function with a Blob Trigger to start the video “slicing” and move the resulting images to another Azure Blob Storage container.
  • A second Azure Function with a Blob Trigger to take each new image and call the Cognitive Services Computer Vision API for the tags and description of each frame image.
  • An Azure SQL Database for storing all of the data.

Admin Development

ASP.NET MVC Web Application and Web App Service

We first created our Web App Service on Azure for the ASP.NET web application, which will be the main integration point for the admin tasks as well as the bot. Then we created our ASP.NET Web Application project in Visual Studio 2017.

Our first step was to build out the views and handle the form data. We wanted a lightweight, fast object mapper for .NET, so we chose Dapper, which extends the IDbConnection interface and allows quick and easy access to our DB. There is a NuGet library that makes it easy to add by issuing the following command:
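
From the Visual Studio Package Manager Console:

```
Install-Package Dapper
```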

We then used a repository pattern for our DB interaction, with Dapper handling the object-relational mapping.
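
The original repository code wasn’t preserved; here is a minimal sketch of the pattern with Dapper, using a hypothetical Video model rather than the project’s actual classes:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using Dapper;

public class Video
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
}

public class VideoRepository
{
    private readonly string _connectionString;

    public VideoRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Dapper extends IDbConnection with Query/Execute helpers.
    public IEnumerable<Video> GetAll()
    {
        using (IDbConnection db = new SqlConnection(_connectionString))
        {
            return db.Query<Video>("SELECT * FROM Videos");
        }
    }

    public void Add(Video video)
    {
        using (IDbConnection db = new SqlConnection(_connectionString))
        {
            db.Execute(
                "INSERT INTO Videos (Title, Description) VALUES (@Title, @Description)",
                video);
        }
    }
}
```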

Then, in our Admin controller constructor, we can instantiate our repository and send the data from the form to our database.
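
In sketch form, with a hypothetical view model (this is not the original controller):

```csharp
using System.Configuration;
using System.Threading.Tasks;
using System.Web;
using System.Web.Mvc;

public class VideoUploadViewModel
{
    public string Title { get; set; }
    public string Description { get; set; }
    public HttpPostedFileBase File { get; set; }
}

public class AdminController : Controller
{
    private readonly VideoRepository _repository;

    public AdminController()
    {
        // Instantiate the repository with the connection string from Web.config.
        _repository = new VideoRepository(
            ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString);
    }

    [HttpPost]
    public async Task<ActionResult> UploadAsync(VideoUploadViewModel model)
    {
        // Save the form's metadata; sending the video to Blob Storage is added below.
        _repository.Add(new Video { Title = model.Title, Description = model.Description });
        await Task.CompletedTask; // placeholder; the blob upload (next step) makes this truly async
        return RedirectToAction("Index");
    }
}
```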

Now that data is being stored we can update our UploadAsync method to include sending the video to Azure Blob Storage.

First we need to make sure we add the WindowsAzure.Storage package. We can then reference the libraries we need:
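
```
Install-Package WindowsAzure.Storage
```

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
```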

Here is the full UploadAsync method:
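
The original listing wasn’t preserved; a minimal reconstruction, assuming a “videos” container and a storage connection string in AppSettings:

```csharp
[HttpPost]
public async Task<ActionResult> UploadAsync(VideoUploadViewModel model)
{
    // Save the form's metadata to the database.
    _repository.Add(new Video { Title = model.Title, Description = model.Description });

    // Send the video to Azure Blob Storage; the Azure Function watches this container.
    var account = CloudStorageAccount.Parse(
        ConfigurationManager.AppSettings["StorageConnectionString"]);
    var client = account.CreateCloudBlobClient();
    var container = client.GetContainerReference("videos");
    await container.CreateIfNotExistsAsync();

    var blob = container.GetBlockBlobReference(model.File.FileName);
    blob.Properties.ContentType = model.File.ContentType;
    await blob.UploadFromStreamAsync(model.File.InputStream);

    return RedirectToAction("Index");
}
```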

We now have the meta information from the user being saved into the DB along with the video moving over to Azure Blob Storage.

Admin ASP.NET MVC Web Application API

Eventually we will create Azure Functions that process these videos and images. We will need a way to send that data back to our app for storage in our Azure MS SQL Database. We will create an API endpoint that will allow those functions to send back data.

In order to add an API Controller to an existing project the following steps will be required:

A WebApiConfig.cs file needs to be created in the App_Start directory to register your API controller:
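
```csharp
// App_Start/WebApiConfig.cs
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}
```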

In Global.asax.cs, the WebApiConfig needs to be registered with the GlobalConfiguration:
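
```csharp
// Global.asax.cs: register Web API before the MVC routes.
protected void Application_Start()
{
    GlobalConfiguration.Configure(WebApiConfig.Register);
    // ...existing MVC registration (areas, filters, routes)...
}
```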

Then an API controller can be created:
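
A sketch of such a controller; the FrameResult model and the data-layer call are illustrative stand-ins for whatever the functions actually post back:

```csharp
using System.Web.Http;

public class FrameResult
{
    public int VideoId { get; set; }
    public string BlobName { get; set; }
    public string Description { get; set; }
    public string TagsJson { get; set; }
}

public class FramesController : ApiController
{
    // POST api/frames: called by the Azure Functions with per-frame results.
    public IHttpActionResult Post([FromBody] FrameResult frame)
    {
        if (frame == null) return BadRequest();
        // Hand the frame data to the data layer (hypothetical helper).
        // _repository.AddFrame(frame);
        return Ok();
    }
}
```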

We will add a few more admin views, but otherwise we are done with the admin portion of the project.

Azure Functions – Setup

Once a video is stored in the video blob, we want to trigger a function to process it. We also needed a way to determine the length of the video and a way to create images from frames at certain locations within it. We are using FFMpeg to help with this: NReco has a C# .NET wrapper for FFMpeg as well as one for FFProbe. These wrappers help with getting the duration of the video and grabbing a frame within it.

We decided to use an Empty Web Project for publishing a .NET Class Library for our Azure Functions. This would allow us to pull in packages as needed for building out our full functions.

When starting off with an Empty Web Project for use with Azure Functions there are a few key files and command line tools that are needed.

First, install the Azure Functions Core Tools. Once they are installed and you have created your Empty Web Project, perform the following steps:

  • Open the project properties. In the WEB tab choose START EXTERNAL PROGRAM
  • For the program path, enter the path to func.exe from the Azure Functions Core Tools. Location will be C:\Users\USERNAME\AppData\Local\Azure.Functions.Cli\1.0.0-beta.93\func.exe if you installed the tooling or installed via NPM C:\Users\USERNAME\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\func.exe
  • For COMMAND LINE ARGUMENTS, enter: host start
  • For WORKING DIRECTORY, specify the root of the web project on your machine

[Figure: Start Action settings in the Visual Studio project properties]

Now when you run your project, the Azure Functions command-line tooling takes over; you can still debug your Azure Function, but it will be connected to whatever triggers you have set.

Speaking of triggers, we will set those up now.

The structure of our Azure Functions project is the following:

  • root – hosts appsettings.json
  • Utils – class library for common routines
  • Models – any DB and JSON models
  • Functions – the actual code for our functions
  • VideoTrigger – function.json definition file for the trigger and function
  • ImageTrigger – function.json definition file for the trigger and function
  • App_Data – our FFMpeg exe files

Let’s look at appsettings.json. There are a couple of ways you can build this file: by hand, by pulling it down manually via Kudu on Azure, or via the Azure Functions command-line settings. You only need to pull it down if you already configured your Azure Functions in the portal, which is what we did.

We first went into the portal, searched for Azure Function App, and configured it with an App Service plan. Once we had the Azure Function created, we opened a command prompt and issued the following commands:

  • func azure login
  • func azure account list
  • func azure account set
  • func azure functionapp list
  • func azure functionapp fetch-app-settings [name]

This downloaded the appsettings.json (encrypted), and we moved it into our project root. The file contains variables for the storage accounts that the functions will use, as well as which ones will trigger them when new data arrives. Be cautious: this file contains secrets, so you don’t want to check it into source control.


Azure Functions – Video Blob Trigger

Now that we have our project set up, we need to create a definition file for our function. This allows the Azure Function App to use a precompiled DLL and the function entry point to start the job when it is triggered. We created a file called function.json and added the following:
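
A sketch of the definition for a precompiled function; the assembly name, entry point, and container name here are illustrative:

```json
{
  "scriptFile": "..\\bin\\TeamstreamFunctions.dll",
  "entryPoint": "TeamstreamFunctions.VideoTrigger.Run",
  "disabled": false,
  "bindings": [
    {
      "name": "videoBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "videos/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```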

We added this to our VideoTrigger directory (and we will do the same for our other function, changing the entry point and the blob path). Now let’s build our function!

We need to add the Microsoft.Azure.WebJobs package from NuGet, along with the other packages we will need (storage, threading, and NReco):
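
```
Install-Package Microsoft.Azure.WebJobs
Install-Package WindowsAzure.Storage
Install-Package NReco.VideoConverter
Install-Package NReco.VideoInfo
```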

Here is the video trigger.
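
The original listing wasn’t preserved; the sketch below shows its shape, with illustrative names, an assumed 10-second frame interval, and NReco tool-path properties you should verify against your package version (see Lessons Learned):

```csharp
using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using NReco.VideoConverter;
using NReco.VideoInfo;

public static class VideoTrigger
{
    // Entry point named in function.json; the bindings there supply videoBlob
    // (the triggering blob) and name (the {name} token from the blob path).
    public static void Run(Stream videoBlob, string name, TextWriter log)
    {
        // FFMpeg/FFProbe ship in App_Data; on Azure that maps to the path below.
        string toolPath = @"D:\home\site\wwwroot\App_Data";

        // Work from a local copy of the video.
        string videoPath = Path.Combine(Path.GetTempPath(), name);
        using (var fs = File.Create(videoPath))
            videoBlob.CopyTo(fs);

        // Get the duration, which determines how many frames we grab.
        var probe = new FFProbe { ToolPath = toolPath };
        TimeSpan duration = probe.GetMediaInfo(videoPath).Duration;

        // Container the image trigger watches.
        var storage = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
        var frames = storage.CreateCloudBlobClient().GetContainerReference("frames");
        frames.CreateIfNotExists();

        var converter = new FFMpegConverter { FFMpegToolPath = toolPath };

        // Arbitrary for now: one frame every 10 seconds, to be tuned later.
        for (int t = 0; t < (int)duration.TotalSeconds; t += 10)
        {
            string framePath = Path.Combine(
                Path.GetTempPath(),
                $"{Path.GetFileNameWithoutExtension(name)}_{t}.jpg");
            converter.GetVideoThumbnail(videoPath, framePath, t);

            var blob = frames.GetBlockBlobReference(Path.GetFileName(framePath));
            using (var frameStream = File.OpenRead(framePath))
                blob.UploadFromStream(frameStream);

            log.WriteLine($"Uploaded frame at {t}s for {name}");
        }
    }
}
```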

The first step is to build out some paths: one for the actual video, and one for the location of the tooling (FFMpeg.exe and FFProbe.exe). We then get the duration of the video, which is used to determine how many frames/images we grab per video. We chose these numbers arbitrarily and will fine-tune them later. Then we loop through the video, grab a thumbnail at each frame location, and move the image over to the other blob storage container for hosting.

Azure Functions – Image Blob Trigger

We will create another file called function.json and put that in the ImageTrigger directory and add the following:
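
Again a sketch, mirroring the video trigger’s definition with the entry point and path changed:

```json
{
  "scriptFile": "..\\bin\\TeamstreamFunctions.dll",
  "entryPoint": "TeamstreamFunctions.ImageTrigger.Run",
  "disabled": false,
  "bindings": [
    {
      "name": "imageBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "frames/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```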

The image trigger will now kick off whenever a new image is added to the blob. This function calls the Computer Vision API, gets the results back as JSON, and updates the SQL database accordingly.

For each of the images in the blob we make a call to the Computer Vision API, and the results are sent to the API endpoint within our Admin tool application.
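
A sketch of the Computer Vision call itself, using the REST endpoint directly (the region and key handling are illustrative):

```csharp
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class VisionClient
{
    // Analyzes one frame image and returns the raw JSON (tags + description).
    public static async Task<string> AnalyzeFrameAsync(Stream image, string subscriptionKey)
    {
        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

            var uri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze" +
                      "?visualFeatures=Tags,Description";

            using (var content = new StreamContent(image))
            {
                content.Headers.ContentType =
                    new MediaTypeHeaderValue("application/octet-stream");
                var response = await http.PostAsync(uri, content);
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
        }
    }
}
```

The returned JSON is then posted back to the admin application’s API endpoint for storage.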

Client Architecture

Access to these results is provided through a traditional search box as well as through integration with the Bot Framework.

[Figure: Client architecture diagram]

We chose to use the Bot Framework as the interface for the search for multiple reasons:

  1. The embedded Web Chat can serve just as well as a search box, but can also guide the user into a faceted search.
  2. Using the Bot channels, we can easily extend the search to Slack and Microsoft Teams, where clients tend to work and communicate about projects.
  3. Along with the channels mentioned above, accessing the search on the go was a key requirement, and the ability to integrate easily with SMS was an added benefit.

Client Development

Bot Framework

Videos are being stored, images are being created, and data is being saved; we now need to retrieve it. The first step is to create a view exposing the data we want indexed by Azure Search. Then we set up Azure Search to index our Azure SQL database.

[Figures: configuring the Azure Search index over the SQL database]

Now we can work on the bot. We took a similar approach, adding the Bot Framework components to an existing application.

First we install the Bot Builder library:
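
```
Install-Package Microsoft.Bot.Builder
```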

Then we add the key-value pairs to the Web.config. (Remember, for local testing these values can be blank.)
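
```xml
<appSettings>
  <!-- Leave these blank while testing locally with the emulator. -->
  <add key="BotId" value="" />
  <add key="MicrosoftAppId" value="" />
  <add key="MicrosoftAppPassword" value="" />
</appSettings>
```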

Next we added a MessageController class for the bot to communicate with:
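
A sketch of the standard Bot Framework v3 controller, wired to the IntroDialog described below:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

[BotAuthentication]
public class MessageController : ApiController
{
    public async Task<HttpResponseMessage> Post([FromBody] Activity activity)
    {
        if (activity.Type == ActivityTypes.Message)
        {
            // Hand the conversation off to our root dialog.
            await Conversation.SendAsync(activity, () => new IntroDialog());
        }
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}
```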

Now our project is set up for building the bot.

We want the user to be able to search for videos using the Bot and its dialogs and prompts. We decided that we would use the Bot Search Dialogs.

The documentation was easy to follow to get up and running. A few tweaks were needed to the SearchHitStyler to style the card collection and make the results look the way we wanted.

[Figure: search results rendered as a card collection]

We also had to modify the models, specifically the SearchHit, to map to the results of our Azure Search.

The last modification was to the AzureSearchClient, using our search keys and index to return the proper results.

We created an IntroDialog that is invoked by the MessageController; it calls the VideoSearchDialog to kick off the full dialog with the user and perform the search.

The dialog, search client, and mapper were all registered in a ContainerBuilder in Global.asax.cs. ContainerBuilder comes from Autofac, an inversion-of-control container that helps with dependency injection.
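
A sketch of that registration; the exact registrations weren’t preserved, and the type names follow the Bot Builder search sample:

```csharp
using Autofac;
using Microsoft.Bot.Builder.Dialogs;

// In Global.asax.cs Application_Start:
var builder = new ContainerBuilder();
builder.RegisterType<IntroDialog>().AsSelf().InstancePerDependency();
builder.RegisterType<AzureSearchClient>().AsSelf().SingleInstance();
builder.RegisterType<SearchHitMapper>().AsSelf().SingleInstance(); // hypothetical mapper name
builder.Update(Conversation.Container);
```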

We were able to test locally using the Microsoft Bot Framework Emulator, which lets you exercise your bot application locally just as if it were being called by the Bot Framework cloud service.

The Bot Framework has a portal where we configured our Bot Application for publishing.

Our bot is now ready to go!


Insights & Next Steps

This project will save time because video is inherently difficult to tag and search manually, and this solution reduces the time clients spend on both ends. The reduced timeframe to find any particular dataset in a video library (e.g., “how many of our videos from the last five years are funny and feature cars”) makes new targeted sales initiatives possible, including being able to authoritatively answer questions about the contents of a video library within the context of an ongoing presentation.

We’ve completed the first phase of the project with Juhi Saha’s team at Microsoft. At the start of their next fiscal year we’ll be applying for a continuation grant to take this to the next level: fully develop the mobile interface, finalize the back end with whatever functionality we can include in the scope, and create a working mobile demo we can start showing to clients (using a small subset of their own video libraries). – Keith Blanchard, CEO, Teamstream Productions

Lessons Learned

There was a big time blocker with Azure Functions and running an exe. We first tried adding the exe as a site extension; however, Azure Functions is itself a site extension. Reaching out to NReco support surfaced a property we could set for the path to the FFProbe and FFMpeg files. Once we did that, it worked like a charm.
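
In sketch form (treat the property names as an assumption to verify against your NReco version):

```csharp
// Point NReco at the FFMpeg/FFProbe binaries shipped in App_Data,
// since they can't be installed as a separate site extension.
var converter = new NReco.VideoConverter.FFMpegConverter
{
    FFMpegToolPath = @"D:\home\site\wwwroot\App_Data"
};
var probe = new NReco.VideoInfo.FFProbe
{
    ToolPath = @"D:\home\site\wwwroot\App_Data"
};
```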

Another area where we needed to spend a bit of time was the DB. We originally planned to use Azure Cosmos DB as our data store, but there are limits on the size of each document, which we would quickly exceed when indexing each frame of a larger video.

We also started this project a few weeks back, and since we began, the Microsoft Garage and Azure Media Services teams have released a product very similar to what we are building. The service is called Video Indexer: there is a REST API and a portal where you can upload your videos and gain insight from the frames within. Our build can be adapted to use this new service by adding the REST API calls within the first Azure Function video trigger; that would eliminate the need to “slice” the video, send frames to a second storage container, and issue calls for each frame. The team will be looking into this for an upcoming release.


Real-Time Communication to Xamarin App via SignalR and Azure API App Services

There are times when your website or mobile app would like to show live data, such as scores, in real time. And I don’t mean hitting an API on a timer, but populating with a new set of data auto-magically. This repo consists of an API and a few demo projects that make that happen, using .NET Web API and SignalR.

Architecture

[Figure: architecture diagram]

We will have an “internal system” that records the scores; it can be any system, even just a DB. In our example I have created a web form where we can enter data to simulate this.

We then post those scores to an endpoint hosted on Azure API App Services, built on the ASP.NET Web API framework. We can add authentication, and even scaling, to our App Service when needed. For now it is open.

Next we have our clients, which use the SignalR client API to connect to a SignalR hub on our API App Service. SignalR pushes out messages based on the data coming in.

Usage

LiveScoringAPI

This is the API app that will take a “post” of scores and process them out to all of the connected clients via SignalR.

Startup.cs – CORS is currently configured and enabled here. This is needed for access from the web.
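
A sketch of that OWIN startup (the namespace is assumed from the project name):

```csharp
using Microsoft.Owin;
using Microsoft.Owin.Cors;
using Owin;

[assembly: OwinStartup(typeof(LiveScoringAPI.Startup))]

namespace LiveScoringAPI
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // CORS lets browser clients on other origins reach the SignalR endpoint.
            app.UseCors(CorsOptions.AllowAll);
            app.MapSignalR();
        }
    }
}
```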

Controllers/API/PlayerScoreController.cs – the endpoint to post scores to. Currently there is no authentication; that still needs to be added. There are two endpoints, GET and POST. GET is a test method that generates random scores in the Player model and can be removed. POST is the main method: it expects a Player object and instantiates the PlayerScoreTickerHub to update scores.

PlayerScoreTickerHub.cs – the SignalR hub clients connect to. It also exposes a few methods; the main one, used by the website client script, is GetAllScores, which runs at init(). I have not added that to the mobile demos yet.

PlayerScoreTicker.cs – the main file for updating scores. Currently the player object is a concurrent dictionary that just keeps getting updated with scores; this will most likely need to change at some point, either time-based or via another call to clear out scores. There are some test methods in there as well. BroadcastPlayerScores calls a method on all connected clients.
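
A condensed sketch of that broadcast path (the model shape and the updateScores client method name are assumptions based on the description above):

```csharp
using System.Collections.Concurrent;
using Microsoft.AspNet.SignalR;

public class PlayerScore
{
    public string PlayerName { get; set; }
    public int Score { get; set; }
}

public class PlayerScoreTickerHub : Hub { }

public class PlayerScoreTicker
{
    // Scores keyed by player, updated on each POST.
    private readonly ConcurrentDictionary<string, PlayerScore> _scores =
        new ConcurrentDictionary<string, PlayerScore>();

    public void UpdateScore(PlayerScore score)
    {
        _scores.AddOrUpdate(score.PlayerName, score, (key, old) => score);
        BroadcastPlayerScores();
    }

    // Push the latest scores to every connected client.
    private void BroadcastPlayerScores()
    {
        var context = GlobalHost.ConnectionManager
            .GetHubContext<PlayerScoreTickerHub>();
        context.Clients.All.updateScores(_scores.Values);
    }
}
```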

Models/PlayerScore.cs – sample player score model

FYI – Swagger is enabled as well, at http://webapi-url/swagger.

LiveScoringWeb

This project serves two purposes:

  1. Form to post to the API – http://website-url/demo/record
  2. Demo page to show the scoring updates in real time – http://website-url/demo/scores

/Controller/DemoController.cs – when a user records a score, it posts using the APIClient for the LiveScoringAPI.

/Scripts/PlayerTicker.js – this is the JS on the demo score page that gets the data on init, plus a JS function the hub calls to update the scores in real time.

LiveScoringMobile

This is a Xamarin project. It contains shared code plus three platform projects (iOS, Droid, and WinPhone). iOS is not complete.

LiveScoringMobile has the shared code: a Player model and a SignalR client. You may have to update the domain name and player hub accordingly.

LiveScoringMobile.WinPhone – has an example of displaying scores. App.xaml.cs connects the SignalR client in OnLaunched. MainPage.xaml.cs has a method that, when data is received from the hub, uses a dispatcher to update the UI.

LiveScoringMobile.Droid – has an example of displaying scores. Similar to WinPhone: MainActivity kicks off the SignalR client and listens for incoming data, processing it with RunOnUiThread for real-time updating.

LiveScoringMobile.iOS – not implemented yet, but it will follow the same pattern.

Working Demo http://livescoringwebapp.azurewebsites.net/

GitHub Repo https://github.com/jdruid/LiveScoringAPI-SignalR

PLUG Cambridge: Pitch Off

Most recently I had the opportunity to meet the staff at a local co-working spot in Cambridge called PLUG. I know what you are thinking: another co-working spot. True, but this one is a bit different. PLUG has a great story. It started years ago in São Paulo, Brazil, to help connect local startups and entrepreneurs with the resources they need; the overarching purpose was to help fuel business ideas. Fast-forward a few years, and they are in five different locations in Brazil. They opened the Cambridge location early this year and already have a great group of resources and startups.

The one thing that stood out to me is how they help the community beyond what a standard co-working space does: there is a Latino bent to the space. The founders and staff are all Latino and are really trying to build something special. They have big plans, and I look forward to working with them to help fulfill them.

I saw the magic with my own eyes at the first annual PLUG Pitch Off on October 6th. Eight companies participated in the three-minute pitch-off. There was no industry requirement, but at least one of the founders had to be a non-US founder.


The judging panel was made up of Slava Heretz (founder of Intellibookkeeping), Christian Na (former General Counsel at Rosetta Stone and start-up mentor), and Rich Palmer (founder of Gravyty and of OvenAlly), who did a great job of providing constructive feedback to each of the founders.

Prizes were plentiful: cash awards, in-kind donations, and even space at PLUG. My contribution was to provide the winner with a Microsoft sponsorship into our BizSpark Plus program.


The companies that participated were:

Architectures – AI-based architectural design, dramatically empowering building-design professionals through AI on the cloud

Biomag – Smart materials for diagnosing heart valves

CareAcademy – Educational platform for caregivers

dibind – Marketplace for selling small and large items on a hyperlocal scale

dot Learn – Video compression for online learning in underprivileged countries

Eventbox – Platform and marketplace for all of your event needs. Expedia meets LinkedIn.

Gems – General Emergency Medical supplies

My On Campus Chef – Culinary event platform

Therapeutic Innovation – Cost-effective device to help prevent deaths of premature babies from respiratory failure

Tyme – Wearable device embedded in clothing that monitors your breathing during exercise

Waterfall – IoT water measurement and control platform designed for large scale commercial deployments.

I look forward to working with a few of these companies in the short term, as well as with the PLUG staff on upcoming events!

 

Cortana Intelligence Competitions

The Cortana Intelligence team has launched a data science challenge site. Currently there are two introductory challenges posted.

Decoding Brain Signals

Each year, millions of people suffer brain-related disorders and injuries, and as a result many face a lifetime of impairment with limited treatment options. This competition is based on one of the greatest challenges in neuroscience today: how to interpret brain signals.

Building on the work of Dr. Kai J. Miller and other neuroscientists, the competition is designed to further our understanding of how our brain interprets electric signals. The medical community and, ultimately, patients will benefit from your expertise in machine learning and data science to help decode these signals.

Through this competition, you will play a key role in bringing the next generation of care to patients through advancing neuroscience research. Build the most intelligent model and accurately predict the image shown to a person based on electric signals in the brain. The Grand Prize winner will get $3,000 cash, followed by a 2nd prize of $1,500 cash, and a 3rd prize of $500 cash.

Contest Ends July 1, 2016

Iris Multiclass Classification

The primary goal of this tutorial competition is to help you get familiar with the Cortana Intelligence Competition platform and build your first solution for an Azure Machine Learning (AzureML) competition using the well-known iris data. Participants’ performance is ranked by the overall accuracy of the model (higher accuracy, higher rank). As this is a tutorial competition, no award is provided.

 

The challenges have been up since 3/30 and already have 120 submissions! These are great competitions for any dev with an interest in data science: students can try the iris data set, and more advanced folks can tackle the brain-signal data.

The site has some interesting features.

  • auto-scoring of models
  • no need to download the data to your local box and then upload it to wherever the modeling will be done; the data resides natively in Azure

Stay tuned for more developments and challenges!


How To Add Users to an Azure Subscription using Role-Based Access Control

In some cases you may want multiple users to access your Azure subscription, but you may not want to hand out the username and password of your Microsoft account. Or maybe you don’t want a user to have full access, just role-based access. Here is a simple step-by-step to add a user and choose the proper role, and here is the full list of Azure Role-Based Access Control roles.

  1. Log in to portal.azure.com.
  2. Click on Subscriptions.
  3. Your subscriptions will load; click on the active subscription.
  4. The Subscription blade will open; click on All Settings.
  5. Then click on Users.
  6. Then click on the + sign to add a user.
  7. Choose a role for the user.
  8. Start typing the email of the user you would like to assign the role to.
  9. If they do not appear, you can click Invite and they will be invited to join.
  10. Click OK.
  11. The next time the user logs in to Azure, they will be associated with the account.

Bits and Bytes: Migration of Azure Mobile Services to Azure App Services

Over the past year, Azure Mobile Services has come a long way: from easy custom APIs and table scripts to combined push notifications and identity management. This past December, however, Mobile Services got a new name: App Services. App Services now contains Web Apps, Mobile Apps, Logic Apps, and API Apps. With App Service you get the following benefits:

  • Easily connect your app to data in the cloud or on-premises, including SQL Azure, SQL Server on-premises, Mongo, Azure Table Storage, and SaaS APIs like Dynamics and Salesforce
  • Enable your app to work with data offline and sync with the above-mentioned data sources
  • Build mobile backend APIs in seconds without writing any code
  • Intuitive SDKs for iOS, Android, and Windows, as well as cross-platform frameworks like Xamarin and Cordova
  • Extend your mobile backend with custom code by leveraging the App Service Mobile .NET or Node.js server-side SDK, or directly via REST frameworks for other languages supported on App Service, including PHP, Java, or Python
  • Cross-platform push notifications to iOS, Android, Windows, Kindle, and other devices (this capability is also available as a separate service: Notification Hubs)
  • Authenticate and authorize your users without adding any backend code, using the App Service Authentication/Authorization features
  • AND MORE!

You are probably saying, “That is great, but what does this have to do with Azure Mobile Services?” Well, it is because you want to use the new App Service model for your development. If you are starting a new project, clicking through and building a service using App Service is easy and straightforward. Here is a video that shows how to get up and running in no time.

However, what if you have invested your time and effort in Azure Mobile Services? How can it benefit from all of these awesome features? It starts with migrating to Azure App Services! You might say that migrating has always been a pain, or even black magic. In some instances it has been. However, the Migration Tool for Mobile Services makes it pretty easy and straightforward.

A few things to note before you attempt to migrate.

  • If you “MIGRATE” your site, there will be no code changes needed to your scripts.
  • If you “UPGRADE” your site, you will have to make code changes to your scripts to take advantage of the new mobile SDK.

In this post we are going to talk about MIGRATION.

The next thing to note is which tier your current Mobile Services are on, and how many you have. If you are on Free or Basic and have multiple mobile services, all of them will be migrated. If you only want one migrated, you will have to “upgrade” it to the Standard tier. Here are the steps to do that:

  1. Log onto the Azure Classic Portal.
  2. Select your Mobile Service.
  3. Select the SCALE UP tab.
  4. Under Mobile Service Tier, click on the STANDARD tier. Click on the SAVE icon at the bottom of the page.

Remember to set the pricing tier to an appropriate setting after migration.

Here is a Bits and Bytes video on how to migrate your Azure Mobile Services over to Mobile App Services.

How to Migrate Your Mobile Service

  1. Log onto the Azure Classic Portal.
  2. Select your Mobile Service.
  3. Click on the Migrate to App Service button.
  4. Read the Migrate to App Service dialog.
  5. Enter the name of your Mobile Service in the box provided. For example, if your domain name is contoso.azure-mobile.net, then enter contoso in the box provided.
  6. Click on the tick button.

 

Once done, head on over to the Azure Portal and view your new Mobile App Service!


Bits and Bytes: Forget Parse, Use Azure Mobile Services to Create an API for your Mobile App or Website

When you are building a mobile app or a website, most likely you are going to need some type of data repository. This can be files, video, or some form of “database.” In order to access these repositories you need some sort of access mechanism: sometimes a library or SDK, other times a direct connection to the database. However, with a mobile app, or even a distributed website, the connection to these repositories can be tricky: you need to create some sort of endpoint for them. Back in the day these were called Web Services; now they are called APIs or Web APIs. Creating them can be time consuming as well: you have to create all of the routines needed to get and put data, and you need authentication mechanisms to make sure the proper user is getting and putting data. All in all, this could be very time consuming.

Until today! Enter Azure Mobile Services: a quick and easy way to create and consume an API endpoint for iOS, Android, Windows, and HTML/JavaScript. We will look at creating these APIs using the “Classic Azure Portal.”

[Video: creating an API with Azure Mobile Services]

If you want to use the new method of Azure App Services, here is a video as well.

[Video: creating an API with Azure App Services]

First off, make sure you sign up for your Azure account before we begin.

 

Create Mobile Service

  1. Log in to the Classic Azure Portal. Click on NEW -> MOBILE SERVICE -> CREATE.
  2. Enter a URL for your clients to access the service.
  3. Choose either an existing SQL Server or create a new one (this is used to host your database).
  4. Done

Create Your Data Table(s)

  1. Click on the Mobile Service name you just created.
  2. In the navigation tab, select DATA.
  3. On the bottom of the screen, click CREATE and choose a table name. You can also modify permissions if needed.
  4. Done

Modify Your Schema

  1. Click on the table name you just created and select COLUMNS from the navigation tab.
  2. On the bottom of the screen, click ADD COLUMN and add columns as needed.
  3. You can connect to the database you just created by using a SQL client tool and your connection string.
  4. To find your connection string, click on DB in the left nav, find your database name, and you will see your connection string.
  5. On Windows you can use SQL Management Tools or Cloud Explorer (within Visual Studio); on a Mac, SQLPro for MSSQL or Mac SQL Studio.
  6. FYI: if you are connecting to the SQL Server, you will have to add your IP address to the server firewall.
  7. Done

Access Your Table API

  1. A CRUD (create, read, update, delete) API is automatically created for you.
  2. Click on the table and then click on SCRIPT in the navigation tab.
  3. There will be four scripts that you can modify if needed.
  4. FYI: if the object and property names in your code match the table, there is no need to change anything in the API unless you need specific business logic.
  5. Your endpoint will be: https://MOBILE-SERVICE-NAME.azure-mobile.net/tables/TABLE-NAME
  6. Done
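
To consume the table endpoint from a .NET client, here is a minimal sketch using the classic Mobile Services client SDK (the TodoItem table and the keys are hypothetical):

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class TodoItem
{
    public string Id { get; set; }
    public string Text { get; set; }
}

public static class MobileServiceDemo
{
    public static async Task RunAsync()
    {
        var client = new MobileServiceClient(
            "https://your-mobile-service.azure-mobile.net/",
            "YOUR-APPLICATION-KEY");

        // Insert a row, then read the table back.
        var table = client.GetTable<TodoItem>();
        await table.InsertAsync(new TodoItem { Text = "Hello from the API" });
        var items = await table.ToListAsync();
    }
}
```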

Create A Custom API

  1. From our Mobile Services page, click on the navigation tab API.
  2. Click CREATE and provide your custom API with a name.
  3. Here you can build custom logic to query multiple tables or perform some other routine.
  4. Your endpoint will be: https://MOBILE-SERVICE-NAME.azure-mobile.net/api/API-NAME
  5. Done

View Errors and Logs

  1. From our Mobile Services page, click on the navigation tab LOGS.
  2. All logs will be there.
  3. Done

Bits and Bytes: Adding TypeScript Support to an Existing Project

In the previous Bits and Bytes we looked at Getting Started with TypeScript; now we will see how to add TypeScript support to existing projects.

Similar to getting up and running, we need to make sure Node.js and the TypeScript compiler are installed. Once that is complete, we can add a tsconfig.json file to hold our compiler options. For info on getting up and running, check out Getting Started with TypeScript.

Once we have TypeScript support we can code away within our project. But most developers use third-party JavaScript libraries to help with their website development, from jQuery to full single-page applications using React or Angular. In any of these cases you can still write custom TypeScript, but the interaction with the third-party libraries will be limited.

Here is how you can fix that.

[Video: adding TypeScript support to existing projects]

Typings

In order for TypeScript to perform type checking, the types of these libraries need to be defined somewhere. This is where type definition files help: they give the compiler a “definition” of how the untyped JavaScript code should be typed. We add each definition file to the typings directory under the library of choice (e.g., angular, jquery, etc.). The file extension for such a file is .d.ts, where d stands for definition.
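
For instance, a minimal, hypothetical definition file for an untyped library that exposes a global greeter object might look like this:

```typescript
// typings/greeter/greeter.d.ts
declare namespace greeter {
    function greet(name: string): string;
}
```

With that in place, the compiler can type-check calls into the library:

```typescript
// app.ts
const message: string = greeter.greet("TypeScript");
```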

So where can we get these files? DefinitelyTyped.org or its GitHub repository. There are thousands of libraries covered there, along with documentation on how to create your own.

Once you find your .d.ts file, add it to your project and you should be able to enjoy the benefits of TypeScript, from autocompletion to syntax errors to member documentation.

Try it out and let me know what you think.

Bits and Bytes: Getting Started with TypeScript

You might be wondering what TypeScript is. Is it yet another programming language you need to learn? Another set of APIs and documents to read through? Yes and no. It is a superset of the programming language JavaScript. What does that mean? As a “superset,” everything you love (or hate) about JavaScript is still there, but now you have the ability to create classes, modules, and interfaces to help build your large-scale applications and components. Don’t worry about compatibility, since at the end of the day it compiles down to plain JavaScript. What is also cool is that you can start using TypeScript immediately: if you know JavaScript, you know TypeScript. Your developers and your apps can take full advantage right now and incorporate current JavaScript libraries if needed.

Let’s get started or watch the video.

[Video: getting started with TypeScript]

Installing TypeScript

Remember: any browser, any OS, any host can accept TypeScript. These instructions are platform agnostic, so follow along.

Make sure you have Node.js installed. I won’t go over that, but it is as simple as going to NodeJS.org and downloading it for your OS.

Once you have Node.js installed, you can add the TypeScript compiler via the package manager:
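
```
npm install -g typescript
```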

To confirm it installed correctly, type tsc and you will see the list of compiler commands:

tsc

Editor Configuration

You can use an editor of your choice. Visual Studio 2015 comes with TypeScript support out of the box. If you are running a Mac, or even Windows, a nice editor to use is Visual Studio Code: lightweight, with TypeScript support. I will continue the rest of the article assuming you are using VS Code.

Your project needs a configuration file so the compiler understands the settings it needs. Open up a new file and save it as tsconfig.json. IntelliSense will start helping you create the properties within the file, but a simple one to get up and running would be something like this:
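
The original file wasn’t preserved; a minimal example:

```json
{
    "compilerOptions": {
        "target": "es5",
        "sourceMap": true
    }
}
```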

The item to note here is target: you can specify es3, es5, or es6. That means you can code to the ES6 or ES7 specs and output JavaScript targeting ES5, so you can use the latest and greatest features of the new ECMAScript spec while still supporting browsers that are not there yet. For more information on all of the property options, check out the GitHub page for details.

Now, when we create a .ts file as part of our project, we will have a super rich experience with validation!

Writing TypeScript

Create a new file and call it helloerror.ts.

Since TypeScript is JavaScript, we can add the following to that file:
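
(The original snippet wasn’t preserved; this reconstruction is built around the myErr parameter discussed below.)

```typescript
// helloerror.ts: plain JavaScript is already valid TypeScript.
function logError(myErr) {
    console.log("Error: " + myErr);
}

logError("Something went wrong");
```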

Done! You just created your first TypeScript file. But before we get ahead of ourselves, you might ask about the types and all of the cool stuff. Let’s move on and change up our code a bit:
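
```typescript
// Add a string type annotation to the parameter.
function logError(myErr: string) {
    console.log("Error: " + myErr);
}

logError("Something went wrong");
// logError(42); // compile error: number is not assignable to string
```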

We just added a string type to our parameter. If we now pass a number for myErr, we will get a compile error. Let’s make one more change:
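
```typescript
// Return an object and let the compiler infer its shape.
function logError(myErr: string) {
    return { message: myErr, time: new Date() };
}

const result = logError("Something went wrong");
// Hover over `result` to see the inferred type: { message: string; time: Date }
```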

Now we have an object. Hover over it to see the IntelliSense showing what is being returned. Now we can debug a bit better and understand our code when it grows large in scale. This brings modern development to JavaScript.

Make a TS to a JS

Now if we try to run this in our browser, it will not work: we still have a TS file, and we need a JS file. We could run the compiler from the command line (tsc helloerror.ts), but as you can imagine that would get tiresome after a while.

Let’s use some build tasks within VS Code. Open the command palette with F1, type in Configure Task Runner, and hit Enter.

This creates a tasks.json file in the .vscode directory with an example configuration. You can look through it and optimize your build, but we just need to update the path within the args property to point to our helloerror.ts file.
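
A minimal version of that tasks.json, in the format VS Code generated at the time:

```json
{
    "version": "0.1.0",
    "command": "tsc",
    "isShellCommand": true,
    "args": ["helloerror.ts"],
    "problemMatcher": "$tsc"
}
```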

Press Ctrl+Shift+B to kick off the build process, and you will see helloerror.js show up next to the TS file. Let’s open the file and see what we get.

Holy Cow! It looks like JavaScript! That is because TypeScript is JavaScript!

If there were any build issues they would have shown up on the lower left hand side of the editor.

Done

Now you know TypeScript. Next up: how to take an existing application and add TypeScript support to it.