

Computer Vision and BOT - LearnAI

February 2018 


Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

The names of manufacturers, products, or URLs are provided for informational purposes only and Microsoft makes no representations and warranties, either expressed, implied, or statutory, regarding these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a manufacturer or product does not imply endorsement of Microsoft of the manufacturer or product.

Links may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is not responsible for the contents of any linked site or any link contained in a linked site, or any changes or updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission received from any linked site. Microsoft is providing these links to you only as a convenience, and the inclusion of any link does not imply endorsement of Microsoft of the site or the products contained therein.

© 2017 Microsoft Corporation. All rights reserved.

Microsoft and the trademarks listed at https://www.microsoft.com/en-us/legal/intellectualproperty/Trademarks/Usage/General.aspx are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners  


Contents

Prerequisites
Introduction
Architecture

Setup
    Lab: Setting up your Azure Account
    Lab 1.1: Setting up your Data Science Virtual Machine
    Lab 1.2: Collecting the Keys

Image Processor
    Cognitive Services
    Image Processing Library
    Lab 2.1: Creating ImageProcessor.cs

TestCLI
    Exploring Cosmos DB
    Lab 3.1 (optional): Understanding CosmosDBHelper
    Lab 3.2: Loading Images using TestCLI

Language Addition

Custom Vision API C# Tutorial
    Prerequisites
        Platform requirements
        Training client library
        The Training API key
    Lab: Creating a Custom Vision Application
        Step 1: Create a console application and prepare the training key and the images needed for the example
        Step 2: Create a Custom Vision Service project
        Step 3: Add tags to your project
        Step 4: Upload images to the project
        Step 5: Train the project
        Step 6: Get and use the default prediction endpoint
        Step 7: Run the example
    Further Reading

LUIS
    LUIS
    Lab 1.1: Creating the LUIS service in the portal
    Lab 1.2: Adding intelligence to your applications with LUIS

Azure Search
    Azure Search
    Lab 1.1: Create an Azure Search Service
    Lab 1.2: Create an Azure Search Index

Build a Bot
    Regex and Scorable Groups
    Lab 1.1: Setting up for bot development
    Lab 1.2: Creating a simple bot and running it
    Lab 1.3: Regular expressions and scorable groups

Azure Search
    Lab 2.1: Configure your bot for Azure Search
    Lab 2.2: Update the bot to use Azure Search

LUIS
    Lab 3.1: Update bot to use LUIS

Publish And Register
    Lab 4.1: Publish your bot
    Managing your bot from the portal

Logging Chat Conversations
    1. Objectives
    2. Setup
    3. IActivityLogger
    4. Log Results

File Logger
    1. Objectives
    2. An inefficient way of logging
    3. An efficient way of logging
    4. Log file
    5. Selective Logging

SQL Logger
    1. Objectives
    2. Setup/Pre-requisites
    3. SQL Logging
        SQL Injection
    4. SQL Query Results

Bot Testing
    Development and Testing with Ngrok
        1. Objectives
        2. Setup
        3. Forwarding
    Unit Testing
        1. Objectives
        2. Setup
        3. Echobot
        4. Echobot - Unit Tests

Connect Directly to a Bot – Direct Line
    Objectives
    Setup
    Authentication
    App Config
    Sending and Receiving Messages


Computer Vision and BOT – LearnAI hands-on lab step-by-step

Abstract and learning objectives

This hands-on lab guides you through creating an intelligent console application from end to end using Cognitive Services (specifically the Computer Vision API). We use the ImageProcessing portable class library (PCL), discussing its contents and how to use it in your own applications.

Objectives

In this hands-on lab, you will:

- Learn about the various Cognitive Services APIs
- Understand how to configure your apps to call Cognitive Services
- Build an application that calls various Cognitive Services APIs (specifically Computer Vision) in .NET applications

While there is a focus on Cognitive Services, you will also leverage the following technologies:

- Visual Studio 2017, Community Edition
- Cosmos DB
- Azure Storage
- Data Science Virtual Machine

Prerequisites

This workshop is meant for an AI Developer on Azure. Since this is only a short workshop, there are certain things you need before you arrive.

Firstly, you should have experience with Visual Studio. We will be using it for everything we are building in the workshop, so you should be familiar with how to use it to create applications. Additionally, this is not a class where we teach you how to code or develop applications. We assume you have some familiarity with C# (you can learn here), but assume you do not yet know how to implement solutions with Cognitive Services.

Secondly, you should have experience with the portal and be able to create resources (and spend money) on Azure.

Finally, before arriving at the workshop, we expect you to have completed 1_Setup.

Introduction

We're going to build an end-to-end application that allows you to pull in your own pictures, use Cognitive Services to obtain a caption and some tags about the images, and then store that information in Cosmos DB. In later labs, we will use the NoSQL Store (Cosmos DB) to populate an Azure Search index, and then build a Bot Framework bot using LUIS to allow easy, targeted querying.

Architecture

We will build a simple C# application that allows you to ingest pictures from your local drive, then invoke the Computer Vision API to analyze the images and obtain tags and a description.

Once we have this data, we process it to pull out the details we need and store it all in Cosmos DB, our NoSQL PaaS offering.

In the continuation of this lab throughout the workshop, we'll build an Azure Search Index (Azure Search is our PaaS offering for faceted, fault-tolerant search - think Elastic Search without the management overhead) on top of Cosmos DB. We'll show you how to query your data, and then build a Bot Framework bot to query it. Finally, we'll extend this bot with LUIS to automatically derive intent from your queries and use those to direct your searches intelligently.


Setup

Estimated Time: 30-40 minutes

Lab: Setting up your Azure Account

You may activate an Azure free trial at https://azure.microsoft.com/en-us/free/.

If you have been given an Azure Pass to complete this lab, you may go to http://www.microsoftazurepass.com/ to activate it. Please follow the instructions at https://www.microsoftazurepass.com/howto, which document the activation process. A Microsoft account may have one free trial on Azure and one Azure Pass associated with it, so if you have already activated an Azure Pass on your Microsoft account, you will need to use the free trial or use another Microsoft account.

Lab 1.1: Setting up your Data Science Virtual Machine

After creating an Azure account, you may access the Azure portal. From the portal, create a Resource Group for this lab. Detailed information about the Data Science Virtual Machine can be found online, but we will only cover what's needed for this workshop. We are creating a VM rather than working locally to ensure that we are all working from the same environment, which will make troubleshooting much easier. In your Resource Group, deploy and connect to a "{CSP} Data Science Virtual Machine - Windows 2016" with 2-4 vCPUs and 8-12 GB RAM (examples include, but are not limited to, DS4_V3, B4MS, DS3, and DS3_V2). Choose the location that is closest to you: West US 2, East US, West Europe, or Southeast Asia. All other defaults are fine.

Note: Testing was completed on both West US 2 DS4_V3 and Southeast Asia DS4_V3.

Once you're connected, there are several things you need to do to set up the DSVM for the workshop:

1. In the Cortana search bar, type "git bash" and select "Git Bash Desktop App", or type "cmd" and select "Command Prompt". Type cd c:// and press Enter, then type git clone https://github.com/Azure/LearnAI-Bootcamp.git and press Enter. This copies all of the files from the GitHub site to C:\LearnAI-Bootcamp.

Validation step: Go to C:\LearnAI-Bootcamp and confirm it exists.

2. From File Explorer, open "ImageProcessing.sln", which is under C:\LearnAI-Bootcamp\lab01.1-computer_vision\resources\code\Starting-ImageProcessing. It may take a while for Visual Studio to open for the first time, and you will have to log in to your account. The account you use to log in should be the same as your Azure subscription account. Note: If your company has two-factor authentication, you may not be able to use your PIN to log in. Use your password and mobile phone authentication to log in instead.

Validation step: In the top right corner of Visual Studio, confirm that you see your name and account information.

3. After Visual Studio loads and the solution is open, right-click on TestCLI and select "Set as StartUp Project."

Validation step: TestCLI should appear bold in the Solution Explorer.

Note: If you get a message that TestCLI is unable to load, right-click on TestCLI and select "Install Missing Features". This will prompt you to install .NET Desktop Development. Click Install, then Install again. You may get an error because Visual Studio needs to be closed to install updates; close Visual Studio and then select Retry. It should only take 1-2 minutes to install. Reopen "ImageProcessing.sln" and confirm that you are able to expand TestCLI and see its contents. Then, right-click on TestCLI and select "Set as StartUp Project".

4. Right-click on the solution in Solution Explorer and select "Build Solution".

Validation step: When you build the solution, the only errors you receive should be related to ImageProcessor.cs. You do not need to worry about yellow warning messages.

Lab 1.2: Collecting the Keys

Over the course of this lab, we will collect Cognitive Services keys and storage keys. You should save all of them in a text file so you can easily access them in future labs.

Cognitive Services Keys
- Computer Vision API:

Storage Keys
- Azure Blob Storage Connection String:
- Cosmos DB URI:
- Cosmos DB Key:

In addition, you will need to add the keys to the settings.json file which is under C:\LearnAI-Bootcamp\lab01.1-computer_vision\resources\code\Starting-ImageProcessing\TestCLI\settings.json. You will have to replace VisionKeyHere, ConnectionStringHere, CosmosURIHere, and CosmosKeyHere with their corresponding keys that you collect in the next section. Do not change the blob container (images), the database name (images), or the collection name (metadata).
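For orientation, here is a sketch of roughly what settings.json looks like once filled in. The property names below are illustrative stand-ins, not the repo's actual schema - keep whatever names the repo's settings.json already uses and only swap in your own values for the placeholders named above:

{
    "VisionKey": "VisionKeyHere",
    "AzureStorageConnectionString": "ConnectionStringHere",
    "BlobContainerName": "images",
    "CosmosDBEndpointURI": "CosmosURIHere",
    "CosmosDBKey": "CosmosKeyHere",
    "CosmosDBDatabaseName": "images",
    "CosmosDBCollectionName": "metadata"
}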


Getting Cognitive Services API Keys

Within the Portal, we'll first create keys for the Cognitive Services we'll be using. We'll primarily be using the Computer Vision Cognitive Service, so let's create an API key for that first.

In the Portal, click the "+ New" button (when you hover over it, it will say Create a resource) and then enter computer vision in the search box and choose Computer Vision API:


This will lead you to fill out a few details for the API endpoint you'll be creating: the API you're interested in, where you'd like your endpoint to reside (put it in the West US region or it will not work), and which pricing tier you'd like. We'll be using S1 so that we have the throughput we need for the tutorial. Use the same Resource Group that you used to create your DSVM; we'll also use this resource group for Blob Storage and Cosmos DB. Pin to dashboard so that you can easily find it. Since the Computer Vision API stores images internally at Microsoft (in a secure fashion) to help improve future Cognitive Services Vision offerings, you'll need to check the box that states you're OK with this before you can create the resource.

Double check that you put your Computer Vision service in West US

Modifying settings.json, part one

Once you have created your new API subscription, you can grab the keys from the appropriate section of the blade and add them to your TestCLI's settings.json file.


Note: there are two keys for each of the Cognitive Services APIs you will create. Either one will work. You can read more about multiple keys here.

Setting up Storage

We'll be using two different stores in Azure for this project - one for storing the raw images, and the other for storing the results of our Cognitive Service calls. Azure Blob Storage is made for storing large amounts of data in a format that looks similar to a file-system, and it is a great choice for storing data like images. Azure Cosmos DB is our resilient NoSQL PaaS solution and is incredibly useful for storing loosely structured data like we have with our image metadata results. There are other possible choices (Azure Table Storage, SQL Server), but Cosmos DB gives us the flexibility to evolve our schema freely (like adding data for new services), query it easily, and integrate quickly into Azure Search.

Azure Blob Storage

Detailed "Getting Started" instructions can be found online, but let's just go over what you need for this lab.

In the Portal, click the "+ New" button (when you hover over it, it will say Create a resource), enter storage in the search box, and choose Storage account. Select Create.

Once you click it, you'll be presented with some fields to fill out.

- Choose your storage account name (lowercase letters and numbers).
- Set Account kind to Blob storage.
- Set Replication to Locally-redundant storage (LRS) (this is just to save money).
- Use the same Resource Group as above.
- Set Location to the region that is closest to you from the following list: East US, West US, Southeast Asia, West Europe. (The list of Azure services that are available in each region is at https://azure.microsoft.com/en-us/regions/services/.)
- Pin to dashboard so that you can easily find it.
- All other defaults are fine.

Modifying settings.json, part two

Now that you have an Azure Storage account, let's grab the Connection String and add it to your TestCLI's settings.json.

Cosmos DB

Detailed "Getting Started" instructions can be found online, but we'll walk through what you need for this lab.

In the Portal, click the "+ New" button (when you hover over it, it will say Create a resource) and then enter cosmos db in the search box and choose Azure Cosmos DB and click Create.

Once you click this, you'll have to fill out a few fields as you see fit. Set Location to the region that is closest to you from the following list: East US, West US, Southeast Asia, West Europe.


Select the ID you'd like, subject to the constraint that it must contain only lowercase letters, numbers, or dashes. We will be using the SQL API so we can create a document database that is queryable using SQL syntax, so select SQL as the API. Use the same Resource Group and location as in the previous steps, select Pin to dashboard to make sure it's easy to get back to, and hit Create.

Modifying settings.json, part three

Once creation is complete, open the panel for your new database and select the Keys sub-panel.


You'll need the URI and the PRIMARY KEY for your TestCLI's settings.json file, so copy those in; you're now ready to store images and data in the cloud.

Note: Be sure to turn off your DSVM from the portal after you have completed the Setup lab. When the workshop begins, you will need to start your DSVM from the portal to begin the labs. We recommend turning off your DSVM at the end of each day, and deleting all of the resources you create at the end of the workshop.


Image Processor

Estimated Time: 30-45 minutes

Cognitive Services

Cognitive Services can be used to infuse your apps, websites and bots with algorithms to see, hear, speak, understand, and interpret your user needs through natural methods of communication.

There are five main categories for the available Cognitive Services:

Vision: Image-processing algorithms to identify, caption and moderate your pictures

Knowledge: Map complex information and data in order to solve tasks such as intelligent recommendations and semantic search

Language: Allow your apps to process natural language with pre-built scripts, evaluate sentiment and learn how to recognize what users want

Speech: Convert spoken audio into text, use voice for verification, or add speaker recognition to your app

Search: Add Bing Search APIs to your apps and harness the ability to comb billions of webpages, images, videos, and news with a single API call

You can browse all of the specific APIs in the Services Directory.

As you may recall, the application we'll be building today will use Computer Vision to grab tags and a description.

Let's talk about how we're going to call Cognitive Services in our application.

Image Processing Library

Under resources>code>Starting-ImageProcessing, you'll find the Processing Library. This is a Portable Class Library (PCL), which helps in building cross-platform apps and libraries quickly and easily. It serves as a wrapper around several services. This specific PCL contains some helper classes (in the ServiceHelpers folder) for accessing the Computer Vision API and an "ImageInsights" class to encapsulate the results. Later, we'll create an image processor class that will be responsible for wrapping an image and exposing several methods and properties that act as a bridge to the Cognitive Services.


After creating the image processor (in Lab 2.1), you should be able to pick up this portable class library and drop it in your other projects that involve Cognitive Services (some modification will be required depending on which Cognitive Services you want to use).

ProcessingLibrary: Service Helpers

Service helpers can be used to make your life easier when you're developing your app. One of the key things that service helpers do is provide the ability to detect when the API calls return a call-rate-exceeded error and automatically retry the call (after some delay). They also help with bringing in methods, handling exceptions, and handling the keys.
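To make that retry behavior concrete, here is a minimal sketch of the pattern, assuming the ClientException type from the Microsoft.ProjectOxford.Common package. The real implementation is RunTaskWithAutoRetryOnQuotaLimitExceededError in the service helpers, so treat this as an illustration rather than the library's exact code:

using System;
using System.Threading.Tasks;
using Microsoft.ProjectOxford.Common;

public static class RetrySketch
{
    // Retry an async call when the service answers 429 (call rate exceeded)
    public static async Task<TResult> RunWithAutoRetryAsync<TResult>(
        Func<Task<TResult>> action, int maxRetries = 3, int delayMilliseconds = 500)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return await action();
            }
            catch (ClientException ex) when ((int)ex.HttpStatus == 429 && attempt < maxRetries)
            {
                // Quota hit: wait a bit, then try the call again
                await Task.Delay(delayMilliseconds);
            }
        }
    }
}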

You can find additional service helpers for some of the other Cognitive Services within the Intelligent Kiosk sample application. Utilizing these resources makes it easy to add and remove the service helpers in your future projects as needed.

ProcessingLibrary: The "ImageInsights" class

Take a look at the "ImageInsights" class. You can see that we're calling for Caption and Tags from the images, as well as a unique ImageId (a sketch of the class's shape appears below). "ImageInsights" pieces together only the information we want from the Computer Vision API (or from Cognitive Services generally, if we choose to call multiple).

Now let's take a step back for a minute. It isn't quite as simple as creating the "ImageInsights" class and copying over some methods/error handling from the service helpers. We still have to call the API and process the images somewhere. For the purpose of this lab, we are going to walk through creating ImageProcessor.cs, but in future projects, feel free to add this class to your PCL and start from there (it will need modification depending on which Cognitive Services you are calling and what you are processing - images, text, voice, etc.).
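For reference, the class is essentially a small data holder along these lines (a sketch based on the description above; see ImageInsights.cs in the ProcessingLibrary for the actual definition):

public class ImageInsights
{
    public string ImageId { get; set; }
    public string Caption { get; set; }
    public string[] Tags { get; set; }
}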

Lab 2.1: Creating ImageProcessor.cs

Right-click on the solution and select "Build Solution". If you have errors related to ImageProcessor.cs, you can ignore them for now, because we are about to address them. Navigate to ImageProcessor.cs within ProcessingLibrary.

Step 1: Add the following using directives to the top of the class, above the namespace:

using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.ProjectOxford.Vision;
using ServiceHelpers;

Project Oxford was the project where many Cognitive Services got their start. As you can see, the NuGet packages were even labeled under Project Oxford. In this scenario, we'll call Microsoft.ProjectOxford.Vision for the Computer Vision API. Additionally, we'll reference our service helpers (remember, these will make our lives easier). You'll have to reference different packages depending on which Cognitive Services you're leveraging in your application.

Step 2: In ImageProcessor.cs, we will start by creating the method we will use to process the image, ProcessImageAsync. Paste the following code within the ImageProcessor class (between the { }):

public static async Task<ImageInsights> ProcessImageAsync(Func<Task<Stream>> imageStreamCallback, string imageId)
{

Hint: Visual Studio will throw an error, due to the open bracket. Add what is necessary to the end of the method. Ask a neighbor if you need help.

In the above code, we use Func<Task<Stream>> because we want to make sure we can process the image multiple times (once for each service that needs it), so we have a Func that can hand us back a way to get the stream. Since getting a stream is usually an async operation, rather than the Func handing back the stream itself, it hands back a task that allows us to do so in an async fashion.

Step 3: In ImageProcessor.cs, within the ProcessImageAsync method, we're going to set up an array of the visual features we'll use throughout the processor. As you can see, these are the main attributes we want to call for ImageInsights.cs. Add the code below between the { } of ProcessImageAsync:

VisualFeature[] DefaultVisualFeaturesList = new VisualFeature[] { VisualFeature.Tags, VisualFeature.Description };

Step 4: Next, we want to call the Cognitive Service (specifically Computer Vision) and put the results in imageAnalysisResult. Use the code below to call the Computer Vision API (with the help of VisionServiceHelper.cs) and store the results in imageAnalysisResult. Near the bottom of VisionServiceHelper.cs, review the methods available for you to call (RunTaskWithAutoRetryOnQuotaLimitExceededError, DescribeAsync, AnalyzeImageAsync, RecognizeTextAsync). Paste the following snippet into ImageProcessor.cs, at the end of the ProcessImageAsync method, and replace the "_" with the method you need to call in order to return the visual features.

var imageAnalysisResult = await VisionServiceHelper._(imageStreamCallback, DefaultVisualFeaturesList);

Step 5: Now that we've called the Computer Vision service, we want to create an entry in "ImageInsights" with only the following results: ImageId, Caption, and Tags (you can confirm this by revisiting ImageInsights.cs). Paste the following code below var imageAnalysisResult and try to fill in the code for ImageId, Caption, and Tags:

ImageInsights result = new ImageInsights
{
    ImageId = ,
    Caption = ,
    Tags =
};

Depending on your C# dev background, this may not be an easy task. Here are some hints:

1. When you defined the method, you specified a result. This should help you determine what ImageId is (it's actually really simple).

2. For Caption and Tags, start with imageAnalysisResult. When you type the ".", you can use the dropdown menu (IntelliSense) to help.

3. For Caption, you have to specify that you want only the first caption by using [0].

4. For Tags, you have to put the tags into an array with Select(t => t.Name).ToArray().

Still stuck? You can take a peek at the solution at resources>code>Finished-ImageProcessing>ProcessingLibrary>ImageProcessor.cs

So now we have the caption and tags that we need from the Computer Vision API, and each image's result (with imageId) is stored in "ImageInsights".


Step 6: Lastly, we need to close out the method by adding the following line to the end of the method:

return result;

Now that you've built ImageProcessor.cs, don't forget to save it!

Want to make sure you set up ImageProcessor.cs correctly? You can find the full class here.
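If you want to sanity-check your work without opening the finished project, the assembled method ends up looking roughly like the sketch below. It assumes AnalyzeImageAsync is the helper method chosen in Step 4, and that the analysis result exposes Description.Captions and Tags the way the Microsoft.ProjectOxford.Vision contract does:

public static async Task<ImageInsights> ProcessImageAsync(Func<Task<Stream>> imageStreamCallback, string imageId)
{
    // The visual features we want back from the Computer Vision API
    VisualFeature[] DefaultVisualFeaturesList = new VisualFeature[] { VisualFeature.Tags, VisualFeature.Description };

    // Call the API through the service helper, which retries on rate-limit errors
    var imageAnalysisResult = await VisionServiceHelper.AnalyzeImageAsync(imageStreamCallback, DefaultVisualFeaturesList);

    // Keep only what ImageInsights needs: the id, the first caption, and the tag names
    ImageInsights result = new ImageInsights
    {
        ImageId = imageId,
        Caption = imageAnalysisResult.Description.Captions[0].Text,
        Tags = imageAnalysisResult.Tags.Select(t => t.Name).ToArray()
    };

    return result;
}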


TestCLI

Estimated Time: 20-25 minutes

Exploring Cosmos DB

Azure Cosmos DB is our resilient NoSQL PaaS solution and is incredibly useful for storing loosely structured data like we have with our image metadata results. There are other possible choices (Azure Table Storage, SQL Server), but Cosmos DB gives us the flexibility to evolve our schema freely (like adding data for new services), query it easily, and integrate quickly into Azure Search (which we'll do in a later lab).

Lab 3.1 (optional): Understanding CosmosDBHelper

Cosmos DB is not a focus of this workshop, but if you're interested in what's going on, here are some highlights from the code we will be using:

Navigate to the CosmosDBHelper.cs class in the ImageStorageLibrary. Review the code and the comments. Many of the implementations used can be found in the Getting Started guide.

Go to TestCLI's Util.cs and review the ImageMetadata class (code and comments). This is where we turn the ImageInsights we retrieve from Cognitive Services into appropriate Metadata to be stored into Cosmos DB.

Finally, look in Program.cs in TestCLI and at ProcessDirectoryAsync. First, we check if the image and metadata have already been uploaded - we can use CosmosDBHelper to find the document by ID and to return null if the document doesn't exist. Next, if we've set forceUpdate or the image hasn't been processed before, we'll call the Cognitive Services using ImageProcessor from the ProcessingLibrary and retrieve the ImageInsights, which we add to our current ImageMetadata.

Once all of that is complete, we can store our image - first the actual image into Blob Storage using our BlobStorageHelper instance, and then the ImageMetadata into Cosmos DB using our CosmosDBHelper instance. If the document already existed (based on our previous check), we update the existing document; otherwise, we create a new one.
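In outline, the flow described above looks something like this. It is an illustrative sketch only - the helper method names are hypothetical stand-ins, and the real logic (including error handling) lives in TestCLI's Program.cs:

// Hypothetical outline of the per-image flow in ProcessDirectoryAsync
static async Task ProcessDirectoryAsync(string imageDirectory, bool forceUpdate)
{
    foreach (var imagePath in Directory.GetFiles(imageDirectory))
    {
        var id = Path.GetFileName(imagePath);

        // 1. Skip images we've already processed, unless forceUpdate is set
        var existing = await cosmosDBHelper.FindDocumentByIdAsync(id); // hypothetical name; returns null if absent
        if (existing != null && !forceUpdate) continue;

        // 2. Call Cognitive Services and fold the ImageInsights into the metadata
        var metadata = existing ?? new ImageMetadata(imagePath);
        var insights = await ImageProcessor.ProcessImageAsync(
            () => Task.FromResult<Stream>(File.OpenRead(imagePath)), id);
        metadata.AddInsights(insights); // hypothetical name

        // 3. Store the raw image in Blob Storage, then create or update the Cosmos DB document
        await blobStorageHelper.UploadImageAsync(imagePath); // hypothetical name
        if (existing != null)
            await cosmosDBHelper.UpdateDocumentAsync(metadata); // hypothetical name
        else
            await cosmosDBHelper.CreateDocumentAsync(metadata); // hypothetical name
    }
}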

Lab 3.2: Loading Images using TestCLI

We will implement the main processing and storage code as a command-line/console application because this allows you to concentrate on the processing code without having to worry about event loops, forms, or any other UX-related distractions. Feel free to add your own UX later.


Once you've set your Cognitive Services API keys, your Azure Blob Storage Connection String, and your Cosmos DB Endpoint URI and Key in your TestCLI's settings.json, you can run the TestCLI. Open Command Prompt and navigate to the "C:\LearnAI-Bootcamp\lab01.1-computer_vision\resources\code\Starting-ImageProcessing\TestCLI\bin\Debug" folder (Hint: use the "cd" command to change directories). Then enter TestCLI.exe. You should get the following result:

> TestCLI.exe

Usage: [options]

Options:
  -force     Use to force update even if file has already been added.
  -settings  The settings file (optional, will use embedded resource settings.json if not set)
  -process   The directory to process
  -query     The query to run
  -? | -h | --help  Show help information

By default, it will load your settings from settings.json (it builds it into the .exe), but you can provide your own using the -settings flag. To load images (and their metadata from Cognitive Services) into your cloud storage, you can just tell TestCLI to -process your image directory as follows:

> TestCLI.exe -process c:\learnai-bootcamp\lab01.1-computer_vision\resources\sample_images

Once it's done processing, you can query against your Cosmos DB directly using TestCLI as follows:

> TestCLI.exe -query "select * from images"

Take some time to look through the sample_images (you can find them in resources/sample_images) and compare the images to the results in your application.


Language Addition

What if we needed to port our application to another language? Modify your code to call the Translator API on the caption and tags you get back from the Vision service.

Look into the Image Processing Library at the Service Helpers. You can copy one of these and use it to invoke the Translator API, then hook it into ImageProcessor.cs. Try adding translated versions to your ImageInsights class, and then wire them through to the Cosmos DB ImageMetadata class.
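As a starting point, a caption or tag can be translated with a plain REST call. The sketch below goes straight at the Translator Text REST API (v3.0) with HttpClient and assumes you have created a Translator Text key in the portal; in the lab you would wrap this in a service helper like the existing ones rather than calling it inline:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class TranslatorSketch
{
    private static readonly HttpClient client = new HttpClient();

    // Translate a single string (e.g., a caption or tag) to the target language
    public static async Task<string> TranslateAsync(string text, string toLanguage, string translatorKey)
    {
        var uri = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=" + toLanguage;
        var body = "[{\"Text\":\"" + text.Replace("\"", "\\\"") + "\"}]";

        using (var request = new HttpRequestMessage(HttpMethod.Post, uri))
        {
            request.Headers.Add("Ocp-Apim-Subscription-Key", translatorKey);
            request.Content = new StringContent(body, Encoding.UTF8, "application/json");

            var response = await client.SendAsync(request);
            response.EnsureSuccessStatusCode();

            // Returns the raw JSON; parse out translations[0].text with your JSON library of choice
            return await response.Content.ReadAsStringAsync();
        }
    }
}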


Custom Vision API C# Tutorial

The goal of this tutorial is to explore a basic Windows application that uses the Custom Vision API to create a project, add tags to it, upload images, train the project, obtain the default prediction endpoint URL for the project, and use the endpoint to programmatically test an image. You can use this open source example as a template for building your own app for Windows using the Custom Vision API.

Prerequisites

 

Platform requirements

This example has been tested using the .NET Framework with Visual Studio 2017, Community Edition.

 

Training client library

You may need to install the client library, depending on your settings within Visual Studio. The easiest way to get the training client library is to install the Microsoft.Cognitive.CustomVision.Training package from NuGet.

You can install it through the Visual Studio Package Manager. Access the Package Manager Console by navigating through:

Tools -> NuGet Package Manager -> Package Manager Console

In that console, add the NuGet package with:

Install-Package Microsoft.Cognitive.CustomVision.Training -Version 1.0.0

The Training API key

You also need to have a training API key. The training API key allows you to create, manage, and train Custom Vision projects programmatically. All operations on https://customvision.ai are exposed through this library, allowing you to automate all aspects of the Custom Vision Service. You can obtain a key by creating a new project at https://customvision.ai and then clicking on the "settings" gear in the top right.

 


Lab: Creating a Custom Vision Application

Step 1: Create a console application and prepare the training key and the images needed for the example. 

Start Visual Studio 2017, Community Edition, and open the Visual Studio solution named CustomVision.Sample.sln in the sub-directory of where this lab is located:

Resources/Starter/CustomVision.Sample/CustomVision.Sample.sln

This code defines and calls two helper methods. The method called GetTrainingKey prepares the training key. The one called LoadImagesFromDisk loads two sets of images that this example uses to train the project, and one test image that the example loads to demonstrate the use of the default prediction endpoint. On opening the project, the following code should be displayed from line 35:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using Microsoft.Cognitive.CustomVision;

namespace CustomVision.Sample
{
    class Program
    {
        private static List<MemoryStream> hemlockImages;

        private static List<MemoryStream> japaneseCherryImages;

        private static MemoryStream testImage;

        static void Main(string[] args)
        {
            // You can either add your training key here, pass it on the command line, or type it in when the program runs
            string trainingKey = GetTrainingKey("<your key here>", args);

            // Create the Api, passing in a credentials object that contains the training key
            TrainingApiCredentials trainingCredentials = new TrainingApiCredentials(trainingKey);
            TrainingApi trainingApi = new TrainingApi(trainingCredentials);
        }

        private static string GetTrainingKey(string trainingKey, string[] args)
        {
            if (string.IsNullOrWhiteSpace(trainingKey) || trainingKey.Equals("<your key here>"))
            {
                if (args.Length >= 1)
                {
                    trainingKey = args[0];
                }

                while (string.IsNullOrWhiteSpace(trainingKey) || trainingKey.Length != 32)
                {
                    Console.Write("Enter your training key: ");
                    trainingKey = Console.ReadLine();
                }
                Console.WriteLine();
            }

            return trainingKey;
        }

        private static void LoadImagesFromDisk()
        {
            // this loads the images to be uploaded from disk into memory
            hemlockImages = Directory.GetFiles(@"..\..\..\..\Images\Hemlock").Select(f => new MemoryStream(File.ReadAllBytes(f))).ToList();
            japaneseCherryImages = Directory.GetFiles(@"..\..\..\..\Images\Japanese Cherry").Select(f => new MemoryStream(File.ReadAllBytes(f))).ToList();
            testImage = new MemoryStream(File.ReadAllBytes(@"..\..\..\..\Images\Test\test_image.jpg"));
        }
    }
}

Step 2: Create a Custom Vision Service project

To create a new Custom Vision Service project, add the following code in the body of the Main() method after the call to new TrainingApi().

// Create a new project
Console.WriteLine("Creating new project:");
var project = trainingApi.CreateProject("My New Project");

Step 3: Add tags to your project

To add tags to your project, insert the following code after the call to CreateProject("My New Project");.

// Make two tags in the new project
var hemlockTag = trainingApi.CreateTag(project.Id, "Hemlock");
var japaneseCherryTag = trainingApi.CreateTag(project.Id, "Japanese Cherry");

Step 4: Upload images to the project

To add the images we have in memory to the project, insert the following code after the call to CreateTag(project.Id, "Japanese Cherry").

// Add some images to the tags
Console.WriteLine("\tUploading images");
LoadImagesFromDisk();

// Images can be uploaded one at a time
foreach (var image in hemlockImages)
{
    trainingApi.CreateImagesFromData(project.Id, image, new List<string>() { hemlockTag.Id.ToString() });
}

// Or uploaded in a single batch
trainingApi.CreateImagesFromData(project.Id, japaneseCherryImages, new List<Guid>() { japaneseCherryTag.Id });

Step 5: Train the project

Now that we have added tags and images to the project, we can train it. Insert the following code after the end of the code that you added in the prior step. This creates the first iteration in the project. We can then mark this iteration as the default iteration.

// Now there are images with tags, start training the project
Console.WriteLine("\tTraining");
var iteration = trainingApi.TrainProject(project.Id);

// The returned iteration will be in progress, and can be queried periodically to see when it has completed
while (iteration.Status == "Training")
{
    Thread.Sleep(1000);

    // Re-query the iteration to get its updated status
    iteration = trainingApi.GetIteration(project.Id, iteration.Id);
}

// The iteration is now trained. Make it the default project endpoint
iteration.IsDefault = true;
trainingApi.UpdateIteration(project.Id, iteration.Id, iteration);
Console.WriteLine("Done!\n");


Step 6: Get and use the default prediction endpoint

We are now ready to use the model for prediction. First we obtain the endpoint associated with the default iteration. Then we send a test image to the project using that endpoint. Insert the code after the training code you have just entered.

// Now there is a trained endpoint, it can be used to make a prediction

// Get the prediction key, which is used in place of the training key when making predictions
var account = trainingApi.GetAccountInfo();
var predictionKey = account.Keys.PredictionKeys.PrimaryKey;

// Create a prediction endpoint, passing in a prediction credentials object that contains the obtained prediction key
PredictionEndpointCredentials predictionEndpointCredentials = new PredictionEndpointCredentials(predictionKey);
PredictionEndpoint endpoint = new PredictionEndpoint(predictionEndpointCredentials);

// Make a prediction against the new project
Console.WriteLine("Making a prediction:");
var result = endpoint.PredictImage(project.Id, testImage);

// Loop over each prediction and write out the results
foreach (var c in result.Predictions)
{
    Console.WriteLine($"\t{c.Tag}: {c.Probability:P1}");
}
Console.ReadKey();

Step 7: Run the example

Build and run the solution. You will be required to input your training API key into the console app when running the solution, so have it at the ready. The training and prediction of the images can take around 2 minutes. The prediction results appear on the console.

Further Reading

The source code for the Windows client library is available on GitHub.

The client library includes multiple sample applications, and this tutorial is based on the CustomVision.Sample demo within that repository.


LUIS

Estimated Time: 20-30 minutes

LUIS

First, let's learn about Microsoft's Language Understanding Intelligent Service (LUIS).

Now that we know what LUIS is, we'll want to plan our LUIS app. Tomorrow, we'll be creating a bot ("PictureBot") that returns images based on our search, which we can then share or order. We will need to create intents that trigger the different actions that our bot can do, and then create entities to model some parameters that are required to execute that action. For example, an intent for our PictureBot may be "SearchPics", and it triggers the Azure Search service to look for photos, which requires a "facet" entity to know what to search for. You can see more examples for planning your app here.

Once we've thought out our app, we are ready to build and train it. These are the steps you will generally take when creating LUIS applications:

1. Add intents
2. Add utterances
3. Add entities
4. Improve performance using features
5. Train and test
6. Use active learning
7. Publish

Lab 1.1: Creating the LUIS service in the portal

In the Portal, hit Create a resource and then enter LUIS in the search box and choose Language Understanding Intelligent Service:

This will lead you to fill out a few details for the API endpoint you'll be creating, choosing the API you're interested in and where you'd like your endpoint to reside, as well as what pricing plan you'd like. Put it in a location that is close to you and available. The free tier is sufficient for this lab. Since LUIS stores the utterances you send it internally at Microsoft (in a secure fashion), to help improve future Cognitive Services offerings, you'll need to check the box to confirm you're OK with this.

Once you have created your new API subscription, you can grab the key from the appropriate section of the blade and add it to your list of keys.


Lab 1.2: Adding intelligence to your applications with LUIS

In a lab tomorrow, we will create our PictureBot. First, let's look at how we can use LUIS to add some natural language capabilities. LUIS allows you to map natural language utterances (words/phrases/sentences the user might say when talking to the bot) to intents (tasks or actions the user wants to perform). For our application, we might have several intents: finding pictures, sharing pictures, and ordering prints of pictures, for example. We can give a few example utterances as ways to ask for each of these things, and LUIS will map additional new utterances to each intent based on what it has learned.

Navigate to https://www.luis.ai and sign in using your Microsoft account. (This should be the same account that you used to create the LUIS key in the previous section). You should be redirected to a list of your LUIS applications. We will create a new LUIS app to support our bot.

Fun Aside: Notice that there is also an "Import App" next to the "New App" button on the current page. After creating your LUIS application, you have the ability to export the entire app as JSON and check it into source control. This is a recommended best practice, so you can version your LUIS models as you version your code. An exported LUIS app may be re-imported using that "Import App" button. If you fall behind during the lab and want to cheat, you can click the "Import App" button and import the LUIS model.


From the main page, click the "Create new app" button. Give it a name (I chose "PictureBotLuisModel") and set the Culture to "English". You can optionally provide a description. Then click "Done".

You will be taken to the Build section for your new app.


Note that there is one intent called "None". Random utterances that don't map to any of your intents may be mapped to "None".

We want our bot to be able to do the following things:

- Search/find pictures
- Share pictures on social media
- Order prints of pictures
- Greet the user (although this can also be done other ways, as we will see later)

Let's create intents for the user requesting each of these. Click the "Create new intent" button.

Name the first intent "Greeting" and click "Done". Then give several examples of things the user might say when greeting the bot, pressing "Enter" after each one.


Let's see how to create an entity. When the user requests to search the pictures, they may specify what they are looking for. Let's capture that in an entity.

Click on "Entities" in the left-hand column and then click "Create new entity". Give it an entity name "facet" and entity type "Simple". Then click "Done".

Next, click "Intents" in the left-hand sidebar and then click the blue "Create new intent" button. Give it an intent name of "SearchPics" and then click "Done".

Just as we did for the Greeting intent, let's add some sample utterances (words/phrases/sentences the user might say when talking to the bot). People might search for pictures in many ways. Feel free to use some of the utterances below, and add your own wording for how you would ask a bot to search for pictures.

Find outdoor pics
Are there pictures of a train?
Find pictures of food.
Search for photos of a 6-month-old boy
Please give me pics of 20-year-old women
Show me beach pics
I want to find dog photos
Search for pictures of men indoors
Show me pictures of girls looking happy
I want to see pics of sad boys
Show me happy baby pics

Once we have some utterances, we have to teach LUIS how to pick out the search topic as the "facet" entity. Whatever the "facet" entity picks up is what will be searched. Hover over and click the word (or click and drag to select a group of words), and then select the "facet" entity.

So your utterances may become something like this when facets are labeled:


Finally, click "Intents" in the left sidebar and add two more intents:

Name one intent "SharePic". This might be identified by utterances like "Share this pic", "Can you tweet that?", or "post to Twitter".

Create another intent named "OrderPic". This could be communicated with utterances like "Print this picture", "I would like to order prints", "Can I get an 8x10 of that one?", and "Order wallets".

When choosing utterances, it can be helpful to use a combination of questions, commands, and "I would like to..." formats.

We are now ready to train our model. Click "Train" in the top right bar. This builds a model to do utterance --> intent mapping with the training data you've provided. Training is not always immediate. Sometimes, it gets queued and can take several minutes.

Then click on "Publish" in the top bar. You have several options when you publish your app, including enabling including all predicted intent scores or Bing spell checker. If you have not already done so, select the endpoint key that you set up earlier, or follow the link to add a key from your Azure account. You can leave the endpoint slot as "Production". Then click "Publish".


Publishing creates an endpoint to call the LUIS model. The endpoint URL will be displayed; its use will be explained in a later lab.

Click on "Test" in the top right bar. Try typing a few utterances and see the intents returned. Familiarize yourself with Interactive Testing and Relabeling Utterances/Retraining as you may want to do this now or in a future lab.

One quick example is shown below. I noticed that my model incorrectly assigned "send me a swimming photo" to SharePic, when it should be SearchPics. I reassigned the intent.


Now I need to retrain my app by selecting the Train button. I then tested the same utterance and compared the results between my recently trained and previously published model.


You can also test your published endpoint in a browser. Copy the URL, then replace {YOUR-KEY-HERE} with one of the keys listed in the Key String column for the resource you want to use. To open this URL in your browser, set the URL parameter &q to your test query: for example, append &q=Find pictures of dogs to your URL, and then press Enter. The browser displays the JSON response of your HTTP endpoint.
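As a concrete, hypothetical example, a finished test URL for a LUIS app published to the West US region looks roughly like this, with the app ID and key replaced by your own values:

https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{your-app-id}?subscription-key={your-key}&verbose=true&q=Find pictures of dogs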

Finish early? Try these extra credit tasks:

Create additional entities that can be leveraged by the "SearchPics" intent.

Explore using custom entities of entity type "List" to capture emotion and gender. See the example of emotion below.


Note: When you add more entities or features, don't forget to go to Intents > Utterances and confirm/add more utterances with the entities you add. You will also need to retrain and publish your model.

If you still have time, spend time exploring the www.luis.ai site. Select "Prebuilt domains" and see what is already available for you. You can also review some of the other features.


Azure Search

Estimated Time: 20-30 minutes

Azure Search is a search-as-a-service solution that allows developers to incorporate great search experiences into applications without managing infrastructure or needing to become search experts.

Developers look to PaaS services in Azure to achieve better and faster results in their apps. While search is key to many types of applications, web search engines have set the bar high: users expect instant results, auto-complete as they type, highlighting of hits within the results, great ranking, and the ability to understand what they are looking for, even if they misspell it or include extra words.

Search is hard, and rarely a core expertise area. From an infrastructure standpoint, it needs high availability, durability, scale, and operations. From a functionality standpoint, it needs ranking, language support, and geospatial capabilities.


The example above illustrates some of the components users are expecting in their search experience. Azure Search can accomplish these user experience features, along with giving you monitoring and reporting, simple scoring, and tools for prototyping and inspection.

Typical Workflow:

1. Provision service
   You can create or provision an Azure Search service from the portal or with PowerShell.

2. Create an index
   An index is a container for data; think "table". It has a schema, CORS options, and search options. You can create it in the portal or during app initialization.

3. Index data
   There are two ways to populate an index with your data. The first option is to manually push your data into the index using the Azure Search REST API or .NET SDK. The second option is to point a supported data source at your index and let Azure Search automatically pull in the data on a schedule.

4. Search an index
   When submitting search requests to Azure Search, you can use simple search options, and you can filter, sort, project, and page over results. You can address spelling mistakes, phonetics, and regular expressions, and there are options for working with search and suggest. These query parameters allow you to achieve deeper control of the full-text search experience. A minimal query sketch follows this list.
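As a rough sketch of step 4, a query against an existing index using the Microsoft.Azure.Search .NET SDK (the same package we install in a later lab) looks something like the following; the service name, query key, and index name are placeholders:

using System;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// Connect to an existing index with a query key (placeholders below).
var client = new SearchIndexClient("your-search-service", "images", new SearchCredentials("your-query-key"));

// Full-text search for "water", filtered to documents tagged 'beach', first 10 hits.
var parameters = new SearchParameters
{
    Filter = "Tags/any(t: t eq 'beach')",
    Top = 10
};
DocumentSearchResult results = client.Documents.Search("water", parameters);

foreach (SearchResult result in results.Results)
{
    // Each document comes back as a dictionary of the Retrievable fields.
    Console.WriteLine(result.Document["id"]);
}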

Lab 1.1: Create an Azure Search Service

Within the Azure Portal, click Create a resource, enter "azure search" in the search bar, and click Azure Search->Create.

Once you click this, you'll have to fill out a few fields as you see fit. For this lab, the free "F0" tier is sufficient. You are only able to have one free Azure Search instance per subscription, so if you or another member on your subscription has already created one, you will need to use the "Basic" pricing tier. Use one Resource Group for all of the labs in this workshop; if you already have a resource group for this workshop, just use that one. Choose one of the following locations: West US 2, East US, West Europe, Southeast Asia.

Once creation is complete, open the panel for your new search service.

Lab 1.2: Create an Azure Search Index

An index is a persistent store of documents and other constructs used by an Azure Search service. An index is like a database that holds your data and can accept search queries. You define the index schema to reflect the structure of the documents you wish to search, similar to fields in a database. These fields can have properties that indicate things such as whether a field is full-text searchable or filterable. You can populate content into Azure Search by programmatically pushing content or by using the Azure Search Indexer (which can crawl common datastores for data).

For this lab, we will use the Azure Search Indexer for Cosmos DB to crawl the data in the Cosmos DB collection.

Within the Azure Search blade you just created, click Import Data->Data Source->Document DB.

Once you click this, choose a name for the Cosmos DB data source. If you completed the previous lab, lab01.1-computer_vision, choose the Cosmos DB account where your data resides, as well as the corresponding Container and Collections. If you did not complete the previous lab, select "Or input a connection string" and paste in the following (TODO update connection string):

AccountEndpoint=https://anthobootcampdb.documents.azure.com:443/;AccountKey=XzxwNshrS92RAycyikcSNLOtrDdTMQIYsOHDLko22QDHVCNi4b3YW7pqrzhDPZJupwelnlABrqY4m3nCr686Yw==;

In both cases, the Database should be "images" and the Collection should be "metadata". Click OK.

At this point Azure Search will connect to your Cosmos DB container and analyze a few documents to identify a default schema for your Azure Search Index. After this is complete, you can set the properties for the fields as needed by your application.

Note: You may see a warning that "_ts" fields are not valid field names. You can ignore this for our labs, but you can read more about it here.

Update the Index name to: images

Update the Key to: id (which uniquely identifies each document)

Set all fields to be Retrievable (to allow the client to retrieve these fields when searched)

Set the field Tags to be Filterable (to allow the client to filter results based on these values)

Set the field Tags to be Facetable (to allow the client to group the results by count; for example, "there were 5 pictures that had a Tag of 'beach'" for a search result)

Set the fields Caption and Tags to be Searchable (to allow the client to do full-text search over the text in these fields)
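These attributes map directly onto query parameters. For instance, once the index is populated, a query against it could combine them roughly like this (illustrative values; Tags is assumed to be a string collection):

search=water&$filter=Tags/any(t: t eq 'beach')&facet=Tags&$top=10

Here, search matches against the Searchable fields, $filter relies on Tags being Filterable, and facet relies on Tags being Facetable.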


At this point we will configure the Azure Search Analyzers. At a high level, you can think of an analyzer as the thing that takes the terms a user enters and works to find the best matching terms in the Index. Azure Search includes analyzers that are used in technologies like Bing and Office that have deep understanding of 56 languages.

Click the Analyzer tab and set the fields Caption and Tags to use the English-Microsoft analyzer.


For the final Index configuration step, we will create a Suggester to set the fields that will be used for type-ahead, allowing the user to type parts of a word and have Azure Search look for best matches in these fields. To learn more about suggesters, and how to extend your searches to support fuzzy matching (which returns results based on close matches even if the user misspells a word), check out this example.

Click the Suggester tab, enter a Suggester Name of "sg", and choose Tags as the field to look in for term suggestions.


Click OK to complete the configuration of the Indexer. You could set a schedule for how often the Indexer should check for changes; for this lab, we will just run it once.

Click Advanced Options and choose to Base 64 Encode Keys to ensure that the ID field only uses characters supported in the Azure Search key field.

Click OK three times to start the Indexer job, which will begin importing the data from the Cosmos DB database.


Query the Search Index

You should see a message pop up indicating that Indexing has started. If you wish to check the status of the Index, you can choose the "Indexes" option in the main Azure Search blade.

At this point we can try searching the index.

Click Search Explorer and in the resulting blade choose your Index if it is not already selected.

Click Search to search for all documents. Try searching for "water", or something else, and use ctrl+click to select and view the URLs. Were your results what you expected?


In the resulting JSON, you'll see a number after @search.score. Scoring refers to the computation of a search score for every item returned in search results. The score is an indicator of an item's relevance in the context of the current search operation: the higher the score, the more relevant the item. In search results, items are rank ordered from high to low based on the search scores calculated for each item.

Azure Search uses default scoring to compute an initial score, but you can customize the calculation through a scoring profile. There is an extra lab at the end of this workshop if you want to get some hands-on experience with using term boosting for scoring.
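For a peek ahead at that extra lab: a scoring profile is declared in the index definition, and a minimal sketch that boosts matches found in Tags over matches found in Caption might look like the following (profile name and weights are illustrative):

"scoringProfiles": [
  {
    "name": "boostTags",
    "text": {
      "weights": { "Tags": 5, "Caption": 2 }
    }
  }
]

You would then pass scoringProfile=boostTags as a query parameter to apply it.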


Build a Bot

Estimated Time: 10-15 minutes

Regex and Scorable Groups

We assume that you've had some exposure to the Bot Framework. If you have, great. If not, don't worry too much, you'll learn a lot in this section. We recommend completing this Microsoft Virtual Academy course and checking out the documentation.

Lab 1.1: Setting up for bot development

We will be developing a bot using the C# SDK. To get started, you need two things:

1. The Bot Framework project template, which you can download here. The file is called "Bot Application.zip" and you should save it into the \Documents\Visual Studio 2017\Templates\ProjectTemplates\Visual C#\ directory. Just drop the whole zipped file in there; no need to unzip.

2. Download the Bot Framework Emulator for testing your bot locally here. The emulator installs to c:\Users\your-username\AppData\Local\botframework\app-3.5.33\botframework-emulator.exe or your Downloads folder, depending on browser.

Lab 1.2: Creating a simple bot and running it

In Visual Studio, go to File --> New Project and create a Bot Application named "PictureBot". Make sure you name it "PictureBot" or you may have issues later on.


The rest of the "Creating a simple bot and running it" lab is optional. Per the prerequisites, you should have experience working with the Bot Framework. You can hit F5 to confirm it builds correctly, and move on to the next lab.

Browse around and examine the sample bot code, which is an echo bot that repeats back your message and its length in characters. In particular, note:

In WebApiConfig.cs under App_Start, the route template is api/{controller}/{id}, where the id is optional. That is why we always call the bot's endpoint with api/messages appended at the end (the standard registration is sketched after this list).

The MessagesController.cs under Controllers is therefore the entry point into your bot. Notice that a bot can respond to many different activity types, and sending a message will invoke the RootDialog.

In RootDialog.cs under Dialogs, "StartAsync" is the entry point which waits for a message from the user, and "MessageReceivedAsync" is the method that will handle the message once received and then wait for further messages. We can use "context.PostAsync" to send a message from the bot back to the user.
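For reference, that route template is registered in WebApiConfig.cs with the standard ASP.NET Web API pattern, roughly as follows (the sample project's file may differ slightly):

using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // "id" is optional, which is why the bot is reached at .../api/messages
        // with no id segment.
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}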

Click F5 to run the sample code. NuGet should take care of downloading the appropriate dependencies.

The code will launch your default web browser at a URL similar to http://localhost:3979/.


Fun Aside: why this port number? It is set in your project properties. In your Solution Explorer, double-click "Properties" and select the "Web" tab. The Project URL is set in the "Servers" section.

Make sure your project is still running (hit F5 again if you stopped to look at the project properties) and launch the Bot Framework Emulator. (If you just installed it, it may not be indexed to show up in a search on your local machine, so remember that it installs to c:\Users\your-username\AppData\Local\botframework\app-3.5.27\botframework-emulator.exe.) Ensure that the Bot URL matches the port number that your code launched in above, and has api/messages appended to the end. You should be able to converse with the bot.


Lab 1.3: Regular expressions and scorable groups

There are a number of things that we can do to improve our bot. First of all, we may not want to call LUIS for a simple "hi" greeting, which the bot will get fairly frequently from its users. A simple regular expression could match this, and save us time (due to network latency) and money (due to the cost of calling the LUIS service).

Also, as the complexity of our bot grows and we take the user's input through multiple services to interpret it, we need a process to manage that flow. For example, try regular expressions first; if nothing matches, call LUIS; and then perhaps drop down to other services like QnA Maker and Azure Search. A great way to manage this is ScorableGroups. ScorableGroups give you an attribute to impose an order on these service calls. In our code, let's impose an order: match on regular expressions first, then call LUIS for interpretation of utterances, and finally, as the lowest priority, drop down to a generic "I'm not sure what you mean" response.

To use ScorableGroups, your RootDialog will need to inherit from DispatchDialog instead of LuisDialog (but you can still have the LuisModel attribute on the class). You will also need a reference to Microsoft.Bot.Builder.Scorables (as well as others). So in your RootDialog.cs file, add:

using Microsoft.Bot.Builder.Scorables;
using System.Collections.Generic;

and change your class derivation to:

public class RootDialog : DispatchDialog<object>


Next, delete the two existing methods in the class (StartAsync and MessageReceivedAsync).

Let's add some new methods that match regular expressions as our first priority, in ScorableGroup 0. Add the following at the beginning of your RootDialog class:

[RegexPattern("^hello")]
[RegexPattern("^hi")]
[ScorableGroup(0)]
public async Task Hello(IDialogContext context, IActivity activity)
{
    await context.PostAsync("Hello from RegEx! I am a Photo Organization Bot. I can search your photos, share your photos on Twitter, and order prints of your photos. You can ask me things like 'find pictures of food'.");
}

[RegexPattern("^help")] [ScorableGroup(0)] public async Task Help(IDialogContext context, IActivity activity) { // Launch help dialog with button menu List<string> choices = new List<string>(new string[] { "Search Pictures", "Share Picture", "Order Prints" }); PromptDialog.Choice<string>(context, ResumeAfterChoice, new PromptOptions<string>("How can I help you?", options:choices)); }

private async Task ResumeAfterChoice(IDialogContext context, IAwaitable<string> result)
{
    string choice = await result;
    switch (choice)
    {
        case "Search Pictures":
            // PromptDialog.Text(context, ResumeAfterSearchTopicClarification,
            //     "What kind of picture do you want to search for?");
            break;
        case "Share Picture":
            //await SharePic(context, null);
            break;
        case "Order Prints":
            //await OrderPic(context, null);
            break;
        default:
            await context.PostAsync("I'm sorry. I didn't understand you.");
            break;
    }
}

This code will match on expressions from the user that start with "hi", "hello", and "help". Press F5 and test your bot. Notice that when the user asks for help, we present him/her with a simple menu of buttons for the three core things our bot can do: search pictures, share pictures, and order prints.

Fun Aside: One might argue that the user shouldn't have to type "help" to get a menu of clear options on what the bot can do; rather, this should be the default experience on first contact with the bot. Discoverability is one of the biggest challenges for bots - letting users know what the bot is capable of doing. Good bot design principles can help.

This setup will make it easier when we add LUIS (later) as our second attempt, in ScorableGroup 1, if no regular expression matches.


Azure Search

Estimated Time: 10-15 minutes

Lab 2.1: Configure your bot for Azure Search

First, we need to provide our bot with the relevant information to connect to an Azure Search index. The best place to store connection information is in the configuration file.

Open Web.config and in the appSettings section, add the following:

<!-- Azure Search Settings -->
<add key="SearchDialogsServiceName" value="" />
<add key="SearchDialogsServiceKey" value="" />
<add key="SearchDialogsIndexName" value="images" />

Set the value for SearchDialogsServiceName to the name of the Azure Search Service that you created earlier. If needed, go back and look this up in the Azure portal.

Set the value for the SearchDialogsServiceKey to be the key for this service. This can be found in the Azure portal under the Keys section for your Azure Search. In the below screenshot, the SearchDialogsServiceName would be "aiimmersionsearch" and the SearchDialogsServiceKey would be "375...".
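For a sense of where these settings end up, the SearchDialog added in the next lab reads them and builds a search client roughly like this (a sketch, not the exact contents of SearchDialog.cs):

using System.Configuration;
using Microsoft.Azure.Search;

// Read the Azure Search settings added to Web.config above.
string serviceName = ConfigurationManager.AppSettings["SearchDialogsServiceName"];
string serviceKey = ConfigurationManager.AppSettings["SearchDialogsServiceKey"];
string indexName = ConfigurationManager.AppSettings["SearchDialogsIndexName"];

// Build a client scoped to the "images" index.
var searchClient = new SearchIndexClient(serviceName, indexName, new SearchCredentials(serviceKey));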

Lab 2.2: Update the bot to use Azure Search


Next, we'll update the bot to call Azure Search. First, open Tools-->NuGet Package Manager-->Manage NuGet Packages for Solution. In the search box, type "Microsoft.Azure.Search". Select the corresponding library, check the box that indicates your project, and install it. It may install other dependencies as well. Under installed packages, you may also need to update the "Newtonsoft.Json" package.

Right-click on your project in the Solution Explorer of Visual Studio, and select Add-->New Folder. Create a folder called "Models". Then right-click on the Models folder, and select Add-->Existing Item. Do this twice to add these two files under the Models folder (make sure to adjust your namespaces if necessary):

1. ImageMapper.cs
2. SearchHit.cs

You can find the files in this repository under resources/code/Models.

Next, right-click on the Dialogs folder in the Solution Explorer of Visual Studio, and select Add-->Class. Call your class "SearchDialog.cs". Add the contents from here.

Review the contents of the files you just added. Discuss with a neighbor what each of them does.


We also need to update your RootDialog to call the SearchDialog. In RootDialog.cs in the Dialogs folder, directly under the ResumeAfterChoice method, add these "ResumeAfter" methods:

private async Task ResumeAfterSearchTopicClarification(IDialogContext context, IAwaitable<string> result)
{
    string searchTerm = await result;
    context.Call(new SearchDialog(searchTerm), ResumeAfterSearchDialog);
}

private async Task ResumeAfterSearchDialog(IDialogContext context, IAwaitable<object> result)
{
    await context.PostAsync("Done searching pictures");
}

In RootDialog.cs, you will also need to remove the comments (the // at the beginning) from the line:

PromptDialog.Text(context, ResumeAfterSearchTopicClarification, "What kind of picture do you want to search for?");

within the ResumeAfterChoice method.

Press F5 to run your bot again. In the Bot Emulator, try searching for something like "dogs" or "water". Ensure that you are seeing results when tags from your pictures are requested.


LUIS

Estimated Time: 15-20 minutes

Lab 3.1: Update bot to use LUIS

We have to update our bot in order to use LUIS. We can do this by using the LuisDialog class.

In the RootDialog.cs file, add references to the following namespaces:

using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

Next, give the class a LuisModel attribute with the LUIS App ID and LUIS key. If you can't find these values, go back to http://luis.ai. Click on your application, and go to the "Publish App" page. You can get the LUIS App ID and LUIS key from the Endpoint URL (HINT: the LUIS App ID will have hyphens in it, and the LUIS key will not). Put the following line of code directly above the [Serializable], with your own LUIS App ID and LUIS key:

namespace PictureBot.Dialogs
{
    [LuisModel("96f65e22-7dcc-4f4d-a83a-d2aca5c72b24", "1234bb84eva3481a80c8a2a0fa2122f0")]
    [Serializable]
    ...

Fun Aside: You can use Autofac to dynamically load the LuisModel attribute on your class instead of hardcoding it, so it could be stored properly in a configuration file. There is an example of this in the AlarmBot sample.

Let's start simple by adding our "Greeting" intent (below all of the code you've already put within the DispatchDialog):

[LuisIntent("Greeting")]
[ScorableGroup(1)]
public async Task Greeting(IDialogContext context, LuisResult result)
{
    // Duplicate logic, for a teachable moment on Scorables.
    await context.PostAsync("Hello from LUIS! I am a Photo Organization Bot. I can search your photos, share your photos on Twitter, and order prints of your photos. You can ask me things like 'find pictures of food'.");
}

Hit F5 to run the app. In the Bot Emulator, try greeting the bot in different ways. What happens when you send "whats up" versus "hello"?

The "None" intent in LUIS means that the utterance didn't map to any intent. In this situation, we want to fall down to the next level of ScorableGroup. Add the "None" method in the RootDialog class as follows:


[LuisIntent("")] [LuisIntent("None")] public async Task None(IDialogContext context, LuisResult result) { // Luis returned with "None" as the winning intent, // so drop down to next level of ScorableGroups. ContinueWithNextGroup(); }Finally, add methods for the rest of the intents. The corresponding method will be invoked for the highest-scoring intent. Talk with a neighbor about the "SearchPics" and "SharePics" code. What else is the code doing? [LuisIntent("SearchPics")] public async Task SearchPics(IDialogContext context, LuisResult result) { // Check if LUIS has identified the search term that we should look for. string facet = null; EntityRecommendation rec; if (result.TryFindEntity("facet", out rec)) facet = rec.Entity;

    // If we don't know what to search for (for example, the user said
    // "find pictures" or "search" instead of "find pictures of x"),
    // then prompt for a search term.
    if (string.IsNullOrEmpty(facet))
    {
        PromptDialog.Text(context, ResumeAfterSearchTopicClarification, "What kind of picture do you want to search for?");
    }
    else
    {
        await context.PostAsync("Searching pictures...");
        context.Call(new SearchDialog(facet), ResumeAfterSearchDialog);
    }
}

[LuisIntent("OrderPic")] public async Task OrderPic(IDialogContext context, LuisResult result) { await context.PostAsync("Ordering your pictures..."); }

[LuisIntent("SharePic")] public async Task SharePic(IDialogContext context, LuisResult result) { PromptDialog.Confirm(context, AfterShareAsync, "Are you sure you want to tweet this picture?"); }

private async Task AfterShareAsync(IDialogContext context, IAwaitable<bool> result)
{
    if (result.GetAwaiter().GetResult() == true)
    {
        // Yes, share the picture.
        await context.PostAsync("Posting tweet.");
    }
    else
    {
        // No, don't share the picture.
        await context.PostAsync("OK, I won't share it.");
    }
}

Hint: The "SharePic" method contains a little code to show how to do a prompt for a yes/no confirmation as well as setting the ScorableGroup. This code doesn't actually post a tweet because we didn't want to spend time getting everyone set up with Twitter developer accounts and such, but you are welcome to implement if you want.You may have noticed that we haven't implemented scorable groups for the intents we just added. Modify your code so that all of the LuisIntents are of scorable group priority 1. Now that you've added LUIS functionality, you can uncomment the two await lines in the ResumeAfterChoice method.Once you've modified your code, hit F5 to run in Visual Studio, and start up a new conversation in the Bot Framework Emulator. Try chatting with the bot, and ensure that you get the expected responses. If you get any unexpected results, note them down and revise LUIS.

Don't forget, you must re-train and re-publish your LUIS model. Then you can return to your bot in the emulator and try again.

Fun Aside: The Suggested Utterances are extremely powerful. LUIS makes smart decisions about which utterances to surface. It chooses the ones that will help it improve the most when manually labeled by a human-in-the-loop. For example, if the LUIS model predicted that a given utterance mapped to Intent1 with 47% confidence and to Intent2 with 48% confidence, that is a strong candidate to surface to a human to manually map, since the model is very close between two intents.

Extra credit (to complete later): create an OrderDialog class in your "Dialogs" folder. Create a process for ordering prints with the bot using FormFlow. Your bot will need to collect the following information: photo size (8x10, 5x7, wallet, etc.), number of prints, glossy or matte finish, user's phone number, and user's email.

Finally, add a default handler that responds if none of the above services were able to understand. This ScorableGroup needs an explicit MethodBind because it is not decorated with a LuisIntent or RegexPattern attribute (which include a MethodBind).

// Since none of the scorables in previous groups won, the dialog sends a help message.
[MethodBind]
[ScorableGroup(2)]
public async Task Default(IDialogContext context, IActivity activity)
{
    await context.PostAsync("I'm sorry. I didn't understand you.");
    await context.PostAsync("You can tell me to find photos, tweet them, and order prints. Here is an example: \"find pictures of food\".");
}

Hit F5 to run your bot and test it in the Bot Emulator.


Publish And Register

Estimated Time: 10-15 minutes

Lab 4.1: Publish your bot

A bot created using the Microsoft Bot Framework can be hosted at any publicly-accessible URL. For the purposes of this lab, we will register our bot using Azure Bot Service.

Navigate to the portal. In the portal, click "Create a resource" and search for "bot". Select Web App Bot, and click Create. For the name, you'll have to create a unique identifier. I recommend using something along the lines of PictureBot[i][n], where [i] is your initials and [n] is a number (e.g., mine would be PictureBotamt40). Put it in the region that is closest to you. For the pricing tier, select F0, as that is all we will need for this workshop. Set the bot template to Basic (C#), and configure a new App Service plan (put it in the same location as your bot). It doesn't matter which template you choose, because we will overwrite it with our PictureBot. Turn off Application Insights (to save money). Click Create.


You have just published a very simple EchoBot (like the template we started from earlier). What we will do next is publish our PictureBot to this bot service.

First we need to grab a few keys. Go to the Web App Bot you just created (in the portal). Under App Service Settings->Application Settings->App settings, grab the BotId, MicrosoftAppId, and MicrosoftAppPassword. You will need them in a moment.

Return to your PictureBot in Visual Studio. In the Web.config file, fill in the blanks under appSettings with the BotId, MicrosoftAppId, and MicrosoftAppPassword. Save the file.

Getting an error that directs you to your MicrosoftAppPassword? Because Web.config is XML, if your key contains "&", "<", ">", "'", or '"', you will need to replace those symbols with their respective escape sequences: "&amp;", "&lt;", "&gt;", "&apos;", "&quot;".

In the Solution Explorer, right-click on your Bot Application project and select "Publish". This will launch a wizard to help you publish your bot to Azure.

Select the publish target of "Microsoft Azure App Service" and "Select Existing."

On the App Service screen, select the appropriate subscription, resource group, and your Bot Service. Select Publish.


If you get an error here, simply exit the browser window within Visual Studio and complete the next step.

Now you will see the Web Deploy settings, but we have to edit them one last time. Select "Settings" under the publish profile. Select Settings again and check the box next to "Remove additional files at destination". Click "Save" to exit the window, and now click "Publish". The output window in Visual Studio will show the deployment process. Then, your bot will be hosted at a URL like http://picturebotamt6.azurewebsites.net/, where "picturebotamt6" is the Bot Service app name.


Managing your bot from the portal

Return to your Web App Bot resource in the portal. Under Bot Management, select "Test in Web Chat" to test whether your bot has been published and is working accordingly. If it is not, review the previous lab, because you may have skipped a step; re-publish and return here.

After you've confirmed your bot is published and working, check out some of the other features under Bot Management. Select "Channels" and notice there are many channels, and when you select one, you are instructed on how to configure it.

Want to learn more about the various aspects related to bots? Spend some time reading the how to's and design principles.


Logging Chat Conversations

1. Objectives

This lab is aimed at getting started with logging using the Microsoft Bot Framework. Specifically, in this lab, we will demonstrate how you can log conversations using an activity logger. Logging in this lab is performed using Debug.

2. Setup

Import the core-Middleware code from code\core-Middleware in Visual Studio. The easiest way to do this is to open core-Middleware.sln. The Solution Explorer will look as follows in Visual Studio:

3. IActivityLogger

One of the most common operations when working with conversational history is to intercept and log message activities between bots and users. The IActivityLogger interface contains the definition of the functionality that a class needs to implement to log message activities. The DebugActivityLogger implementation of the IActivityLogger interface writes message activities to the trace listeners, only when running in debug.

public class DebugActivityLogger : IActivityLogger
{
    public async Task LogAsync(IActivity activity)
    {
        Debug.WriteLine($"From:{activity.From.Id} - To:{activity.Recipient.Id} - Message:{activity.AsMessageActivity().Text}");
    }
}

The logging activity is an event that takes place for the lifetime of the bot application. The Application_Start method is called when the application starts and lasts for the lifetime of the application.

Global.asax allows us to write event handlers that react to important life-cycle events. Global.asax events are never called directly by the user; they are called automatically in response to application events. For this lab, we must register DebugActivityLogger in Application_Start (in Global.asax) as follows:

protected void Application_Start()
{
    Conversation.UpdateContainer(builder =>
    {
        builder.RegisterType<DebugActivityLogger>().AsImplementedInterfaces().InstancePerDependency();
    });
    GlobalConfiguration.Configure(WebApiConfig.Register);
}

The Application_Start method is called only one time during the life cycle of an application. You can use this method to perform startup tasks.

4. Log Results

Run the bot application and test in the emulator with messages. The log data is written using Debug.WriteLine. You can view the result using Output window. Ensure Debug is selected for Show output from. If Output is not visible, select View->Output from the menu. You will see an entry such as From:56800324 - To:2c1c7fa3 - Message:hello. Since this is an echo bot, you will also see the echoed message logged.


File Logger

1. Objectives

The aim of this lab is to log chat conversations to a file using middleware. Excessive Input/Output (I/O) operations can result in slow responses from the bot. In this lab, the global events are leveraged to perform efficient logging of chat conversations to a file. This activity is an extension of the previous lab's concepts.

Import the project from code\file-core-Middleware in Visual Studio.

2. An inefficient way of logging

A very simple way of logging chat conversations to a file is to use File.AppendAllLines, as shown in the code snippet below. File.AppendAllLines opens a file, appends the specified string to the file, and then closes the file. However, this can be a very inefficient way of logging, as we will be opening and closing the file for every message from the user/bot.

public class DebugActivityLogger : IActivityLogger
{
    public async Task LogAsync(IActivity activity)
    {
        File.AppendAllLines("C:\\Users\\username\\log.txt", new[] { $"From:{activity.From.Id} - To:{activity.Recipient.Id} - Message:{activity.AsMessageActivity().Text}" });
    }
}

3. An efficient way of logging

A more efficient way of logging to a file is to use the Global.asax file and the events related to the lifecycle of the bot. Extend Global.asax as shown in the code snippet below, and change the following line to a path that exists in your environment:

tw = new StreamWriter("C:\\Users\\username\\log.txt", true);

Application_Start opens a StreamWriter, a Stream wrapper that writes characters to a stream in a particular encoding. This lasts for the life of the bot, so we can now just write to it, as opposed to opening and closing the file for each message. It is also worth noting that the StreamWriter is now sent as a parameter to DebugActivityLogger. The log file can be closed in Application_End, which is called once per lifetime of the application, before the application is unloaded.

using System.Web.Http;
using Autofac;
using Microsoft.Bot.Builder.Dialogs;
using System.IO;
using System.Diagnostics;
using System;

namespace MiddlewareBot
{
    public class WebApiApplication : System.Web.HttpApplication
    {
        static TextWriter tw = null;

        protected void Application_Start()
        {
            tw = new StreamWriter("C:\\Users\\username\\log.txt", true);
            Conversation.UpdateContainer(builder =>
            {
                builder.RegisterType<DebugActivityLogger>().AsImplementedInterfaces().InstancePerDependency().WithParameter("inputFile", tw);
            });

            GlobalConfiguration.Configure(WebApiConfig.Register);
        }

        protected void Application_End()
        {
            tw.Close();
        }
    }
}

Receive the file parameter in the DebugActivityLogger constructor, and update LogAsync to write to the log file by adding the lines below:

tw.WriteLine($"From:{activity.From.Id} - To:{activity.Recipient.Id} - Message:{activity.AsMessageActivity().Text}", true);
tw.Flush();

The entire DebugActivityLogger code looks as follows:

using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.History;
using Microsoft.Bot.Connector;
using System.IO;

namespace MiddlewareBot
{
    public class DebugActivityLogger : IActivityLogger
    {
        TextWriter tw;

        public DebugActivityLogger(TextWriter inputFile)
        {
            this.tw = inputFile;
        }

        public async Task LogAsync(IActivity activity)
        {
            tw.WriteLine($"From:{activity.From.Id} - To:{activity.Recipient.Id} - Message:{activity.AsMessageActivity().Text}", true);
            tw.Flush();
        }
    }
}

4. Log file

Run the bot application in the emulator and test with messages. Investigate the log file specified in the line:

tw = new StreamWriter("C:\\Users\\username\\log.txt", true);

The messages from the user and the bot should appear in the log file as follows:

From:2c1c7fa3 - To:56800324 - Message:a log message
From:56800324 - To:2c1c7fa3 - Message:You sent a log message which was 13 characters

5. Selective Logging

In certain scenarios, it would be desirable to perform modeling on selective messages or selectively log. There are many such examples: a) A very active community bot can potentially capture a tsunami of chat messages very quickly and a lot of the chat messages may not be very useful. b) There may also be a need to selectively log to focus on certain users/bot or messages related to a particular trending product category/topic.

One can always capture all the chat messages and perform filtering to mine selective messages. However, having the flexibility to selectively log can be useful, and the Microsoft Bot Framework allows this. To selectively log messages from the users, you can investigate the activity JSON and filter on "name". For example, in the JSON below, the message was sent by "Bot1".

{
  "type": "message",
  "timestamp": "2017-10-27T11:19:15.2025869Z",
  "localTimestamp": "2017-10-27T07:19:15.2140891-04:00",
  "serviceUrl": "http://localhost:9000/",
  "channelId": "emulator",
  "from": {
    "id": "56800324",
    "name": "Bot1"
  },
  "conversation": {
    "isGroup": false,
    "id": "8a684db8",
    "name": "Conv1"
  },
  "recipient": {
    "id": "2c1c7fa3",
    "name": "User1"
  },
  "membersAdded": [],
  "membersRemoved": [],
  "text": "You sent hi which was 2 characters",
  "attachments": [],
  "entities": [],
  "replyToId": "09df56eecd28457b87bba3e67f173b84"
}

After investigating the JSON, the LogAsync method in DebugActivityLogger can be modified to filter on user messages by introducing a simple condition to check the activity.From.Name property:

public class DebugActivityLogger : IActivityLogger
{
    public async Task LogAsync(IActivity activity)
    {
        if (!activity.From.Name.Contains("Bot"))
        {
            Debug.WriteLine($"From:{activity.From.Id} - To:{activity.Recipient.Id} - Message:{activity.AsMessageActivity()?.Text}");
        }
    }
}

Page 75: azurebootcampdk.azurewebsites.net  · Web viewThe names of manufacturers, products, or URLs are provided for informational purposes only and Microsoft makes no representations and

SQL Logger

1. Objectives

The aim of this lab is to log chat conversations to Azure SQL database. This lab is an extension of the previous File Logger lab where we used Global.asax events and LogAsync methods.

2. Setup/Pre-requisites

2.1. Import the project from code\sql-core-Middleware in Visual Studio.

2.2. Since we will be writing to a SQL database, we need to create one. Go to the Azure portal and follow the Create DB - Portal steps, but create a database called Botlog, not "MySampleDatabase" as suggested in the link. At the end of the process, you should see the Overview tab, as in the image below.

2.2. Select "Show database connection strings" from the Overview tab and make a note (paste in a text document) of the connection string as we will be using it later in the lab.


2.4. Change your firewall settings to allow your IP address. You may have already done this if you followed the steps from Create DB - Portal. You can find your IP address at https://whatismyipaddress.com/.

2.5. Create a new table called userChatLog with the create table statement (schema) below. We will use the same tool as in the "Query the SQL database" section at the Create DB - Portal link. Within the Azure Portal, click "Data Explorer (preview)" on the left menu. To log in, use the same account and password you specified when creating the database. Paste the script below and click "Run". The expected result is the message "Query succeeded: Affected rows: 0.".


CREATE TABLE userChatLog (
    id int IDENTITY(1, 1),
    fromId varchar(25),
    toId varchar(25),
    message varchar(max),
    PRIMARY KEY (id)
);

2.6. Import the code from sql-core-Middleware into Visual Studio. The easiest way to do this is by clicking on the solution sql-core-Middleware.sln.

3. SQL Logging

The framework is very much the same as what was used in the previous labs. In short, we will use Global.asax's global events to set up our logging. The ideal way of doing this is to initiate the connection to SQL Server via Application_Start, pass the connection object to the LogAsync method for storing chat messages, and close the connection via Application_End.

public class WebApiApplication : System.Web.HttpApplication
{
    SqlConnection connection = null;

    protected void Application_Start()
    {
        // Setting up sql string connection
        SqlConnectionStringBuilder sqlbuilder = new SqlConnectionStringBuilder();
        sqlbuilder.DataSource = "botlogserver.database.windows.net";
        sqlbuilder.UserID = "botlogadmin";
        sqlbuilder.Password = "<your-password>";
        sqlbuilder.InitialCatalog = "Botlog";

        connection = new SqlConnection(sqlbuilder.ConnectionString);
        connection.Open();
        Debug.WriteLine("Connection Success");

        Conversation.UpdateContainer(builder =>
        {
            builder.RegisterType<SqlActivityLogger>().AsImplementedInterfaces().InstancePerDependency().WithParameter("conn", connection);
        });

        GlobalConfiguration.Configure(WebApiConfig.Register);
    }

    protected void Application_End()
    {
        connection.Close();
        Debug.WriteLine("Connection to database closed");
    }
}

It is worth noting that the connection object is passed as a parameter to SqlActivityLogger in the above code snippet. As a result, the LogAsync method is ready to log any message from the bot or the user. The from/to IDs, along with the chat message, can be obtained from the activity object (activity.From.Id, activity.Recipient.Id, activity.AsMessageActivity().Text).


public class SqlActivityLogger : IActivityLogger
{
    SqlConnection connection;

    public SqlActivityLogger(SqlConnection conn)
    {
        this.connection = conn;
    }

    public async Task LogAsync(IActivity activity)
    {
        string fromId = activity.From.Id;
        string toId = activity.Recipient.Id;
        string message = activity.AsMessageActivity().Text;

        string insertQuery = "INSERT INTO userChatLog(fromId, toId, message) VALUES (@fromId, @toId, @message)";

        // Passing the fromId, toId, and message to the user chat log table
        SqlCommand command = new SqlCommand(insertQuery, connection);
        command.Parameters.AddWithValue("@fromId", fromId);
        command.Parameters.AddWithValue("@toId", toId);
        command.Parameters.AddWithValue("@message", message);

        // Insert into the Azure SQL database
        command.ExecuteNonQuery();
        Debug.WriteLine("Insertion successful of message: " + activity.AsMessageActivity().Text);
    }
}

SQL Injection

SQL injection refers to an injection attack wherein an attacker can execute malicious SQL statements that control an application's database server. SQL injection can provide an attacker with unauthorized access to sensitive data. In the LogAsync method, parameters provide a defense against SQL injection: the prime benefit of parameterized queries is to prevent it.
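To see why those parameters matter, compare a vulnerable, concatenated query (do not use this) with the parameterized version used in SqlActivityLogger above:

// Vulnerable: user-controlled text is spliced directly into the SQL statement.
// A message such as  '); DROP TABLE userChatLog;--  could change the query's meaning.
string unsafeQuery = "INSERT INTO userChatLog(fromId, toId, message) VALUES ('"
    + fromId + "','" + toId + "','" + message + "')";

// Safe: the SQL text is fixed, and user input travels separately as parameter
// values, so it is always treated as data rather than executable SQL.
SqlCommand safeCommand = new SqlCommand(
    "INSERT INTO userChatLog(fromId, toId, message) VALUES (@fromId, @toId, @message)", connection);
safeCommand.Parameters.AddWithValue("@fromId", fromId);
safeCommand.Parameters.AddWithValue("@toId", toId);
safeCommand.Parameters.AddWithValue("@message", message);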

4. SQL Query Results

Run the project from Visual Studio and open the Bot Emulator. Send messages to your bot to test the SQL logging functionality.


From the database page of the portal, select Tools -> Query editor (preview) to preview the log messages stored in the table. Log in to run queries. This is a quick way to see results, but not the only way; feel free to use any SQL client to perform query operations. Run the query Select * from userChatLog to view the chat rows inserted into the userChatLog table. In the example below, the message My Beautiful Los Angeles, sent via the bot emulator, is logged along with the IDs.


Bot Testing

Development and Testing with Ngrok

1. Objectives

With Microsoft Bot Framework, to configure the bot to be available to a particular channel, you will need to host the Bot service on a public URL endpoint. The channel will not be able to access your bot service if it is on a local server port hidden behind a NAT or firewall.

When designing, building, and testing your code, you do not always want to keep redeploying, which also results in additional hosting costs. This is where ngrok can really help speed up the development and testing phases of bots. The goal of this lab is to use ngrok to expose your bot to the public internet and use the public endpoint to test your bot in the emulator.

2. Setup

a. If you do not have ngrok installed, download it for your OS from https://ngrok.com/download, then unzip the downloaded file.

b. Open any bot project (from the previous labs) in Visual Studio and run it. If you are using EchoBot, you should see the message below in the browser:

EchoBot

Describe your bot here and your terms of use etc.

Visit Bot Framework to register your bot. When you register it, remember to set your bot's endpoint to https://your_bots_hostname/api/messages

*** The above message is informational only. You do not need to register your bot for this lab.

3. Forwarding

a. Given that the bot is hosted on localhost:3979, we can use ngrok to expose this port to the public internet.

b. Open a terminal and go to the folder where ngrok is installed.

c. Run the command below and you should see the forwarding URL:


ngrok.exe http 3979 -host-header="localhost:3979"
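If the tunnel comes up successfully, ngrok prints the forwarding details to the console; the output looks something like the following (the subdomain is randomly generated per session, so yours will differ):

Session Status                online
Forwarding                    http://a1b2c3d4.ngrok.io -> localhost:3979
Forwarding                    https://a1b2c3d4.ngrok.io -> localhost:3979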

d. Enter the forwarding URL (http) in the Bot Emulator; the bot URL is the forwarding URL with /api/messages appended. Test the bot in the emulator by sending messages. We have now used a public endpoint instead of localhost to test the bot.


Unit Testing

1. Objectives

Writing code using the Microsoft Bot Framework is fun and exciting. But before rushing to code bots, you need to think about unit testing your code. Chatbots bring their own set of challenges to testing, including testing across environments, integrating third-party APIs, etc. Unit testing is a software testing method where individual units/components of a software system are tested.

Unit tests can help:

Verify functionality as you add it
Validate your components in isolation
Let people unfamiliar with your code verify they haven't broken it when they are working with it

The goal of this lab is to introduce unit testing for bots developed using Microsoft Bot Framework.

2. Setup

Import the EchoBot solution in Visual Studio from code\EchoBot. On successful import, you will see two projects (EchoBot, a Bot Application, and EchoBotTests, a Test Project) as shown below.


3. EchoBot

In this lab, we will use EchoBot to develop unit tests. EchoBot is a very simple bot that echoes back any message the user types. For example, if the user types "Hello", EchoBot responds with the message "You said: Hello". The core of the EchoBot code, which uses Dialogs, can be found below (MessagesController.cs). MessageReceivedAsync echoes back to the user with "You said: ":

public class EchoDialog : IDialog<object>
{
    public async Task StartAsync(IDialogContext context)
    {
        // Wait for the first message from the user
        context.Wait(MessageReceivedAsync);
    }

    public async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> argument)
    {
        // Echo the user's message back, then wait for the next one
        var message = await argument;
        await context.PostAsync("You said: " + message.Text);
        context.Wait(MessageReceivedAsync);
    }
}

4. EchoBot - Unit Tests

With complex chatbots, you also have conversation state and conversation flow to worry about. Chatbots can respond multiple times to a single user message, and after sending a message you don't immediately get a response. Given these complexities, mocking can help with unit testing. An object under test may have dependencies on other (complex) objects. To isolate the behavior of the object you want to test, you replace those other objects with mocks that simulate the behavior of the real objects. This is useful when the real objects are impractical to incorporate into the unit test.

Unit tests are created using a Visual Studio Unit Test Project within the EchoBot solution. On importing EchoBot.sln, you will see the EchoBotTests Unit Test Project. This project contains a few helper classes (developed by reusing the Bot Builder code) that help develop unit tests for Dialogs:

- DialogTestBase.cs
- FiberTestBase.cs
- MockConnectorFactory.cs

Inside EchoBotTests.cs, you will find a TestMethod called ShouldReturnEcho. ShouldReturnEcho verifies the result from EchoBot. The below line in EchoBotTests.cs mocks the behavior of EchoBot using RootDialog. RootDialog is used to provide the functionality of EchoBot.

using (new FiberTestBase.ResolveMoqAssembly(rootDialog))

Run all tests by selecting Test -> Run -> All Tests as shown below and verify the tests run successfully.
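For reference, a test built on these helpers arranges the dialog, routes a test message through the mocked connector, and asserts on the bot's reply. The sketch below is illustrative only: the helper names Build, Options, MakeTestMessage, and GetResponse are assumptions based on the Bot Builder test helpers these files reuse, so consult EchoBotTests.cs for the exact code.

[TestMethod]
public async Task ShouldReturnEcho()
{
    // Arrange: the dialog under test
    IDialog<object> rootDialog = new RootDialog();
    Func<IDialog<object>> makeRoot = () => rootDialog;

    // Replace the real connector with mocks (assumed helper names)
    using (new FiberTestBase.ResolveMoqAssembly(rootDialog))
    using (var container = Build(Options.MockConnectorFactory | Options.ScopedQueue, rootDialog))
    {
        // Act: send a test message to the bot and capture its reply
        var toBot = DialogTestBase.MakeTestMessage();
        toBot.Text = "Hello";
        IMessageActivity toUser = await GetResponse(container, makeRoot, toBot);

        // Assert: EchoBot echoes the text back
        Assert.IsTrue(toUser.Text.Contains("Hello"));
    }
}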


Connect Directly to a Bot – Direct Line

Objectives

In some situations, you may need to communicate directly with your bot. For example, you may want to run functional tests against a hosted bot. Communication between your bot and your own client application can be performed using the Direct Line API. This hands-on lab introduces key concepts related to the Direct Line API.

Setup

1. Open the project from code\core-DirectLine and import the solution in Visual Studio.

2. In the DirectLineBot solution, you will find two projects: DirectLineBot and DirectLineSampleClient. You can choose to use the published bot (from the earlier labs) or publish DirectLineBot for this lab.

To use DirectLineBot, you must:

Deploy it to Azure. Follow this tutorial to learn how to deploy a .NET bot to Azure directly from Visual Studio.

Register it with the Bot Framework before others can use DirectLineBot. The steps to register can be found in the registration instructions.

DirectLineSampleClient is the client that will send messages to the bot.

Authentication

Direct Line API requests can be authenticated either by using a secret that you obtain from the Direct Line channel configuration page in the Bot Framework Portal, or by using a token obtained at runtime from that secret (see Security Scope below). Go to the Bot Framework Portal, find your bot, and add Direct Line via Connect to channels.


After adding the channel, you can obtain a Secret Key from the Direct Line channel configuration as shown below:

Security Scope

Secret Key: The secret key is application-wide and is embedded in the client application. Every conversation initiated by the application uses the same secret. This is very convenient, but it also means the one key grants access to all of the application's conversations.


Tokens: A token is conversation-specific. You request a token using the secret, and you can initiate a single conversation with that token. It is valid for 30 minutes from when it is issued, but it can be refreshed.
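As an illustration, a conversation token can be requested from the secret with a single REST call before starting the conversation. The endpoint path below is an assumption based on the same api/ routes this lab uses elsewhere; verify it against the Direct Line reference for your API version:

curl -H "Authorization: Bearer {SecretKey}" -X POST https://directline.botframework.com/api/tokens/conversation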

App Config

The Secret Key obtained from Configure Direct Line in the Bot Framework Portal is then added to the configuration settings in the App.config file of the DirectLineSampleClient project, as shown below. In addition, for the published bot, capture the bot id (also known as the app id) and enter it in the appSettings section of the same App.config. The relevant lines to enter in App.config are:

<add key="DirectLineSecret" value="YourBotDirectLineSecret" />
<add key="BotId" value="YourBotId" />

Sending and Receiving Messages

Using the Direct Line API, a client can send messages to your bot by issuing HTTP POST requests. A client can receive messages from your bot either via a WebSocket stream or by issuing HTTP GET requests. In this lab, we will explore the HTTP GET option for receiving messages.
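For orientation, sending a message over the same api/conversations routes used below amounts to one POST per message. The JSON body shown here (a text field plus a from id) is an assumption based on the pre-3.0 Direct Line message schema; the DirectLineSampleClient issues the equivalent call for you:

curl -H "Authorization: Bearer {SecretKey}" -H "Content-Type: application/json" -d "{\"text\": \"Hi there\", \"from\": \"user1\"}" -X POST https://directline.botframework.com/api/conversations/{conversationId}/messages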


1. Run the DirectLineSampleClient project after making the config changes.

2. Submit a message via the console and obtain the conversation id. Line 52 of Program.cs prints the conversation ID that you will need in order to talk to the bot:

Console.WriteLine("Conversation ID:" + conversation.ConversationId);

3. Once you have the conversation id, you can retrieve user and bot messages using HTTP GET. To retrieve messages for a specific conversation, issue a GET request to the https://directline.botframework.com/api/conversations/{conversationId}/messages endpoint. You will also need to pass the Secret Key in the Authorization header (i.e. Authorization: Bearer {secretKey}).

4. Any REST client can be used to receive messages via HTTP GET. In this lab, we will use curl or a web-based client:

4.1 Curl:

Curl is a command-line tool for transferring data using various protocols. Curl can be downloaded from https://curl.haxx.se/download.html

Open a terminal, go to the location where curl is installed, and run the below command for a specific conversation:

curl -H "Authorization:Bearer {SecretKey}" https://directline.botframework.com/api/conversations/{conversationId}/messages -XGET


4.2 Web-based REST Clients:

You can use the Advanced REST Client extension for Chrome to receive messages from the bot.

To use Advanced REST Client, the request header needs a header name (Authorization) and header value (Bearer {SecretKey}). The request URL is the https://directline.botframework.com/api/conversations/{conversationId}/messages endpoint.

The below images show the conversations retrieved with Advanced REST Client. Note the message "Hi there" and the corresponding bot response that is echoed back.


5. Direct Line API 3.0

Unlike the earlier versions, with Direct Line API 3.0 you can also send rich media such as images and hero cards. In DirectLineBotDialog.cs, one of the case statements looks for the text "send me a botframework image" and replies with an image attachment:


case "send me a botframework image":

reply.Text = $"Sample message with an Image attachment";

var imageAttachment = new Attachment(){ContentType = "image/png",

ContentUrl = "https://docs.microsoft.com/en-us/bot-framework/media/how-it-works/architecture-resize.png",

};

reply.Attachments.Add(imageAttachment);

Enter this text using the client and view the results via curl as shown below. You will find the image URL displayed in the images array.
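Because Direct Line 3.0 also supports hero cards, a similar case could return one. The snippet below is a hypothetical addition, not part of the sample project; it reuses the reply object and image URL from the snippet above together with the HeroCard and CardImage types from the Bot Builder SDK:

case "send me a hero card":

    // Build a hero card with a title, text, and an image (hypothetical case)
    var card = new HeroCard
    {
        Title = "Bot Framework",
        Text = "Sample message with a hero card attachment",
        Images = new List<CardImage>
        {
            new CardImage("https://docs.microsoft.com/en-us/bot-framework/media/how-it-works/architecture-resize.png"),
        },
    };

    // Convert the card to an attachment and attach it to the reply
    reply.Attachments.Add(card.ToAttachment());
    break;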