Cognitive Services: Jewelry showroom theft detection

Introduction

CCTV cameras are now everywhere, and large-scale video processing is a grand challenge representing an important frontier for analytics, with video coming from factory floors, jewellery showrooms, traffic intersections, police vehicles, and retail shops. It is the golden era for computer vision, AI, and machine learning, and a great time to extract value from video to impact science, society, and business!

I would like to share a recent incident that happened in Trichy, Tamil Nadu, India. One of the most famous jewellery stores there had more than 20 security guards and 60 CCTV cameras under surveillance. Two men wearing animal masks packed all the jewels into a sack, just like a scene from a movie. The burglars entered the showroom through a hole, just large enough for a man, that they drilled in the back wall of the building, and they stole almost 100 kg of gold worth Rs 50 crore.

Video surveillance is incredibly useful for helping track down a suspect. Burglars are often smart enough to mask their faces, so facial recognition alone is not enough. However, video surveillance can still record what the perpetrator looked like (height, build, hair color, skin color), and human body and object recognition can alert the showroom manager.

https://lh4.googleusercontent.com/4Mp-orAvNxkH_mCjOB21QTCMluhrRK17WdHzR31OgJci65VVZt0Q3bli5suzxl4aFfB5R_CNO4H0qF13RpmyvY6q-NIyihM0FeQtL-4YIxW3Pan673-E0lxr1ksJgTIUTBshyqKzzbVSvMCkhQ  

This wiki post shows how to use the Computer Vision and Face APIs to analyze videos in real time.

How to use Microsoft Cognitive Services

To perform real-time analysis on frames taken from a live video stream, identifying faces and extracting content with the Vision API, the application does the following (a sketch of how the results can be combined appears after this list):

  • Use Microsoft’s Cognitive Face API to identify faces person by person.
  • Use Microsoft’s Emotion API (now folded into the Face API) to match facial patterns to emotions.
  • Use Microsoft’s Computer Vision API to extract the visual content of each frame.
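
As a minimal sketch of how these per-frame results could drive an alert (the class name and rule below are hypothetical, not part of any Microsoft sample): if the Computer Vision API tags a frame as containing a person but the Face API returns no faces, the person's face may be covered, which is exactly the masked-burglar case worth flagging to the showroom manager.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical per-frame rule: the Vision API sees a person in the frame,
// but the Face API finds no face, so the face is likely covered by a mask.
public static class TheftHeuristics
{
    public static bool ShouldAlertManager(IReadOnlyCollection<string> visionTags, int detectedFaceCount)
    {
        bool personInFrame = visionTags.Contains("person", StringComparer.OrdinalIgnoreCase);
        return personInFrame && detectedFaceCount == 0;
    }
}
```

A production system would apply a rule like this across several consecutive frames before alerting, to avoid false alarms from a single missed detection.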

https://lh5.googleusercontent.com/c8pdqOwMp2KnWpGEhChfCDoXwL05IYP_917svBBUurImiYp2hCyBJPOAq8LzN6j41cofxSlJvCz50h4Pc58HEXPYsUzUQ3G8OJ-zI99ZSfX3cFrjxaaT0fgxnXvDJrJXR08sEDJ1URhbUp2jrw

Let's start implementing live video analysis with the following steps.

Create Computer Vision API

The Computer Vision API helps developers identify objects, with access to advanced algorithms for processing images and returning image metadata.

Step 1: Microsoft offers Azure Cognitive Services free for 12 months, which is fantastic news. Create a free trial Azure subscription from the Azure portal.

https://lh4.googleusercontent.com/QpJ8hkAf1hCBMKIg7l7B0pFS7GsFLGWjgnPJRps6J2OSUR65u8Xd1z-K4NccMLzuG1CGCKUjtZU8-Sv4dMBze3jJnIR2q5ASYXbZOkLOxWk0EbD-Ca6yMJukK75Gxb_Pw5IHhCr4B1OmA8xQwA

Step 2: Click on "Create a resource", select "AI + Machine Learning", and click on "Computer Vision".

https://lh5.googleusercontent.com/RE2j5b-qpRDeX_VAS32hGCrh6xCGCwK_sQZNOjqm8h2R5vKXEnqtJ2ejz3mJTeqHd2kaeoU95d1EUpmL0y9u0CRljITAIBGD7KL6iOZTc-qksmdC_ji-Iy_GlSD_bZMPnFW772S6fzA3pCpc1g

Step 3: Next, fill in the following information to create the Cognitive Service and click on "Create".

  • **Name** - Microsoft recommends a descriptive name for the API, for example <common name><APIName>Account.
  • **Subscription** - Select one of your available Azure subscriptions.
  • **Location** - Select the service location.
  • **Pricing tier** - Choose your pricing tier: F0 is the free tier and S0 is the paid tier. Choose based on your expected usage.
  • **Resource group** - Select the resource group, confirm the Microsoft notice, and click on "Create" again to create the account.

https://lh4.googleusercontent.com/G2ZkDLbewc-jUeaJF3xsbFWLdi7QGUC8y9Av_6YNWQA4v4z6LmEksbuRzh8tHUSR0OIaKaAYxkaQcaGVDNZopPgNUyiBUWm-HJ5HexG-BjE1yi__ilHyl9ZEQ6Ti5dC6hkgmf87Fjum4VB689Q

Step 4: The resource deployment completes after a few minutes and you will get a confirmation message. Click on the "Go to resource" button.

https://lh4.googleusercontent.com/WZrLVMIeXQkbiQxufRT9n3Z5jSg9UruI96BUitj3T-X0s3E9Ulkus23SsBiF7byeEYF4yPN2fZcr9WM-q6SQHcbqfOHKzeSyi3HtmZEyHsINOTAS9wjY-Oc85tJO_8uxY38cC5nu01tnFJDgHg
Step 5: Copy the Azure Computer Vision API endpoint URL and key from All resources. You will need them while implementing the application.
https://lh6.googleusercontent.com/Lsz1t7SgmuFBnr5Y6zeWpfHofjIrM9uxxEnaJDgB9g1pNdUkC5Vh8qTHGJaqE3BaZwDtZIhYuuJ5bKcBvcwgJddtYmooRhJXMqhMftClPoDkgHGQWkmyCoZ_sWjQYHe5w5ppDOqziaBGVZjukw
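
Once you have the endpoint URL and key, you can verify them with a quick console call to the Analyze Image REST endpoint. This is a minimal sketch assuming a local test image named frame.jpg; replace the placeholder endpoint and key with the values you copied.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class AnalyzeImageDemo
{
    static async Task Main()
    {
        const string endpoint = "https://<your-region>.api.cognitive.microsoft.com"; // placeholder
        const string key = "<your-computer-vision-key>";                             // placeholder

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

            // Ask the service for tags and a description of the image.
            var url = endpoint + "/vision/v2.0/analyze?visualFeatures=Tags,Description";
            using (var content = new ByteArrayContent(File.ReadAllBytes("frame.jpg")))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                HttpResponseMessage response = await client.PostAsync(url, content);
                Console.WriteLine(await response.Content.ReadAsStringAsync()); // raw JSON result
            }
        }
    }
}
```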

Create Face API

The Face API detects one or more human faces in an image and returns face rectangles for where in the image the faces are, along with face attributes that contain machine learning-based predictions of facial features.

Step 1: While you are still logged in to the Azure portal, click on "Create a resource", select "AI + Machine Learning", click on "Face", and then follow **Step 3** and **Step 4** as for the Computer Vision API.

https://lh5.googleusercontent.com/3diQbA9ZayU7VloselzCSAM8LafZwraTBOXyg3iE6dBTGGgjK5V0jyXhmsrQCL2UefxCY3V6p19lIItm2w9ASn7i6oK85Qn8j00lM0h7QVNbHxTz8vL93xr15__6pEWNchSAn_6JzC2XVQpBwg

Step 2: Copy the Azure Face API endpoint URL and key from All resources. You will need them while implementing the application.

https://lh5.googleusercontent.com/ykz8xkdGyInfABY-OV2stzjjlM41Xux3FykB9Hc77lhRScYF_cUBpFcH-bgEDSMfq-P72bTiax5wwwxLPzI5l7W62KZve48D6v6z0Chiva232FnLLQllK9uyjsfGABin0yfpEF7whWv7ZVT_HQ
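
The Face key can be verified the same way against the Face detect REST endpoint. This sketch requests emotion attributes along with the face rectangles (the separate Emotion API has since been folded into the Face API); the endpoint and key are placeholders.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class DetectFaceDemo
{
    static async Task Main()
    {
        const string endpoint = "https://<your-region>.api.cognitive.microsoft.com"; // placeholder
        const string key = "<your-face-api-key>";                                    // placeholder

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

            // Request face rectangles plus per-face emotion predictions.
            var url = endpoint + "/face/v1.0/detect?returnFaceAttributes=emotion";
            using (var content = new ByteArrayContent(File.ReadAllBytes("frame.jpg")))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                HttpResponseMessage response = await client.PostAsync(url, content);
                Console.WriteLine(await response.Content.ReadAsStringAsync()); // JSON array of faces
            }
        }
    }
}
```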

Demo Source code

Microsoft has created and released a sample POC application for video analysis; download it and replace the Cognitive Services API endpoint URL and key. The sample contains a library, along with two applications, for analyzing video frames from a webcam in near-real-time using APIs from **Microsoft Cognitive Services**, and it uses the **OpenCvSharp** package for webcam support.
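
The heart of the sample is its FrameGrabber class, which pulls frames from the webcam and invokes an analysis function on a timer. The condensed sketch below follows the sample's BasicConsoleSample; the member names (AnalysisFunction, TriggerAnalysisOnInterval, NewResultAvailable, StartProcessingCameraAsync) come from the sample library, so verify them against the code you download.

```csharp
using System;
using Microsoft.ProjectOxford.Face;
using VideoFrameAnalyzer; // the sample's frame-grabbing library

class Program
{
    static void Main()
    {
        // Face client using your key and endpoint (placeholders).
        var faceClient = new FaceServiceClient(
            "<your-face-api-key>",
            "https://<your-region>.api.cognitive.microsoft.com/face/v1.0");

        // Grab webcam frames, with Face[] as the analysis result type.
        var grabber = new FrameGrabber<Microsoft.ProjectOxford.Face.Contract.Face[]>();

        // Send one frame to the Face API each time analysis is triggered.
        grabber.AnalysisFunction = async frame =>
            await faceClient.DetectAsync(frame.Image.ToMemoryStream(".jpg"));

        // React whenever a new result comes back from the service.
        grabber.NewResultAvailable += (s, e) =>
        {
            if (e.Analysis != null)
                Console.WriteLine($"Detected {e.Analysis.Length} face(s) in this frame.");
        };

        // Call the Face API once every 3 seconds, and start the camera.
        grabber.TriggerAnalysisOnInterval(TimeSpan.FromSeconds(3));
        grabber.StartProcessingCameraAsync().Wait();

        Console.WriteLine("Press ENTER to stop...");
        Console.ReadLine();
        grabber.StopProcessingAsync().Wait();
    }
}
```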

Step 1: Open the sample in Visual Studio 2019 or later.
Step 2: Restore all required NuGet packages: right-click the solution and select "Restore NuGet Packages".
https://lh5.googleusercontent.com/gaXz2BT71kBzlTgrKtZqw2zs-kzsC4uPT6YIEpEVsh2vEus36cA4g15zWOTK1w8lEtLU19_buJ9dAPbLB9WRdSBAo0RYW2eGFNyWf38AsHeE_KZEuzYGgKtf3I_yuFNnDbUGVTNcwthOqSrE-w
Step 3: Select the solution and build the application. You can change the key and endpoint URL in the following files:

  • For BasicConsoleSample, the Face API key is hard-coded directly in BasicConsoleSample/Program.cs.
  • For LiveCameraSample, the keys should be entered into the Settings pane of the app.

They will be persisted across sessions as user data. You can also change the values in the App.config file.
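
If you keep the values in App.config, a small helper like this can read them at startup; the key names here are hypothetical, so match them to the entries in the sample's App.config.

```csharp
using System.Configuration; // requires a reference to System.Configuration

static class ApiSettings
{
    // Key names are illustrative; align them with the sample's App.config entries.
    public static string FaceApiKey => ConfigurationManager.AppSettings["FaceAPIKey"];
    public static string FaceApiEndpoint => ConfigurationManager.AppSettings["FaceAPIHost"];
}
```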

Step 4: Run the application, then select your camera and the mode of event.

https://lh3.googleusercontent.com/lJxJ97SopUEJxuU1ZzoeegN0t1w5JOHwK59_EyLxZbZrNHc51C6KnLwxR5XtYu_te9DrD4pA6bDu8perM1fhyw3KMyLx6ccupxVEmwPv0KjiosjsXa0xpPtEzkT53QOETF3th7QXRY1s-a-VAQ

Summary

In this wiki, you learned how to run real-time analysis on live video streams using the Face and Computer Vision APIs, and how to use Microsoft's sample code to get started. Start building your own app with the free API keys and the reference sample code.
