This solution uses Azure Kubernetes Service (AKS) to run the microservices that query master data from the various connected services. The hierarchy service is an ASP.NET Core REST API hosted in an AKS cluster.
Download a Visio file of this architecture.
Azure Digital Twins helps build the hierarchy service by creating a model of nodes, like machines, work centers, and locations, and their relationships. Each node has metadata that includes identifiers from enterprise resource planning (ERP) systems. Downstream applications can use this contextual information.
- The web app lets users manage the hierarchy through a UI.
- Azure Digital Twins Explorer lets you manage the hierarchy directly in Azure Digital Twins.
- The IO API supports bulk import and export for manufacturing-specific scenarios.
- The Query API provides query capabilities for manufacturing-specific data needs.
- The Admin API supports atomic business operations and validation of business rules.
The hierarchy service lets you filter query operations by node types and node attributes. The service supports the following operations:
| API | Operation |
| --- | --- |
| IO (Bulk) API | Export hierarchy data. |
| | Import hierarchy data file. |
| | Validate hierarchy data import file. |
| | Get the status of a bulk import operation. |
| Query API | Get nodes by their attribute values. |
| | Get a node by ID. |
| | Get subtree of a hierarchy node. |
| | Get direct children of a hierarchy node. |
| | Get parent of a hierarchy node. |
| Admin API | Add new node with relations into hierarchy. |
| | Remove a leaf node from hierarchy. |
| | Update existing node and relationships with parents. |
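The filtering by node types and attributes that the Query API exposes can be sketched with a minimal in-memory model. The `Node` shape and the `find_nodes` helper below are hypothetical illustrations for this article, not the service's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Hypothetical hierarchy node: a type, attribute values, and children."""
    id: str
    node_type: str                      # e.g. "Machine", "WorkCenter", "Location"
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

def find_nodes(root, node_type=None, **attrs):
    """Depth-first traversal returning nodes that match the type and attribute filters."""
    matches, stack = [], [root]
    while stack:
        node = stack.pop()
        type_ok = node_type is None or node.node_type == node_type
        attrs_ok = all(node.attributes.get(k) == v for k, v in attrs.items())
        if type_ok and attrs_ok:
            matches.append(node)
        stack.extend(node.children)
    return matches

# Usage: find running machines anywhere under a factory location.
m1 = Node("m1", "Machine", {"status": "running"})
m2 = Node("m2", "Machine", {"status": "idle"})
factory = Node("f1", "Location", children=[Node("wc1", "WorkCenter", children=[m1, m2])])
running = find_nodes(factory, node_type="Machine", status="running")
```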
The hierarchy service provides a consolidated data model that supports defining and querying hierarchical views of production assets. The hierarchy service can validate business rules to enforce hierarchy consistency and data integrity.
The service retrieves hierarchy data either directly from Azure Digital Twins or, when materializing large graphs, from an in-memory cache. The cache improves performance for queries that would otherwise have long response times when issued directly against Azure Digital Twins. For example, the in-memory cache reduces a 3,000-node graph traversal from about 10 seconds to less than a second.
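A minimal sketch of such a cache, assuming the parent-child relationships have already been fetched from Azure Digital Twins in one bulk query. The class and method names here are invented for illustration, not the service's actual types:

```python
class HierarchyCache:
    """In-memory adjacency cache: twins are fetched once, then traversed locally."""

    def __init__(self):
        self._children = {}  # parent node ID -> list of child node IDs

    def load(self, relationships):
        """relationships: iterable of (parent_id, child_id) pairs, e.g. the
        result of one bulk relationship query against Azure Digital Twins."""
        self._children.clear()
        for parent, child in relationships:
            self._children.setdefault(parent, []).append(child)

    def subtree(self, root):
        """Return every node ID under root without further service round trips."""
        result, stack = [], [root]
        while stack:
            node = stack.pop()
            result.append(node)
            stack.extend(self._children.get(node, []))
        return result

# Usage: materialize a small graph, then traverse it from memory.
cache = HierarchyCache()
cache.load([("plant", "line1"), ("line1", "m1"), ("line1", "m2")])
nodes = cache.subtree("plant")
```

The cost of building the cache is paid once per refresh, so every subsequent traversal avoids the per-hop query latency that dominates large graph walks against the service.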
Azure Digital Twins is an IoT platform that creates digital representations of real-world things, places, processes, and people in the cloud.
Azure Digital Twins Explorer lets you connect to an Azure Digital Twins instance to understand, visualize, and modify your digital twin data.
Azure Kubernetes Service (AKS) offers serverless Kubernetes for running microservices, integrated continuous integration and continuous deployment (CI/CD), and enterprise-grade security and governance.
Azure App Service is a platform-as-a-service (PaaS) for building and hosting apps on managed virtual machines (VMs). App Service manages the underlying compute infrastructure that runs your apps.
Azure Data Explorer is a fast, fully managed data analytics service. Azure Data Explorer provides real-time analysis of large data volumes streaming from applications, websites, and IoT devices.
This solution uses AKS to run the microservices that query data from the connected services. You can also run the microservices in Azure Container Instances (ACI). ACI offers the fastest and simplest way to run a container in Azure, without having to adopt a higher-level service like AKS.
Instead of hosting the web application separately from the microservices, you can deploy the web app inside the AKS cluster. Then there's no need to introduce another service such as Azure App Service.
Consider using Azure Monitor to analyze and optimize the performance of the AKS cluster and other resources, and to monitor and diagnose networking issues.
This system design is intentionally simple, to avoid introducing more services or dependencies. Consider supporting the following functionality:
Change notifications. This solution implements cache synchronization by periodically polling Azure Digital Twins for changes. You can also use Azure Digital Twins event notifications to initiate a cache refresh and to notify downstream applications.
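The polling approach to cache synchronization can be sketched as follows. `fetch_version` and `refresh_cache` are hypothetical callables standing in for a cheap change marker query against Azure Digital Twins and the cache rebuild, respectively:

```python
import time

def poll_for_changes(fetch_version, refresh_cache, interval_seconds=60, max_polls=None):
    """Periodically compare a cheap change marker (for example, a node count or
    last-modified timestamp queried from Azure Digital Twins); rebuild the
    in-memory hierarchy cache only when the marker changes."""
    last = fetch_version()
    polls = 0
    while max_polls is None or polls < max_polls:
        time.sleep(interval_seconds)
        current = fetch_version()
        if current != last:
            refresh_cache()
            last = current
        polls += 1
```

An event-driven variant would instead subscribe to Azure Digital Twins event notifications and call the refresh routine when a change event arrives, trading polling latency for an extra messaging dependency.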
Telemetry data. The example doesn't use the Azure Digital Twins telemetry data processing capability. You can extend the solution to process telemetry data if the resulting data rates are compatible with the Azure Digital Twins service limits.
Integration with Azure Data Explorer. You can ingest data directly into a store that can manage manufacturing data rates. Azure Digital Twins/Azure Data Explorer joint queries via the Azure Digital Twins query plugin for Azure Data Explorer can provide contextualization.
This article explores a connected factory hierarchy service implementation.
A hierarchy service centrally defines the organization of production assets like machines within factories, from both an operational and maintenance point of view. Business stakeholders can use this information as a common data source for monitoring plant conditions or overall equipment effectiveness (OEE).
Production assets like machines are organized within factories in context-specific hierarchies. Machines can be organized by their physical location, maintenance requirements, or products. Individual stakeholders, processes, and IT systems have different definitions for asset hierarchies.
Multiple IT systems might define hierarchical structures redundantly. Information from ERP systems might be replicated across multiple applications. These redundancies can lead to inconsistencies, heterogeneous governance concepts, and missing correlations between master data and application-specific hierarchies.
Changes to hierarchical structures and the metadata that defines them are time-consuming. If an enterprise adds new machines or reorganizes a production line, it must apply and verify the changes manually in multiple places. Decentralized access control increases the need for manual processes and makes links between application-specific hierarchies difficult to establish. These issues affect business agility and scalability.
Another challenge is that individual sites or organizations might use different ERP systems, often for historical reasons such as acquisitions. Standardizing ERP systems might not be feasible within a reasonable time frame. This heterogeneous ERP landscape further complicates the integration of shop floor applications with ERP systems.
A hierarchy service addresses these problems by providing a centralized, consolidated, and consistent overall hierarchy definition for assets. Anytime an application needs to reference hierarchy data, it retrieves the latest definitions from the hierarchy service. Any changes to the hierarchy always reflect across all applications, without manual steps.
The service assigns every node in the hierarchy a system-defined unique identifier. This ID uniquely identifies items, such as a specific machine in a specific factory, across applications throughout an entire organization. The ID can also be added to telemetry data sent by machines, to contextualize that data based on the hierarchy.
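Attaching the hierarchy node ID to telemetry might look like the following sketch. The message shape and the `machine_to_node` lookup table are assumptions for illustration, not part of the service:

```python
def contextualize(telemetry, machine_to_node):
    """Hypothetical enrichment step: attach the hierarchy service's unique node
    ID to a telemetry message so downstream consumers can join the message
    against the hierarchy. 'machine_to_node' maps local machine names to IDs."""
    node_id = machine_to_node.get(telemetry["machine"])
    return {**telemetry, "hierarchyNodeId": node_id}

# Usage: enrich one message with its hierarchy context.
enriched = contextualize(
    {"machine": "press-7", "temperatureC": 81.5},
    {"press-7": "node-4711"},
)
```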
To maintain a separation of concerns, the hierarchy service only contains information about nodes, relationships, and references to corresponding master data. The system maintains actual master data records or application-specific parameters separately. A dedicated master data document service can provide master data records. A shop floor application can maintain parameters that are defined on a machine level. The hierarchy service remains lean and efficient, and avoids evolving into a parallel master data management system.
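A hypothetical node record illustrates this split: the hierarchy service keeps only identity, relationships, and a reference key, and resolves the actual master data on demand from a separate service. All names and the reference format below are invented for illustration:

```python
# Sketch of what the hierarchy service stores per node: identity,
# relationships, and a pointer into the master data system — never
# the master data record itself.
node = {
    "id": "node-4711",                    # system-defined unique identifier
    "type": "Machine",
    "parents": ["node-0815"],             # relationships to other nodes
    "erpReference": "SAP:EQUI:10001234",  # reference to the master data record
}

def master_data_for(node, master_data_service):
    """Resolve the reference on demand; the record itself lives elsewhere,
    for example in a dedicated master data document service."""
    return master_data_service.lookup(node["erpReference"])
```

Keeping only the reference means the hierarchy service never has to track master data schema changes, which is what keeps it from evolving into a parallel master data management system.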
The service provides access control to govern changes. Different views cover the needs of maintenance and operational perspectives. Business stakeholders can define and maintain the hierarchy by using a graphical UI, without involving IT personnel.
The hierarchy service acts as the single point of integration with ERP systems, decoupling the lifecycle of ERP systems from the hierarchy. Users can integrate with ERP systems through the graphical UI, bulk import, or an API that the hierarchy service provides.
Potential use cases
- Standardize asset organization across IT systems.
- Easily incorporate new machines or changes to production lines.
- Centrally manage different ERP systems within an enterprise.
- Identify machines that can fulfill a given order.
- Aggregate machine data.
These considerations implement the pillars of the Azure Well-Architected Framework, which is a set of guiding tenets that can be used to improve the quality of a workload. For more information, see Microsoft Azure Well-Architected Framework.
The following considerations apply to this solution:
Consider deploying AKS in availability zones. An AKS cluster distributes resources such as nodes and storage across logical sections of the underlying Azure infrastructure. Deploying AKS in availability zones ensures that nodes in one availability zone are physically separated from nodes defined in another availability zone. Multiple availability zones configured across an AKS cluster provide high availability by minimizing the chances that hardware failure or planned maintenance will disrupt service.
AKS services can scale up or out, manually or automatically. The cluster autoscaler can automatically scale an entire AKS cluster to meet application demand. The autoscaler watches for pods that can't be scheduled because of resource constraints, and increases the number of nodes in the node pool when it detects them.
Azure App Service can also scale up or out, manually or automatically.
Security provides assurances against deliberate attacks and the abuse of your valuable data and systems. For more information, see Overview of the security pillar.
Use role-based access control (RBAC) to restrict who can access and use the connected factory resources. Limit data access based on the user's identity or role. This solution uses Azure Active Directory (Azure AD) for identity and access control, and Azure Key Vault to manage keys and secrets.
To improve AKS security, apply and enforce built-in security policies by using Azure Policy. Azure Policy helps enforce organizational standards and assess compliance at scale. The Azure Policy Add-on for AKS can apply individual policy definitions or groups of policy definitions called initiatives to your cluster.
Cost optimization is about looking at ways to reduce unnecessary expenses and improve operational efficiencies. For more information, see Overview of the cost optimization pillar.
In general, use the Azure pricing calculator to estimate costs. Use the AKS calculator to estimate the cost of running AKS in Azure. See the Cost section in Microsoft Azure Well-Architected Framework to learn about other considerations.
This article is maintained by Microsoft. It was originally written by the following contributors.
- Max Zeier | Senior Technical Program Manager
- Industrial services on Azure Kubernetes
- Develop with Azure Digital Twins (Learning path)
- Introduction to Kubernetes on Azure (Learning path)