Networking is a fundamental area for a mission-critical application, given the recommended globally distributed active-active design approach.
This design area explores various network topology topics at an application level, considering requisite connectivity and redundant traffic management. More specifically, it highlights critical considerations and recommendations intended to inform the design of a secure and scalable global network topology for a mission-critical application.
Important
This article is part of the Azure Well-Architected mission-critical workload series. If you aren't familiar with this series, we recommend you start with what is a mission-critical workload?
The use of multiple active regional deployment stamps requires a global routing service to distribute traffic to each active stamp.
Azure Front Door, Azure Traffic Manager, and Azure Standard Load Balancer provide the needed routing capabilities to manage global traffic across a multi-region application.
Alternatively, third-party global routing technologies can be considered. These options can almost seamlessly be swapped in to replace or extend the use of Azure-native global routing services. Popular choices include routing technologies by CDN providers.
This section explores the key differences between the Azure routing services to define how each can be used to optimize different scenarios.
A routing service bound to a single region represents a single-point-of-failure and a significant risk with regard to regional outages.
If the application workload encompasses client control, such as with mobile or desktop client applications, it's possible to provide service redundancy within client routing logic.
Capability disparity between Azure Front Door and Traffic Manager means that if the two technologies are positioned alongside one another for redundancy, a different ingress path or design changes would be required to ensure a consistent and acceptable level of service is maintained.
Azure Front Door and Azure Traffic Manager are globally distributed services with built-in multi-region redundancy and availability.
Global routing service redundancy provides mitigation for an extremely small number of hypothetical failure scenarios, where the impact of a global outage is constrained to the routing service itself.
To provide broader redundancy to global outage scenarios, a multicloud active-active deployment approach can be considered. A multicloud active-active deployment approach introduces significant operational complexities, which pose significant resiliency risks, likely far outweighing the hypothetical risks of a global outage.
For scenarios where client control isn't possible, a dependency must be taken on a single global routing service to provide a unified entry point for all active deployment regions.
When client control isn't possible, operational mitigations can be considered to define a process for migrating to a secondary global routing service if a global outage disables the primary service. Migrating from one global routing service to another is typically a lengthy process lasting several hours, particularly where DNS propagation is considered.
Some third-party global routing services advertise a 100% SLA. However, the SLA historically attained by these services is typically lower than 100%.
Azure Front Door
Azure Front Door provides global HTTP/S load balancing and optimized connectivity using the Anycast protocol with split TCP to take advantage of the Microsoft global backbone network.
Provides a built-in cache that serves static content from edge nodes. In many use cases, this can also eliminate the need for a dedicated Content Delivery Network (CDN).
Azure Web Application Firewall (WAF) can be used on Azure Front Door, and since it's deployed to Azure network edge locations around the globe, every incoming request delivered by Front Door is inspected at the network edge.
Azure Front Door protects application endpoints against DDoS attacks using Azure DDoS protection Basic. Azure DDoS Standard provides additional and more advanced protection and detection capabilities and can be added as an additional layer to Azure Front Door.
Azure Front Door offers a fully managed certificate service, which enables TLS connection security for endpoints without having to manage the certificate lifecycle.
Azure Front Door Premium supports private endpoints, enabling traffic to flow from the internet directly onto Azure virtual networks. This eliminates the need for public IPs on the virtual network to make the backends accessible via Azure Front Door Premium.
Azure Front Door relies on health probes and backend health endpoints (URLs) that are called on an interval basis to return an HTTP status code reflecting if the backend is operating normally, with an HTTP 200 (OK) response reflecting a healthy status. As soon as a backend reflects an unhealthy status, from the perspective of a certain edge node, that edge node will stop sending requests there. Unhealthy backends are therefore transparently removed from traffic circulation without any delay.
Supports HTTP/S protocols only.
The Azure Front Door WAF and Application Gateway WAF provide a slightly different feature set, though both support built-in and custom rules and can be set to operate in either detection mode or prevention mode.
The Front Door backend IP space may change, but Microsoft will ensure integration with Azure IP Ranges and Service Tags. It's possible to subscribe to Azure IP Ranges and Service Tags to receive notifications about any changes or updates.
Azure Front Door supports various load distribution configurations:
By default, Azure Front Door uses latency-based routing, which can lead to situations where some backends receive far more incoming traffic than others, depending on where clients originate from.
If a series of client requests must be handled by the same backend, Session Affinity can be configured on the frontend. It uses a client-side cookie to send subsequent requests to the same backend as the first request, provided the backend is still available.
Azure Traffic Manager
Azure Traffic Manager is a DNS redirection service.
The DNS response is cached and reused by the client for a specified Time-To-Live (TTL) period, and requests made during this period will go directly to the backend endpoint without Traffic Manager interaction. This eliminates the extra connectivity step and provides cost benefits compared to Front Door.
Since the request is made directly from the client to the backend service, any protocol supported by the backend can be leveraged.
Similar to Azure Front Door, Azure Traffic Manager relies on health probes to understand if a backend is healthy and operating normally, with an HTTP 200 (OK) response reflecting a healthy status. If another value is returned, or no response is received, the routing service recognizes ongoing issues and stops routing requests to that specific backend.
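The following minimal sketch illustrates this DNS-based routing behavior. It assumes the dnspython package and a hypothetical Traffic Manager profile name; the TTL on the answer determines how long a client reuses the routing decision before querying Traffic Manager again.

```python
# pip install dnspython
import dns.resolver

# Hypothetical Traffic Manager profile; replace with your own DNS name.
profile_name = "contoso-mission-critical.trafficmanager.net"

answer = dns.resolver.resolve(profile_name, "CNAME")
for record in answer:
    print(f"Routed to backend endpoint: {record.target}")

# Clients cache and reuse this answer for the TTL period, so the TTL bounds
# how quickly traffic moves away from an endpoint that becomes unhealthy.
print(f"TTL (seconds): {answer.rrset.ttl}")
```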
Azure Standard Load Balancer
Important
Cross-Region Standard Load Balancer is available in preview with technical limitations. This option isn't recommended for mission-critical workloads.
Use Azure Front Door as the primary global traffic routing service for HTTP/S scenarios. Azure Front Door is strongly advocated for HTTP/S workloads as it provides optimized traffic routing, transparent failover, private backend endpoints (with the Premium SKU), edge caching, and integration with Web Application Firewall (WAF).
For application scenarios where client control is possible, apply client-side routing logic to consider failover scenarios where the primary global routing technology fails. Two or more global routing technologies should be positioned in parallel for added redundancy if a single-service SLA isn't sufficient. Client logic is required to route to the redundant technology in case of a global service failure.
This image shows a redundant global load balancer configuration with client failover using Azure Front Door as primary global load balancer.
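To illustrate the client-side failover logic described above, the following minimal sketch, assuming the requests package and hypothetical entry-point URLs, tries the primary Azure Front Door endpoint first and falls back to a redundant global routing service if it's unavailable.

```python
import requests

# Hypothetical global entry points: Azure Front Door as primary, with a
# redundant routing service (for example, a third-party CDN) as secondary.
GLOBAL_ENTRY_POINTS = [
    "https://primary.contoso.example",    # Azure Front Door
    "https://secondary.contoso.example",  # redundant global routing service
]

def call_api(path: str) -> requests.Response:
    """Try each global entry point in order, falling back on failure."""
    last_error = None
    for entry_point in GLOBAL_ENTRY_POINTS:
        try:
            response = requests.get(f"{entry_point}{path}", timeout=5)
            if response.status_code < 500:
                return response  # healthy (or client error) response
            last_error = RuntimeError(f"{entry_point} returned {response.status_code}")
        except requests.RequestException as error:
            last_error = error  # connection failure; try the next entry point
    raise RuntimeError("All global entry points failed") from last_error

# Example usage:
# response = call_api("/api/orders")
```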
Important
To truly mitigate the risk of global failures within the Azure platform, a multicloud active-active deployment approach should be considered, with active deployment stamps hosted across two or more cloud providers and redundant third-party routing technologies used for global routing.
Azure can effectively be integrated with other cloud platforms. However, it's strongly recommended not to apply a multicloud approach because it introduces significant operational complexity, with different deployment stamp definitions and representations of operational health across the different cloud platforms. This complexity in turn introduces numerous resiliency risks within the normal operation of the application, which far outweigh the hypothetical risks of a global platform outage.
It's recommended to consider a third-party global routing service in place of two Azure global routing services. This provides the maximum level of fault mitigation and a simpler design approach, since most industry-leading CDN providers offer edge capabilities largely consistent with those offered by Azure Front Door.
Azure Front Door
Use the Azure Front Door managed certificate service to enable TLS connections, and remove the need to manage certificate lifecycles.
Use the Azure Front Door Web Application Firewall (WAF) to provide protection at the edge from common web exploits and vulnerabilities, such as SQL injection.
Use the Azure Front Door built-in cache to serve static content from edge nodes.
Configure the application platform ingress points to validate incoming requests through header-based filtering using the X-Azure-FDID header, to ensure all traffic is flowing through the configured Front Door instance. Consider also configuring IP ACLing using Front Door service tags to validate that traffic originates from the Azure Front Door backend IP address space and Azure infrastructure services. This will ensure traffic flows through Azure Front Door at a service level, but header-based filtering will still be required to ensure the use of a configured Front Door instance; a minimal header-validation sketch is provided after these recommendations.
Define a custom TCP health endpoint to validate critical downstream dependencies within a regional deployment stamp, including data platform replicas, such as Azure Cosmos DB in the example provided by the foundational reference implementation. If one or more dependencies become unhealthy, the health probe should reflect this in the response returned so that the entire regional stamp can be taken out of circulation; a minimal health-endpoint sketch is also provided after these recommendations.
Ensure health probe responses are logged and ingest all operational data exposed by Azure Front Door into the global Log Analytics workspace to facilitate a unified data sink and single operational view across the entire application.
Unless the workload is extremely latency sensitive, spread traffic evenly across all considered regional stamps to most effectively use deployed resources.
Don't enable Session Affinity unless it's required by the application, since it can have a negative impact on the balance of traffic distribution. With a fully stateless application, if the recommended mission-critical application design approach is followed, any request could be handled by any of the regional deployments.
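The following minimal sketch illustrates the header-based filtering recommendation above. It assumes a Flask-based application backend and a placeholder Front Door ID; requests that didn't arrive through the configured Front Door instance are rejected.

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Front Door ID of the configured instance; the value below is a placeholder.
EXPECTED_FRONT_DOOR_ID = "00000000-0000-0000-0000-000000000000"

@app.before_request
def require_configured_front_door():
    # Azure Front Door adds the X-Azure-FDID header to forwarded requests.
    # Rejecting mismatches ensures traffic entered through the configured
    # instance rather than another Front Door profile or a direct call.
    if request.headers.get("X-Azure-FDID") != EXPECTED_FRONT_DOOR_ID:
        abort(403)

@app.route("/api/orders")
def orders():
    return {"status": "ok"}
```

Service tag based IP ACLs complement this check at the network layer, as described above.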
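Similarly, the following minimal sketch shows a health endpoint that aggregates downstream dependency checks, again assuming a Flask-based backend; the dependency checks are placeholders. A non-200 response allows the routing service to take the regional stamp out of circulation.

```python
from flask import Flask

app = Flask(__name__)

def check_cosmos_db() -> bool:
    # Placeholder: replace with a lightweight read against the regional replica.
    return True

def check_message_broker() -> bool:
    # Placeholder: replace with a check against the regional messaging service.
    return True

DEPENDENCY_CHECKS = {
    "cosmos-db": check_cosmos_db,
    "message-broker": check_message_broker,
}

@app.route("/healthz")
def health():
    results = {name: check() for name, check in DEPENDENCY_CHECKS.items()}
    healthy = all(results.values())
    # 200 keeps the stamp in circulation; 503 signals the global router to
    # stop sending traffic to this regional deployment stamp.
    return results, 200 if healthy else 503
```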
Azure Traffic Manager
Use Traffic Manager for non-HTTP/S scenarios as a replacement for Azure Front Door. Capability differences will drive different design decisions for cache and WAF capabilities, and TLS certificate management.
WAF capabilities should be considered within each region for the Traffic Manager ingress path, using Azure Application Gateway.
Configure a suitably low TTL value to optimize the time required to remove an unhealthy backend endpoint from circulation when that backend becomes unhealthy.
As with Azure Front Door, a custom TCP health endpoint should be defined to validate critical downstream dependencies within a regional deployment stamp, which should be reflected in the response provided by health endpoints.
However, for Traffic Manager additional consideration should be given to service-level regional failover, such as 'dog legging', to mitigate the potential delay associated with the removal of an unhealthy backend due to dependency failures, particularly if it's not possible to set a low TTL for DNS records.
Consideration should be given to third-party CDN providers in order to achieve edge caching when using Azure Traffic Manager as the primary global routing service. Where edge WAF capabilities are also offered by the third-party service, consideration should be given to simplifying the ingress path and potentially removing the need for Application Gateway.
The network ingress path for a mission-critical application must also consider application delivery services to ensure secure, reliable, and scalable ingress traffic.
This section builds on global routing recommendations by exploring key application delivery capabilities, considering relevant services such as Azure Standard Load Balancer, Azure Application Gateway, and Azure API Management.
TLS encryption is critical to ensure the integrity of inbound user traffic to a mission-critical application, with TLS offloading applied only at the point of a stamp's ingress to decrypt incoming traffic. TLS offloading requires the private key of the TLS certificate to decrypt traffic.
A Web Application Firewall provides protection against common web exploits and vulnerabilities, such as SQL injection or cross site scripting, and is essential to achieve the maximum reliability aspirations of a mission-critical application.
Azure WAF provides out-of-the-box protection against the top 10 OWASP vulnerabilities using managed rule sets.
Third-party WAF technologies such as NVAs and advanced ingress controllers within Kubernetes can also be considered to provide requisite vulnerability protection.
Optimal WAF configuration typically requires fine tuning, regardless of the technology used.
Azure Front Door
Azure Front Door only accepts HTTP and HTTPS traffic, and will only process requests with a known Host header. This protocol blocking helps to mitigate volumetric attacks spread across protocols and ports, and DNS amplification and TCP poisoning attacks.
Azure Front Door is a global Azure resource so configuration is deployed globally to all edge locations.
Azure Front Door provides both a fully managed certificate service and a bring-your-own-certificate method for the client-facing SSL certificates. The fully managed certificate service provides a simplified operational approach and helps to reduce complexity in the overall design by performing certificate management within a single area of the solution.
Azure Front Door auto-rotates "Managed" certificates at least 60 days ahead of certificate expiration to protect against expired certificate risks. If self-managed certificates are used, updated certificates should be deployed no later than 24 hours prior to expiration of the existing certificate, otherwise clients may receive expired certificate errors; a minimal expiry-check sketch follows these considerations.
Certificate updates will only result in downtime if Azure Front Door is switched between "Managed" and "Use Your Own Certificate".
Azure Front Door is protected by Azure DDoS Protection Basic, which is integrated into Front Door by default. This provides always-on traffic monitoring, real-time mitigation, and also defends against common Layer 7 DNS query floods or Layer 3/4 volumetric attacks.
Azure Front Door also provides WAF capabilities at a global traffic level, while Application Gateway WAF must be provided within each regional deployment stamp. Capabilities include firewall rulesets to protect against common attacks, geo-filtering, address blocking, rate limiting, and signature matching.
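As a lightweight guard against the expiry risk described above when self-managed certificates are used, the following minimal sketch, using only the Python standard library and a hypothetical hostname, reports how many days remain before the certificate served by an endpoint expires.

```python
import socket
import ssl
import time

def days_until_expiry(hostname: str, port: int = 443) -> float:
    """Return the number of days until the TLS certificate served by hostname expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            certificate = tls.getpeercert()
    expires_at = ssl.cert_time_to_seconds(certificate["notAfter"])
    return (expires_at - time.time()) / 86400

# Hypothetical endpoint; alert well before the 24-hour deployment deadline.
remaining = days_until_expiry("www.contoso.example")
print(f"Certificate expires in {remaining:.1f} days")
```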
Azure Load Balancer
The Azure Basic Load Balancer SKU isn't backed by an SLA and has several capability constraints compared to the Standard SKU.
Perform TLS Offloading in as few places as possible in order to maintain security whilst simplifying the certificate management lifecycle.
Use encrypted connections (e.g. HTTPS) from the point where TLS offloading occurs to the actual application backends. Application endpoints won't be visible to end users, so Azure-managed domains, such as azurewebsites.net or cloudapp.net, can be used with managed certificates.
For HTTP(S) traffic, ensure WAF capabilities are applied within the ingress path for all publicly exposed endpoints.
Enable WAF capabilities at a single service location, either globally with Azure Front Door or regionally with Azure Application Gateway, since this simplifies configuration fine tuning and optimizes performance and cost.
Configure WAF in Prevention mode to directly block attacks. Only use WAF in Detection mode (i.e. only logging but not blocking suspicious requests) when the performance penalty of Prevention mode is too high. The implied additional risk must be fully understood and aligned to the specific requirements of the workload scenario.
Prioritize the use of Azure Front Door WAF since it provides the richest Azure-native feature set and applies protections at the global edge, which simplifies the overall design and drives further efficiencies.
Use Azure API Management only when exposing a large number of APIs to external clients or different application teams.
Use the Azure Standard Load Balancer SKU for any internal traffic distribution scenario within microservice workloads.
Use Azure DDoS Network Protection to help protect public endpoints hosted within each application virtual network.
Special treatment of static content like images, JavaScript, CSS and other files can have a significant impact on the overall user experience as well as on the overall cost of the solution. Caching static content at the edge can speed up the client load times which results in a better user experience and can also reduce the cost for traffic, read operations and computing power on backend services involved.
For example, /static/* traffic can be transparently redirected to static content.
A mission-critical application will typically encompass requirements for integration with other applications or dependent systems, which could be hosted on Azure, another public cloud, or on-premises data centers. This application integration can be accomplished using public-facing endpoints and the internet, or private networks through network-level integration. Ultimately, the method by which application integration is achieved will have a significant impact on the security, performance, and reliability of the solution, and will strongly influence design decisions within other design areas.
A mission-critical application can be deployed within one of three overarching network configurations, which determines how application integration can occur at a network level.
Caution
When deploying within an Azure landing zone, configuration 1) should be deployed within an Online Landing Zone, while both 2) and 3) should be deployed within a Corp. Connected Landing Zone to facilitate network-level integration.
This section explores these network integration scenarios, layering in the appropriate use of Azure Virtual Networks and surrounding Azure networking services to ensure integration requirements are optimally satisfied.
The simplest design approach is to not deploy the application within a virtual network.
This design approach adopts "identity as a security perimeter" to provide access control between the various service components and dependent solutions. This may be an acceptable solution for scenarios that are less sensitive to security. However, making all application services and dependencies accessible through public endpoints leaves them vulnerable to additional attack vectors oriented around gaining unauthorized access.
This design approach is also not applicable for all Azure services, since many services, such as AKS, have a hard requirement for an underlying virtual network.
To mitigate the risks associated with unnecessary public endpoints, the application can be deployed within a standalone network that isn't connected to other networks.
Incoming client requests will still require a public endpoint to be exposed to the internet, however, all subsequent communication can be within the virtual network using private endpoints. When using Azure Front Door Premium, it's possible to route directly from edge nodes to private application endpoints.
While private connectivity between application components will occur over virtual networks, all connectivity with external dependencies will still rely on public endpoints.
For scenarios where there are no network integration requirements for external dependencies, deploying the solution within an isolated network environment provides maximum design flexibility, since there are no addressing and routing constraints associated with broader network integration.
Azure Bastion is a fully platform-managed PaaS service that can be deployed on a virtual network and provides secure RDP/SSH connectivity to Azure VMs. When you connect via Azure Bastion, virtual machines don't need a public IP address.
The use of application virtual networks introduces significant deployment complexities within CI/CD pipelines, since both data plane and control plane access to resources hosted on private networks is required to facilitate application deployments.
For scenarios with external network integration requirements, application virtual networks can be connected to other virtual networks within Azure, another cloud provider, or on-premises networks using a variety of connectivity options. For example, some application scenarios might consider application-level integration with other line-of-business applications hosted privately within an on-premises corporate network.
The application network design must align with the broader network architecture, particularly concerning topics such as addressing and routing.
Overlapping IP address spaces across Azure regions or on-premises networks will create major contention when network integration is considered.
When on-premises or cross-cloud network integration is required, Azure offers two different solutions to establish a secure connection.
Note
When deploying within an Azure landing zone, be aware that any required connectivity to on-premises networks should be provided by the landing zone implementation. The design can use ExpressRoute for on-premises connectivity and connect to other virtual networks in Azure using either Virtual WAN or a hub-and-spoke network design.
It's recommended that mission-critical solutions are deployed within Azure virtual networks where possible to remove unnecessary public endpoints, limiting the application attack surface to maximize security and reliability.
For application scenarios that don't require corporate network connectivity, treat all virtual networks as ephemeral resources that are replaced when a new regional deployment is conducted.
When connecting to other Azure or on-premises networks, application virtual networks shouldn't be treated as ephemeral, since doing so creates significant complications where virtual network peering and virtual network gateway resources are concerned. All relevant application resources within the virtual network should continue to be ephemeral, with parallel subnets used to facilitate blue-green deployments of updated regional deployment stamps.
In scenarios where corporate network connectivity is required to facilitate application integration over private networks, ensure that the IPv4 address space used for regional application virtual networks doesn't overlap with other connected networks and is properly sized to facilitate required scale without needing to update the virtual network resource and incur downtime; a minimal address-plan validation sketch follows these recommendations.
Prioritize the use of the Azure CNI network plugin for AKS network integration, since it supports a richer feature set, and consider Kubenet for scenarios with a limited range of available IP addresses to fit the application within a constrained address space. See Micro-segmentation and Kubernetes Network Policies for more details.
For scenarios requiring on-premises network integration, prioritize the use of ExpressRoute to ensure secure and scalable connectivity.
Ensure all components on critical network paths are in line with the reliability and availability requirements of associated user flows, regardless of whether the management of these paths and associated components is delivered by the application team or central IT teams.
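The following minimal sketch, based on a hypothetical address plan, uses the Python standard library to confirm that regional application virtual network ranges don't overlap with already-connected networks and to surface how many addresses each range provides.

```python
import ipaddress

# Hypothetical address plan: regional application virtual networks plus the
# corporate and partner ranges they'll be connected to.
regional_vnets = {
    "eastus2-stamp": ipaddress.ip_network("10.10.0.0/16"),
    "westeurope-stamp": ipaddress.ip_network("10.11.0.0/16"),
}
connected_networks = [
    ipaddress.ip_network("10.0.0.0/14"),     # on-premises corporate range
    ipaddress.ip_network("192.168.0.0/16"),  # partner network
]

for name, vnet in regional_vnets.items():
    print(f"{name}: {vnet} ({vnet.num_addresses} addresses)")
    overlaps = [str(other) for other in connected_networks if vnet.overlaps(other)]
    if overlaps:
        print(f"  WARNING: overlaps connected networks {overlaps}")
```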
Note
When deploying within an Azure landing zone and integrating with a broader organizational network topology, consider the network guidance to ensure the foundational network is aligned with Microsoft best-practices.
Use Azure Bastion or proxied private connections to access the data plane of Azure resources or perform management operations.
Internet egress is a foundational network requirement for a mission-critical application, facilitating outbound communication with external systems and dependencies.
This section explores how internet egress can be achieved while ensuring security, reliability, and sustainable performance are maintained, highlighting key egress requirements for services recommended in a mission-critical context.
Many Azure services require access to public endpoints for various management and control plane functions to operate as intended.
Azure provides different direct internet outbound connectivity methods, such as Azure NAT gateway or Azure Load Balancer, for virtual machines or compute instances on a virtual network.
When traffic from inside a virtual network travels out to the Internet, Network Address Translation (NAT) must take place. This is a compute operation that occurs within the networking stack and that can therefore impact system performance.
When NAT takes place at a small scale, the performance impact should be negligible. However, if there's a large number of outbound requests, network issues may occur. These issues typically come in the form of 'Source NAT (SNAT) port exhaustion'.
In a multitenant environment, such as Azure App Service, there's a limited number of outbound ports available to each instance. If these ports run out, no new outbound connections can be initiated. This issue can be mitigated by reducing the number of private/public edge traversals or by using a more scalable NAT solution such as Azure NAT Gateway; a minimal connection-reuse sketch follows these considerations.
In addition to NAT limitations, outbound traffic may also be subject to requisite security inspections.
Azure Firewall provides appropriate security capabilities to secure network egress.
Azure Firewall (or an equivalent NVA) can be used to secure Kubernetes egress requirements by providing granular control over outbound traffic flows.
Large volumes of internet egress will incur data transfer charges.
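One common way to reduce SNAT port consumption, complementing the mitigations above, is to reuse outbound connections instead of opening a new one per request. The following minimal sketch assumes the requests package and a hypothetical external dependency endpoint.

```python
import requests
from requests.adapters import HTTPAdapter

# A shared, pooled session keeps a small number of long-lived outbound
# connections, so many requests reuse the same SNAT ports instead of each
# request consuming a new one.
session = requests.Session()
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=10)
session.mount("https://", adapter)

def fetch_quote() -> int:
    # Hypothetical external dependency endpoint.
    response = session.get("https://api.partner.example/quotes", timeout=5)
    return response.status_code
```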
Azure NAT Gateway
Azure NAT Gateway supports 64,000 connections for TCP and UDP per assigned outbound IP address.
NAT gateway can't provide zonal isolation out-of-the-box. To get zonal redundancy, a subnet containing zonal resources must be aligned with corresponding zonal NAT gateways.
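The following back-of-the-envelope sketch uses the 64,000 SNAT ports per assigned outbound IP noted above to estimate how many outbound IP addresses a NAT gateway needs; the peak concurrency figure is an assumed workload input.

```python
import math

SNAT_PORTS_PER_OUTBOUND_IP = 64_000       # per assigned IP, as noted above
peak_concurrent_outbound_flows = 150_000  # assumed workload figure

required_outbound_ips = math.ceil(peak_concurrent_outbound_flows / SNAT_PORTS_PER_OUTBOUND_IP)
print(f"Outbound public IPs to assign to the NAT gateway: {required_outbound_ips}")
```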
Minimize the number of outgoing Internet connections as this will impact NAT performance.
Use Azure Firewall where requirements to control and inspect outbound internet traffic exist.
Note
When deploying within an Azure landing zone, consider using the foundational platform Azure Firewall resource (or equivalent NVA). If a dependency is taken on a central platform resource for internet egress, then the reliability level of that resource and associated network path should be closely aligned with application requirements. Operational data from the resource should also be made available to the application in order to inform potential operational action in failure scenarios.
If there are high-scale requirements associated with outbound traffic, consideration should be given to a dedicated Azure Firewall resource for a mission-critical application, to mitigate risks associated with using a centrally shared resource, such as noisy neighbor scenarios.
While the application design strongly advocates independent regional deployment stamps, many application scenarios may still require network integration between application components deployed within different zones or Azure regions, even if only under degraded service circumstances. The method by which inter-zone and inter-region communication is achieved has a significant bearing on overall performance and reliability, which will be explored through the considerations and recommendations within this section.
The application design approach for a mission-critical application endorses the use of independent regional deployments with zonal redundancy applied at all component levels within a single region.
An Availability Zone (AZ) is a physically separate data center location within an Azure region, providing physical and logical fault isolation up to the level of a single data center.
A round-trip latency of less than 2 ms is guaranteed for inter-zone communication. Zones will have a small latency variance given varied distances and fiber paths between zones.
Availability Zone connectivity depends on regional characteristics, and therefore traffic entering a region via an edge location may need to be routed between zones to reach its destination. This will add a ~1ms-2ms latency given inter-zone routing and 'speed of light' constraints, but this should only have a bearing for hyper sensitive workloads.
Availability Zones are treated as logical entities within the context of a single subscription, so different subscriptions might have a different zonal mapping for the same region. For example, zone 1 in Subscription A could correspond to the same physical data center as zone 2 in subscription B.
With application scenarios that are extremely chatty between application components, spreading application tiers across zones can introduce significant latency and increased costs. It's possible to mitigate this within the design by constraining a deployment stamp to a single zone and deploying multiple stamps across the different zones.
Communication between different Azure regions incurs a larger data transfer charge per GB of bandwidth.
ExpressRoute and VPN connectivity methods can also be used to directly connect different Azure regions together for certain scenarios, or even different cloud platforms.
For service-to-service communication, Private Link can be used for direct communication using private endpoints.
Traffic can be hair-pinned through ExpressRoute circuits used for on-premises connectivity in order to facilitate routing between virtual networks within an Azure region and across different Azure regions within the same geography.
When submillisecond latency is required between services, Proximity Placement Groups can be used when supported by the services used.
Use virtual network peering to connect networks within a region and across different regions. It's strongly recommended to avoid hair-pinning through ExpressRoute.
Use Private Link to establish communication directly between services in the same region or across regions (a service in Region A communicating with a service in Region B).
For application workloads that are extremely chatty between components, consider constraining a deployment stamp to a single zone and deploying multiple stamps across the different zones. This ensures zonal redundancy is maintained at the level of an encapsulated deployment stamp rather than a single application component.
Where possible, treat each deployment stamp as independent and disconnected from other stamps.
For hyper latency sensitive application scenarios, prioritize the use of zones with regional network gateways to optimize network latency for ingress paths.
Micro-segmentation is a network security design pattern used to isolate and secure individual application workloads, with policies applied to limit network traffic between workloads based on a Zero Trust model. It's typically applied to reduce network attack surface, improve breach containment, and strengthen security through policy-driven application-level network controls.
A mission-critical application can enforce application-level network security using Network Security Groups (NSG) at either a subnet or network interface level, service Access Control Lists (ACL), and network policies when using Azure Kubernetes Service (AKS).
This section explores the optimal use of these capabilities, providing key considerations and recommendations to achieve application-level micro-segmentation.
AKS can be deployed using two different networking models: Kubenet networking and Azure CNI networking.
Note
Azure CNI requires more IP address space compared to Kubenet. Proper upfront planning and sizing of the network is required. For more information, refer to the Azure CNI documentation.
By default, pods are non-isolated: they accept traffic from any source and can send traffic to any destination, and a pod can communicate with every other pod in a given Kubernetes cluster. Kubernetes doesn't ensure any network-level isolation and doesn't isolate namespaces at the cluster level.
Communication between Pods and Namespaces can be isolated using Network Policies. Network Policy is a Kubernetes specification that defines access policies for communication between Pods. Using Network Policies, an ordered set of rules can be defined to control how traffic is sent/received, and applied to a collection of pods that match one or more label selectors.
AKS supports the creation of different node pools to separate different workloads using nodes with different hardware and software characteristics, such as nodes with and without GPU capabilities.
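The following minimal sketch, assuming the official kubernetes Python client and hypothetical pod labels, creates a Network Policy that only permits ingress to one set of pods from another on a single port, as described above.

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

# Hypothetical labels: only allow pods labeled app=frontend to reach pods
# labeled app=orders on TCP port 8080 within the "workload" namespace.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-orders", namespace="workload"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "orders"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                    )
                ],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="workload", body=policy)
```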
Configure an NSG on all considered subnets to provide an IP ACL to secure ingress paths and isolate application components based on a Zero Trust model.
Use Front Door Service Tags within NSGs on all subnets containing application backends defined within Azure Front Door, since this will validate traffic originates from a legitimate Azure Front Door backend IP address space. This will ensure traffic flows through Azure Front Door at a service level, but header based filtering will still be required to ensure the use of a particular Front Door instance and to also mitigate 'IP spoofing' security risks.
Public internet traffic should be disabled on RDP and SSH ports across all applicable NSGs.
Prioritize the use of the Azure CNI network plugin and consider Kubenet for scenarios with a limited range of available IP addresses to fit the application within a constrained address space.
The Network Policy feature in Kubernetes should be used to define rules for ingress and egress traffic between pods in a cluster. Define granular Network Policies to restrict and limit cross-pod communication.
Review the considerations for quantifying and observing application health.