The sustainable software engineering principles are a set of competencies to help you define, build, and run sustainable applications. The overall goal is to reduce the carbon footprint in every aspect of your application. The Azure Well-Architected Framework guidance for sustainability aligns with The Principles of Sustainable Software Engineering from the Green Software Foundation and provides an overview of the principles of sustainable software engineering.
Sustainable software engineering is a shift in priorities and focus. In many cases, software is designed and run to prioritize fast performance and low latency. Sustainable software engineering instead focuses on reducing carbon emissions as much as possible.
The following guidance focuses on services you're building or operating on Azure with Azure Kubernetes Service (AKS). This article includes design and configuration checklists, recommended design practices, and configuration options. Before applying sustainable software engineering principles to your application, review its priorities, needs, and trade-offs.
Sustainability is a shared responsibility between the cloud provider and the customer or partner designing and deploying AKS clusters on the platform. Deploying AKS doesn't automatically make it sustainable, even if the data centers are optimized for sustainability. Applications that aren't properly optimized may still emit more carbon than necessary.
Learn more about the shared responsibility model for sustainability.
Carbon Efficiency: Emit the least amount of carbon possible.
A carbon-efficient cloud application is one that's optimized, and the starting point is cost optimization.
Energy Efficiency: Use the least amount of energy possible.
One way to increase energy efficiency is to run the application on as few servers as possible, with those servers running at the highest utilization rate, which also increases hardware efficiency.
Hardware Efficiency: Use the least amount of embodied carbon possible.
There are two main approaches to hardware efficiency: for end-user devices, extending the lifespan of the hardware; for cloud computing, increasing the utilization of the resources.
Carbon Awareness: Do more when the electricity is cleaner and do less when the electricity is dirtier.
Being carbon aware means responding to shifts in carbon intensity by increasing or decreasing your demand.
Before reviewing the detailed recommendations in each of the design areas, we recommend you carefully consider the following design patterns for building sustainable workloads on AKS:
Explore this section to learn how to optimize your applications for a more sustainable design.
A microservice architecture may reduce the compute resources required, as it allows for independent scaling of its logical components and ensures they're scaled according to demand.
When you scale your workload based on relevant business metrics, such as HTTP requests, queue length, and cloud events, you can help reduce resource utilization and carbon emissions.
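One way to do this on AKS is the Kubernetes Event-driven Autoscaling (KEDA) add-on, which can drive scaling from a queue length rather than CPU alone. The following is a minimal sketch; the resource group, cluster, deployment, and queue names are hypothetical placeholders, and the ScaledObject assumes a Service Bus connection string is available to the workload through an environment variable.

```bash
# Enable the KEDA add-on on an existing cluster (placeholder names).
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-keda

# Scale a hypothetical deployment on Azure Service Bus queue length,
# allowing it to scale to zero when there is no work to do.
kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler
spec:
  scaleTargetRef:
    name: orders-api          # hypothetical Deployment to scale
  minReplicaCount: 0          # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders
        messageCount: "50"    # target messages per replica
        connectionFromEnv: SERVICEBUS_CONNECTION
EOF
```

Scaling to zero when the queue is empty is what delivers most of the energy savings here; a CPU-based Horizontal Pod Autoscaler alone can't reach zero replicas.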
Removing state from your design reduces the in-memory or on-disk data required by the workload to function.
Explore this section to learn how to make better informed platform-related decisions around sustainability.
An up-to-date cluster avoids unnecessary performance issues and ensures you benefit from the latest performance improvements and compute optimizations.
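For example, you can check for available Kubernetes upgrades and apply one, or opt in to an automatic upgrade channel, with the Azure CLI. The resource names below are placeholders and the version is only illustrative.

```bash
# List available upgrades for the cluster (placeholder names).
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade to a specific version (pick one returned by get-upgrades),
# or let AKS follow the stable channel automatically.
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.30.3
az aks update  --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable
```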
Add-ons and extensions covered by the AKS support policy provide further supported functionalities to your cluster while allowing you to benefit from the latest performance improvements and energy optimizations throughout your cluster lifecycle.
Containers reduce unnecessary resource allocation and make better use of the resources deployed, as they allow for bin packing and require fewer compute resources than virtual machines.
Ampere's Cloud Native Processors are uniquely designed to meet both the high performance and power efficiency needs of the cloud.
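As a sketch, you might add an Arm64 node pool backed by Ampere Altra-based VM sizes (for example, the Dpsv5 series) to an existing cluster; the resource names are placeholders.

```bash
# Add an Arm64 node pool on Ampere Altra-based VMs (placeholder names).
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name armpool \
  --node-count 2 \
  --node-vm-size Standard_D4ps_v5
```

Workload images must be built for linux/arm64 (for example, as multi-architecture images) before they can be scheduled onto this pool.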
An oversized cluster doesn't maximize utilization of compute resources and can lead to wasted energy. Separate your applications into different node pools to allow for cluster right-sizing and independent scaling according to application requirements. As you run out of capacity in your AKS cluster, burst from AKS to Azure Container Instances (ACI) to scale extra pods out to serverless nodes and ensure your workload uses all the allocated resources efficiently.
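A minimal sketch of both ideas, assuming placeholder names and an Azure CNI cluster with a dedicated subnet available for virtual nodes:

```bash
# Give a workload profile its own autoscaling node pool so it can be right-sized independently.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name batchpool \
  --node-vm-size Standard_D8s_v5 \
  --enable-cluster-autoscaler --min-count 1 --max-count 5

# Burst extra pods to serverless, ACI-backed virtual nodes when the cluster runs out of capacity.
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons virtual-node \
  --subnet-name myVirtualNodeSubnet
```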
Workloads may not need to run continuously and could be turned off to reduce energy waste and carbon emissions. You can completely turn off (stop) your node pools in your AKS cluster, allowing you to also save on compute costs.
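For example, a non-production node pool or an entire development cluster can be stopped outside working hours and started again when needed (placeholder names):

```bash
# Stop and later restart a single node pool.
az aks nodepool stop  --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name batchpool
az aks nodepool start --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name batchpool

# Or stop the whole cluster, including the control plane, and start it again later.
az aks stop  --resource-group myResourceGroup --name myAKSCluster
az aks start --resource-group myResourceGroup --name myAKSCluster
```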
Explore this section to set up your environment for measuring and continuously improving your workload's cost and carbon efficiency.
You should identify and delete any unused resources, such as unreferenced images and storage resources, as they have a direct impact on hardware and energy efficiency. To ensure continuous energy optimization, you must treat identifying and deleting unused resources as a process rather than a point-in-time activity.
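As one example of treating this as a recurring process rather than a one-off cleanup, the Azure Container Registry `acr purge` task can delete stale and untagged images on a schedule; the registry and repository names below are placeholders, and a pass with `--dry-run` added to the purge command is worth doing first.

```bash
# One-off: delete untagged manifests and tags older than 30 days from a repository (placeholders).
az acr run --registry myRegistry \
  --cmd "acr purge --filter 'myapp:.*' --untagged --ago 30d" /dev/null

# Recurring: run the same purge every night at 01:00.
az acr task create --name nightlyPurge --registry myRegistry \
  --cmd "acr purge --filter 'myapp:.*' --untagged --ago 30d" \
  --schedule "0 1 * * *" --context /dev/null
```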
Getting the right information and insights at the right time is important for producing reports about performance and resource utilization.
Explore this section to learn how to design a more sustainable data storage architecture and optimize existing deployments.
The data retrieval and data storage operations can have a significant impact on both energy and hardware efficiency. Designing solutions with the correct data access pattern can reduce energy consumption and embodied carbon.
Explore this section to learn how to enhance and optimize network efficiency to reduce unnecessary carbon emissions.
The distance from a data center to users has a significant impact on energy consumption and carbon emissions. Shortening the distance a network packet travels improves both your energy and carbon efficiency.
Placing nodes in a single region or a single availability zone reduces the physical distance between the instances. However, for business-critical workloads, you need to ensure your cluster is spread across multiple availability zones, which may result in more network traversal and an increase in your carbon footprint.
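For instance, a business-critical cluster can spread its nodes across zones at creation time, while a non-critical cluster can stay in a single zone; the names below are placeholders.

```bash
# Business-critical: spread nodes across three availability zones
# (accepting more cross-zone traffic in exchange for resilience).
az aks create --resource-group myResourceGroup --name myAKSCluster \
  --node-count 3 --zones 1 2 3 --generate-ssh-keys
```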
A service mesh deploys extra containers for communication, typically in a sidecar pattern, to provide more operational capabilities, which leads to an increase in CPU usage and network traffic. Nevertheless, it allows you to decouple your application from these capabilities as it moves them out from the application layer and down to the infrastructure layer.
Sending and storing all logs from all possible sources (workloads, services, diagnostics, and platform activity) can increase storage and network traffic, which impacts costs and carbon emissions.
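One way to trim what Container insights collects, assuming the agent's documented `container-azm-ms-agentconfig` ConfigMap schema (verify the keys against the current documentation), is to exclude noisy namespaces from stdout and stderr collection; `dev-sandbox` is a hypothetical namespace.

```bash
# Reduce collected container logs by excluding selected namespaces.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  schema-version: v1
  config-version: ver1
  log-data-collection-settings: |-
    [log_collection_settings]
      [log_collection_settings.stdout]
        enabled = true
        exclude_namespaces = ["kube-system", "dev-sandbox"]
      [log_collection_settings.stderr]
        enabled = true
        exclude_namespaces = ["kube-system", "dev-sandbox"]
EOF
```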
Using a content delivery network (CDN) is a sustainable approach to optimizing network traffic because it reduces data movement across the network. A CDN minimizes latency by storing frequently read static data closer to users, which helps reduce network traffic and server load.
Explore this section to learn more about the recommendations leading to a sustainable, right-sized security posture.
Transport Layer Security (TLS) ensures that all data passed between the web server and web browsers remains private and encrypted. However, terminating and re-establishing TLS increases CPU utilization and might be unnecessary in certain architectures. A balanced level of security can offer a more sustainable and energy-efficient workload, while a higher level of security may increase compute resource requirements.
Azure Front Door and Application Gateway help manage traffic from web applications, while Azure Web Application Firewall provides protection against OWASP top 10 attacks and sheds traffic from bad bots at the network edge. These capabilities help remove unnecessary data transmission and reduce the load on the cloud infrastructure, with lower bandwidth and fewer infrastructure requirements.
Many attacks on cloud infrastructure seek to misuse deployed resources for the attacker's direct gain, leading to an unnecessary spike in usage and cost. Vulnerability scanning tools help minimize the window of opportunity for attackers and mitigate any potential malicious usage of resources.
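For example, Microsoft Defender for Containers can be enabled on an existing cluster to add vulnerability scanning and runtime threat detection; the resource names below are placeholders.

```bash
# Enable the Defender profile on an existing AKS cluster (placeholder names).
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-defender
```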