This tutorial shows you how to migrate Oracle WebLogic Server (WLS) to Azure Kubernetes Service (AKS) and configure automatic horizontal scaling based on Prometheus metrics.
In this tutorial, you accomplish the following tasks:
- Deploy WLS on AKS by using the Oracle WebLogic Server on AKS offer, including the WebLogic Monitoring Exporter, Azure Monitor managed service for Prometheus, and KEDA.
- Configure a KEDA scaler rule based on WLS metrics stored in the Azure Monitor workspace.
- Trigger autoscaling and observe the scale-up and scale-down behavior.
- Clean up resources.
The following diagram illustrates the architecture you build:
The Oracle WebLogic Server on AKS offer runs a WLS operator and a WLS domain on AKS. The WLS operator manages a WLS domain deployed using a model in image domain source type. To learn more about the WLS operator, see Oracle WebLogic Kubernetes Operator.
The WebLogic Monitoring Exporter scrapes WebLogic Server metrics and feeds them to Prometheus. The exporter uses the WebLogic Server 12.2.1.x RESTful Management Interface for accessing runtime state and metrics.
The Azure Monitor managed service for Prometheus collects and saves metrics from WLS at scale using a Prometheus-compatible monitoring solution, based on the Prometheus project from the Cloud Native Computing Foundation. For more information, see Azure Monitor managed service for Prometheus.
This article integrates KEDA with your AKS cluster to scale the WLS cluster based on Prometheus metrics from the Azure Monitor workspace. KEDA monitors the Azure Monitor managed service for Prometheus and feeds that data to AKS and the Horizontal Pod Autoscaler (HPA) to drive rapid scaling of the WLS workload.
The following WLS state and metrics are exported by default. You can configure the exporter to export other metrics on demand. For a detailed description of WebLogic Monitoring Exporter configuration and usage, see WebLogic Monitoring Exporter.
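To give a sense of how the exporter is configured, the following fragment is an illustrative sketch loosely modeled on the exporter's YAML configuration format. The keys and values shown are assumptions for illustration; the actual configuration provisioned by the offer may differ.
metricsNameSnakeCase: true
queries:
- applicationRuntimes:
    key: name
    keyName: app
    componentRuntimes:
      type: WebAppComponentRuntime
      prefix: webapp_config_
      key: name
      values: [deploymentState, contextRoot, openSessionsCurrentCount, openSessionsHighCount, sessionsOpenedTotalCount]
With snake-case naming and the webapp_config_ prefix, openSessionsCurrentCount is exposed as webapp_config_open_sessions_current_count with an app label identifying the application, which is the metric name and label used later in this article.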
Ensure the Azure identity you use to sign in and complete this article has either the Owner role or both the Contributor and User Access Administrator roles in the subscription. You can verify the assignment by following the steps in List Azure role assignments using the Azure portal.
This article uses testwebapp from the weblogic-kubernetes-operator repository as a sample application.
Use the following commands to download the prebuilt sample app and expand it into a directory. Because this article writes several files, these commands create a top level directory to contain everything.
export BASE_DIR=$PWD/wlsaks
mkdir $BASE_DIR && cd $BASE_DIR
curl -L -o testwebapp.war https://aka.ms/wls-aks-testwebapp
unzip -d testwebapp testwebapp.war
This article uses the metric openSessionsCurrentCount to scale up and scale down the WLS cluster. By default, the session timeout on WebLogic Server is 60 minutes. To observe the scaling down capability quickly, use the following steps to set a short timeout:
Use the following command to specify a session timeout of 150 seconds using wls:timeout-secs. The HEREDOC format is used to overwrite the file at testwebapp/WEB-INF/weblogic.xml with the desired content.
cat <<EOF > testwebapp/WEB-INF/weblogic.xml
<?xml version="1.0" encoding="UTF-8"?>
<wls:weblogic-web-app xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">
    <wls:weblogic-version>12.2.1</wls:weblogic-version>
    <wls:jsp-descriptor>
        <wls:keepgenerated>false</wls:keepgenerated>
        <wls:debug>false</wls:debug>
    </wls:jsp-descriptor>
    <wls:context-root>testwebapp</wls:context-root>
    <wls:session-descriptor>
        <wls:timeout-secs>150</wls:timeout-secs>
    </wls:session-descriptor>
</wls:weblogic-web-app>
EOF
Use the following command to rezip the sample app:
cd testwebapp && zip -r ../testwebapp.war * && cd ..
Use the following steps to create a storage account and container. Some of these steps direct you to other guides. After completing the steps, you can upload a sample application to deploy on WLS.
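If you prefer the Azure CLI, the following is a minimal sketch of the storage setup. The resource group, storage account, and container names are placeholders that you replace with your own values.
# Assumed placeholder names - replace with your own values.
export STORAGE_RG=<storage-resource-group-name>
export STORAGE_ACCOUNT=<unique-storage-account-name>

# Create a resource group, a storage account, and a blob container.
az group create --name ${STORAGE_RG} --location eastus
az storage account create --resource-group ${STORAGE_RG} --name ${STORAGE_ACCOUNT} --sku Standard_LRS --kind StorageV2
# --auth-mode login requires a data-plane role such as Storage Blob Data Contributor on the account.
az storage container create --account-name ${STORAGE_ACCOUNT} --name testwebapp-container --auth-mode login

# Upload the sample app that you rezipped earlier.
az storage blob upload --account-name ${STORAGE_ACCOUNT} --container-name testwebapp-container --name testwebapp.war --file testwebapp.war --auth-mode login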
In this section, you create a WLS cluster on AKS using the Oracle WebLogic Server on AKS offer. The offer provides a full feature set for easily deploying WebLogic Server on AKS. This article focuses on the advanced dynamic scaling capabilities of the offer. For more information about the offer, see Deploy a Java application with WebLogic Server on an Azure Kubernetes Service (AKS) cluster. For the complete reference documentation for the offer, see the Oracle documentation.
This offer implements the following choices for horizontal autoscaling:
Kubernetes Metrics Server. This choice sets up all necessary configuration at deployment time. A horizontal pod autoscaler (HPA) is deployed with a choice of metrics. You can further customize the HPA after deployment.
WebLogic Monitoring Exporter. This choice automatically provisions WebLogic Monitoring Exporter, Azure Monitor managed service for Prometheus, and KEDA. After the offer deployment completes, the WLS metrics are exported and saved in the Azure Monitor workspace. KEDA is installed with the ability to retrieve metrics from the Azure Monitor workspace.
With this option, you must take more steps after deployment to complete the configuration.
This article describes the second option. Use the following steps to complete the configuration:
Open the Oracle WebLogic Server on AKS offer in your browser and select Create. You should see the Basics pane of the offer.
Use the following steps to fill out the Basics pane:
Select Next and go to the AKS tab.
Under Image selection, use the following steps:
Under Application, use the following steps:
In the Application section, next to Deploy an application?, select Yes.
Next to Application package (.war,.ear,.jar), select Browse.
Start typing the name of the storage account from the preceding section. When the desired storage account appears, select it.
Select the storage container from the preceding section.
Select the checkbox next to testwebapp.war, which you uploaded in the previous section. Select Select.
Select Next.
Leave the default values in the TLS/SSL Configuration pane. Select Next to go to the Load Balancing pane, then use the following steps:
Leave the default values for the DNS pane, then select Next to go to the Database pane.
Leave the default values for the Database pane, select Next to go to the Autoscaling pane, then use the following steps:
Wait until Running final validation... successfully completes, then select Create. After a while, you should see the Deployment page, which shows Deployment is in progress.
If you see any problems during Running final validation..., fix them and try again.
The following sections require a terminal with kubectl installed to manage the WLS cluster. To install kubectl locally, use the az aks install-cli command.
Use the following steps to connect to the AKS cluster:
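For example, connecting typically looks like the following sketch. The resource group and AKS cluster names are placeholders for the values from your deployment.
# Placeholders - replace with the resource group and AKS cluster name from your deployment.
az aks install-cli
az aks get-credentials --resource-group <wls-resource-group-name> --name <aks-cluster-name> --overwrite-existing

# Verify the connection and confirm that the WLS domain and KEDA pods are running.
kubectl get nodes
kubectl get pods -n sample-domain1-ns
kubectl get pods --all-namespaces | grep keda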
Use the following steps to see metrics in the Azure Monitor workspace using Prometheus Query Language (PromQL) queries:
In the Azure portal, view the resource group you used in the Deploy WLS on AKS using the Azure Marketplace offer section.
Select the resource of type Azure Monitor workspace.
Under Managed Prometheus, select Prometheus explorer.
Input webapp_config_open_sessions_current_count to query the current count of open sessions, as shown in the following screenshot:
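You can also run the aggregate query that the KEDA scaler uses later in this article. The app="app1" label assumes the default application name assigned by the offer.
webapp_config_open_sessions_current_count
sum(webapp_config_open_sessions_current_count{app="app1"})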
Note
You can use the following command to access the metrics by exposing the WebLogic Monitoring Exporter:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: sample-domain1-cluster-1-exporter
  namespace: sample-domain1-ns
spec:
  ports:
  - name: default
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    weblogic.domainUID: sample-domain1
    weblogic.clusterName: cluster-1
  sessionAffinity: None
  type: LoadBalancer
EOF
kubectl get svc -n sample-domain1-ns -w
Wait for the EXTERNAL-IP column in the row for sample-domain1-cluster-1-exporter to switch from <pending> to an IP address. Then, open the URL http://<exporter-public-ip>:8080/metrics in a browser and sign in with the credentials you specified when deploying the offer. Here, you can find all the available metrics. You can input any of these in the PromQL window to display them in Azure Monitor. For example, heap_free_percent shows an interesting graph. To watch the memory pressure as the load is applied to the application, set Auto refresh and Time range to the smallest possible interval and leave the tab open.
Scalers define how and when KEDA should scale a deployment. This article uses the Prometheus scaler to retrieve Prometheus metrics from the Azure Monitor workspace.
This article uses openSessionsCurrentCount as the trigger. The rule for this metric is as follows: when the average open session count is more than 10, scale up the WLS cluster until it reaches the maximum replica size; otherwise, scale down the WLS cluster until it reaches its minimum replica size. The following table lists the important parameters:
| Parameter name | Value |
|---|---|
| serverAddress | The Query endpoint of your Azure Monitor workspace. |
| metricName | webapp_config_open_sessions_current_count |
| query | sum(webapp_config_open_sessions_current_count{app="app1"}) |
| threshold | 10 |
| minReplicaCount | 1 |
| maxReplicaCount | The default value is 5. If you modified the maximum cluster size during offer deployment, replace this value with your maximum cluster size. |
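To illustrate how these parameters fit together, the following is a minimal sketch of a KEDA ScaledObject that uses the Prometheus scaler for this WLS cluster. Treat it as an assumption for illustration only: the actual scaler.yaml that you generate in the next section comes from the offer's deployment output and also includes the authentication wiring for the Azure Monitor workspace, so use that generated file rather than this sketch.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-managed-prometheus-scaler
  namespace: sample-domain1-ns
spec:
  scaleTargetRef:
    # The WLS cluster resource managed by the WebLogic Kubernetes Operator.
    # The apiVersion shown here is an assumption and can differ by operator version.
    apiVersion: weblogic.oracle/v1
    kind: Cluster
    name: sample-domain1-cluster-1
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
  - type: prometheus
    metadata:
      # The Query endpoint of your Azure Monitor workspace.
      serverAddress: <azure-monitor-workspace-query-endpoint>
      metricName: webapp_config_open_sessions_current_count
      query: sum(webapp_config_open_sessions_current_count{app="app1"})
      threshold: '10'
    # The generated file also references a TriggerAuthentication for the workspace; keep those lines as generated.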
Because you selected WebLogic Monitoring Exporter at deployment time, a KEDA scaler is ready to deploy. The following steps show you how to configure the KEDA scaler for use with your AKS cluster:
Open the Azure portal and go to the resource group that you provisioned in the Deploy WLS on AKS using the Azure Marketplace offer section.
In the navigation pane, in the Settings section, select Deployments. You see an ordered list of the deployments to this resource group, with the most recent one first.
Scroll to the oldest entry in this list. This entry corresponds to the deployment you started in the previous section. Select the oldest deployment, whose name starts with something similar to oracle.20210620-wls-on-aks.
Select Outputs. This option shows the list of outputs from the deployment.
The kedaScalerServerAddress value is the server address where the WLS metrics are saved. KEDA uses this address to access and retrieve metrics.
The shellCmdtoOutputKedaScalerSample value is the base64-encoded string of a scaler sample. Copy the value and run it in your terminal. The command should look similar to the following example:
echo -e YXBpVm...XV0aAo= | base64 -d > scaler.yaml
This command produces a scaler.yaml file in the current directory.
Modify the metricName: and query: lines in scaler.yaml as shown in the following example:
metricName: webapp_config_open_sessions_current_count
query: sum(webapp_config_open_sessions_current_count{app="app1"})
Note
When you deploy an app with the offer, it's named app1 by default. You can use the WLS admin console to verify the application name: the sample application appears under Deployments as app1. Use app1 as the application name in the query.
If desired, modify the maxReplicaCount: line in scaler.yaml as shown in the following example. It's an error to set this value higher than the maximum cluster size you specified at deployment time on the AKS tab.
maxReplicaCount: 10
Use the following command to create the KEDA scaler rule by applying scaler.yaml:
kubectl apply -f scaler.yaml
It takes several minutes for KEDA to retrieve metrics from the Azure Monitor workspace. You can watch the scaler status by using the following command:
kubectl get hpa -n sample-domain1-ns -w
After the scaler is ready to work, the output looks similar to the following content. The value in the TARGETS column switches from <unknown> to 0.
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
keda-hpa-azure-managed-prometheus-scaler Cluster/sample-domain1-cluster-1 0/10 (avg) 1 5 2 15s
Now, you're ready to observe the autoscaling capability. This article opens new sessions using curl to access the application. After the average session count is larger than 10, the scaling-up action happens. The sessions last for 150 seconds, and the open session count decreases as the sessions expire. After the average session count is less than 10, the scaling-down action happens. Use the following steps to cause the scaling-up and scaling-down actions:
Obtain the application URL. In the deployment Outputs pane that you used earlier, copy the value of clusterExternalUrl. The application URL is ${clusterExternalUrl}testwebapp - for example, http://wlsgw202403-wlsaks0314-domain1.eastus.cloudapp.azure.com/testwebapp/.
Run the curl command to access the application and create new sessions. The following example opens 22 new sessions. The sessions expire after 150 seconds. Replace the WLS_CLUSTER_EXTERNAL_URL value with yours.
COUNTER=0
MAXCURL=22
WLS_CLUSTER_EXTERNAL_URL="http://wlsgw202403-wlsaks0314-domain1.eastus.cloudapp.azure.com/"
APP_URL="${WLS_CLUSTER_EXTERNAL_URL}testwebapp/"
while [ $COUNTER -lt $MAXCURL ]; do curl ${APP_URL}; let COUNTER=COUNTER+1; sleep 1; done
In two separate shells, use the following commands:
Use the following command to observe the scaler:
kubectl get hpa -n sample-domain1-ns -w
This command produces output that looks similar to the following example:
$ kubectl get hpa -n sample-domain1-ns -w
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
keda-hpa-azure-managed-prometheus-scaler Cluster/sample-domain1-cluster-1 0/10 (avg) 1 10 1 24m
keda-hpa-azure-managed-prometheus-scaler Cluster/sample-domain1-cluster-1 0/10 (avg) 1 10 1 24m
keda-hpa-azure-managed-prometheus-scaler Cluster/sample-domain1-cluster-1 5/10 (avg) 1 10 1 26m
keda-hpa-azure-managed-prometheus-scaler Cluster/sample-domain1-cluster-1 22/10 (avg) 1 10 1 27m
keda-hpa-azure-managed-prometheus-scaler Cluster/sample-domain1-cluster-1 7334m/10 (avg) 1 10 3 29m
keda-hpa-azure-managed-prometheus-scaler Cluster/sample-domain1-cluster-1 14667m/10 (avg) 1 10 3 48m
keda-hpa-azure-managed-prometheus-scaler Cluster/sample-domain1-cluster-1 0/10 (avg) 1 10 3 30m
keda-hpa-azure-managed-prometheus-scaler Cluster/sample-domain1-cluster-1 0/10 (avg) 1 10 3 35m
keda-hpa-azure-managed-prometheus-scaler Cluster/sample-domain1-cluster-1 0/10 (avg) 1 10 1 35m
keda-hpa-azure-managed-prometheus-scaler Cluster/sample-domain1-cluster-1 0/10 (avg) 1 10 5 53m
In a separate shell, use the following command to observe the WLS pods:
kubectl get pod -n sample-domain1-ns -w
This command produces output that looks similar to the following example:
$ kubectl get pod -n sample-domain1-ns -w
NAME READY STATUS RESTARTS AGE
sample-domain1-admin-server 2/2 Running 0 28h
sample-domain1-managed-server1 2/2 Running 0 28h
sample-domain1-managed-server1 2/2 Running 0 28h
sample-domain1-managed-server2 0/2 Pending 0 0s
sample-domain1-managed-server2 0/2 Pending 0 0s
sample-domain1-managed-server2 0/2 ContainerCreating 0 0s
sample-domain1-managed-server3 0/2 Pending 0 0s
sample-domain1-managed-server3 0/2 Pending 0 0s
sample-domain1-managed-server3 0/2 ContainerCreating 0 0s
sample-domain1-managed-server3 1/2 Running 0 1s
sample-domain1-admin-server 2/2 Running 0 95m
sample-domain1-managed-server1 2/2 Running 0 94m
sample-domain1-managed-server2 2/2 Running 0 56s
sample-domain1-managed-server3 2/2 Running 0 55s
sample-domain1-managed-server4 1/2 Running 0 9s
sample-domain1-managed-server5 1/2 Running 0 9s
sample-domain1-managed-server5 2/2 Running 0 37s
sample-domain1-managed-server4 2/2 Running 0 42s
sample-domain1-managed-server5 1/2 Terminating 0 6m46s
sample-domain1-managed-server5 1/2 Terminating 0 6m46s
sample-domain1-managed-server4 1/2 Running 0 6m51s
sample-domain1-managed-server4 1/2 Terminating 0 6m53s
sample-domain1-managed-server4 1/2 Terminating 0 6m53s
sample-domain1-managed-server3 1/2 Running 0 7m40s
sample-domain1-managed-server3 1/2 Terminating 0 7m45s
sample-domain1-managed-server3 1/2 Terminating 0 7m45s
The graph in the Azure Monitor workspace looks similar to the following screenshot:
To avoid Azure charges, you should clean up unnecessary resources. When you no longer need the cluster, use the az group delete command. The following commands remove the resource group, container service, container registry, and all related resources:
az group delete --name <wls-resource-group-name> --yes --no-wait
az group delete --name <ama-resource-group-name> --yes --no-wait
Continue to explore the following references for more options to build autoscaling solutions and run WLS on Azure: