This guide describes what you should be aware of when you want to migrate an existing JBoss EAP application to run on WildFly in an Azure Kubernetes Service container.
To ensure a successful migration, before you start, complete the assessment and inventory steps described in the following sections.
Document the hardware (memory, CPU, disk) of the current production server(s) and the average and peak request counts and resource utilization. You'll need this information regardless of the migration path you choose. It's useful, for example, to help guide selection of the size of the VMs in your node pool, the amount of memory to be used by the container, and how many CPU shares the container needs.
It's possible to resize node pools in AKS. To learn how, see Resize node pools in Azure Kubernetes Service (AKS).
Check all properties and configuration files on the production server(s) for any secrets and passwords. Be sure to check jboss-web.xml in your WARs. Configuration files that contain passwords or credentials may also be found inside your application.
Consider storing those secrets in Azure KeyVault. For more information, see Azure Key Vault basic concepts.
Document all the certificates used for public SSL endpoints. You can view all certificates on the production server(s) by running the following command:
keytool -list -v -keystore <path to keystore>
Using WildFly on Azure Kubernetes Service requires a specific version of Java, so you'll need to confirm that your application runs correctly using that supported version.
नोट
This validation is especially important if your current server is running on an unsupported JDK (such as Oracle JDK or IBM OpenJ9).
To obtain your current Java version, sign in to your production server and run the following command:
java -version
See Requirements for guidance on what version to use to run WildFly.
Inventory all JNDI resources. Some, such as JMS message brokers, may require migration or reconfiguration.
If your application relies on session replication, you'll have to change your application to remove this dependency.
Inspect the WEB-INF/jboss-web.xml and/or WEB-INF/web.xml files.
If your application uses any databases, you need to capture the following information:
For more information, see About JBoss EAP Datasources in the JBoss EAP documentation.
Any usage of the file system on the application server will require reconfiguration or, in rare cases, architectural changes. File system may be used by JBoss EAP modules or by your application code. You may identify some or all of the scenarios described in the following sections.
If your application currently serves static content, you need an alternate location for it. You should consider moving static content to Azure Blob Storage and adding Azure Front Door for fast downloads globally. For more information, see Static website hosting in Azure Storage and Integrate an Azure Storage account with Azure Front Door.
For files that are frequently written and read by your application (such as temporary data files), or static files that are visible only to your application, you can mount Azure Storage shares as persistent volumes. For more information, see Create and use a volume with Azure Files in Azure Kubernetes Service (AKS).
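As a sketch, a pod can mount an Azure Files share through a persistent volume claim similar to the following; the claim name, storage class, sizes, and mount path are hypothetical and depend on your cluster setup:

```yaml
# Hypothetical claim against the built-in azurefile storage class;
# adjust names and sizes for your environment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wildfly-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
---
# In the pod spec, mount the claim where the application
# expects to read and write its files.
apiVersion: v1
kind: Pod
metadata:
  name: wildfly-app
spec:
  containers:
    - name: wildfly
      image: myacr.azurecr.io/myapp
      volumeMounts:
        - mountPath: /opt/app-data
          name: app-data
  volumes:
    - name: app-data
      persistentVolumeClaim:
        claimName: wildfly-data
```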
Scheduled jobs, such as Quartz Scheduler tasks or Unix cron jobs, should NOT be used with Azure Kubernetes Service (AKS). Azure Kubernetes Service will not prevent you from deploying an application containing scheduled tasks internally. However, if your application is scaled out, the same scheduled job may run more than once per scheduled period. This situation can lead to unintended consequences.
To execute scheduled jobs on your AKS cluster, define Kubernetes CronJobs as needed. For more information, see Running Automated Tasks with a CronJob.
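For example, a nightly cleanup task that previously ran as a cron entry on the application server could be expressed as a Kubernetes CronJob along these lines; the image name and arguments are placeholders:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup
spec:
  # Standard cron syntax: run at 02:00 every day.
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cleanup
              image: myacr.azurecr.io/cleanup-task
              args: ["--purge-temp-files"]
          restartPolicy: OnFailure
```

Because a CronJob runs as its own pod, it executes once per schedule regardless of how many replicas of your application are running.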
If your application needs to access any of your on-premises services, you'll need to provision one of Azure's connectivity services. For more information, see Connect an on-premises network to Azure. Alternatively, you'll need to refactor your application to use publicly available APIs that your on-premises resources expose.
If your application is using JMS Queues or Topics, you'll need to migrate them to an externally hosted JMS server. Azure Service Bus and the Advanced Message Queuing Protocol (AMQP) can be a great migration strategy for those using JMS. For more information, see Use Java Message Service 1.1 with Azure Service Bus standard and AMQP 1.0.
If JMS persistent stores have been configured, you must capture their configuration and apply it after the migration.
If your application uses JBoss-EAP-specific APIs, you'll need to refactor it to remove those dependencies.
If your application uses Entity Beans or EJB 2.x style CMP beans, you'll need to refactor your application to remove these dependencies.
If you have client applications that connect to your (server) application using the Java EE Application Client feature, you'll need to refactor both your client applications and your (server) application to use HTTP APIs.
If your application contains any code with dependencies on the host OS, you need to refactor it to remove those dependencies. For example, replace any hard-coded use of / or \ in file system paths with File.separator or Paths.get so the code behaves correctly whether it runs on Linux or Windows.
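As an illustrative sketch, the following hypothetical helper builds a path without hard-coding either separator:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PortablePaths {
    // Hypothetical helper: Paths.get joins the segments with the
    // platform's separator, so the same code works in a Linux
    // container and on Windows.
    static Path dataFile(String baseDir, String fileName) {
        return Paths.get(baseDir, "data", fileName);
    }

    public static void main(String[] args) {
        // Prints the platform-appropriate path, e.g. /opt/app/data/report.csv on Linux.
        System.out.println(dataFile("/opt/app", "report.csv"));
    }
}
```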
If your application uses EJB timers, you'll need to validate that the EJB timer code can be triggered by each WildFly instance independently. This validation is needed because, in the Azure Kubernetes Service deployment scenario, each EJB timer will be triggered on its own WildFly instance.
If your application uses JCA connectors, validate that you can use the JCA connector on WildFly. If the JCA implementation is tied to JBoss EAP, you must refactor your application to remove that dependency. If you can use the JCA connector on WildFly, then for it to be available, you must add the JARs to the server classpath and put the necessary configuration files in the correct location in the WildFly server directories.
If your application is using JAAS, you'll need to capture how JAAS is configured. If it's using a database, you can convert it to a JAAS domain on WildFly. If it's a custom implementation, you'll need to validate that it can be used on WildFly.
If your application needs a Resource Adapter (RA), it needs to be compatible with WildFly. Determine whether the RA works fine on a standalone instance of WildFly by deploying it to the server and properly configuring it. If the RA works properly, you'll need to add the JARs to the server classpath of the Docker image and put the necessary configuration files in the correct location in the WildFly server directories for it to be available.
If your application is composed of multiple WARs, you should treat each of those WARs as separate applications and go through this guide for each of them.
If your application is packaged as an EAR file, be sure to examine the application.xml file and capture the configuration.
नोट
If you want to be able to scale each of your web applications independently for better use of your Azure Kubernetes Service (AKS) resources, you should break up the EAR into separate web applications.
If you have any processes running outside the application server, such as monitoring daemons, you'll need to eliminate them or migrate them elsewhere.
Prior to creating your container images, migrate your application to the JDK and WildFly versions that you intend to use on AKS. Test the application thoroughly to ensure compatibility and performance.
Use the following commands to create a container registry and an Azure Kubernetes cluster with a Service Principal that has the Reader role on the registry. Be sure to choose the appropriate network model for your cluster's networking requirements.
az group create \
--resource-group $resourceGroup \
--location eastus
az acr create \
--resource-group $resourceGroup \
--name $acrName \
--sku Standard
az aks create \
--resource-group $resourceGroup \
--name $aksName \
--attach-acr $acrName \
--network-plugin azure
To create a Dockerfile, you'll need a few prerequisites in place.
You can then perform the steps described in the following sections, where applicable. You can use the WildFly Container Quickstart repo as a starting point for your Dockerfile and web application.
Create an Azure KeyVault and populate all the necessary secrets. For more information, see Quickstart: Set and retrieve a secret from Azure Key Vault using Azure CLI. Then, configure a KeyVault FlexVolume to make those secrets accessible to pods.
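The pod-side wiring for a Key Vault FlexVolume looks roughly like the following sketch; the vault name, tenant ID, secret names, and mount path are placeholders, and you should check the FlexVolume project documentation for the current option names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wildfly-app
spec:
  containers:
    - name: wildfly
      image: myacr.azurecr.io/myapp
      volumeMounts:
        # Each Key Vault object appears as a file under this path.
        - name: keyvault-secrets
          mountPath: /kvmnt
          readOnly: true
  volumes:
    - name: keyvault-secrets
      flexVolume:
        driver: "azure/kv"
        secretRef:
          name: kvcreds            # Kubernetes secret holding service principal credentials
        options:
          keyvaultname: "mykeyvault"
          keyvaultobjectnames: "databasePassword"
          keyvaultobjecttypes: "secret"
          tenantid: "<your tenant ID>"
          usepodidentity: "false"
```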
You will also need to update the startup script used to bootstrap WildFly. This script must import the certificates into the keystore used by WildFly before starting the server.
To configure WildFly to access a data source, you'll need to add the JDBC driver JAR to your Docker image, and then execute the appropriate JBoss CLI commands. These commands must set up the data source when building your Docker image.
The following steps provide instructions for PostgreSQL, MySQL, and SQL Server.
Download the JDBC driver for PostgreSQL, MySQL, or SQL Server.
Unpack the downloaded archive to get the driver .jar file.
Create a file with a name like module.xml and add the following markup. Replace the <module name> placeholder (including the angle brackets) with org.postgres for PostgreSQL, com.mysql for MySQL, or com.microsoft for SQL Server. Replace <JDBC .jar file path> with the name of the .jar file from the previous step, including the full path to the location where you will place the file in your Docker image, for example /opt/database.
<?xml version="1.0" ?>
<module xmlns="urn:jboss:module:1.1" name="<module name>">
<resources>
<resource-root path="<JDBC .jar file path>" />
</resources>
<dependencies>
<module name="javax.api"/>
<module name="javax.transaction.api"/>
</dependencies>
</module>
Create a file with a name like datasource-commands.cli and add the following code. Replace <JDBC .jar file path> with the value you used in the previous step. Replace <module file path> with the file name and path from the previous step, for example /opt/database/module.xml.
नोट
Microsoft recommends using the most secure authentication flow available. The authentication flow described in this procedure, such as for databases, caches, messaging, or AI services, requires a very high degree of trust in the application and carries risks not present in other flows. Use this flow only when more secure options, like managed identities for passwordless or keyless connections, are not viable. For local machine operations, prefer user identities for passwordless or keyless connections.
batch
module add --name=org.postgres --resources=<JDBC .jar file path> --module-xml=<module file path>
/subsystem=datasources/jdbc-driver=postgres:add(driver-name=postgres,driver-module-name=org.postgres,driver-class-name=org.postgresql.Driver,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
data-source add --name=postgresDS --driver-name=postgres --jndi-name=java:jboss/datasources/postgresDS --connection-url=$DATABASE_CONNECTION_URL --user-name=$DATABASE_SERVER_ADMIN_FULL_NAME --password=$DATABASE_SERVER_ADMIN_PASSWORD --use-ccm=true --max-pool-size=5 --blocking-timeout-wait-millis=5000 --enabled=true --driver-class=org.postgresql.Driver --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter --jta=true --use-java-context=true --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker
reload
run batch
shutdown
Update the JTA datasource configuration for your application: open the src/main/resources/META-INF/persistence.xml file for your app, find the <jta-data-source> element, and replace its contents as shown here:
<jta-data-source>java:jboss/datasources/postgresDS</jta-data-source>
Add the following to your Dockerfile so the data source is created when you build your Docker image:
RUN /bin/bash -c '<WILDFLY_INSTALL_PATH>/bin/standalone.sh --start-mode admin-only &' && \
sleep 30 && \
<WILDFLY_INSTALL_PATH>/bin/jboss-cli.sh -c --file=/opt/database/datasource-commands.cli && \
sleep 30
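Putting these pieces together, the data-source-related portion of a Dockerfile might look like the following sketch; the base image and all paths are assumptions to adapt to your own layout:

```dockerfile
FROM quay.io/wildfly/wildfly:latest

# Copy the JDBC driver, module descriptor, and CLI script
# into the image (paths are illustrative).
COPY postgresql.jar /opt/database/
COPY module.xml /opt/database/
COPY datasource-commands.cli /opt/database/

# Start the server in admin-only mode just long enough to run
# the CLI script that registers the driver and data source.
RUN /bin/bash -c '/opt/jboss/wildfly/bin/standalone.sh --start-mode admin-only &' && \
    sleep 30 && \
    /opt/jboss/wildfly/bin/jboss-cli.sh -c --file=/opt/database/datasource-commands.cli && \
    sleep 30
```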
Determine the DATABASE_CONNECTION_URL to use. The value differs for each database server and differs from the values shown in the Azure portal. For PostgreSQL, WildFly requires the following URL format:
jdbc:postgresql://<database server name>:5432/<database name>?ssl=true
When you create your deployment YAML at a later stage, you'll need to pass the environment variables DATABASE_CONNECTION_URL, DATABASE_SERVER_ADMIN_FULL_NAME, and DATABASE_SERVER_ADMIN_PASSWORD with the appropriate values.
For more info on configuring database connectivity with WildFly, see PostgreSQL, MySQL, or SQL Server.
To set up each JNDI resource you need to configure on WildFly, you will generally use the following steps:
The example below shows the steps needed to create the JNDI resource for JMS connectivity to Azure Service Bus.
Download the Apache Qpid JMS provider.
Unpack the downloaded archive to get the .jar files.
Create a file named module.xml in /opt/servicebus and add the following markup. Make sure the version numbers of the JAR files align with the names of the JAR files from the previous step.
<?xml version="1.0" ?>
<module xmlns="urn:jboss:module:1.1" name="org.jboss.genericjms.provider">
<resources>
<resource-root path="proton-j-0.31.0.jar"/>
<resource-root path="qpid-jms-client-0.40.0.jar"/>
<resource-root path="slf4j-log4j12-1.7.25.jar"/>
<resource-root path="slf4j-api-1.7.25.jar"/>
<resource-root path="log4j-1.2.17.jar"/>
<resource-root path="netty-buffer-4.1.32.Final.jar" />
<resource-root path="netty-codec-4.1.32.Final.jar" />
<resource-root path="netty-codec-http-4.1.32.Final.jar" />
<resource-root path="netty-common-4.1.32.Final.jar" />
<resource-root path="netty-handler-4.1.32.Final.jar" />
<resource-root path="netty-resolver-4.1.32.Final.jar" />
<resource-root path="netty-transport-4.1.32.Final.jar" />
<resource-root path="netty-transport-native-epoll-4.1.32.Final-linux-x86_64.jar" />
<resource-root path="netty-transport-native-kqueue-4.1.32.Final-osx-x86_64.jar" />
<resource-root path="netty-transport-native-unix-common-4.1.32.Final.jar" />
<resource-root path="qpid-jms-discovery-0.40.0.jar" />
</resources>
<dependencies>
<module name="javax.api"/>
<module name="javax.jms.api"/>
</dependencies>
</module>
Create a jndi.properties file in /opt/servicebus.
connectionfactory.${MDB_CONNECTION_FACTORY}=amqps://${DEFAULT_SBNAMESPACE}.servicebus.windows.net?amqp.idleTimeout=120000&jms.username=${SB_SAS_POLICY}&jms.password=${SB_SAS_KEY}
queue.${MDB_QUEUE}=${SB_QUEUE}
topic.${MDB_TOPIC}=${SB_TOPIC}
Create a file with a name like servicebus-commands.cli and add the following code.
batch
/subsystem=ee:write-attribute(name=annotation-property-replacement,value=true)
/system-property=property.mymdb.queue:add(value=myqueue)
/system-property=property.connection.factory:add(value=java:global/remoteJMS/SBF)
/subsystem=ee:list-add(name=global-modules, value={"name" => "org.jboss.genericjms.provider", "slot" => "main"})
/subsystem=naming/binding="java:global/remoteJMS":add(binding-type=external-context,module=org.jboss.genericjms.provider,class=javax.naming.InitialContext,environment=[java.naming.factory.initial=org.apache.qpid.jms.jndi.JmsInitialContextFactory,org.jboss.as.naming.lookup.by.string=true,java.naming.provider.url=/opt/servicebus/jndi.properties])
/subsystem=resource-adapters/resource-adapter=generic-ra:add(module=org.jboss.genericjms,transaction-support=XATransaction)
/subsystem=resource-adapters/resource-adapter=generic-ra/connection-definitions=sbf-cd:add(class-name=org.jboss.resource.adapter.jms.JmsManagedConnectionFactory, jndi-name=java:/jms/${MDB_CONNECTION_FACTORY})
/subsystem=resource-adapters/resource-adapter=generic-ra/connection-definitions=sbf-cd/config-properties=ConnectionFactory:add(value=${MDB_CONNECTION_FACTORY})
/subsystem=resource-adapters/resource-adapter=generic-ra/connection-definitions=sbf-cd/config-properties=JndiParameters:add(value="java.naming.factory.initial=org.apache.qpid.jms.jndi.JmsInitialContextFactory;java.naming.provider.url=/opt/servicebus/jndi.properties")
/subsystem=resource-adapters/resource-adapter=generic-ra/connection-definitions=sbf-cd:write-attribute(name=security-application,value=true)
/subsystem=ejb3:write-attribute(name=default-resource-adapter-name, value=generic-ra)
run-batch
reload
shutdown
Add the following to your Dockerfile so the JNDI resource is created when you build your Docker image:
RUN /bin/bash -c '<WILDFLY_INSTALL_PATH>/bin/standalone.sh --start-mode admin-only &' && \
sleep 30 && \
<WILDFLY_INSTALL_PATH>/bin/jboss-cli.sh -c --file=/opt/servicebus/servicebus-commands.cli && \
sleep 30
When you create your deployment YAML at a later stage, you'll need to pass the environment variables MDB_CONNECTION_FACTORY, DEFAULT_SBNAMESPACE, SB_SAS_POLICY, SB_SAS_KEY, MDB_QUEUE, SB_QUEUE, MDB_TOPIC, and SB_TOPIC with the appropriate values.
Review the WildFly Admin Guide to cover any additional pre-migration steps not covered by the previous guidance.
After you've created the Dockerfile, you'll need to build the Docker image and publish it to your Azure container registry.
If you used our WildFly Container Quickstart GitHub repo, the process of building and pushing your image to your Azure container registry would be the equivalent of invoking the following three commands.
In these examples, the MY_ACR environment variable holds the name of your Azure container registry and the MY_APP_NAME variable holds the name of the web application you want to use on your Azure container registry.
Build the WAR file:
mvn package
Log into your Azure container registry:
az acr login --name ${MY_ACR}
Build and push the image:
az acr build --image ${MY_ACR}.azurecr.io/${MY_APP_NAME} --file src/main/docker/Dockerfile .
Alternatively, you can use Docker CLI to first build and test the image locally, as shown in the following commands. This approach can simplify testing and refining the image before initial deployment to ACR. However, it requires you to install the Docker CLI and ensure the Docker daemon is running.
Build the image:
docker build -t ${MY_ACR}.azurecr.io/${MY_APP_NAME} -f src/main/docker/Dockerfile .
Run the image locally:
docker run -it -p 8080:8080 ${MY_ACR}.azurecr.io/${MY_APP_NAME}
You can now access your application at http://localhost:8080.
Log into your Azure container registry:
az acr login --name ${MY_ACR}
Push the image to your Azure container registry:
docker push ${MY_ACR}.azurecr.io/${MY_APP_NAME}
For more in-depth information on building and storing container images in Azure, see the Learn module Build and store container images with Azure Container Registry.
If your application is to be accessible from outside your internal or virtual network(s), you'll need a public static IP address. You should provision this IP address inside your cluster's node resource group, as shown in the following example:
export nodeResourceGroup=$(az aks show \
--resource-group $resourceGroup \
--name $aksName \
--query 'nodeResourceGroup' \
--output tsv)
export publicIp=$(az network public-ip create \
--resource-group $nodeResourceGroup \
--name applicationIp \
--sku Standard \
--allocation-method Static \
--query 'publicIp.ipAddress' \
--output tsv)
echo "Your public IP address is ${publicIp}."
Create and apply your Kubernetes YAML file(s). For more information, see Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure CLI. If you're creating an external load balancer (whether for your application or for an ingress controller), be sure to provide the IP address provisioned in the previous section as the LoadBalancerIP
.
Include externalized parameters as environment variables. For more information, see Define Environment Variables for a Container. Don't include secrets (such as passwords, API keys, and JDBC connection strings). These are covered in the following section.
Be sure to include memory and CPU settings when creating your deployment YAML so your containers are properly sized.
If your application requires non-volatile storage, configure one or more Persistent Volumes.
To execute scheduled jobs on your AKS cluster, define Kubernetes CronJobs as needed. For more information, see Running Automated Tasks with a CronJob.
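The considerations above can be sketched as a minimal Deployment and Service; all names, the image, the environment values, and the resource settings below are placeholders to adapt:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wildfly-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wildfly-app
  template:
    metadata:
      labels:
        app: wildfly-app
    spec:
      containers:
        - name: wildfly
          image: myacr.azurecr.io/myapp
          ports:
            - containerPort: 8080
          # Externalized, non-secret parameters as environment variables;
          # secrets belong in Key Vault, not here.
          env:
            - name: DATABASE_CONNECTION_URL
              value: "jdbc:postgresql://mydb.postgres.database.azure.com:5432/mydb?ssl=true"
          # Size the container explicitly so the scheduler can place it.
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
            limits:
              cpu: "1"
              memory: "2Gi"
---
apiVersion: v1
kind: Service
metadata:
  name: wildfly-app
spec:
  type: LoadBalancer
  # The static IP address provisioned in the previous section.
  loadBalancerIP: <your public IP address>
  selector:
    app: wildfly-app
  ports:
    - port: 80
      targetPort: 8080
```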
Now that you've migrated your application to Azure Kubernetes Service, you should verify that it works as you expect. After you've done that, we have some recommendations for you that can make your application more cloud-native.
Consider adding a DNS name to the IP address allocated to your ingress controller or application load balancer. For more information, see Use TLS with an ingress controller on Azure Kubernetes Service (AKS).
Consider adding a Helm chart for your application. A Helm chart allows you to parameterize your application deployment for use and customization by a more diverse set of customers.
Design and implement a DevOps strategy. In order to maintain reliability while increasing your development velocity, consider automating deployments and testing with Azure Pipelines. For more information, see Build and deploy to Azure Kubernetes Service with Azure Pipelines.
Enable Azure Monitoring for the cluster. For more information, see Enable monitoring for Kubernetes clusters. This allows Azure monitor to collect container logs, track utilization, and so on.
Consider exposing application-specific metrics via Prometheus. Prometheus is an open-source metrics framework broadly adopted in the Kubernetes community. You can configure Prometheus Metrics scraping in Azure Monitor instead of hosting your own Prometheus server to enable metrics aggregation from your applications and automated response to or escalation of aberrant conditions. For more information, see Enable Prometheus and Grafana.
Design and implement a business continuity and disaster recovery strategy. For mission-critical applications, consider a multi-region deployment architecture. For more information, see High availability and disaster recovery overview for Azure Kubernetes Service (AKS).
Review the Kubernetes version support policy. It's your responsibility to keep updating your AKS cluster to ensure that it's always running a supported version. For more information, see Upgrade options for Azure Kubernetes Service (AKS) clusters.
Have all team members responsible for cluster administration and application development review the pertinent AKS best practices. For more information, see Cluster operator and developer best practices to build and manage applications on Azure Kubernetes Service (AKS).
Make sure your deployment file specifies how rolling updates are done. For more information, see Rolling Update Deployment in the Kubernetes documentation.
Set up auto scaling to deal with peak time loads. For more information, see Use the cluster autoscaler in Azure Kubernetes Service (AKS).
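The cluster autoscaler adds and removes nodes; to scale the application pods themselves, you can pair it with a HorizontalPodAutoscaler, sketched here for a hypothetical wildfly-app deployment:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wildfly-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wildfly-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    # Add replicas when average CPU utilization exceeds 70%.
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```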
Consider monitoring the code cache size and adding the JVM parameters -XX:InitialCodeCacheSize and -XX:ReservedCodeCacheSize in the Dockerfile to further optimize performance. For more information, see Codecache Tuning in the Oracle documentation.
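For example, the image could pass these flags through JAVA_OPTS, which WildFly's startup scripts read; the sizes shown are illustrative starting points, not recommendations:

```dockerfile
# Illustrative values; measure your application's code cache
# usage before settling on sizes.
ENV JAVA_OPTS="$JAVA_OPTS -XX:InitialCodeCacheSize=64m -XX:ReservedCodeCacheSize=240m"
```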