How to always use a new Docker named volume for every web app deployment?

Ting Chou 1 Reputation point
2020-11-05T17:42:42.56+00:00

Hi,

I am using Docker Compose for my web app and a Docker named volume for file sharing between containers. Each time the images are built, data is written into the volume. My problem is that after I rebuild the images (fetching new data and writing it into the volume) and redeploy, the web app still uses the old volume with the old data. What I want is for every deployment to use a fresh volume (same name as the old one, but containing the new data).

To achieve this locally, I run docker volume rm cronData to remove the old named volume before I run docker-compose up. But how can I do this in the Azure DevOps pipeline?
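For reference, the local workaround is roughly this sequence (the volume name matches the one declared in my compose file; depending on your setup, Compose may prefix it with the project name):

    docker-compose down              # stop and remove the containers so the volume is not in use
    docker volume rm cronData        # drop the old named volume and its stale data
    docker-compose up --build -d     # rebuild the images and recreate the volume with fresh data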

This is the current YAML for the web app deployment task in my Azure DevOps release pipeline.

steps:
- task: AzureWebAppContainer@1
  displayName: 'Azure Web App on Container Deploy: '
  inputs:
    azureSubscription: 'xxx'
    appName: 'xxx'
    multicontainerConfigFile: '$(System.DefaultWorkingDirectory)/xxx/xxx/docker-compose-configure.yml'
    appSettings: ' -WEBSITE_LOCAL_CACHE_OPTION Never'

And this is my docker-compose.yml file. docker-compose-configure.yml is the same as docker-compose.yml except that the image field pulls the image from ACR and there is no build field.

version: "3"
services:
  shiny:
    build: ./shiny
    image: shiny
    restart: always
    volumes:
      - cronData:/srv/shiny-server/cronData
    expose: 
      - 3838
    ports: 
      - 3838:3838

  cron:
    build: ./cron
    image: cron
    volumes:
      - cronData:/task/cronData

volumes:
  cronData:

Thanks in advance!!


2 answers

  1. Ting Chou 1 Reputation point
    2020-11-09T22:00:19.557+00:00

    hi @Anonymous ,

    Thank you for the prompt response. Appreciate that. :)

    I have tried using ${WEBAPP_STORAGE_HOME} to map the volume (i.e. ${WEBAPP_STORAGE_HOME}/cronData:/task/cronData), but I found that since the directory ${WEBAPP_STORAGE_HOME}/cronData is empty at first, mounting it over /task/cronData wipes out my original data there. But I do need the initial data in /task/cronData, which is written during the build stage.

    I guess what I want is for files from the container to be mounted to the host (https://stackoverflow.com/questions/42395748/docker-compose-volume-is-empty-even-from-initialize) rather than the other way around. And since there seems to be no way to run Docker commands on the web app's host machine, I don't think I can use the workaround suggested in that link.

    I think BYOS could be a good solution: I can write the data into Azure Storage during the build stage and mount it back into the containers. However, after reading the documentation, I am still not quite sure how to configure this properly. I can create a new Azure Storage mount the way you did, but then what should I specify in my docker-compose.yml and docker-compose-configure.yml files?

    Thank you!


  2. ajkuma 28,036 Reputation points Microsoft Employee Moderator
    2020-11-11T08:04:37.953+00:00

    @Ting Chou, just summarizing the comments with some additional info.
    Given the current limitations (below) and your requirement, you may try the approach outlined here. If you have any further questions, kindly connect with me offline for more detailed insights.

    As mentioned in the comments above, for your scenario the multi-container feature offers only limited support (it is currently in preview).

      a) Multi-container support on Web Apps for Containers is in preview and *isn't recommended* for production use.    
      b) *build*, *depends_on*, and *ports* other than 80 and 8080 are not supported Docker Compose options for multi-container apps ([see here][1]).  
      c) Bring your own storage only supports Azure Files for read/write and is also in preview.  
    

    The docker-compose.yml provided should expose a “web” container; how to deal with that is outlined in this doc.

    1. Build the images and push them to ACR or any other registry.  
    2. Expose a web container for App Service to ping for container health checks.  
    3. Mount Azure Files as the volume for your cron container so that the content produced during the build is available there (a minimal sketch follows below).  
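    For illustration only (not an official sample): assuming an Azure Storage mount named cronData is added to the web app under Configuration > Path mappings, the compose file could reference that mount name directly in the service's volumes mapping, roughly like this:

       # Sketch of docker-compose-configure.yml using a BYOS mount.
       # "cronData" is assumed to be the name of the Azure Storage mount
       # configured on the app; the image reference is a placeholder.
       version: "3"
       services:
         cron:
           image: myregistry.azurecr.io/cron:latest
           volumes:
             - cronData:/task/cronData   # storage mount name : path inside the container

    With a storage mount, the name is matched against the mount configured on the app rather than against a Docker named volume, so the top-level volumes: section is no longer needed for it (that is my reading of the BYOS docs; please verify against the linked documentation).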
    

    Additionally, to run cron jobs:

    • For Custom Containers

    1. To enable cron, add the following line to your Dockerfile:

       RUN apt-get install -yqq cron

    2. Then add the following line to the container init script (init_container.sh):

       (crontab -l && echo "* * * * * echo 'hello from cron' >> /home/site/wwwroot/cron1.txt")|crontab

    3. In this example, the job appends "hello from cron" to /home/site/wwwroot/cron1.txt every minute; replace the echo command with whatever you want to run periodically. A consolidated init_container.sh sketch follows below.
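    To show how these pieces fit together, here is a minimal init_container.sh sketch; the final start command is a placeholder for whatever your container's main process is:

       #!/bin/sh
       # Make sure the cron daemon is running inside the container
       service cron start
       # Register the job: append "hello from cron" to a file every minute
       (crontab -l 2>/dev/null; echo "* * * * * echo 'hello from cron' >> /home/site/wwwroot/cron1.txt")|crontab
       # Start the container's main process (placeholder - replace with your app's start command)
       exec /usr/bin/your-main-process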

    • For Blessed Images

    1. Create a script file in the /home directory of the App Service and trigger it via the startup command under the Configuration blade.
    2. A test script that installs cron and creates a cron job:

       #!/bin/sh  
       apt-get update -qq && apt-get install cron -yqq  
       service cron start  
       mkdir /home/BackupLogs  
       (crontab -l 2>/dev/null; echo "*/5 * * * * cp /home/LogFiles/*.log /home/BackupLogs")|crontab  
    

    [1] Documentation: Supported and unsupported Docker Compose options

    Apologies for any inconvenience! Hope this helps! If you have any further questions, kindly let us know.

