Alternative to Hyperscale SQL long-term backups

Inma 66 Reputation points
2022-04-06T06:21:46.057+00:00

We would like to extend the backup/snapshot retention period from 7 days to at least 30 days in our Hyperscale SQL Server database. As far as I know it's not possible, and I would like to know if this feature will be released in the near future.
Alternatively, I would like to know about other supported tools for implementing long-term retention backups in Hyperscale (bacpac, export, ADF pipelines, etc.) at the schema or table level, because of the database size. I have checked some alternatives (New-AzSqlDatabaseExport etc.), but unfortunately they are not supported for the Hyperscale architecture.

Thanks in advance.

Azure SQL Database

Accepted answer
  1. Oury Ba-MSFT 16,901 Reputation points Microsoft Employee
    2022-04-08T17:20:47.58+00:00

    @Inma Thank you for posting your query on Microsoft Q&A.

    From my understanding, you are looking to extend the backup/snapshot retention period from 7 days to at least 30 days in a Hyperscale SQL Server database, and also want to know about other supported tools for implementing long-term retention backups in Hyperscale (bacpac, export, ADF pipelines, etc.) at the schema or table level.
    Our product group is planning a public preview for short-term retention (PITR) greater than 7 days (up to 35 days) this year. As far as I know, there is no supported tool to implement LTR backups at the schema or table level.
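    One partial workaround, rather than a supported LTR feature, is sqlpackage's `/p:TableData` export property, which includes the full schema but exports data only from the tables you list. A minimal sketch; the server, database, credential, and table names below are placeholders:

```shell
# Sketch: export a bacpac with the full schema but data from only the
# listed tables, skipping large historical tables.
# myserver, mydb, sqladmin, dbo.Orders, dbo.Customers are placeholders.
sqlpackage /Action:Export \
  /SourceServerName:"myserver.database.windows.net" \
  /SourceDatabaseName:"mydb" \
  /SourceUser:"sqladmin" \
  /SourcePassword:"$SQL_PASSWORD" \
  /TargetFile:"/backups/mydb-$(date +%Y%m%d).bacpac" \
  /p:TableData="dbo.Orders" \
  /p:TableData="dbo.Customers"
```

    Each `/p:TableData` occurrence adds one table; tables not listed are exported with their schema but no rows.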

    Regards,
    Oury


3 additional answers

Sort by: Most helpful
  1. Alberto Morillo 33,086 Reputation points MVP
    2022-04-06T10:59:13.167+00:00

    I would suggest you consider creating copies of the database; these are very fast because they are based on snapshots (fast copy). When the copy is created in the same region as the source, it is built from snapshots of the blobs, so the copy is a fast operation regardless of database size. The following is an example of how to copy the database.

    az sql db copy --resource-group "sourceRG" --server sourceServer --name "sourceDB" --dest-resource-group "destRG" --dest-server destServer --dest-name "destDB" --service-objective HS_Gen5_2 --read-replicas 0

    Here you will get more information about how to copy a database on Hyperscale.
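    To keep quarterly copies, the copy can be given a date-stamped name so each quarter's copy is retained and dropped independently. A minimal sketch, assuming the same placeholder resource names as above:

```shell
# Sketch: name the copy after the current year and quarter, e.g.
# sourceDB_2022_q2, so quarterly copies can coexist and be pruned later.
# Resource group, server, and database names are placeholders.
month=$(date +%m)
quarter=$(( (10#$month - 1) / 3 + 1 ))
copy_name="sourceDB_$(date +%Y)_q${quarter}"

az sql db copy --resource-group "sourceRG" --server sourceServer \
  --name "sourceDB" --dest-resource-group "destRG" \
  --dest-server destServer --dest-name "$copy_name" \
  --service-objective HS_Gen5_2 --read-replicas 0
```

    Note that each copy is billed as a separate Hyperscale database.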

    When exporting to a bacpac, you may get timeouts after 30-45 minutes of progress, with errors like the one below:

    Could not extract package from specified database. (Microsoft.SqlServer.Dac) Unable to reconnect to database: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft.Data.Tools.Schema.Sql) Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft SQL Server) The wait operation timed out.
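    If you do try the bacpac route, raising sqlpackage's timeout properties can mitigate errors like the one above. A sketch; the connection string values are placeholders:

```shell
# Sketch: export with a longer per-command timeout (in seconds) and no
# limit for long-running commands, to reduce "Timeout expired" failures.
# Server, database, and credential values are placeholders.
sqlpackage /Action:Export \
  /SourceConnectionString:"Server=tcp:myserver.database.windows.net,1433;Database=mydb;User ID=sqladmin;Password=$SQL_PASSWORD;Encrypt=True;" \
  /TargetFile:"/backups/mydb.bacpac" \
  /p:CommandTimeout=7200 \
  /p:LongRunningCommandTimeout=0
```

    `CommandTimeout=7200` allows each query up to two hours; `LongRunningCommandTimeout=0` removes the limit for commands sqlpackage classifies as long-running.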


  2. Inma 66 Reputation points
    2022-04-06T11:24:36.863+00:00

    Yes, I have already considered fast copies, but the problem is that the database cannot be shrunk, so in the end it is not an acceptable backup solution if you need to keep at least quarterly copies.
    I would like to consider a more flexible and economical backup solution, for example one where I could exclude historical tables, or where you can export the metadata and only some of the contents.
    Any suggestion is welcome.
    Thanks a lot.


  3. Inma 66 Reputation points
    2022-04-07T05:36:52.447+00:00

    Is it possible to apply compression directly in the copy? (I can't find any parameter/option at the "az sql db copy" command level.)
    In the source DB it is not possible because of the performance requirements.
    I would also like to consider other options like sqlpackage.exe to export at the schema or object level. Any suggestion about how to export a Hyperscale SQL DB with this or any other supported tool?
    Thanks.
