ADF Amazon Redshift linked service produces SSL error

Vincents Goldmanis 20 Reputation points
2025-11-20T12:15:04.1133333+00:00

I have an Amazon Redshift v1.0 linked service in Azure Data Factory that connects successfully when I test the connection. When the linked service is used in a dataset, it fails to fetch data with error code 27809:

Failed while reading data from source during connector transparent migration

ERROR [HY000] [Redshift][ODBC Driver][Server]SSL error: certificate verify failed

I can only use the Azure Integration Runtime and cannot use a self-hosted integration runtime.

What are the recommended steps to solve this issue?



2 answers

  1. Jerald Felix 9,830 Reputation points
    2025-11-20T16:40:32.5433333+00:00

    Hello Vincents Goldmanis,

    Thanks for raising this question in the Q&A forum.

    I understand that you are using an Amazon Redshift v1.0 linked service with the Azure Integration Runtime (Auto-Resolve IR), and you are encountering the error SSL error: certificate verify failed when trying to read data, even though the "Test Connection" succeeds. You also mentioned that you cannot use a Self-Hosted IR.

    This error typically occurs because the legacy Redshift v1.0 connector in Azure Data Factory is based on an older ODBC driver that may not trust the newer CA certificates used by AWS Redshift, or because it fails during the "transparent migration" phase, where ADF routes the data read through a different driver path than the one used by Test Connection (which is why the test succeeds but the read fails).

    Here are the recommended steps to resolve this, specifically for Azure IR:

    1. Use the Amazon Redshift V2 Connector (Recommended):
      • The v1.0 connector is legacy; Microsoft strongly recommends the Amazon Redshift V2 connector for new workloads.
      • Action: Create a new Linked Service using the Amazon Redshift V2 type. This connector is built on newer drivers and handles SSL/TLS certificates natively within the Azure IR environment; a sketch of such a linked service follows the summary below.
      • Note: The V2 connector shows the same "Amazon Redshift" name in the UI, but the underlying type can differ in the JSON definition. If you edit your existing dataset, check whether you can switch its Linked Service reference to the new V2 one.
    2. Bypass Certificate Validation (If staying on V1):
      • If you must use V1 and cannot use a Self-Hosted IR to install custom certificates, you can try to disable strict SSL verification in the connection string, though this is less secure.
      • In your Linked Service, look for "Encryption Method" or "Additional Connection Properties".
      • Try adding EncryptionMethod=1;ValidateServerCertificate=0; to the connection string or additional properties. This tells the driver to use encryption but skip validating the server's certificate chain (the check fails because the Azure IR does not trust the specific Redshift CA).
    3. EncryptionMethod Property:
      • Ensure your connection string explicitly sets EncryptionMethod=1 (SSL). Sometimes the default negotiation fails if not explicitly forced.
    4. Check "Unload" Settings:
      • The error mentions "connector transparent migration". This implies ADF might be trying to use the UNLOAD command to S3 for performance, which is the default efficient copy method; see the copy-source sketch after this list.
      • Ensure the S3 bucket you are using for staging (if configured) is accessible and that the Redshift cluster has permission to write to it. SSL errors can mask underlying permission issues during the UNLOAD handshake.
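
    For reference, here is a minimal sketch of the copy activity source when UNLOAD staging is configured, based on the documented redshiftUnloadSettings shape; the linked-service name, query, and bucket name are placeholders:

```json
{
    "source": {
        "type": "AmazonRedshiftSource",
        "query": "select * from MyTable",
        "redshiftUnloadSettings": {
            "s3LinkedServiceName": {
                "referenceName": "AmazonS3LinkedService",
                "type": "LinkedServiceReference"
            },
            "bucketName": "bucket-for-unload"
        }
    }
}
```

    Temporarily removing redshiftUnloadSettings forces a direct driver read, which can help isolate whether S3 permissions or the SSL handshake is the actual culprit.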

    Summary: The most robust fix is to switch your Linked Service to the Amazon Redshift V2 connector, which is designed to address exactly this driver-obsolescence issue on the Azure IR; a rough sketch follows.
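
    As a rough sketch only (the exact type and version fields for the V2 connector have changed across releases, so verify against the current connector documentation), a V2-style linked service might look like this, with placeholder values in angle brackets:

```json
{
    "name": "AmazonRedshiftV2LinkedService",
    "properties": {
        "type": "AmazonRedshift",
        "version": "2.0",
        "typeProperties": {
            "server": "<cluster-id>.<unique-id>.<region>.redshift.amazonaws.com",
            "port": 5439,
            "database": "<database>",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        }
    }
}
```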

    If this helps, please approve the answer.

    Best Regards,

    Jerald Felix


  2. VRISHABHANATH PATIL 1,820 Reputation points Microsoft External Staff Moderator
    2025-12-03T06:35:27.6833333+00:00

    Hi @Vincents Goldmanis,

    Thank you for contacting Microsoft Q&A. Below are a few mitigation steps that may help you resolve the issue.

    ADF Redshift v1.0 uses an older, built‑in ODBC driver. During "transparent migration" ADF re-routes the data read through that driver. Newer AWS certificate chains, or hostnames that don't match the server certificate (e.g., connecting via a CNAME), commonly trip this driver, hence the "SSL error: certificate verify failed" message. [learn.microsoft.com]

    AWS moved Redshift to ACM-issued certs. If the client doesn’t trust the updated root bundle or the hostname doesn’t match the cert, you’ll see exactly this error. [docs.aws.amazon.com], [docs.aws.amazon.com]

    The concrete fix (works with Azure Integration Runtime, no SHIR)

    • Use the cluster's real endpoint (no CNAME/IP). In the Linked Service → Server field, put the native xxx.redshift.amazonaws.com endpoint. Avoid Route 53 aliases or private IPs so the hostname matches the certificate's CN/SAN; this alone fixes many verify-full failures (a minimal linked-service sketch follows this list). [learn.microsoft.com]
    • Keep SSL on, but relax strict certificate validation in v1.0. In the Redshift v1.0 Linked Service, open Advanced / Additional connection properties and add:

    EncryptionMethod=1;ValidateServerCertificate=0;ssl=true;sslmode=require

      - EncryptionMethod=1 + ssl=true keeps traffic encrypted.
      - ValidateServerCertificate=0 avoids the chain/hostname check that the built‑in driver often fails on the Azure IR.
      - sslmode=require (not verify-full) ensures TLS is used without enforcing a certificate match. This is the recommended workaround when you must stay on the Azure IR (no SHIR); use it only if your org's security policy permits. [learn.microsoft.com](https://learn.microsoft.com/en-us/answers/questions/5629747/adf-amazon-redshift-linked-service-produces-ssl-er)
    • Double‑check the Redshift side: enforce TLS ≥ 1.2 and require_SSL if your policy mandates it. If you set require_SSL=true in the Redshift parameter group, the above still works; just don't point to a CNAME that breaks hostname matching. AWS's documentation explains cert bundles and SSL modes in detail. [docs.aws.amazon.com]
    • Retry the dataset preview / pipeline run. The 27809 message is generic; if the SSL validation hurdle is gone, the preview and copy should proceed. If it still fails, re‑check that the exact endpoint is used and no gateway/proxy is doing TLS interception.
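
    To make the endpoint bullet concrete, here is a minimal v1.0 linked-service sketch using the native endpoint. All values are placeholders, and only the documented v1.0 typeProperties (server, port, database, username, password) are shown; the EncryptionMethod/ValidateServerCertificate string above is entered through the UI's additional connection properties, whose JSON representation I have not verified and therefore omit:

```json
{
    "name": "AmazonRedshiftLinkedService",
    "properties": {
        "type": "AmazonRedshift",
        "typeProperties": {
            "server": "my-cluster.abc123xyz456.eu-west-1.redshift.amazonaws.com",
            "port": 5439,
            "database": "<database>",
            "username": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        }
    }
}
```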

    If you're allowed to move off v1.0 (future-proof option): recreate the linked service with the Amazon Redshift V2 connector, as recommended in the first answer; it is built on newer drivers that trust the current AWS certificate chain.


