Hi @Cezary Opacki,
Welcome to the Microsoft Q&A forum, and thanks for reaching out here.
In general, the Azure Synapse Analytics JDBC driver may show poor performance due to many factors, such as network latency, database configuration, and query optimization issues.
To improve performance, consider the following:
- Batch processing: group multiple insert or update operations into a single transaction to reduce the number of database round trips (a full sample follows below).
- Connection pooling: reuse existing database connections instead of opening a new one per operation, reducing connection overhead (see the pooling sketch right after this list).
- Indexing: proper indexes on your tables let the database locate data quickly.
- Query optimization: write efficient SQL with proper syntax.
It is also important to note that some performance issues are caused by outdated statistics and unhealthy clustered columnstore indexes (CCIs), so make sure statistics are up to date and CCIs have been rebuilt (a maintenance sketch appears after the batch example).
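To illustrate the connection pooling point, here is a minimal sketch using the open-source HikariCP library. HikariCP is just one option I am suggesting, not something tied to Synapse, and the server, database, and credential values are placeholders you would replace with your own:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PooledConnectionExample {
    public static void main(String[] args) throws Exception {
        // Pool configuration -- URL, user, and password are placeholders.
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:sqlserver://yourserver.database.windows.net:1433;database=yourdatabase;encrypt=true;");
        config.setUsername("yourusername");
        config.setPassword("yourpassword");
        config.setMaximumPoolSize(10); // cap on concurrent physical connections

        try (HikariDataSource pool = new HikariDataSource(config)) {
            // getConnection() hands back a pooled connection; closing it
            // returns it to the pool instead of tearing it down.
            try (Connection conn = pool.getConnection();
                 PreparedStatement stmt = conn.prepareStatement("SELECT COUNT(*) FROM mytable");
                 ResultSet rs = stmt.executeQuery()) {
                if (rs.next()) {
                    System.out.println("Row count: " + rs.getLong(1));
                }
            }
        }
    }
}

The key design point is that the pool, not your code, owns the physical connections, so repeated getConnection() calls avoid the cost of re-establishing a session with the server each time.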
Below is sample code that uses batch processing to insert multiple rows into a table in a single transaction. You can adapt it to your specific requirements and see if that helps:
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class JDBCExample {
    static final String JDBC_DRIVER = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
    static final String DB_URL = "jdbc:sqlserver://yourserver.database.windows.net:1433;database=yourdatabase;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;";
    static final String USER = "yourusername";
    static final String PASS = "yourpassword";

    public static void main(String[] args) {
        Connection conn = null;
        PreparedStatement stmt = null;
        try {
            Class.forName(JDBC_DRIVER);
            conn = DriverManager.getConnection(DB_URL, USER, PASS);

            // Disable auto-commit so the whole batch runs in one transaction.
            conn.setAutoCommit(false);
            stmt = conn.prepareStatement("INSERT INTO mytable (col1, col2) VALUES (?, ?)");

            List<String[]> data = new ArrayList<>();
            data.add(new String[]{"value1", "value2"});
            data.add(new String[]{"value3", "value4"});

            // Queue every row into the batch instead of executing row by row.
            for (String[] row : data) {
                stmt.setString(1, row[0]);
                stmt.setString(2, row[1]);
                stmt.addBatch();
            }

            // One round trip for all queued inserts, then a single commit.
            stmt.executeBatch();
            conn.commit();
        } catch (SQLException se) {
            se.printStackTrace();
            // Roll back so a failed batch leaves no partial data behind.
            if (conn != null) {
                try {
                    conn.rollback();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (stmt != null) stmt.close();
            } catch (SQLException se2) {
                // nothing useful to do if the statement fails to close
            }
            try {
                if (conn != null) conn.close();
            } catch (SQLException se) {
                se.printStackTrace();
            }
        }
    }
}
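Two related follow-ups. First, if you are on a recent version of the Microsoft JDBC driver, the useBulkCopyForBatchInsert=true connection string property can convert batched inserts into bulk copy operations, which is often significantly faster against Azure Synapse dedicated SQL pools; it is worth testing with your workload. Second, the statistics and CCI maintenance mentioned above is plain T-SQL, so you can run it through the same driver. Below is a minimal sketch; dbo.mytable and the connection values are placeholders for your own objects:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MaintenanceExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://yourserver.database.windows.net:1433;database=yourdatabase;encrypt=true;";
        try (Connection conn = DriverManager.getConnection(url, "yourusername", "yourpassword");
             Statement stmt = conn.createStatement()) {
            // Refresh optimizer statistics so plans reflect current row counts.
            stmt.execute("UPDATE STATISTICS dbo.mytable");
            // Rebuild the table's indexes to compact the clustered columnstore rowgroups.
            stmt.execute("ALTER INDEX ALL ON dbo.mytable REBUILD");
        }
    }
}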
I would also recommend having a look at the resources below to see if they help:
- Best practices for loading data into a dedicated SQL pool in Azure Synapse Analytics
- Azure Synapse analytics (dedicated SQL pool) data modelling best practices
- Memory and concurrency limits for dedicated SQL pool in Azure Synapse Analytics
- Synapse POC playbook: Data warehousing with dedicated SQL pool in Azure Synapse Analytics
Hope this info helps. If you still see performance issues after these changes, I would recommend filing a support ticket for deeper analysis.
Please don’t forget to Accept Answer and mark Yes for "was this answer helpful" wherever the information provided helps you, as this can be beneficial to other community members.