You may need to rewrite the table: read the existing table into a Spark DataFrame, cast the columns to the desired data types, and then write the DataFrame back out as a new Delta table with the desired schema. For example:
import pyspark.sql.functions as F

# Read the existing table and cast the columns to the target types
df = spark.read.table("db_name.tblname")
df = df.withColumn("int_column", F.col("int_column").cast("int"))
df = df.withColumn("decimal_column", F.col("decimal_column").cast("decimal(10,2)"))

# Write the result out as a new Delta table with the corrected schema
df.write.format("delta").mode("overwrite").saveAsTable("db_name.new_tblname")
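If you want a quick sanity check after the rewrite, you can compare the new table's schema and row count against the original (a minimal sketch, reusing the placeholder table names from the example above):

# Confirm the new table has the expected column types
spark.table("db_name.new_tblname").printSchema()

# Confirm no rows were lost in the rewrite
assert spark.table("db_name.new_tblname").count() == spark.table("db_name.tblname").count()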
Alternatively, if you want to keep the original table name, you can overwrite the existing Delta table in place and let the write replace its schema. (Note that Delta Lake column mapping is a different feature: it lets you rename or drop columns without rewriting the data files, but it does not change a column's data type.) Cast the columns as above, then overwrite the table with the overwriteSchema option:

# Overwrite the existing table in place; overwriteSchema allows the
# table's schema to be replaced by the DataFrame's schema
df.write.format("delta") \
    .mode("overwrite") \
    .option("overwriteSchema", "true") \
    .saveAsTable("db_name.tblname")
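For reference, column mapping itself is enabled through the delta.columnMapping.mode table property and is what you would reach for to rename or drop columns without rewriting the data files. A rough sketch, assuming a Delta Lake version that supports column mapping and the same placeholder table name:

# Enable column mapping by name; this also upgrades the table protocol versions
spark.sql("""
    ALTER TABLE db_name.tblname SET TBLPROPERTIES (
        'delta.columnMapping.mode' = 'name',
        'delta.minReaderVersion' = '2',
        'delta.minWriterVersion' = '5'
    )
""")

# With column mapping enabled, a column can be renamed without touching the data files
spark.sql("ALTER TABLE db_name.tblname RENAME COLUMN int_column TO int_col")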