@Senad Hadzikic - If you want to release the cached memory in your Databricks cluster without restarting the cluster itself, you can try the following steps:
- Use the `spark.catalog.clearCache()` method to clear the data cached by Spark. It removes all cached tables and DataFrames from memory and disk; you can run it in a notebook cell.
- Use the `dbutils.fs.unmount()` method to unmount any mounted file systems you no longer need. Mounts consume some memory, so unmounting them can help free it up. You can run this in a notebook cell as well.
- Use the `sync` command to flush the file system buffers to disk. You can run it from a notebook cell (for example, in a `%sh` cell).
- Use the `sync; echo 3 > /proc/sys/vm/drop_caches` command to drop the page cache, dentries, and inodes. This frees memory held by the operating system cache. However, writing to `/proc/sys/vm/drop_caches` requires root access, so you might need to contact your Databricks administrator to run it.
- Consider a different cluster configuration. For example, you might try a different instance type or a different number of nodes to see whether that improves memory usage.
Note that these steps might not free up all of the memory that is being used by your Databricks cluster, but they can help free up some memory. If you are still experiencing high memory usage after trying these steps, you might need to consider opening a support ticket for further assistance.
Hope this helps. Do let us know if you have any further queries.