Azure Machine Learning's AutoML currently does not directly support training object detection models using DICOM (.dcm) files. However, you can still work with DICOM files by converting them into a supported image format, such as .jpg or .png, before training the model. Here’s the general approach to handle this:
- Convert DICOM to JPEG/PNG: Use a library like Python's `pydicom` to load the DICOM files and convert them to JPEG or PNG. Here's a sample script (note that DICOM pixel data is often 12- or 16-bit, so it must be rescaled to 8-bit before it can be saved as JPEG):

```python
import numpy as np
import pydicom
from PIL import Image

def convert_dcm_to_jpg(dicom_file, output_file):
    ds = pydicom.dcmread(dicom_file)
    img_array = ds.pixel_array.astype(np.float32)
    # Normalize to 0-255: JPEG cannot store 12/16-bit pixel data directly
    img_array -= img_array.min()
    if img_array.max() > 0:
        img_array /= img_array.max()
    img = Image.fromarray((img_array * 255).astype(np.uint8))
    img.save(output_file)

# Example usage
convert_dcm_to_jpg('input_image.dcm', 'output_image.jpg')
```
- Prepare the Dataset: After converting the images, verify that the bounding-box annotations still line up with the converted files. Because the conversion preserves image dimensions, the pixel-coordinate labels from the Azure Machine Learning data labeling tool remain valid, but AutoML for Images expects annotations in JSONL format with coordinates normalized to the image width and height, so you may need to transform the exported labels accordingly.
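As an illustration, here is a minimal sketch of building one JSONL record in the AutoML-for-Images object-detection schema. The image URL, class name, image size, and box coordinates below are placeholder values, and the `isCrowd` flag is included per the documented schema:

```python
import json

def to_automl_jsonl(image_url, width, height, boxes):
    """Build one AutoML-for-Images JSONL record from pixel-space boxes.

    `boxes` is a list of (label, x_min, y_min, x_max, y_max) in pixels;
    AutoML expects coordinates normalized to [0, 1].
    """
    return json.dumps({
        "image_url": image_url,
        "image_details": {"format": "jpg", "width": width, "height": height},
        "label": [
            {
                "label": name,
                "topX": x1 / width,
                "topY": y1 / height,
                "bottomX": x2 / width,
                "bottomY": y2 / height,
                "isCrowd": 0,  # 0 = box marks a single object, not a crowd
            }
            for name, x1, y1, x2, y2 in boxes
        ],
    })

# Placeholder example: one 512x512 image with a single labeled box
record = to_automl_jsonl(
    "azureml://datastores/workspaceblobstore/paths/images/output_image.jpg",
    512, 512, [("lesion", 100, 150, 300, 400)],
)
```

Write one such record per line to a `.jsonl` file alongside the converted images.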
- Train Object Detection Model with AutoML: Now that you have the images in a supported format, you can use AutoML to train your object detection model by following these steps:
- Upload the converted images and corresponding label files to Azure Blob Storage or the appropriate storage location.
- Use AutoML for Object Detection and provide the dataset with the newly converted images and bounding boxes.
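The steps above can be sketched with the Azure ML Python SDK v2 (`azure-ai-ml`). Treat this as a job-configuration sketch rather than runnable local code: the subscription, resource group, workspace, compute cluster, and data asset names are all placeholders you would replace with your own, and submitting the job requires valid Azure credentials:

```python
from azure.ai.ml import MLClient, Input, automl
from azure.identity import DefaultAzureCredential

# Placeholder workspace details - replace with your own
ml_client = MLClient(
    DefaultAzureCredential(),
    "<subscription-id>",
    "<resource-group>",
    "<workspace-name>",
)

# Configure an AutoML image object detection job; "gpu-cluster" and the
# MLTable data asset name are assumptions for this sketch
job = automl.image_object_detection(
    compute="gpu-cluster",
    experiment_name="dicom-object-detection",
    training_data=Input(type="mltable", path="azureml:dicom-training-data:1"),
    target_column_name="label",
    primary_metric="mean_average_precision",
)

ml_client.jobs.create_or_update(job)
```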
For the full walkthrough, see the official Microsoft Learn guide on setting up AutoML to train computer vision models in Azure ML Studio.
By converting your DICOM files to JPEG or PNG, you can leverage AutoML for object detection while maintaining your bounding box labels. Let me know if you need more details on any step!