ImageEstimatorsCatalog.ExtractPixels Method

Definition

Create an ImagePixelExtractingEstimator, which extracts pixel values from the data specified in column: inputColumnName, to a new column: outputColumnName.

public static Microsoft.ML.Transforms.Image.ImagePixelExtractingEstimator ExtractPixels (this Microsoft.ML.TransformsCatalog catalog, string outputColumnName, string inputColumnName = default, Microsoft.ML.Transforms.Image.ImagePixelExtractingEstimator.ColorBits colorsToExtract = Microsoft.ML.Transforms.Image.ImagePixelExtractingEstimator+ColorBits.Rgb, Microsoft.ML.Transforms.Image.ImagePixelExtractingEstimator.ColorsOrder orderOfExtraction = Microsoft.ML.Transforms.Image.ImagePixelExtractingEstimator+ColorsOrder.ARGB, bool interleavePixelColors = false, float offsetImage = 0, float scaleImage = 1, bool outputAsFloatArray = true);
static member ExtractPixels : Microsoft.ML.TransformsCatalog * string * string * Microsoft.ML.Transforms.Image.ImagePixelExtractingEstimator.ColorBits * Microsoft.ML.Transforms.Image.ImagePixelExtractingEstimator.ColorsOrder * bool * single * single * bool -> Microsoft.ML.Transforms.Image.ImagePixelExtractingEstimator
<Extension()>
Public Function ExtractPixels (catalog As TransformsCatalog, outputColumnName As String, Optional inputColumnName As String = Nothing, Optional colorsToExtract As ImagePixelExtractingEstimator.ColorBits = Microsoft.ML.Transforms.Image.ImagePixelExtractingEstimator+ColorBits.Rgb, Optional orderOfExtraction As ImagePixelExtractingEstimator.ColorsOrder = Microsoft.ML.Transforms.Image.ImagePixelExtractingEstimator+ColorsOrder.ARGB, Optional interleavePixelColors As Boolean = false, Optional offsetImage As Single = 0, Optional scaleImage As Single = 1, Optional outputAsFloatArray As Boolean = true) As ImagePixelExtractingEstimator

Parameters

catalog
TransformsCatalog

The transform's catalog.

outputColumnName
String

Name of the column resulting from the transformation of inputColumnName. This column's data type will be a known-sized vector of Single or Byte, depending on outputAsFloatArray.

inputColumnName
String

Name of the column with images. This estimator operates over MLImage.

colorsToExtract
ImagePixelExtractingEstimator.ColorBits

The colors to extract from the image.

orderOfExtraction
ImagePixelExtractingEstimator.ColorsOrder

The order in which to extract colors from each pixel.

interleavePixelColors
Boolean

Whether to interleave the pixel colors, meaning keep them in the orderOfExtraction order, or leave them in planar form: all the values for one color for all pixels, then all the values for another color, and so on.

offsetImage
Single

Offset each pixel's color value by this amount. Applied to the color value before scaleImage.

scaleImage
Single

Scale each pixel's color value by this amount. Applied to the color value after offsetImage.

outputAsFloatArray
Boolean

Output the array as a float array. If false, output as a byte array and ignore offsetImage and scaleImage.

Returns

ImagePixelExtractingEstimator

Examples
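
Before the full samples, here is a minimal sketch (not part of the original samples) of a call that sets every optional parameter explicitly. The column names "Pixels" and "ImageResized" are placeholders, and a scaleImage of 1/255 is just one common choice for normalizing byte color values into the [0, 1] range.

// Requires: using Microsoft.ML; and using Microsoft.ML.Transforms.Image;
var mlContext = new MLContext();

// "Pixels" and "ImageResized" are hypothetical column names.
var extractPixels = mlContext.Transforms.ExtractPixels(
    outputColumnName: "Pixels",
    inputColumnName: "ImageResized",
    colorsToExtract: ImagePixelExtractingEstimator.ColorBits.Rgb,
    orderOfExtraction: ImagePixelExtractingEstimator.ColorsOrder.ARGB,
    interleavePixelColors: true,   // store values pixel by pixel instead of planar
    offsetImage: 0f,               // applied to each color value before scaling
    scaleImage: 1f / 255f,         // applied to each color value after the offset
    outputAsFloatArray: true);     // output column is a vector of Single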

using System;
using System.IO;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;

namespace Samples.Dynamic
{
    public static class ExtractPixels
    {
        // Sample that loads the images from the file system, resizes them (
        // ExtractPixels requires a resizing operation), and extracts the values of
        // the pixels as a vector.
        public static void Example()
        {
            // Create a new ML context, for ML.NET operations. It can be used for
            // exception tracking and logging, as well as the source of randomness.
            var mlContext = new MLContext();

            // Downloading a few images, and an images.tsv file, which contains a
            // list of the files from the dotnet/machinelearning/test/data/images/.
            // If you inspect the file system, after running this line, an "images"
            // folder will be created, containing 4 images, and a .tsv file
            // enumerating the images.
            var imagesDataFile = Microsoft.ML.SamplesUtils.DatasetUtils
                .GetSampleImages();

            // Preview of the content of the images.tsv file
            //
            // imagePath    imageType
            // tomato.bmp   tomato
            // banana.jpg   banana
            // hotdog.jpg   hotdog
            // tomato.jpg   tomato

            var data = mlContext.Data.CreateTextLoader(new TextLoader.Options()
            {
                Columns = new[]
                {
                        new TextLoader.Column("ImagePath", DataKind.String, 0),
                        new TextLoader.Column("Name", DataKind.String, 1),
                }
            }).Load(imagesDataFile);

            var imagesFolder = Path.GetDirectoryName(imagesDataFile);
            // Image loading pipeline.
            var pipeline = mlContext.Transforms.LoadImages("ImageObject",
                imagesFolder, "ImagePath")
                .Append(mlContext.Transforms.ResizeImages("ImageObjectResized",
                    inputColumnName: "ImageObject", imageWidth: 100, imageHeight:
                    100))
                .Append(mlContext.Transforms.ExtractPixels("Pixels",
                    "ImageObjectResized"));

            var transformedData = pipeline.Fit(data).Transform(data);

            // Preview the transformedData.
            PrintColumns(transformedData);

            // ImagePath    Name         ImageObject               ImageObjectResized        Pixels
            // tomato.bmp   tomato       {Width=800, Height=534}   {Width=100, Height=100}   255,255,255,255,255...
            // banana.jpg   banana       {Width=800, Height=288}   {Width=100, Height=100}   255,255,255,255,255...
            // hotdog.jpg   hotdog       {Width=800, Height=391}   {Width=100, Height=100}   255,255,255,255,255...
            // tomato.jpg   tomato       {Width=800, Height=534}   {Width=100, Height=100}   255,255,255,255,255...
        }

        private static void PrintColumns(IDataView transformedData)
        {
            Console.WriteLine("{0, -25} {1, -25} {2, -25} {3, -25} {4, -25}",
                "ImagePath", "Name", "ImageObject", "ImageObjectResized", "Pixels");

            using (var cursor = transformedData.GetRowCursor(transformedData
                .Schema))
            {
                // Note that it is best to get the getters and values *before*
                // iteration, so as to facilitate buffer sharing (if applicable), and
                // column-type validation once, rather than many times.

                ReadOnlyMemory<char> imagePath = default;
                ReadOnlyMemory<char> name = default;
                MLImage imageObject = null;
                MLImage resizedImageObject = null;
                VBuffer<float> pixels = default;

                var imagePathGetter = cursor.GetGetter<ReadOnlyMemory<char>>(cursor
                    .Schema["ImagePath"]);

                var nameGetter = cursor.GetGetter<ReadOnlyMemory<char>>(cursor
                    .Schema["Name"]);

                var imageObjectGetter = cursor.GetGetter<MLImage>(cursor.Schema[
                    "ImageObject"]);

                var resizedImageGetter = cursor.GetGetter<MLImage>(cursor.Schema[
                    "ImageObjectResized"]);

                var pixelsGetter = cursor.GetGetter<VBuffer<float>>(cursor.Schema[
                    "Pixels"]);

                while (cursor.MoveNext())
                {

                    imagePathGetter(ref imagePath);
                    nameGetter(ref name);
                    imageObjectGetter(ref imageObject);
                    resizedImageGetter(ref resizedImageObject);
                    pixelsGetter(ref pixels);

                    Console.WriteLine("{0, -25} {1, -25} {2, -25} {3, -25} " +
                        "{4, -25}", imagePath, name,
                        $"Width={imageObject.Width}, Height={imageObject.Height}",
                        $"Width={resizedImageObject.Width}, Height={resizedImageObject.Height}",
                        string.Join(",", pixels.DenseValues().Take(5)) + "...");
                }

                // Dispose the images.
                imageObject.Dispose();
                resizedImageObject.Dispose();
            }
        }
    }
}

using System;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Transforms.Image;

namespace Samples.Dynamic
{
    public static class ApplyOnnxModelWithInMemoryImages
    {
        // Example of applying ONNX transform on in-memory images.
        public static void Example()
        {
            // Download the squeezenet image model from ONNX model zoo, version 1.2
            // https://github.com/onnx/models/tree/master/vision/classification/squeezenet or use
            // Microsoft.ML.Onnx.TestModels nuget.
            // It's a multiclass classifier. It consumes an input "data_0" and
            // produces an output "softmaxout_1".
            var modelPath = @"squeezenet\00000001\model.onnx";

            // Create ML pipeline to score the data using OnnxScoringEstimator
            var mlContext = new MLContext();

            // Create in-memory data points. Its Image/Scores field is the
            // input/output of the used ONNX model.
            var dataPoints = new ImageDataPoint[]
            {
                new ImageDataPoint(red: 255, green: 0, blue: 0), // Red color
                new ImageDataPoint(red: 0, green: 128, blue: 0)  // Green color
            };

            // Convert training data to IDataView, the general data type used in
            // ML.NET.
            var dataView = mlContext.Data.LoadFromEnumerable(dataPoints);

            // Create a ML.NET pipeline which contains two steps. First,
            // ExtractPixels is used to convert the 224x224 image to a 3x224x224
            // float tensor. Then the float tensor is fed into an ONNX model with an
            // input called "data_0" and an output called "softmaxout_1". Note that
            // "data_0" and "softmaxout_1" are model input and output names stored
            // in the used ONNX model file. Users may need to inspect their own
            // models to get the right input and output column names.
            // Map column "Image" to column "data_0"
            // Map column "data_0" to column "softmaxout_1"
            var pipeline = mlContext.Transforms.ExtractPixels("data_0", "Image")
                .Append(mlContext.Transforms.ApplyOnnxModel("softmaxout_1",
                "data_0", modelPath));

            var model = pipeline.Fit(dataView);
            var onnx = model.Transform(dataView);

            // Convert IDataView back to IEnumerable<ImageDataPoint> so that the
            // user can inspect the output column "softmaxout_1" of the ONNX
            // transform. Note that column "softmaxout_1" is stored in
            // ImageDataPoint.Scores because the added attribute
            // [ColumnName("softmaxout_1")] says that ImageDataPoint.Scores is
            // equivalent to column "softmaxout_1".
            var transformedDataPoints = mlContext.Data.CreateEnumerable<
                ImageDataPoint>(onnx, false).ToList();

            // The scores are probabilities of all possible classes, so they should
            // all be positive.
            foreach (var dataPoint in transformedDataPoints)
            {
                var firstClassProb = dataPoint.Scores.First();
                var lastClassProb = dataPoint.Scores.Last();
                Console.WriteLine("The probability of being the first class is " +
                    (firstClassProb * 100) + "%.");

                Console.WriteLine($"The probability of being the last class is " +
                    (lastClassProb * 100) + "%.");
            }

            // Expected output:
            //  The probability of being the first class is 0.002542659%.
            //  The probability of being the last class is 0.0292684%.
            //  The probability of being the first class is 0.02258059%.
            //  The probability of being the last class is 0.394428%.
        }

        // This class is used in Example() to describe data points which will be
        // consumed by ML.NET pipeline.
        private class ImageDataPoint
        {
            // Height of Image.
            private const int height = 224;

            // Width of Image.
            private const int width = 224;

            // Image will be consumed by ONNX image multiclass classification model.
            [ImageType(height, width)]
            public MLImage Image { get; set; }

            // Expected output of ONNX model. It contains probabilities of all
            // classes. Note that the ColumnName below should match the output name
            // in the used ONNX model file.
            [ColumnName("softmaxout_1")]
            public float[] Scores { get; set; }

            public ImageDataPoint()
            {
                Image = null;
            }

            public ImageDataPoint(byte red, byte green, byte blue)
            {
                byte[] imageData = new byte[width * height * 4]; // 4 for the red, green, blue and alpha colors
                for (int i = 0; i < imageData.Length; i += 4)
                {
                    // Fill the buffer with the Bgra32 format
                    imageData[i] = blue;
                    imageData[i + 1] = green;
                    imageData[i + 2] = red;
                    imageData[i + 3] = 255;
                }

                Image = MLImage.CreateFromPixels(width, height, MLPixelFormat.Bgra32, imageData);
            }
        }
    }
}
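
The in-memory sample above builds MLImage instances from raw pixel data. As a minimal sketch, not part of the original sample, an MLImage can also be created from an encoded image file through MLImage.CreateFromStream, which makes it easy to push existing image files through the same in-memory pipeline; the path below is a placeholder.

// Sketch: decode an image file (placeholder path) into an MLImage that can be
// assigned to ImageDataPoint.Image above.
// Requires: using System.IO; and using Microsoft.ML.Data;
MLImage LoadImage(string path)
{
    using FileStream stream = File.OpenRead(path);
    return MLImage.CreateFromStream(stream);
}

var image = LoadImage(@"images\tomato.bmp");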

Applies to