TextCatalog.ProduceNgrams Method

Definition

Creates a NgramExtractingEstimator, which produces a vector of counts of n-grams (sequences of consecutive words) encountered in the input text.

public static Microsoft.ML.Transforms.Text.NgramExtractingEstimator ProduceNgrams (this Microsoft.ML.TransformsCatalog.TextTransforms catalog, string outputColumnName, string inputColumnName = default, int ngramLength = 2, int skipLength = 0, bool useAllLengths = true, int maximumNgramsCount = 10000000, Microsoft.ML.Transforms.Text.NgramExtractingEstimator.WeightingCriteria weighting = Microsoft.ML.Transforms.Text.NgramExtractingEstimator+WeightingCriteria.Tf);
static member ProduceNgrams : Microsoft.ML.TransformsCatalog.TextTransforms * string * string * int * int * bool * int * Microsoft.ML.Transforms.Text.NgramExtractingEstimator.WeightingCriteria -> Microsoft.ML.Transforms.Text.NgramExtractingEstimator
<Extension()>
Public Function ProduceNgrams (catalog As TransformsCatalog.TextTransforms, outputColumnName As String, Optional inputColumnName As String = Nothing, Optional ngramLength As Integer = 2, Optional skipLength As Integer = 0, Optional useAllLengths As Boolean = true, Optional maximumNgramsCount As Integer = 10000000, Optional weighting As NgramExtractingEstimator.WeightingCriteria = Microsoft.ML.Transforms.Text.NgramExtractingEstimator+WeightingCriteria.Tf) As NgramExtractingEstimator
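
For orientation, here is a minimal sketch of a typical call (not part of the reference signature; the column names "Text", "Tokens", and "NgramFeatures" are illustrative assumptions). Because ProduceNgrams operates over vectors of keys, it is usually appended after word tokenization and a MapValueToKey step; a full runnable sample appears in the Examples section below.

using Microsoft.ML;

// Minimal sketch: tokenize the text, map the tokens to keys, then extract
// n-grams with the documented defaults (bigrams plus unigrams, Tf weighting).
var mlContext = new MLContext();
var pipeline = mlContext.Transforms.Text.TokenizeIntoWords("Tokens", "Text")
    .Append(mlContext.Transforms.Conversion.MapValueToKey("Tokens"))
    .Append(mlContext.Transforms.Text.ProduceNgrams("NgramFeatures", "Tokens"));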

Parameters

catalog
TransformsCatalog.TextTransforms

The text-related transform's catalog.

outputColumnName
String

Name of the column resulting from the transformation of inputColumnName. This column's data type will be a vector of Single.

inputColumnName
String

Name of the column to transform. If set to null, the value of outputColumnName will be used as source. This estimator operates over vectors of key data type.

ngramLength
Int32

Ngram length.

skipLength
Int32

Number of tokens to skip between each n-gram. By default no token is skipped.

useAllLengths
Boolean

Whether to include all n-gram lengths up to ngramLength, or only n-grams of length ngramLength.

maximumNgramsCount
Int32

Maximum number of n-grams to store in the dictionary.

weighting
NgramExtractingEstimator.WeightingCriteria

Statistical measure used to evaluate how important a word or n-gram is to a document in a corpus. When maximumNgramsCount is smaller than the total number of n-grams encountered, this measure is used to determine which n-grams to keep.
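
To illustrate how these parameters interact, the sketch below (a hedged example with illustrative column names, not part of the reference) requests up to trigrams with one allowed skipped token and Tf-Idf weighting instead of raw counts; when more distinct n-grams are encountered than maximumNgramsCount allows, the chosen weighting determines which ones the dictionary keeps.

using Microsoft.ML;
using Microsoft.ML.Transforms.Text;

// Sketch of non-default settings: n-grams up to length 3, one token may be
// skipped inside an n-gram, and each slot is weighted by Tf-Idf instead of
// raw term frequency.
var mlContext = new MLContext();
var pipeline = mlContext.Transforms.Text.TokenizeIntoWords("Tokens", "Text")
    .Append(mlContext.Transforms.Conversion.MapValueToKey("Tokens"))
    .Append(mlContext.Transforms.Text.ProduceNgrams("NgramFeatures", "Tokens",
        ngramLength: 3,        // extract n-grams up to length 3
        skipLength: 1,         // allow one skipped token between n-gram items
        useAllLengths: true,   // also emit unigrams and bigrams
        weighting: NgramExtractingEstimator.WeightingCriteria.TfIdf));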

Returns

NgramExtractingEstimator

Examples

using System;
using System.Collections.Generic;
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Transforms.Text;

namespace Samples.Dynamic
{
    public static class ProduceNgrams
    {
        public static void Example()
        {
            // Create a new ML context, for ML.NET operations. It can be used for
            // exception tracking and logging, as well as the source of randomness.
            var mlContext = new MLContext();

            // Create a small dataset as an IEnumerable.
            var samples = new List<TextData>()
            {
                new TextData(){ Text = "This is an example to compute n-grams." },
                new TextData(){ Text = "N-gram is a sequence of 'N' consecutive " +
                    "words/tokens." },

                new TextData(){ Text = "ML.NET's ProduceNgrams API produces " +
                    "vector of n-grams." },

                new TextData(){ Text = "Each position in the vector corresponds " +
                    "to a particular n-gram." },

                new TextData(){ Text = "The value at each position corresponds " +
                    "to," },

                new TextData(){ Text = "the number of times n-gram occurred in " +
                    "the data (Tf), or" },

                new TextData(){ Text = "the inverse of the number of documents " +
                    "that contain the n-gram (Idf)," },

                new TextData(){ Text = "or compute both and multiply together " +
                    "(Tf-Idf)." },
            };

            // Convert training data to IDataView.
            var dataview = mlContext.Data.LoadFromEnumerable(samples);

            // A pipeline for converting text into numeric n-gram features.
            // The following call to 'ProduceNgrams' requires the tokenized
            // text/string as input. This is achieved by calling
            // 'TokenizeIntoWords' first followed by 'ProduceNgrams'. Please note
            // that the length of the output feature vector depends on the n-gram
            // settings.
            var textPipeline = mlContext.Transforms.Text.TokenizeIntoWords("Tokens",
                "Text")
                // 'ProduceNgrams' takes key type as input. Converting the tokens
                // into key type using 'MapValueToKey'.
                .Append(mlContext.Transforms.Conversion.MapValueToKey("Tokens"))
                .Append(mlContext.Transforms.Text.ProduceNgrams("NgramFeatures",
                    "Tokens",
                    ngramLength: 3,
                    useAllLengths: false,
                    weighting: NgramExtractingEstimator.WeightingCriteria.Tf));

            // Fit to data.
            var textTransformer = textPipeline.Fit(dataview);
            var transformedDataView = textTransformer.Transform(dataview);

            // Create the prediction engine to get the n-gram features extracted
            // from the text.
            var predictionEngine = mlContext.Model.CreatePredictionEngine<TextData,
                TransformedTextData>(textTransformer);

            // Convert the text into numeric features.
            var prediction = predictionEngine.Predict(samples[0]);

            // Print the length of the feature vector.
            Console.WriteLine("Number of Features: " + prediction.NgramFeatures
                .Length);

            // Preview of the produced n-grams.
            // Get the slot names from the column's metadata.
            // The slot names for a vector column correspond to the names
            // associated with each position in the vector.
            VBuffer<ReadOnlyMemory<char>> slotNames = default;
            transformedDataView.Schema["NgramFeatures"].GetSlotNames(ref slotNames);
            var NgramFeaturesColumn = transformedDataView.GetColumn<VBuffer<
                float>>(transformedDataView.Schema["NgramFeatures"]);
            var slots = slotNames.GetValues();
            Console.Write("N-grams: ");
            foreach (var featureRow in NgramFeaturesColumn)
            {
                foreach (var item in featureRow.Items())
                    Console.Write($"{slots[item.Key]}  ");
                Console.WriteLine();
            }

            // Print the first 10 feature values.
            Console.Write("Features: ");
            for (int i = 0; i < 10; i++)
                Console.Write($"{prediction.NgramFeatures[i]:F4}  ");

            //  Expected output:
            //   Number of Features: 52
            //   N-grams:   This|is|an  is|an|example  an|example|to  example|to|compute  to|compute|n-grams.  N-gram|is|a  is|a|sequence  a|sequence|of  sequence|of|'N'  of|'N'|consecutive  ...
            //   Features:     1.0000      1.0000          1.0000           1.0000             1.0000            0.0000      0.0000          0.0000          0.0000          0.0000          ...
        }

        private class TextData
        {
            public string Text { get; set; }
        }

        private class TransformedTextData : TextData
        {
            public float[] NgramFeatures { get; set; }
        }
    }
}

Applies to