
PermutationFeatureImportanceExtensions.PermutationFeatureImportanceNonCalibrated Method

Definition

Permutation Feature Importance (PFI) for binary classification.

public static System.Collections.Immutable.ImmutableDictionary<string,Microsoft.ML.Data.BinaryClassificationMetricsStatistics> PermutationFeatureImportanceNonCalibrated (this Microsoft.ML.BinaryClassificationCatalog catalog, Microsoft.ML.ITransformer model, Microsoft.ML.IDataView data, string labelColumnName = "Label", bool useFeatureWeightFilter = false, int? numberOfExamplesToUse = default, int permutationCount = 1);
static member PermutationFeatureImportanceNonCalibrated : Microsoft.ML.BinaryClassificationCatalog * Microsoft.ML.ITransformer * Microsoft.ML.IDataView * string * bool * Nullable<int> * int -> System.Collections.Immutable.ImmutableDictionary<string, Microsoft.ML.Data.BinaryClassificationMetricsStatistics>
<Extension()>
Public Function PermutationFeatureImportanceNonCalibrated (catalog As BinaryClassificationCatalog, model As ITransformer, data As IDataView, Optional labelColumnName As String = "Label", Optional useFeatureWeightFilter As Boolean = false, Optional numberOfExamplesToUse As Nullable(Of Integer) = Nothing, Optional permutationCount As Integer = 1) As ImmutableDictionary(Of String, BinaryClassificationMetricsStatistics)

Parameters

catalog
BinaryClassificationCatalog

The binary classification catalog.

model
ITransformer

The model on which to evaluate feature importance.

data
IDataView

The evaluation data set.

labelColumnName
String

The label column name. The column data must be Boolean.

useFeatureWeightFilter
Boolean

Use feature weights to pre-filter features.

numberOfExamplesToUse
Nullable<Int32>

Limit the number of examples to evaluate on. null means up to ~2 billion examples from the input will be used.

permutationCount
Int32

The number of permutations to perform.

Returns

Dictionary mapping each feature to its per-feature "contributions" to the score.
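
The snippet below is a minimal sketch (not part of the original sample) of calling this method and reading the returned dictionary. It assumes an MLContext named mlContext, a trained ITransformer named model, and an IDataView named transformedData, as produced by a pipeline like the one in the example below, and it assumes the dictionary keys are the individual feature (slot) names; it also requires using System.Linq.

// Sketch only: compute PFI with this method and rank features by the mean
// change in AUC. `mlContext`, `model`, and `transformedData` are assumed to
// exist already, e.g. from the example below.
var pfi = mlContext.BinaryClassification.PermutationFeatureImportanceNonCalibrated(
    model, transformedData, labelColumnName: "Label", permutationCount: 30);

// A larger absolute change in AUC means the permuted feature mattered more.
foreach (var feature in pfi.OrderByDescending(
    kvp => Math.Abs(kvp.Value.AreaUnderRocCurve.Mean)))
{
    Console.WriteLine($"{feature.Key}\t{feature.Value.AreaUnderRocCurve.Mean:G4}" +
        $"\t±{1.96 * feature.Value.AreaUnderRocCurve.StandardError:G4}");
}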

Examples

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;

namespace Samples.Dynamic.Trainers.BinaryClassification
{
    public static class PermutationFeatureImportance
    {
        public static void Example()
        {
            // Create a new context for ML.NET operations. It can be used for
            // exception tracking and logging, as a catalog of available operations
            // and as the source of randomness.
            var mlContext = new MLContext(seed: 1);

            // Create sample data.
            var samples = GenerateData();

            // Load the sample data as an IDataView.
            var data = mlContext.Data.LoadFromEnumerable(samples);

            // Define a training pipeline that concatenates features into a vector,
            // normalizes them, and then trains a linear model.
            var featureColumns =
                new string[] { nameof(Data.Feature1), nameof(Data.Feature2) };
            var pipeline = mlContext.Transforms
                .Concatenate("Features", featureColumns)
                .Append(mlContext.Transforms.NormalizeMinMax("Features"))
                .Append(mlContext.BinaryClassification.Trainers
                .SdcaLogisticRegression());

            // Fit the pipeline to the data.
            var model = pipeline.Fit(data);

            // Transform the dataset.
            var transformedData = model.Transform(data);

            // Extract the predictor.
            var linearPredictor = model.LastTransformer;

            // Compute the permutation metrics for the linear model using the
            // normalized data.
            var permutationMetrics = mlContext.BinaryClassification
                .PermutationFeatureImportance(linearPredictor, transformedData,
                permutationCount: 30);

            // Now let's look at which features are most important to the model
            // overall. Get the feature indices sorted by their impact on AUC.
            var sortedIndices = permutationMetrics
                .Select((metrics, index) => new { index, metrics.AreaUnderRocCurve })
                .OrderByDescending(
                feature => Math.Abs(feature.AreaUnderRocCurve.Mean))
                .Select(feature => feature.index);

            Console.WriteLine("Feature\tModel Weight\tChange in AUC"
                + "\t95% Confidence in the Mean Change in AUC");
            var auc = permutationMetrics.Select(x => x.AreaUnderRocCurve).ToArray();
            foreach (int i in sortedIndices)
            {
                Console.WriteLine("{0}\t{1:0.00}\t{2:G4}\t{3:G4}",
                    featureColumns[i],
                    linearPredictor.Model.SubModel.Weights[i],
                    auc[i].Mean,
                    1.96 * auc[i].StandardError);
            }

            // Expected output:
            //  Feature     Model Weight Change in AUC  95% Confidence in the Mean Change in AUC
            //  Feature2        35.15     -0.387        0.002015
            //  Feature1        17.94     -0.1514       0.0008963
        }

        private class Data
        {
            public bool Label { get; set; }

            public float Feature1 { get; set; }

            public float Feature2 { get; set; }
        }

        /// <summary>
        /// Generate an enumerable of Data objects, creating the label as a simple
        /// linear combination of the features.
        /// </summary>
        /// <param name="nExamples">The number of examples.</param>
        /// <param name="bias">The bias, or offset, in the calculation of the label.
        /// </param>
        /// <param name="weight1">The weight to multiply the first feature with to
        /// compute the label.</param>
        /// <param name="weight2">The weight to multiply the second feature with to
        /// compute the label.</param>
        /// <param name="seed">The seed for generating feature values and label
        /// noise.</param>
        /// <returns>An enumerable of Data objects.</returns>
        private static IEnumerable<Data> GenerateData(int nExamples = 10000,
            double bias = 0, double weight1 = 1, double weight2 = 2, int seed = 1)
        {
            var rng = new Random(seed);
            for (int i = 0; i < nExamples; i++)
            {
                var data = new Data
                {
                    Feature1 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
                    Feature2 = (float)(rng.Next(10) * (rng.NextDouble() - 0.5)),
                };

                // Create a noisy label.
                var value = (float)(bias + weight1 * data.Feature1 + weight2 *
                    data.Feature2 + rng.NextDouble() - 0.5);

                data.Label = Sigmoid(value) > 0.5;
                yield return data;
            }
        }

        private static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-1 * x));
    }
}

Remarks

Permutation feature importance (PFI) is a technique to determine the global importance of features in a trained machine learning model. PFI is a simple yet powerful technique motivated by Breiman in section 10 of his Random Forests paper (Breiman. "Random Forests." Machine Learning, 2001.). The advantage of the PFI method is that it is model agnostic: it works with any model that can be evaluated, and it can use any dataset, not just the training set, to compute feature importance metrics.

PFI works by taking a labeled dataset, choosing a feature, and permuting the values of that feature across all the examples, so that each example now has a random value for that feature and the original values for all other features. The evaluation metric (for example, AUC) is then calculated for this modified dataset, and the change in the evaluation metric relative to the original dataset is computed. The larger the change in the evaluation metric, the more important the feature is to the model. PFI performs this permutation analysis across all the features of a model, one feature at a time.
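
To make the permutation step concrete, here is a small, self-contained sketch that is independent of the ML.NET API: the "model" is a hypothetical fixed scoring rule (the same rule that generates the labels), and the only point is to score the original data, shuffle one feature column, rescore, and compare the metric. The feature names and weights are invented for illustration.

using System;
using System.Linq;

public static class PfiPermutationSketch
{
    public static void Main()
    {
        var rng = new Random(1);
        int n = 1000;

        // Two features; the label depends mostly on the second one.
        double[] f1 = Enumerable.Range(0, n).Select(_ => rng.NextDouble() - 0.5).ToArray();
        double[] f2 = Enumerable.Range(0, n).Select(_ => rng.NextDouble() - 0.5).ToArray();
        bool[] label = f1.Zip(f2, (a, b) => 0.2 * a + 2.0 * b > 0).ToArray();

        // A hypothetical fixed "model": the same rule that generated the labels.
        bool Predict(double a, double b) => 0.2 * a + 2.0 * b > 0;

        double Accuracy(double[] x1, double[] x2) =>
            Enumerable.Range(0, n).Count(i => Predict(x1[i], x2[i]) == label[i]) / (double)n;

        double baseline = Accuracy(f1, f2);

        // Permute one feature column at a time and measure the change in the metric.
        double[] f1Shuffled = f1.OrderBy(_ => rng.Next()).ToArray();
        double[] f2Shuffled = f2.OrderBy(_ => rng.Next()).ToArray();

        Console.WriteLine($"Baseline accuracy:                {baseline:0.000}");
        Console.WriteLine($"Change after permuting feature 1: {Accuracy(f1Shuffled, f2) - baseline:0.000}");
        Console.WriteLine($"Change after permuting feature 2: {Accuracy(f1, f2Shuffled) - baseline:0.000}");
    }
}

Because the second feature carries most of the signal, permuting it should produce a much larger drop in accuracy than permuting the first, which is exactly the comparison PFI makes for every feature of a trained model.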

In this implementation, PFI computes the change in all possible binary classification evaluation metrics for each feature, and an ImmutableDictionary mapping each feature to a BinaryClassificationMetricsStatistics object is returned. See the example above for working with these results to analyze the feature importance of a model.
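
As a hedged illustration of those per-feature statistics, assuming pfi is the dictionary returned by this method and "Feature1" is a hypothetical key in it, each entry carries the mean and standard error of every binary classification metric across the permutations:

// Hypothetical key "Feature1"; `pfi` is assumed to be the dictionary returned
// by PermutationFeatureImportanceNonCalibrated as sketched earlier.
var stats = pfi["Feature1"];
Console.WriteLine($"AUC change:      {stats.AreaUnderRocCurve.Mean:G4} ± {1.96 * stats.AreaUnderRocCurve.StandardError:G4}");
Console.WriteLine($"Accuracy change: {stats.Accuracy.Mean:G4} ± {1.96 * stats.Accuracy.StandardError:G4}");
Console.WriteLine($"F1 change:       {stats.F1Score.Mean:G4} ± {1.96 * stats.F1Score.StandardError:G4}");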

Applies to