Explainability with ML.NET
I am training a model with ML.NET's FastTree regression trainer. I want to add some form of "explainability": when the model makes a prediction, I want to be able to explain why it made that prediction.
In a previous implementation based on a multidimensional tree lookup, the "explainability" came from the M nearest neighbors used to produce the prediction.
I know that ML.NET has "Permutation Feature Importance" (PFI), which can provide some "explainability" for the model as a whole, but I'm looking to explain each individual prediction. Does ML.NET have any built-in functionality that could accomplish this?
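For context, my current setup looks roughly like the sketch below (column names and the data source are placeholders, and the exact PFI overload may differ by ML.NET version). PFI works here, but it only ranks features globally:

```csharp
var mlContext = new MLContext();

// Placeholder: trainingRows is my in-memory training set.
IDataView data = mlContext.Data.LoadFromEnumerable(trainingRows);

var pipeline = mlContext.Transforms
    .Concatenate("Features", "Feature1", "Feature2", "Feature3")
    .Append(mlContext.Regression.Trainers.FastTree(labelColumnName: "Label"));

ITransformer model = pipeline.Fit(data);

// PFI tells me how much each feature matters to the model overall,
// but nothing about why a single prediction came out the way it did.
var pfi = mlContext.Regression.PermutationFeatureImportance(
    model, model.Transform(data), labelColumnName: "Label");
```

What I'm after is the per-prediction equivalent of this: given one input row, some breakdown of which feature values pushed the predicted score up or down.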