TorchSharpCatalog Class

Definition

Collection of extension methods for MulticlassClassificationCatalog.MulticlassClassificationTrainers to create instances of TorchSharp trainer components.

public static class TorchSharpCatalog
type TorchSharpCatalog = class
Public Module TorchSharpCatalog
Inheritance
Object → TorchSharpCatalog

Remarks

This requires additional NuGet dependencies to link against the TorchSharp native DLLs. See ImageClassificationTrainer for more information.

Methods

EvaluateObjectDetection(MulticlassClassificationCatalog, IDataView, DataViewSchema+Column, DataViewSchema+Column, DataViewSchema+Column, DataViewSchema+Column, DataViewSchema+Column)

Evaluates scored object detection data.

NamedEntityRecognition(MulticlassClassificationCatalog+MulticlassClassificationTrainers, NerTrainer+NerOptions)

Fine-tune a Named Entity Recognition model.

NamedEntityRecognition(MulticlassClassificationCatalog+MulticlassClassificationTrainers, String, String, String, Int32, Int32, BertArchitecture, IDataView)

Fine-tune a NAS-BERT model for Named Entity Recognition. The limit for any sentence is 512 tokens. Each word will typically map to a single token, and two special tokens (a start token and a separator token) are added automatically, so in practice the limit is 510 words per sentence.
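The string-based overload above can be used roughly as follows. This is a minimal sketch, assuming the Microsoft.ML, Microsoft.ML.TorchSharp, and a TorchSharp native backend NuGet package are installed; the `ModelInput` type, column names, and named parameters are illustrative assumptions, not verbatim from the documented signature.

```csharp
// Sketch: fine-tune NER on a tiny in-memory dataset.
using Microsoft.ML;

var mlContext = new MLContext();

// Each row pairs a sentence with one entity tag per word.
var rows = new[]
{
    new ModelInput
    {
        Sentence = "Alice works at Contoso",
        Label = new[] { "PERSON", "O", "O", "ORG" }
    }
};
IDataView trainData = mlContext.Data.LoadFromEnumerable(rows);

// The trainer expects key-typed labels, so map the strings to keys first.
var pipeline = mlContext.Transforms.Conversion.MapValueToKey("Label")
    .Append(mlContext.MulticlassClassification.Trainers.NamedEntityRecognition(
        labelColumnName: "Label",
        outputColumnName: "PredictedLabel",
        sentence1ColumnName: "Sentence"));

var model = pipeline.Fit(trainData);

public class ModelInput
{
    public string Sentence { get; set; }
    public string[] Label { get; set; }
}
```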

NameEntityRecognition(MulticlassClassificationCatalog+MulticlassClassificationTrainers, NerTrainer+NerOptions)
Obsolete.

Obsolete: please use the NamedEntityRecognition(MulticlassClassificationCatalog+MulticlassClassificationTrainers, NerTrainer+NerOptions) method instead

NameEntityRecognition(MulticlassClassificationCatalog+MulticlassClassificationTrainers, String, String, String, Int32, Int32, BertArchitecture, IDataView)
Obsolete.

Obsolete: please use the NamedEntityRecognition(MulticlassClassificationCatalog+MulticlassClassificationTrainers, String, String, String, Int32, Int32, BertArchitecture, IDataView) method instead

ObjectDetection(MulticlassClassificationCatalog+MulticlassClassificationTrainers, ObjectDetectionTrainer+Options)

Fine-tune an object detection model.

ObjectDetection(MulticlassClassificationCatalog+MulticlassClassificationTrainers, String, String, String, String, String, String, Int32)

Fine-tune an object detection model.
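A rough sketch of wiring the string-based ObjectDetection overload into a pipeline follows. The column names and named parameters are illustrative assumptions (the real input must supply image data with per-image labels and bounding boxes), and the Microsoft.ML.TorchSharp package plus a native backend are assumed to be installed.

```csharp
// Sketch: fine-tune an object detection model, then score and evaluate.
using Microsoft.ML;

var mlContext = new MLContext();

// Placeholder: in practice, load an IDataView containing image bytes,
// key-typed labels, and bounding-box coordinates per image.
IDataView trainData = LoadObjectDetectionData(mlContext);

var pipeline = mlContext.Transforms.Conversion.MapValueToKey("Labels")
    .Append(mlContext.MulticlassClassification.Trainers.ObjectDetection(
        labelColumnName: "Labels",
        boundingBoxColumnName: "Box",
        imageColumnName: "Image"));

var model = pipeline.Fit(trainData);

// Scored output can then be passed to EvaluateObjectDetection
// (see the entry above) to compute detection metrics.
IDataView scored = model.Transform(trainData);

static IDataView LoadObjectDetectionData(MLContext ctx)
    => throw new System.NotImplementedException("Supply your own loader.");
```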

QuestionAnswer(MulticlassClassificationCatalog+MulticlassClassificationTrainers, QATrainer+Options)

Fine-tune a RoBERTa model for question answering. The limit for any sentence is 512 tokens. Each word will typically map to a single token, and two special tokens (a start token and a separator token) are added automatically, so in practice the limit is 510 words per sentence.

QuestionAnswer(MulticlassClassificationCatalog+MulticlassClassificationTrainers, String, String, String, String, String, String, Int32, Int32, Int32, BertArchitecture, IDataView)

Fine-tune a RoBERTa model for question answering. The limit for any sentence is 512 tokens. Each word will typically map to a single token, and two special tokens (a start token and a separator token) are added automatically, so in practice the limit is 510 words per sentence.
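The string-based QuestionAnswer overload takes context, question, and answer columns. A minimal sketch, assuming the Microsoft.ML.TorchSharp package; the `QaInput` type and the named parameters (context/question/answer column names, answer start index) are illustrative assumptions:

```csharp
// Sketch: fine-tune a question answering model on an in-memory example.
using Microsoft.ML;

var mlContext = new MLContext();

var rows = new[]
{
    new QaInput
    {
        Context = "ML.NET is a machine learning framework for .NET.",
        Question = "What is ML.NET?",
        Answer = "a machine learning framework for .NET",
        AnswerIndex = 10  // character offset of the answer in Context
    }
};
IDataView trainData = mlContext.Data.LoadFromEnumerable(rows);

var trainer = mlContext.MulticlassClassification.Trainers.QuestionAnswer(
    contextColumnName: "Context",
    questionColumnName: "Question",
    trainingAnswerColumnName: "Answer",
    answerIndexColumnName: "AnswerIndex");

var model = trainer.Fit(trainData);

public class QaInput
{
    public string Context { get; set; }
    public string Question { get; set; }
    public string Answer { get; set; }
    public int AnswerIndex { get; set; }
}
```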

SentenceSimilarity(RegressionCatalog+RegressionTrainers, SentenceSimilarityTrainer+SentenceSimilarityOptions)

Fine-tune a NAS-BERT model for NLP sentence similarity. The limit for any sentence is 512 tokens. Each word will typically map to a single token, and two special tokens (a start token and a separator token) are added automatically, so in practice the limit is 510 words per sentence.

SentenceSimilarity(RegressionCatalog+RegressionTrainers, String, String, String, String, Int32, Int32, BertArchitecture, IDataView)

Fine-tune a NAS-BERT model for NLP sentence similarity. The limit for any sentence is 512 tokens. Each word will typically map to a single token, and two special tokens (a start token and a separator token) are added automatically, so in practice the limit is 510 words per sentence.
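Note that SentenceSimilarity extends the regression catalog rather than the multiclass catalog, since the label is a continuous similarity score. A minimal sketch, assuming Microsoft.ML.TorchSharp is installed; the `PairInput` type, column names, and named parameters are illustrative assumptions:

```csharp
// Sketch: fine-tune sentence similarity as a regression task.
using Microsoft.ML;

var mlContext = new MLContext();

var rows = new[]
{
    new PairInput
    {
        Sentence1 = "A man is playing a guitar",
        Sentence2 = "Someone plays an instrument",
        Similarity = 4.2f  // continuous label, e.g. on an STS-style 0-5 scale
    }
};
IDataView trainData = mlContext.Data.LoadFromEnumerable(rows);

var trainer = mlContext.Regression.Trainers.SentenceSimilarity(
    labelColumnName: "Similarity",
    sentence1ColumnName: "Sentence1",
    sentence2ColumnName: "Sentence2");

var model = trainer.Fit(trainData);

public class PairInput
{
    public string Sentence1 { get; set; }
    public string Sentence2 { get; set; }
    public float Similarity { get; set; }
}
```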

TextClassification(MulticlassClassificationCatalog+MulticlassClassificationTrainers, String, String, String, String, String, Int32, Int32, BertArchitecture, IDataView)

Fine-tune a NAS-BERT model for NLP classification. The limit for any sentence is 512 tokens. Each word will typically map to a single token, and two special tokens (a start token and a separator token) are added automatically, so in practice the limit is 510 words per sentence.

TextClassification(MulticlassClassificationCatalog+MulticlassClassificationTrainers, TextClassificationTrainer+TextClassificationOptions)

Fine-tune a NAS-BERT model for NLP classification. The limit for any sentence is 512 tokens. Each word will typically map to a single token, and two special tokens (a start token and a separator token) are added automatically, so in practice the limit is 510 words per sentence.
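A typical TextClassification pipeline maps the string label to a key, fine-tunes, then maps predictions back to readable labels. This is a minimal sketch, assuming Microsoft.ML, Microsoft.ML.TorchSharp, and a TorchSharp native backend package; the `Review` type, column names, and named parameters are illustrative assumptions:

```csharp
// Sketch: fine-tune a text classifier on a tiny in-memory dataset.
using Microsoft.ML;

var mlContext = new MLContext();

var rows = new[]
{
    new Review { Sentence = "Great movie", Label = "positive" },
    new Review { Sentence = "Terrible movie", Label = "negative" }
};
IDataView trainData = mlContext.Data.LoadFromEnumerable(rows);

// Key-encode the label, fine-tune, then decode the predicted key.
var pipeline = mlContext.Transforms.Conversion.MapValueToKey("Label")
    .Append(mlContext.MulticlassClassification.Trainers.TextClassification(
        labelColumnName: "Label",
        sentence1ColumnName: "Sentence"))
    .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));

var model = pipeline.Fit(trainData);

public class Review
{
    public string Sentence { get; set; }
    public string Label { get; set; }
}
```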

Applies to