# BinaryQuantizationCompression Class

## Definition
> **Important**
>
> Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Contains configuration options specific to the binary quantization compression method used during indexing and querying.
```csharp
public class BinaryQuantizationCompression : Azure.Search.Documents.Indexes.Models.VectorSearchCompression, System.ClientModel.Primitives.IJsonModel<Azure.Search.Documents.Indexes.Models.BinaryQuantizationCompression>, System.ClientModel.Primitives.IPersistableModel<Azure.Search.Documents.Indexes.Models.BinaryQuantizationCompression>
```

```fsharp
type BinaryQuantizationCompression = class
    inherit VectorSearchCompression
    interface IJsonModel<BinaryQuantizationCompression>
    interface IPersistableModel<BinaryQuantizationCompression>
```

```vb
Public Class BinaryQuantizationCompression
Inherits VectorSearchCompression
Implements IJsonModel(Of BinaryQuantizationCompression), IPersistableModel(Of BinaryQuantizationCompression)
```
- Inheritance: Object → VectorSearchCompression → BinaryQuantizationCompression
- Implements: IJsonModel&lt;BinaryQuantizationCompression&gt;, IPersistableModel&lt;BinaryQuantizationCompression&gt;
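A binary quantization configuration is typically registered on an index's vector search definition and referenced from a profile. The sketch below assumes the surrounding `Azure.Search.Documents.Indexes.Models` types (`VectorSearch`, `VectorSearchProfile`, `HnswAlgorithmConfiguration`) and the profile's `CompressionName` property as they appear in recent SDK versions; the configuration names are placeholders:

```csharp
using Azure.Search.Documents.Indexes.Models;

// Create the compression configuration with a name of your choosing.
var compression = new BinaryQuantizationCompression("my-binary-quantization");

// Wire it into the index's vector search definition: the profile refers to
// both an algorithm configuration and the compression configuration by name.
var vectorSearch = new VectorSearch
{
    Algorithms = { new HnswAlgorithmConfiguration("my-hnsw") },
    Compressions = { compression },
    Profiles =
    {
        new VectorSearchProfile("my-profile", "my-hnsw")
        {
            CompressionName = "my-binary-quantization"
        }
    }
};
```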
## Constructors
| Name | Description |
|---|---|
| BinaryQuantizationCompression(String) | Initializes a new instance of BinaryQuantizationCompression. |
## Properties
| Name | Description |
|---|---|
| CompressionName | The name to associate with this particular configuration. (Inherited from VectorSearchCompression) |
| RescoringOptions | Contains the options for rescoring. (Inherited from VectorSearchCompression) |
| TruncationDimension | The number of dimensions to truncate the vectors to. Truncating the vectors reduces the size of the vectors and the amount of data that needs to be transferred during search. This can save storage cost and improve search performance at the expense of recall. It should only be used for embeddings trained with Matryoshka Representation Learning (MRL), such as OpenAI text-embedding-3-large and text-embedding-3-small. The default value is null, which means no truncation. (Inherited from VectorSearchCompression) |
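The inherited options can be set through object-initializer syntax. The sketch below is illustrative only: the `RescoringOptions` member names (`EnableRescoring`, `DefaultOversampling`) and the truncation value of 1024 are assumptions, not taken from this page:

```csharp
using Azure.Search.Documents.Indexes.Models;

var compression = new BinaryQuantizationCompression("bq-with-rescoring")
{
    // Assumption: only appropriate for MRL-trained embeddings such as
    // OpenAI text-embedding-3-large / text-embedding-3-small.
    TruncationDimension = 1024,

    // Assumed RescoringOptions members: rescore candidates with full-precision
    // vectors, oversampling by 4x to compensate for quantization loss.
    RescoringOptions = new RescoringOptions
    {
        EnableRescoring = true,
        DefaultOversampling = 4.0
    }
};
```

Rescoring trades extra I/O against the recall lost to binary quantization; truncation trades storage against recall, so the two options are commonly tuned together.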