BinaryQuantizationCompression Class

Definition

Contains configuration options specific to the binary quantization compression method used during indexing and querying.

C#
public class BinaryQuantizationCompression : Azure.Search.Documents.Indexes.Models.VectorSearchCompression

F#
type BinaryQuantizationCompression = class
    inherit VectorSearchCompression

VB
Public Class BinaryQuantizationCompression
Inherits VectorSearchCompression
Inheritance
Object → VectorSearchCompression → BinaryQuantizationCompression
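
Examples

The following sketch shows how a binary quantization configuration is typically registered on an index's vector search settings. The index, algorithm, profile, and configuration names are illustrative, and the CompressionName wiring on VectorSearchProfile is assumed from recent versions of the Azure.Search.Documents SDK; verify it against the SDK version you use.

using Azure.Search.Documents.Indexes.Models;

// Illustrative names throughout; replace them with your own.
var index = new SearchIndex("hotels-sample")
{
    VectorSearch = new VectorSearch
    {
        // The HNSW algorithm the profile will use.
        Algorithms = { new HnswAlgorithmConfiguration("my-hnsw") },

        // Register the binary quantization compression configuration.
        Compressions = { new BinaryQuantizationCompression("my-binary-compression") },

        Profiles =
        {
            new VectorSearchProfile("my-profile", "my-hnsw")
            {
                // Assumed profile property for referencing the compression by name.
                CompressionName = "my-binary-compression"
            }
        }
    }
};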

Constructors

BinaryQuantizationCompression(String)

Initializes a new instance of BinaryQuantizationCompression.
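
A minimal construction sketch; the name "my-binary-compression" is illustrative.

using Azure.Search.Documents.Indexes.Models;

// The constructor argument becomes the inherited CompressionName, which a
// vector search profile uses to reference this compression configuration.
var compression = new BinaryQuantizationCompression("my-binary-compression");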

Properties

CompressionName

The name to associate with this particular configuration.

(Inherited from VectorSearchCompression)
DefaultOversampling

Default oversampling factor. Oversampling internally requests more documents (by this multiplier) in the initial search, which increases the set of results that are reranked using similarity scores recomputed from full-precision vectors. The minimum value is 1, meaning no oversampling (1x). This parameter can only be set when RerankWithOriginalVectors is true. Higher values improve recall at the expense of latency.

(Inherited from VectorSearchCompression)
RerankWithOriginalVectors

If set to true, once the ordered set of results calculated using compressed vectors is obtained, it will be reranked by recalculating the full-precision similarity scores. This improves recall at the expense of latency.

(Inherited from VectorSearchCompression)
TruncationDimension

The number of dimensions to truncate the vectors to. Truncating the vectors reduces their size and the amount of data that needs to be transferred during search, which can save storage cost and improve search performance at the expense of recall. It should only be used for embeddings trained with Matryoshka Representation Learning (MRL), such as OpenAI text-embedding-3-large and text-embedding-3-small. The default value is null, which means no truncation.

(Inherited from VectorSearchCompression)
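
The sketch below shows the inherited options configured together; the configuration name and the specific values (4x oversampling, 1024 dimensions) are illustrative, not recommendations.

using Azure.Search.Documents.Indexes.Models;

var compression = new BinaryQuantizationCompression("my-binary-compression")
{
    // Rerank the compressed-vector results using full-precision vectors...
    RerankWithOriginalVectors = true,

    // ...and oversample by 4x in the initial search to improve recall.
    DefaultOversampling = 4,

    // Only meaningful for MRL-trained embeddings; null (the default) means no truncation.
    TruncationDimension = 1024
};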

Applies to