What's new in .NET libraries for .NET 9
This article describes new features in the .NET libraries for .NET 9. It's been updated for .NET 9 RC 1.
Base64Url
Base64 is an encoding scheme that translates arbitrary bytes into text composed of a specific set of 64 characters. It's a common approach for transferring data and has long been supported via a variety of methods, such as Convert.ToBase64String or Base64.DecodeFromUtf8(ReadOnlySpan<Byte>, Span<Byte>, Int32, Int32, Boolean). However, some of the characters it uses make it less than ideal for circumstances where you might otherwise want to use it, such as in query strings. In particular, the 64 characters that comprise the Base64 table include '+' and '/', both of which have their own meaning in URLs. This led to the creation of the Base64Url scheme, which is similar to Base64 but uses a slightly different set of characters, making it appropriate for use in URL contexts. .NET 9 includes the new Base64Url class, which provides many helpful and optimized methods for encoding and decoding with Base64Url to and from a variety of data types.
The following example demonstrates using the new class.
ReadOnlySpan<byte> bytes = ...;
string encoded = Base64Url.EncodeToString(bytes);
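As a sketch of the full round trip (assuming .NET 9 and the System.Buffers.Text namespace), the matching DecodeFromChars method recovers the original bytes:

```csharp
using System;
using System.Buffers.Text;
using System.Text;

byte[] bytes = Encoding.UTF8.GetBytes("Hello, World!");

// Encode to Base64Url text: no '+', '/', or '=' padding characters.
string encoded = Base64Url.EncodeToString(bytes);
Console.WriteLine(encoded); // SGVsbG8sIFdvcmxkIQ

// Decode back to the original bytes.
byte[] decoded = Base64Url.DecodeFromChars(encoded);
Console.WriteLine(Encoding.UTF8.GetString(decoded)); // Hello, World!
```

Note that the encoded text is the standard Base64 output with '+' and '/' replaced and the trailing '=' padding omitted.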
BinaryFormatter
.NET 9 removes BinaryFormatter from the .NET runtime. The APIs are still present, but their implementations always throw an exception, regardless of project type. For more information about the removal and your options if you're affected, see BinaryFormatter migration guide.
Collections
The collection types in .NET gain the following updates for .NET 9:
- Collection lookups with spans
- OrderedDictionary<TKey, TValue>
- PriorityQueue.Remove() method, which lets you update the priority of an item in the queue
- ReadOnlySet<T>
Collection lookups with spans
In high-performance code, spans are often used to avoid allocating strings unnecessarily, and lookup tables with types like Dictionary<TKey,TValue> and HashSet<T> are frequently used as caches. However, there has been no safe, built-in mechanism for doing lookups on these collection types with spans. With the new allows ref struct feature in C# 13 and new features on these collection types in .NET 9, it's now possible to perform these kinds of lookups.
The following example demonstrates using Dictionary<TKey,TValue>.GetAlternateLookup.
private static Dictionary<string, int> CountWords(ReadOnlySpan<char> input)
{
    Dictionary<string, int> wordCounts = new(StringComparer.OrdinalIgnoreCase);
    Dictionary<string, int>.AlternateLookup<ReadOnlySpan<char>> spanLookup =
        wordCounts.GetAlternateLookup<ReadOnlySpan<char>>();

    foreach (Range wordRange in Regex.EnumerateSplits(input, @"\b\w+\b"))
    {
        ReadOnlySpan<char> word = input[wordRange];
        spanLookup[word] = spanLookup.TryGetValue(word, out int count) ? count + 1 : 1;
    }

    return wordCounts;
}
OrderedDictionary<TKey, TValue>
In many scenarios, you might want to store key-value pairs in a way where order can be maintained (a list of key-value pairs) but where fast lookup by key is also supported (a dictionary of key-value pairs). Since the early days of .NET, the OrderedDictionary type has supported this scenario, but only in a non-generic manner, with keys and values typed as object. .NET 9 introduces the new OrderedDictionary<TKey,TValue> collection, which provides an efficient, generic type to support these scenarios.
The following code uses the new class.
OrderedDictionary<string, int> d = new()
{
    ["a"] = 1,
    ["b"] = 2,
    ["c"] = 3,
};

d.Add("d", 4);
d.RemoveAt(0);
d.RemoveAt(2);
d.Insert(0, "e", 5);

foreach (KeyValuePair<string, int> entry in d)
{
    Console.WriteLine(entry);
}
// Output:
// [e, 5]
// [b, 2]
// [c, 3]
PriorityQueue.Remove() method
.NET 6 introduced the PriorityQueue<TElement,TPriority> collection, which provides a simple and fast array-heap implementation. One issue with array heaps in general is that they don't support priority updates, which makes them prohibitive for use in algorithms such as variations of Dijkstra's algorithm.
While it's not possible to implement efficient $O(\log n)$ priority updates in the existing collection, the new PriorityQueue<TElement,TPriority>.Remove(TElement, TElement, TPriority, IEqualityComparer<TElement>) method makes it possible to emulate priority updates (albeit at $O(n)$ time):
public static void UpdatePriority<TElement, TPriority>(
    this PriorityQueue<TElement, TPriority> queue,
    TElement element,
    TPriority priority)
{
    // Scan the heap for entries matching the current element.
    queue.Remove(element, out _, out _);

    // Re-insert the entry with the new priority.
    queue.Enqueue(element, priority);
}
This method unblocks users who want to implement graph algorithms in contexts where asymptotic performance isn't a blocker. (Such contexts include education and prototyping.) For example, here's a toy implementation of Dijkstra's algorithm that uses the new API.
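Such a toy Dijkstra implementation might look like the following sketch. The graph representation (an adjacency list keyed by node) and the ToyDijkstra name are illustrative; the Remove + Enqueue pair emulates the priority update as described above:

```csharp
using System;
using System.Collections.Generic;

public static class ToyDijkstra
{
    // Computes shortest distances from `source` over an adjacency list of
    // (neighbor, weight) edges.
    public static Dictionary<int, int> Distances(
        Dictionary<int, List<(int To, int Weight)>> graph, int source)
    {
        var dist = new Dictionary<int, int> { [source] = 0 };
        var queue = new PriorityQueue<int, int>();
        queue.Enqueue(source, 0);

        while (queue.TryDequeue(out int node, out int d))
        {
            // Defensive: skip any entry that no longer matches the
            // best-known distance for this node.
            if (d > dist[node]) continue;

            foreach ((int to, int weight) in graph.GetValueOrDefault(node, new()))
            {
                int candidate = d + weight;
                if (!dist.TryGetValue(to, out int known) || candidate < known)
                {
                    dist[to] = candidate;

                    // Emulate a priority update with the new .NET 9 API:
                    // an O(n) Remove followed by an O(log n) Enqueue.
                    queue.Remove(to, out _, out _);
                    queue.Enqueue(to, candidate);
                }
            }
        }

        return dist;
    }
}
```

Because each relaxation removes any stale queue entry before re-inserting, the queue never holds duplicates, at the cost of the linear scan inside Remove.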
ReadOnlySet<T>
It's often desirable to give out read-only views of collections. ReadOnlyCollection<T> lets you create a read-only wrapper around an arbitrary mutable IList<T>, and ReadOnlyDictionary<TKey,TValue> lets you create a read-only wrapper around an arbitrary mutable IDictionary<TKey,TValue>. However, past versions of .NET had no built-in support for doing the same with ISet<T>. .NET 9 introduces ReadOnlySet<T> to address this.
The new class enables the following usage pattern.
private readonly HashSet<int> _set = [];
private ReadOnlySet<int>? _setWrapper;
public ReadOnlySet<int> Set => _setWrapper ??= new(_set);
Component model - TypeDescriptor trimming support
System.ComponentModel includes new opt-in trimmer-compatible APIs for describing components. Any application, especially self-contained trimmed applications, can use these new APIs to help support trimming scenarios.
The primary API is the TypeDescriptor.RegisterType method on the TypeDescriptor
class. This method has the DynamicallyAccessedMembersAttribute attribute so that the trimmer preserves members for that type. You should call this method once per type, and typically early on.
The secondary APIs have a FromRegisteredType suffix, such as TypeDescriptor.GetPropertiesFromRegisteredType(Type). Unlike their counterparts that don't have the FromRegisteredType suffix, these APIs don't have [RequiresUnreferencedCode] or [DynamicallyAccessedMembers] trimmer attributes. The lack of trimmer attributes means consumers no longer have to either:
- Suppress trimming warnings, which can be risky.
- Propagate a strongly typed Type parameter to other methods, which can be cumbersome or infeasible.
public static void RunIt()
{
    // The Type from typeof() is passed to a different method.
    // The trimmer doesn't know about ExampleClass anymore
    // and thus there will be warnings when trimming.
    Test(typeof(ExampleClass));
    Console.ReadLine();
}

private static void Test(Type type)
{
    // When publishing self-contained + trimmed,
    // this line produces warnings IL2026 and IL2067.
    PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(type);

    // When publishing self-contained + trimmed,
    // the property count is 0 here instead of 2.
    Console.WriteLine($"Property count: {properties.Count}");

    // To avoid the warning and ensure reflection
    // can see the properties, register the type:
    TypeDescriptor.RegisterType<ExampleClass>();

    // Get properties from the registered type.
    properties = TypeDescriptor.GetPropertiesFromRegisteredType(type);
    Console.WriteLine($"Property count: {properties.Count}");
}

public class ExampleClass
{
    public string? Property1 { get; set; }
    public int Property2 { get; set; }
}
For more information, see the API proposal.
Cryptography
- CryptographicOperations.HashData() method
- KMAC algorithm
- AES-GCM and ChaChaPoly1305 algorithms enabled for iOS/tvOS/MacCatalyst
- X.509 certificate loading
- OpenSSL providers support
- Windows CNG virtualization-based security
CryptographicOperations.HashData() method
.NET includes several static "one-shot" implementations of hash functions and related functions. These APIs include SHA256.HashData and HMACSHA256.HashData. One-shot APIs are preferable to use because they can provide the best possible performance and reduce or eliminate allocations.
If a developer wants to provide an API that supports hashing where the caller defines which hash algorithm to use, it's typically done by accepting a HashAlgorithmName argument. However, using that pattern with one-shot APIs would require switching over every possible HashAlgorithmName and then using the appropriate method. To solve that problem, .NET 9 introduces the CryptographicOperations.HashData API. This API lets you produce a hash or HMAC over an input as a one-shot where the algorithm used is determined by a HashAlgorithmName.
static void HashAndProcessData(HashAlgorithmName hashAlgorithmName, byte[] data)
{
    byte[] hash = CryptographicOperations.HashData(hashAlgorithmName, data);
    ProcessHash(hash);
}
KMAC algorithm
.NET 9 provides the KMAC algorithm as specified by NIST SP-800-185. KECCAK Message Authentication Code (KMAC) is a pseudorandom function and keyed hash function based on KECCAK.
The new Kmac128, Kmac256, KmacXof128, and KmacXof256 classes implement the KMAC algorithm. Use instances to accumulate data to produce a MAC, or use the static HashData method for a one-shot over a single input.
KMAC is available on Linux with OpenSSL 3.0 or later, and on Windows 11 Build 26016 or later. You can use the static IsSupported property to determine if the platform supports the desired algorithm.
if (Kmac128.IsSupported)
{
    byte[] key = GetKmacKey();
    byte[] input = GetInputToMac();
    byte[] mac = Kmac128.HashData(key, input, outputLength: 32);
}
else
{
    // Handle scenario where KMAC isn't available.
}
AES-GCM and ChaChaPoly1305 algorithms enabled for iOS/tvOS/MacCatalyst
AesGcm.IsSupported and ChaChaPoly1305.IsSupported now return true when running on iOS 13+, tvOS 13+, and Mac Catalyst.
AesGcm only supports 16-byte (128-bit) tag values on Apple operating systems.
X.509 certificate loading
Since .NET Framework 2.0, the way to load a certificate has been new X509Certificate2(bytes). There have also been other patterns, such as new X509Certificate2(bytes, password, flags), new X509Certificate2(path), new X509Certificate2(path, password, flags), and X509Certificate2Collection.Import(bytes, password, flags) (and its overloads).
Those methods all used content-sniffing to figure out if the input was something they could handle, and loaded it if so. For some callers, this strategy was very convenient. But it also has some problems:
- Not every file format works on every OS.
- It's a protocol deviation.
- It's a source of security issues.
.NET 9 introduces a new X509CertificateLoader class, which has a "one method, one purpose" design. In its initial version, it only supports two of the five formats that the X509Certificate2 constructor supported. Those are the two formats that worked on all operating systems.
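As a sketch of the new API's shape: LoadCertificate handles a single certificate, and PKCS#12/PFX content must go through the dedicated LoadPkcs12 method. The self-signed certificate below is generated only to keep the example self-contained:

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

// Create a throwaway self-signed certificate so the example is self-contained.
using RSA rsa = RSA.Create(2048);
CertificateRequest request = new(
    "CN=example", rsa, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
using X509Certificate2 original = request.CreateSelfSigned(
    DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddDays(1));

// Load a single certificate (DER content) with the dedicated method.
byte[] derBytes = original.Export(X509ContentType.Cert);
X509Certificate2 cert = X509CertificateLoader.LoadCertificate(derBytes);
Console.WriteLine(cert.Subject); // CN=example

// PKCS#12/PFX content must go through its own loader method.
byte[] pfxBytes = original.Export(X509ContentType.Pfx, "changeit");
X509Certificate2 pfx = X509CertificateLoader.LoadPkcs12(pfxBytes, "changeit");
Console.WriteLine(pfx.HasPrivateKey); // True
```

Unlike the old constructors, passing PKCS#12 bytes to LoadCertificate (or certificate bytes to LoadPkcs12) fails rather than silently sniffing the format.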
OpenSSL providers support
.NET 8 introduced the OpenSSL-specific APIs OpenPrivateKeyFromEngine(String, String) and OpenPublicKeyFromEngine(String, String). They enable interacting with OpenSSL ENGINE components and using hardware security modules (HSMs), for example.
.NET 9 introduces SafeEvpPKeyHandle.OpenKeyFromProvider(String, String), which enables using OpenSSL providers and interacting with providers such as tpm2 or pkcs11.
Some distros have removed ENGINE support since it's now deprecated.
The following snippet shows basic usage:
byte[] data = [ /* example data */ ];

// Refer to your provider documentation, for example, https://github.com/tpm2-software/tpm2-openssl/tree/master.
using (SafeEvpPKeyHandle priKeyHandle = SafeEvpPKeyHandle.OpenKeyFromProvider("tpm2", "handle:0x81000007"))
using (ECDsa ecdsaPri = new ECDsaOpenSsl(priKeyHandle))
{
    byte[] signature = ecdsaPri.SignData(data, HashAlgorithmName.SHA256);
    // Do stuff with signature created by TPM.
}
There are some performance improvements during the TLS handshake, as well as improvements to interactions with RSA private keys that use ENGINE components.
Windows CNG virtualization-based security
Windows 11 has added new APIs to help secure Windows keys with virtualization-based security (VBS). With this new capability, keys can be protected from admin-level key theft attacks with negligible effect on performance, reliability, or scale.
.NET 9 has added matching CngKeyCreationOptions flags. The following three flags were added:
- CngKeyCreationOptions.PreferVbs, matching NCRYPT_PREFER_VBS_FLAG
- CngKeyCreationOptions.RequireVbs, matching NCRYPT_REQUIRE_VBS_FLAG
- CngKeyCreationOptions.UsePerBootKey, matching NCRYPT_USE_PER_BOOT_KEY_FLAG
The following snippet demonstrates how to use one of the flags:
using System.Security.Cryptography;
CngKeyCreationParameters cngCreationParams = new()
{
    Provider = CngProvider.MicrosoftSoftwareKeyStorageProvider,
    KeyCreationOptions = CngKeyCreationOptions.RequireVbs | CngKeyCreationOptions.OverwriteExistingKey,
};

using (CngKey key = CngKey.Create(CngAlgorithm.ECDsaP256, "myKey", cngCreationParams))
using (ECDsaCng ecdsa = new ECDsaCng(key))
{
    // Do stuff with the key.
}
Date and time - new TimeSpan.From* overloads
The TimeSpan class offers several From* methods that let you create a TimeSpan object using a double. However, since double is a binary-based floating-point format, inherent imprecision can lead to errors. For instance, TimeSpan.FromSeconds(101.832) might not precisely represent 101 seconds, 832 milliseconds, but rather approximately 101 seconds, 831.9999999999936335370875895023345947265625 milliseconds. This discrepancy has caused frequent confusion, and it's also not the most efficient way to represent such data. To address this, .NET 9 adds new overloads that let you create TimeSpan objects from integers. The new overloads are on FromDays, FromHours, FromMinutes, FromSeconds, FromMilliseconds, and FromMicroseconds.
The following code shows an example of calling the double overload and one of the new integer overloads.
TimeSpan timeSpan1 = TimeSpan.FromSeconds(value: 101.832);
Console.WriteLine($"timeSpan1 = {timeSpan1}");
// timeSpan1 = 00:01:41.8319999
TimeSpan timeSpan2 = TimeSpan.FromSeconds(seconds: 101, milliseconds: 832);
Console.WriteLine($"timeSpan2 = {timeSpan2}");
// timeSpan2 = 00:01:41.8320000
Dependency injection - ActivatorUtilities.CreateInstance constructor
The constructor resolution for ActivatorUtilities.CreateInstance has changed in .NET 9. Previously, a constructor that was explicitly marked using the ActivatorUtilitiesConstructorAttribute attribute might not be called, depending on the ordering of constructors and the number of constructor parameters. The logic has changed in .NET 9 such that a constructor that has the attribute is always called.
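A minimal sketch of the new behavior follows (the Greeter class and its message strings are hypothetical, and the example assumes the Microsoft.Extensions.DependencyInjection package):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

IServiceProvider services = new ServiceCollection().BuildServiceProvider();

// In .NET 9, the attributed constructor is always chosen, even though
// the parameterless one is declared first.
Greeter greeter = ActivatorUtilities.CreateInstance<Greeter>(services);
Console.WriteLine(greeter.Message); // attributed

public class Greeter
{
    public string Message { get; }

    public Greeter() => Message = "parameterless";

    [ActivatorUtilitiesConstructor]
    public Greeter(IServiceProvider _) => Message = "attributed";
}
```

Before .NET 9, constructor ordering and parameter counts could cause the unattributed constructor to be selected instead.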
Diagnostics
- Debug.Assert reports assert condition by default
- New Activity.AddLink method
- Metrics.Gauge instrument
- Out-of-proc Meter wildcard listening
Debug.Assert reports assert condition by default
Debug.Assert is commonly used to help validate conditions that are expected to always be true. Failure typically indicates a bug in the code. There are many overloads of Debug.Assert, the simplest of which just accepts a condition:
Debug.Assert(a > 0 && b > 0);
The assert fails if the condition is false. Historically, however, such asserts were void of any information about what condition failed. Starting in .NET 9, if no message is explicitly provided by the user, the assert will contain the textual representation of the condition. For example, for the previous assert example, rather than getting a message like:
Process terminated. Assertion failed.
at Program.SomeMethod(Int32 a, Int32 b)
The message would now be:
Process terminated. Assertion failed.
a > 0 && b > 0
at Program.SomeMethod(Int32 a, Int32 b)
New Activity.AddLink method
Previously, you could only link a tracing Activity to other tracing contexts when you created the Activity. New in .NET 9, the AddLink(ActivityLink) API lets you link an Activity object to other tracing contexts after it's created. This change aligns with the OpenTelemetry specification as well.
ActivityContext activityContext = new(ActivityTraceId.CreateRandom(), ActivitySpanId.CreateRandom(), ActivityTraceFlags.None);
ActivityLink activityLink = new(activityContext);
Activity activity = new("LinkTest");
activity.AddLink(activityLink);
Metrics.Gauge instrument
System.Diagnostics.Metrics now provides the Gauge<T> instrument according to the OpenTelemetry specification. The Gauge instrument is designed to record non-additive values when changes occur. For example, it can measure the background noise level, where summing the values from multiple rooms would be nonsensical. The Gauge instrument is a generic type that can record any value type, such as int, double, or decimal.
The following example demonstrates using the Gauge instrument.
Meter soundMeter = new("MeasurementLibrary.Sound");
Gauge<int> gauge = soundMeter.CreateGauge<int>(
    name: "NoiseLevel",
    unit: "dB", // Decibels.
    description: "Background Noise Level");

gauge.Record(10, new TagList() { { "Room1", "dB" } });
Out-of-proc Meter wildcard listening
It's already possible to listen to meters out-of-process using the System.Diagnostics.Metrics event source provider, but prior to .NET 9, you had to specify the full meter name. In .NET 9, you can listen to all meters by using the wildcard character *, which allows you to capture metrics from every meter in a process. Additionally, it adds support for listening by meter prefix, so you can listen to all meters whose names start with a specified prefix. For example, specifying MyMeter* enables listening to all meters with names that begin with MyMeter.
// The complete meter name is "MyCompany.MyMeter".
var meter = new Meter("MyCompany.MyMeter");
// Create a counter and allow publishing values.
meter.CreateObservableCounter("MyCounter", () => 1);
// Create the listener to use the wildcard character
// to listen to all meters using prefix names.
MyEventListener listener = new MyEventListener();
The MyEventListener
class is defined as follows.
internal class MyEventListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource eventSource)
    {
        Console.WriteLine(eventSource.Name);
        if (eventSource.Name == "System.Diagnostics.Metrics")
        {
            // Listen to all meters with names starting with "MyCompany".
            // If using "*", allow listening to all meters.
            EnableEvents(
                eventSource,
                EventLevel.Informational,
                (EventKeywords)0x3,
                new Dictionary<string, string?>() { { "Metrics", "MyCompany*" } });
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        // Ignore other events.
        if (eventData.EventSource.Name != "System.Diagnostics.Metrics" ||
            eventData.EventName == "CollectionStart" ||
            eventData.EventName == "CollectionStop" ||
            eventData.EventName == "InstrumentPublished")
        {
            return;
        }

        Console.WriteLine(eventData.EventName);
        if (eventData.Payload is not null)
        {
            for (int i = 0; i < eventData.Payload.Count; i++)
                Console.WriteLine($"\t{eventData.PayloadNames![i]}: {eventData.Payload[i]}");
        }
    }
}
When you execute the code, the output is as follows:
CounterRateValuePublished
sessionId: 7cd94a65-0d0d-460e-9141-016bf390d522
meterName: MyCompany.MyMeter
meterVersion:
instrumentName: MyCounter
unit:
tags:
rate: 0
value: 1
instrumentId: 1
CounterRateValuePublished
sessionId: 7cd94a65-0d0d-460e-9141-016bf390d522
meterName: MyCompany.MyMeter
meterVersion:
instrumentName: MyCounter
unit:
tags:
rate: 0
value: 1
instrumentId: 1
You can also use the wildcard character to listen to metrics with monitoring tools like dotnet-counters.
LINQ
New methods CountBy and AggregateBy have been introduced. These methods make it possible to aggregate state by key without needing to allocate intermediate groupings via GroupBy.
CountBy lets you quickly calculate the frequency of each key. The following example finds the word that occurs most frequently in a text string.
string sourceText = """
    Lorem ipsum dolor sit amet, consectetur adipiscing elit.
    Sed non risus. Suspendisse lectus tortor, dignissim sit amet,
    adipiscing nec, ultricies sed, dolor. Cras elementum ultrices amet diam.
    """;

// Find the most frequent word in the text.
KeyValuePair<string, int> mostFrequentWord = sourceText
    .Split(new char[] { ' ', '.', ',' }, StringSplitOptions.RemoveEmptyEntries)
    .Select(word => word.ToLowerInvariant())
    .CountBy(word => word)
    .MaxBy(pair => pair.Value);

Console.WriteLine(mostFrequentWord.Key); // amet
AggregateBy lets you implement more general-purpose workflows. The following example shows how you can calculate scores that are associated with a given key.
(string id, int score)[] data =
[
    ("0", 42),
    ("1", 5),
    ("2", 4),
    ("1", 10),
    ("0", 25),
];

var aggregatedData =
    data.AggregateBy(
        keySelector: entry => entry.id,
        seed: 0,
        (totalScore, curr) => totalScore + curr.score);

foreach (var item in aggregatedData)
{
    Console.WriteLine(item);
}

// (0, 67)
// (1, 15)
// (2, 4)
Index<TSource>(IEnumerable<TSource>) makes it possible to quickly extract the implicit index of an enumerable. You can now write code such as the following snippet to automatically index items in a collection.
IEnumerable<string> lines2 = File.ReadAllLines("output.txt");
foreach ((int index, string line) in lines2.Index())
{
    Console.WriteLine($"Line number: {index + 1}, Line: {line}");
}
Logging source generator
C# 12 introduced primary constructors, which allow you to define a constructor directly on the class declaration. The logging source generator now supports logging using classes that have a primary constructor.
public partial class ClassWithPrimaryConstructor(ILogger logger)
{
    [LoggerMessage(0, LogLevel.Debug, "Test.")]
    public partial void Test();
}
Miscellaneous
In this section, find information about:
- allows ref struct used in libraries
- SearchValues expansion
allows ref struct used in libraries
C# 13 introduces the ability to constrain a generic parameter with allows ref struct, which tells the compiler and runtime that a ref struct can be used for that generic parameter. Many APIs that are compatible with this have now been annotated. For example, the String.Create method has an overload that lets you create a string by writing directly into its memory, represented as a span. This method has a TState argument that's passed from the caller into the delegate that does the actual writing.
That TState type parameter on String.Create is now annotated with allows ref struct:
public static string Create<TState>(int length, TState state, SpanAction<char, TState> action)
    where TState : allows ref struct;
This annotation enables you to pass a span (or any other ref struct) as input to this method.
The following example shows a new String.ToLowerInvariant() overload that uses this capability.
public static string ToLowerInvariant(ReadOnlySpan<char> input) =>
    string.Create(input.Length, input, static (stringBuffer, state) => state.ToLowerInvariant(stringBuffer));
SearchValues expansion
.NET 8 introduced the SearchValues<T> type, which provides an optimized solution for searching for specific sets of characters or bytes within spans. In .NET 9, SearchValues has been extended to support searching for substrings within a larger string.
The following example searches for multiple animal names within a string value, and returns an index to the first one found.
private static readonly SearchValues<string> s_animals =
    SearchValues.Create(["cat", "mouse", "dog", "dolphin"], StringComparison.OrdinalIgnoreCase);

public static int IndexOfAnimal(string text) =>
    text.AsSpan().IndexOfAny(s_animals);
This new capability has an optimized implementation that takes advantage of the SIMD support in the underlying platform. It also enables higher-level types to be optimized. For example, Regex now utilizes this functionality as part of its implementation.
Networking
- SocketsHttpHandler is default in HttpClientFactory
- System.Net.ServerSentEvents
- TLS resume with client certificates on Linux
- WebSocket keep-alive ping and timeout
- HttpClientFactory no longer logs header values by default
SocketsHttpHandler is default in HttpClientFactory
HttpClientFactory creates HttpClient objects backed by HttpClientHandler, by default. HttpClientHandler is itself backed by SocketsHttpHandler, which is much more configurable, including around connection lifetime management. HttpClientFactory now uses SocketsHttpHandler by default and configures it to set limits on its connection lifetimes to match that of the rotation lifetime specified in the factory.
System.Net.ServerSentEvents
Server-sent events (SSE) is a simple and popular protocol for streaming data from a server to a client. It's used, for example, by OpenAI as part of streaming generated text from its AI services. To simplify the consumption of SSE, the new System.Net.ServerSentEvents library provides a parser for easily ingesting server-sent events.
The following code demonstrates using the new class.
Stream responseStream = new MemoryStream();
await foreach (SseItem<string> e in SseParser.Create(responseStream).EnumerateAsync())
{
    Console.WriteLine(e.Data);
}
TLS resume with client certificates on Linux
TLS resume is a feature of the TLS protocol that allows resuming previously established sessions to a server. Doing so avoids a few roundtrips and saves computational resources during TLS handshake.
TLS resume has already been supported on Linux for SslStream connections without client certificates. .NET 9 adds support for TLS resume of mutually authenticated TLS connections, which are common in server-to-server scenarios. The feature is enabled automatically.
WebSocket keep-alive ping and timeout
New APIs on ClientWebSocketOptions and WebSocketCreationOptions let you opt in to sending WebSocket pings and aborting the connection if the peer doesn't respond in time.
Until now, you could specify a KeepAliveInterval to keep the connection from staying idle, but there was no built-in mechanism to enforce that the peer is responding.
The following example pings the server every 5 seconds and aborts the connection if it doesn't respond within a second.
using var cws = new ClientWebSocket();
cws.Options.HttpVersionPolicy = HttpVersionPolicy.RequestVersionOrHigher;
cws.Options.KeepAliveInterval = TimeSpan.FromSeconds(5);
cws.Options.KeepAliveTimeout = TimeSpan.FromSeconds(1);
await cws.ConnectAsync(uri, httpClient, cancellationToken);
HttpClientFactory no longer logs header values by default
LogLevel.Trace events logged by HttpClientFactory no longer include header values by default. You can opt in to logging values for specific headers via the RedactLoggedHeaders helper method.
The following example redacts all headers, except for the user agent.
services.AddHttpClient("myClient")
    .RedactLoggedHeaders(name => name != "User-Agent");
For more information, see HttpClientFactory logging redacts header values by default.
Reflection
Persisted assemblies
In .NET Core versions and .NET 5-8, support for building an assembly and emitting reflection metadata for dynamically created types was limited to a runnable AssemblyBuilder. The lack of support for saving an assembly was often a blocker for customers migrating from .NET Framework to .NET. .NET 9 adds a new type, PersistedAssemblyBuilder, that you can use to save an emitted assembly.
To create a PersistedAssemblyBuilder instance, call its constructor and pass the assembly name, the core assembly (System.Private.CoreLib) to reference base runtime types, and optional custom attributes. After you emit all members to the assembly, call the PersistedAssemblyBuilder.Save(String) method to create an assembly with default settings. If you want to set the entry point or other options, you can call PersistedAssemblyBuilder.GenerateMetadata and use the metadata it returns to save the assembly. The following code shows an example of creating a persisted assembly and setting the entry point.
public void CreateAndSaveAssembly(string assemblyPath)
{
    PersistedAssemblyBuilder ab = new PersistedAssemblyBuilder(
        new AssemblyName("MyAssembly"),
        typeof(object).Assembly);

    TypeBuilder tb = ab.DefineDynamicModule("MyModule")
        .DefineType("MyType", TypeAttributes.Public | TypeAttributes.Class);

    MethodBuilder entryPoint = tb.DefineMethod(
        "Main",
        MethodAttributes.HideBySig | MethodAttributes.Public | MethodAttributes.Static);

    ILGenerator il = entryPoint.GetILGenerator();
    // ...
    il.Emit(OpCodes.Ret);

    tb.CreateType();

    MetadataBuilder metadataBuilder = ab.GenerateMetadata(
        out BlobBuilder ilStream,
        out BlobBuilder fieldData);

    PEHeaderBuilder peHeaderBuilder = new PEHeaderBuilder(
        imageCharacteristics: Characteristics.ExecutableImage);

    ManagedPEBuilder peBuilder = new ManagedPEBuilder(
        header: peHeaderBuilder,
        metadataRootBuilder: new MetadataRootBuilder(metadataBuilder),
        ilStream: ilStream,
        mappedFieldData: fieldData,
        entryPoint: MetadataTokens.MethodDefinitionHandle(entryPoint.MetadataToken));

    BlobBuilder peBlob = new BlobBuilder();
    peBuilder.Serialize(peBlob);

    using var fileStream = new FileStream(assemblyPath, FileMode.Create, FileAccess.Write);
    peBlob.WriteContentTo(fileStream);
}
public static void UseAssembly(string assemblyPath)
{
    Assembly assembly = Assembly.LoadFrom(assemblyPath);
    Type? type = assembly.GetType("MyType");
    MethodInfo? method = type?.GetMethod("Main");
    method?.Invoke(null, null);
}
The new PersistedAssemblyBuilder class includes PDB support. You can emit symbol info and use it to debug a generated assembly. The API has a similar shape to the .NET Framework implementation. For more information, see Emit symbols and generate PDB.
Type-name parsing
TypeName is a parser for ECMA-335 type names that provides much the same functionality as System.Type but is decoupled from the runtime environment. Components like serializers and compilers need to parse and process type names. For example, the Native AOT compiler has switched to using TypeName.
The new TypeName class provides:
- Static Parse and TryParse methods for parsing input represented as ReadOnlySpan<char>. Both methods accept an instance of the TypeNameParseOptions class (an option bag) that lets you customize the parsing.
- Name, FullName, and AssemblyQualifiedName properties that work exactly like their counterparts in System.Type.
- Multiple properties and methods that provide additional information about the name itself:
  - IsArray, IsSZArray (SZ stands for single-dimension, zero-indexed array), IsVariableBoundArrayType, and GetArrayRank for working with arrays.
  - IsConstructedGenericType, GetGenericTypeDefinition, and GetGenericArguments for working with generic type names.
  - IsByRef and IsPointer for working with pointers and managed references.
  - GetElementType() for working with pointers, references, and arrays.
  - IsNested and DeclaringType for working with nested types.
  - AssemblyName, which exposes the assembly name information via the new AssemblyNameInfo class. In contrast to AssemblyName, the new type is immutable, and parsing culture names doesn't create instances of CultureInfo.
Both TypeName and AssemblyNameInfo types are immutable and don't provide a way to check for equality (they don't implement IEquatable). Comparing assembly names is simple, but different scenarios need to compare only a subset of the exposed information (Name, Version, CultureName, and PublicKeyOrToken).
The following code snippet shows some example usage.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection.Metadata;
internal class RestrictedSerializationBinder
{
    Dictionary<string, Type> AllowList { get; set; }

    RestrictedSerializationBinder(Type[] allowedTypes)
        => AllowList = allowedTypes.ToDictionary(type => type.FullName!);

    Type? GetType(ReadOnlySpan<char> untrustedInput)
    {
        if (!TypeName.TryParse(untrustedInput, out TypeName? parsed))
        {
            throw new InvalidOperationException($"Invalid type name: '{untrustedInput.ToString()}'");
        }

        if (AllowList.TryGetValue(parsed.FullName, out Type? type))
        {
            return type;
        }
        else if (parsed.IsSimple // It's not generic, pointer, reference, or an array.
            && parsed.AssemblyName is not null
            && parsed.AssemblyName.Name == "MyTrustedAssembly")
        {
            return Type.GetType(parsed.AssemblyQualifiedName, throwOnError: true);
        }

        throw new InvalidOperationException($"Not allowed: '{untrustedInput.ToString()}'");
    }
}
The new APIs are available from the System.Reflection.Metadata
NuGet package, which can be used with down-level .NET versions.
Regular expressions
[GeneratedRegex] on properties
.NET 7 introduced the Regex source generator and the corresponding GeneratedRegexAttribute attribute.
The following partial method will be source generated with all the code necessary to implement this Regex.
[GeneratedRegex(@"\b\w{5}\b")]
private static partial Regex FiveCharWord();
C# 13 supports partial properties in addition to partial methods, so starting in .NET 9 you can also use [GeneratedRegex(...)] on a property.
The following partial property is the property equivalent of the previous example.
[GeneratedRegex(@"\b\w{5}\b")]
private static partial Regex FiveCharWordProperty { get; }
Regex.EnumerateSplits
The Regex class provides a Split method, similar in concept to the String.Split method. With String.Split, you supply one or more char or string separators, and the implementation splits the input text on those separators. With Regex.Split, instead of specifying the separator as a char or string, it's specified as a regular expression pattern.
The following example demonstrates Regex.Split.
foreach (string s in Regex.Split("Hello, world! How are you?", "[aeiou]"))
{
    Console.WriteLine($"Split: \"{s}\"");
}
// Output, split by all English vowels:
// Split: "H"
// Split: "ll"
// Split: ", w"
// Split: "rld! H"
// Split: "w "
// Split: "r"
// Split: " y"
// Split: ""
// Split: "?"
However, Regex.Split only accepts a string as input and doesn't support input being provided as a ReadOnlySpan<char>. Also, it outputs the full set of splits as a string[], which requires allocating both the string array to hold the results and a string for each split. In .NET 9, the new EnumerateSplits method enables performing the same operation, but with a span-based input and without incurring any allocation for the results. It accepts a ReadOnlySpan<char> and returns an enumerable of Range objects that represent the results.
The following example demonstrates Regex.EnumerateSplits, taking a ReadOnlySpan<char> as input.
ReadOnlySpan<char> input = "Hello, world! How are you?";
foreach (Range r in Regex.EnumerateSplits(input, "[aeiou]"))
{
    Console.WriteLine($"Split: \"{input[r]}\"");
}
Serialization (System.Text.Json)
- Indentation options
- Default web options singleton
- JsonSchemaExporter
- Respect nullable annotations
- Require non-optional constructor parameters
- Order JsonObject properties
- Customize enum member names
- Stream multiple JSON documents
Indentation options
JsonSerializerOptions includes new properties that let you customize the indentation character and indentation size of written JSON.
var options = new JsonSerializerOptions
{
    WriteIndented = true,
    IndentCharacter = '\t',
    IndentSize = 2,
};

string json = JsonSerializer.Serialize(
    new { Value = 1 },
    options
);
Console.WriteLine(json);
//{
//    "Value": 1
//}
Default web options singleton
If you want to serialize with the default options that ASP.NET Core uses for web apps, use the new JsonSerializerOptions.Web singleton.
string webJson = JsonSerializer.Serialize(
new { SomeValue = 42 },
JsonSerializerOptions.Web // Defaults to camelCase naming policy.
);
Console.WriteLine(webJson);
// {"someValue":42}
JsonSchemaExporter
JSON is frequently used to represent types in method signatures as part of remote procedure–calling schemes. It's used, for example, as part of OpenAPI specifications, or as part of tool calling with AI services like those from OpenAI. Developers can serialize and deserialize .NET types as JSON using System.Text.Json. But they also need to be able to get a JSON schema that describes the shape of the .NET type (that is, describes the shape of what would be serialized and what can be deserialized). System.Text.Json now provides the JsonSchemaExporter type, which supports generating a JSON schema that represents a .NET type.
The following code generates a JSON schema from a type.
Console.WriteLine(JsonSchemaExporter.GetJsonSchemaAsNode(JsonSerializerOptions.Default, typeof(Book)));
The type is defined as follows:
public class Book
{
    public required string Title { get; set; }
    public string? Author { get; set; }
    public int PublishYear { get; set; }
}
The generated schema is:
{
  "type": ["object", "null"],
  "properties": {
    "Title": {
      "type": "string"
    },
    "Author": {
      "type": ["string", "null"]
    },
    "PublishYear": {
      "type": "integer"
    }
  },
  "required": ["Title"]
}
Respect nullable annotations
System.Text.Json now recognizes nullability annotations of properties and can be configured to enforce those during serialization and deserialization using the RespectNullableAnnotations flag.
The following code shows how to set the option (the Book type definition is shown in the previous section):
JsonSerializerOptions options = new() { RespectNullableAnnotations = true };
// Throws exception: System.Text.Json.JsonException: The property or field
// 'Title' on type 'Serialization+Book' doesn't allow getting null values.
// Consider updating its nullability annotation.
JsonSerializer.Serialize(new Book { Title = null! }, options);
// Throws exception: System.Text.Json.JsonException: The property or field
// 'Title' on type 'Serialization+Book' doesn't allow setting null values.
// Consider updating its nullability annotation.
JsonSerializer.Deserialize<Book>("""{ "Title" : null }""", options);
Note
Due to how nullability annotations are represented in IL, the feature is restricted to annotations of non-generic properties.
You can also enable this setting globally using the System.Text.Json.Serialization.RespectNullableAnnotationsDefault feature switch in your project file (for example, .csproj file):
<ItemGroup>
<RuntimeHostConfigurationOption Include="System.Text.Json.Serialization.RespectNullableAnnotationsDefault" Value="true" />
</ItemGroup>
You can configure nullability at an individual property level using the IsGetNullable and IsSetNullable properties.
Require non-optional constructor parameters
Historically, System.Text.Json has treated non-optional constructor parameters as optional when using constructor-based deserialization. You can change that behavior using the new RespectRequiredConstructorParameters flag.
The following code shows how to set the option:
JsonSerializerOptions options = new() { RespectRequiredConstructorParameters = true };
// Throws exception: System.Text.Json.JsonException: JSON deserialization
// for type 'Serialization+MyPoco' was missing required properties including: 'Value'.
JsonSerializer.Deserialize<MyPoco>("""{}""", options);
The MyPoco type is defined as follows:
record MyPoco(string Value);
You can also enable this setting globally using the System.Text.Json.Serialization.RespectRequiredConstructorParametersDefault feature switch in your project file (for example, .csproj file):
<ItemGroup>
<RuntimeHostConfigurationOption Include="System.Text.Json.Serialization.RespectRequiredConstructorParametersDefault" Value="true" />
</ItemGroup>
As with earlier versions of System.Text.Json, you can configure whether individual properties are required using the JsonPropertyInfo.IsRequired property.
Order JsonObject properties
The JsonObject type now exposes ordered dictionary–like APIs that enable explicit property order manipulation.
JsonObject jObj = new()
{
    ["key1"] = true,
    ["key3"] = 3
};

Console.WriteLine(jObj is IList<KeyValuePair<string, JsonNode?>>); // True.

// Insert a new key-value pair at the correct position.
int key3Pos = jObj.IndexOf("key3") is int i and >= 0 ? i : 0;
jObj.Insert(key3Pos, "key2", "two");

foreach (KeyValuePair<string, JsonNode?> item in jObj)
{
    Console.WriteLine($"{item.Key}: {item.Value}");
}
// Output:
// key1: true
// key2: two
// key3: 3
Customize enum member names
The new System.Text.Json.Serialization.JsonStringEnumMemberNameAttribute attribute can be used to customize the names of individual enum members for types that are serialized as strings:
JsonSerializer.Serialize(MyEnum.Value1 | MyEnum.Value2); // "Value1, Custom enum value"
[Flags, JsonConverter(typeof(JsonStringEnumConverter))]
enum MyEnum
{
    Value1 = 1,

    [JsonStringEnumMemberName("Custom enum value")]
    Value2 = 2,
}
Stream multiple JSON documents
System.Text.Json.Utf8JsonReader now supports reading multiple, whitespace-separated JSON documents from a single buffer or stream. By default, the reader throws an exception if it detects any non-whitespace characters trailing the first top-level document. You can change this behavior using the AllowMultipleValues flag:
JsonReaderOptions options = new() { AllowMultipleValues = true };
Utf8JsonReader reader = new("null {} 1 \r\n [1,2,3]"u8, options);
reader.Read();
Console.WriteLine(reader.TokenType); // Null
reader.Read();
Console.WriteLine(reader.TokenType); // StartObject
reader.Skip();
reader.Read();
Console.WriteLine(reader.TokenType); // Number
reader.Read();
Console.WriteLine(reader.TokenType); // StartArray
reader.Skip();
Console.WriteLine(reader.Read()); // False
This flag also makes it possible to read JSON from payloads that might contain trailing data that's invalid JSON:
Utf8JsonReader reader = new("[1,2,3] <NotJson/>"u8, new() { AllowMultipleValues = true });
reader.Read();
reader.Skip(); // Success
reader.Read(); // throws JsonReaderException
When it comes to streaming deserialization, a new JsonSerializer.DeserializeAsyncEnumerable<TValue>(Stream, Boolean, JsonSerializerOptions, CancellationToken) overload makes streaming multiple top-level values possible. By default, the method attempts to stream elements that are contained in a top-level JSON array. You can toggle this behavior using the new topLevelValues flag:
ReadOnlySpan<byte> utf8Json = """[0] [0,1] [0,1,1] [0,1,1,2] [0,1,1,2,3]"""u8;
using var stream = new MemoryStream(utf8Json.ToArray());
await foreach (int[] item in JsonSerializer.DeserializeAsyncEnumerable<int[]>(stream, topLevelValues: true))
{
Console.WriteLine(item.Length);
}
Spans
In high-performance code, spans are often used to avoid allocating strings unnecessarily. Span<T> and ReadOnlySpan<T> continue to revolutionize how code is written in .NET, and every release more and more methods are added that operate on spans. .NET 9 includes the following span-related updates:
File helpers
The File class now has new helpers to easily and directly write ReadOnlySpan<char>/ReadOnlySpan<byte> and ReadOnlyMemory<char>/ReadOnlyMemory<byte> to files.
The following code efficiently writes a ReadOnlySpan<char> to a file.
ReadOnlySpan<char> text = ...;
File.WriteAllText(filePath, text);
New StartsWith<T>(ReadOnlySpan<T>, T) and EndsWith<T>(ReadOnlySpan<T>, T) extension methods have also been added for spans, making it easy to test whether a ReadOnlySpan<T> starts or ends with a specific T value.
The following code uses these new convenience APIs.
ReadOnlySpan<char> text = "some arbitrary text";
return text.StartsWith('"') && text.EndsWith('"'); // false
params ReadOnlySpan<T> overloads
C# has always supported marking array parameters as params. This keyword enables a simplified calling syntax. For example, the String.Join(String, String[]) method's second parameter is marked with params. You can call this overload with an array or by passing the values individually:
string result = string.Join(", ", new string[3] { "a", "b", "c" });
string result = string.Join(", ", "a", "b", "c");
Prior to .NET 9, when you pass the values individually, the C# compiler emits code identical to the first call by producing an implicit array around the three arguments.
Starting in C# 13, you can use params with any argument that can be constructed via a collection expression, including spans (Span<T> and ReadOnlySpan<T>). That's beneficial for usability and performance. The C# compiler can store the arguments on the stack, wrap a span around them, and pass that off to the method, which avoids the implicit array allocation that would have otherwise resulted.
.NET 9 includes over 60 methods with a params ReadOnlySpan<T> parameter. Some are brand new overloads, and some are existing methods that already took a ReadOnlySpan<T> but now have that parameter marked with params. The net effect is that if you upgrade to .NET 9 and recompile your code, you'll see performance improvements without making any code changes, because the compiler prefers to bind to the span-based overloads rather than the array-based ones.
For example, String.Join now includes the following overload, which implements the new pattern: String.Join(String, ReadOnlySpan<String>).
Now, a call like string.Join(", ", "a", "b", "c") is made without allocating an array to pass in the "a", "b", and "c" arguments.
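You can apply the same pattern to your own methods. The following sketch (the Sum helper is invented for illustration) requires C# 13:

```csharp
using System;

// Callers can pass individual values; the compiler gathers them into a
// stack-allocated span rather than an implicit array.
static int Sum(params ReadOnlySpan<int> values)
{
    int total = 0;
    foreach (int value in values)
    {
        total += value;
    }
    return total;
}

Console.WriteLine(Sum(1, 2, 3)); // 6
Console.WriteLine(Sum());        // 0
```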
Enumerate over ReadOnlySpan<char>.Split() segments
string.Split is a convenient method for quickly partitioning a string with one or more supplied separators. For code focused on performance, however, the allocation profile of string.Split can be prohibitive, because it allocates a string for each parsed component and a string[] to store them all. It also doesn't work with spans, so if you have a ReadOnlySpan<char>, you're forced to allocate yet another string when you convert it to a string to be able to call string.Split on it.
In .NET 8, a set of Split and SplitAny methods were introduced for ReadOnlySpan<char>. Rather than returning a new string[], these methods accept a destination Span<Range> into which the bounding indices for each component are written. This makes the operation fully allocation-free. These methods are appropriate to use when the number of ranges is both known and small.
In .NET 9, new overloads of Split and SplitAny have been added to allow incrementally parsing a ReadOnlySpan<T> with an a priori unknown number of segments. The new methods enable enumerating through each segment, which is similarly represented as a Range that can be used to slice into the original span.
public static bool ListContainsItem(ReadOnlySpan<char> span, string item)
{
    foreach (Range segment in span.Split(','))
    {
        if (span[segment].SequenceEqual(item))
        {
            return true;
        }
    }

    return false;
}
System.Formats
The position or offset of the data in the enclosing stream for a TarEntry object is now a public property. TarEntry.DataOffset returns the position in the entry's archive stream where the entry's first data byte is located. The entry's data is encapsulated in a substream that you can access via TarEntry.DataStream, which hides the real position of the data relative to the archive stream. That's enough for most users, but if you need more flexibility and want to know the real starting position of the data in the archive stream, the new TarEntry.DataOffset API makes it easy to support features like concurrent access with very large TAR files.
// Create stream for tar ball data in Azure Blob Storage.
BlobClient blobClient = new(connectionString, blobContainerName, blobName);
Stream blobClientStream = await blobClient.OpenReadAsync(options, cancellationToken);
// Create TarReader for the stream and get a TarEntry.
TarReader tarReader = new(blobClientStream);
System.Formats.Tar.TarEntry? tarEntry = await tarReader.GetNextEntryAsync();
if (tarEntry is null)
    return;
// Get position of TarEntry data in blob stream.
long entryOffsetInBlobStream = tarEntry.DataOffset;
long entryLength = tarEntry.Length;
// Create a separate stream.
Stream newBlobClientStream = await blobClient.OpenReadAsync(options, cancellationToken);
newBlobClientStream.Seek(entryOffsetInBlobStream, SeekOrigin.Begin);
// Read tar ball content from separate BlobClient stream.
byte[] bytes = new byte[entryLength];
await newBlobClientStream.ReadExactlyAsync(bytes, 0, (int)entryLength);
System.Guid
NewGuid() creates a Guid filled mostly with cryptographically secure random data, following the UUID Version 4 specification in RFC 9562. That same RFC also defines other versions, including Version 7, which "features a time-ordered value field derived from the widely implemented and well-known Unix Epoch timestamp source". In other words, much of the data is still random, but some of it is reserved for data based on a timestamp, which enables these values to have a natural sort order. In .NET 9, you can create a Guid according to Version 7 via the new Guid.CreateVersion7() and Guid.CreateVersion7(DateTimeOffset) methods. You can also use the new Version property to retrieve a Guid object's version field.
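A short sketch of the new APIs in use:

```csharp
using System;

// Version 7 GUIDs embed a Unix-epoch timestamp, giving them a natural sort order.
Guid sequential = Guid.CreateVersion7();
Console.WriteLine(sequential.Version); // 7

// Version 4 GUIDs remain the default from NewGuid().
Guid random = Guid.NewGuid();
Console.WriteLine(random.Version); // 4

// A specific timestamp can also be supplied.
Guid stamped = Guid.CreateVersion7(DateTimeOffset.UtcNow);
Console.WriteLine(stamped.Version); // 7
```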
System.IO
Compression with zlib-ng
System.IO.Compression features like ZipArchive, DeflateStream, GZipStream, and ZLibStream are all based primarily on the zlib library. Starting in .NET 9, these features instead all use zlib-ng, a library that yields more consistent and efficient processing across a wider array of operating systems and hardware.
ZLib and Brotli compression options
ZLibCompressionOptions and BrotliCompressionOptions are new types for setting algorithm-specific compression level and strategy (Default, Filtered, HuffmanOnly, RunLengthEncoding, or Fixed). These types are aimed at users who want more fine-tuned settings than the only existing option, System.IO.Compression.CompressionLevel.
The new compression option types might be expanded in the future.
The following code snippet shows some example usage:
private MemoryStream CompressStream(Stream uncompressedStream)
{
    MemoryStream compressorOutput = new();
    using ZLibStream compressionStream = new(
        compressorOutput,
        new ZLibCompressionOptions()
        {
            CompressionLevel = 6,
            CompressionStrategy = ZLibCompressionStrategy.HuffmanOnly
        });

    uncompressedStream.CopyTo(compressionStream);
    compressionStream.Flush();

    return compressorOutput;
}
XPS documents from XPS virtual printer
XPS documents coming from a V4 XPS virtual printer previously couldn't be opened using the System.IO.Packaging library, due to lack of support for handling .piece files. This gap has been addressed in .NET 9.
System.Numerics
- BigInteger upper limit
- BigMul APIs
- Vector conversion APIs
- Vector create APIs
- Additional acceleration
BigInteger upper limit
BigInteger supports representing integer values of essentially arbitrary length. However, in practice, the length is constrained by limits of the underlying computer, such as available memory or how long it would take to compute a given expression. Additionally, there exist some APIs that fail given inputs that result in a value that's too large. Because of these limits, .NET 9 enforces a maximum length of BigInteger: it can contain no more than (2^31) - 1 (approximately 2.14 billion) bits. Such a number represents an almost 256 MB allocation and contains approximately 646.5 million digits. This new limit ensures that all exposed APIs are well behaved and consistent while still allowing numbers that are far beyond most usage scenarios.
BigMul APIs
BigMul is an operation that produces the full product of two numbers. .NET 9 adds dedicated BigMul APIs on int, long, uint, and ulong whose return type is the next larger integer type than the parameter types.
The new APIs are:
- BigMul(Int32, Int32) (returns long)
- BigMul(Int64, Int64) (returns Int128)
- BigMul(UInt32, UInt32) (returns ulong)
- BigMul(UInt64, UInt64) (returns UInt128)
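As a sketch, assuming these overloads surface on the Math class alongside the existing BigMul helpers:

```csharp
using System;

// Full 64-bit product of two 32-bit values; plain int multiplication would overflow.
long product64 = Math.BigMul(int.MaxValue, int.MaxValue);
Console.WriteLine(product64); // 4611686014132420609

// Full 128-bit product of two 64-bit values.
Int128 product128 = Math.BigMul(long.MaxValue, 2L);
Console.WriteLine(product128); // 18446744073709551614
```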
Vector conversion APIs
.NET 9 adds dedicated extension APIs for converting between Vector2, Vector3, Vector4, Quaternion, and Plane.
The new APIs are as follows:
- AsPlane(Vector4)
- AsQuaternion(Vector4)
- AsVector2(Vector4)
- AsVector3(Vector4)
- AsVector4(Plane)
- AsVector4(Quaternion)
- AsVector4(Vector2)
- AsVector4(Vector3)
- AsVector4Unsafe(Vector2)
- AsVector4Unsafe(Vector3)
For same-sized conversions, such as between Vector4, Quaternion, and Plane, these conversions are zero cost. The same can be said for narrowing conversions, such as from Vector4 to Vector2 or Vector3. For widening conversions, such as from Vector2 or Vector3 to Vector4, there is the normal API, which initializes new elements to 0, and an Unsafe-suffixed API that leaves the new elements undefined and therefore can be zero cost.
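A minimal sketch of these conversions (the element values in comments assume the documented zero-initialization behavior for widening):

```csharp
using System;
using System.Numerics;

Vector3 v3 = new(1f, 2f, 3f);

// Widening: the new W element is initialized to 0.
Vector4 v4 = v3.AsVector4();
Console.WriteLine(v4); // <1, 2, 3, 0>

// Widening without initialization: W is undefined, so don't read it before writing it.
Vector4 fast = v3.AsVector4Unsafe();

// Same-size reinterpretations, such as Vector4 to Quaternion, are zero cost.
Quaternion q = v4.AsQuaternion();

// Narrowing drops the extra elements.
Vector2 v2 = v4.AsVector2();
Console.WriteLine(v2); // <1, 2>
```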
Vector create APIs
There are new Create APIs exposed for Vector, Vector2, Vector3, and Vector4 that mirror the equivalent APIs exposed for the hardware vector types in the System.Runtime.Intrinsics namespace.
These APIs are primarily for convenience and overall consistency across .NET's SIMD-accelerated types.
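A hedged sketch of the Create pattern (assuming the same shapes as the Vector128.Create family they mirror):

```csharp
using System;
using System.Numerics;

// Create from individual elements.
Vector3 a = Vector3.Create(1f, 2f, 3f);
Console.WriteLine(a); // <1, 2, 3>

// Create with all elements set to the same value.
Vector4 b = Vector4.Create(5f);
Console.WriteLine(b); // <5, 5, 5, 5>

// Create from a span of values.
float[] values = [9f, 8f];
Vector2 c = Vector2.Create(values);
Console.WriteLine(c); // <9, 8>
```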
Additional acceleration
Additional performance improvements have been made to many types in the System.Numerics namespace, including to BigInteger, Vector2, Vector3, Vector4, Quaternion, and Plane.
In some cases, this has resulted in a 2-5x speedup to core APIs including Matrix4x4
multiplication, creation of Plane from a series of vertices, Quaternion concatenation, and computing the cross product of a Vector3.
There's also constant folding support for the SinCos
API, which computes both Sin(x)
and Cos(x)
in a single call, making it more efficient.
Tensors for AI
Tensors are the cornerstone data structure of artificial intelligence (AI). They can often be thought of as multidimensional arrays.
Tensors are used to:
- Represent and encode data such as text sequences (tokens), images, video, and audio.
- Efficiently manipulate higher-dimensional data.
- Efficiently apply computations on higher-dimensional data.
- Store weight information and intermediate computations (in neural networks).
To use the .NET tensor APIs, install the System.Numerics.Tensors NuGet package.
New Tensor<T> type
The new Tensor<T> type expands the AI capabilities of the .NET libraries and runtime. This type:
- Provides efficient interop with AI libraries like ML.NET, TorchSharp, and ONNX Runtime using zero copies where possible.
- Builds on top of TensorPrimitives for efficient math operations.
- Enables easy and efficient data manipulation by providing indexing and slicing operations.
- Is not a replacement for existing AI and machine learning libraries. Instead, it's intended to provide a common set of APIs to reduce code duplication and dependencies, and to achieve better performance by using the latest runtime features.
The following code shows some of the APIs included with the new Tensor<T> type.
// Create a tensor (1 x 3).
var t0 = Tensor.Create([1, 2, 3], [1, 3]); // [[1, 2, 3]]
// Reshape tensor (3 x 1).
var t1 = t0.Reshape(3, 1); // [[1], [2], [3]]
// Slice tensor (2 x 1).
var t2 = t1.Slice(1.., ..); // [[2], [3]]
// Broadcast tensor (3 x 1) -> (3 x 3).
// [
// [ 1, 1, 1],
// [ 2, 2, 2],
// [ 3, 3, 3]
// ]
var t3 = Tensor.Broadcast(t1, [3, 3]);
// Math operations.
var t4 = Tensor.Add(t0, 1); // [[2, 3, 4]]
var t5 = Tensor.Add(t0, t0); // [[2, 4, 6]]
var t6 = Tensor.Subtract(t0, 1); // [[0, 1, 2]]
var t7 = Tensor.Subtract(t0, t0); // [[0, 0, 0]]
var t8 = Tensor.Multiply(t0, 2); // [[2, 4, 6]]
var t9 = Tensor.Multiply(t0, t0); // [[1, 4, 9]]
var t10 = Tensor.Divide(t0, 2); // [[0.5, 1, 1.5]]
var t11 = Tensor.Divide(t0, t0); // [[1, 1, 1]]
Note
This API is marked as experimental for .NET 9.
TensorPrimitives
The System.Numerics.Tensors library includes the TensorPrimitives class, which provides static methods for performing numerical operations on spans of values. In .NET 9, the scope of methods exposed by TensorPrimitives has been significantly expanded, growing from 40 (in .NET 8) to almost 200 overloads. The surface area encompasses familiar numerical operations from types like Math and MathF. It also includes the generic math interfaces like INumber<TSelf>, except instead of processing an individual value, they process a span of values. Many operations have also been accelerated via SIMD-optimized implementations for .NET 9.
TensorPrimitives now exposes generic overloads for any type T that implements a certain interface. (The .NET 8 version only included overloads for manipulating spans of float values.) For example, the new CosineSimilarity<T>(ReadOnlySpan<T>, ReadOnlySpan<T>) overload performs cosine similarity on two vectors of float, double, or Half values, or values of any other type that implements IRootFunctions<TSelf>.
Compare the precision of the cosine similarity operation on two vectors of type float versus double:
ReadOnlySpan<float> vector1 = [1, 2, 3];
ReadOnlySpan<float> vector2 = [4, 5, 6];
Console.WriteLine(TensorPrimitives.CosineSimilarity(vector1, vector2));
// Prints 0.9746318
ReadOnlySpan<double> vector3 = [1, 2, 3];
ReadOnlySpan<double> vector4 = [4, 5, 6];
Console.WriteLine(TensorPrimitives.CosineSimilarity(vector3, vector4));
// Prints 0.9746318461970762
Threading
The threading APIs include improvements for iterating through tasks, prioritized channels (which can order their elements instead of being first-in-first-out), and Interlocked.CompareExchange support for more types.
Task.WhenEach
A variety of helpful new APIs have been added for working with Task<TResult> objects. The new Task.WhenEach method lets you iterate through tasks as they complete using an await foreach
statement. You no longer need to do things like repeatedly call Task.WaitAny on a set of tasks to pick off the next one that completes.
The following code makes multiple HttpClient calls and operates on their results as they complete.
using HttpClient http = new();

Task<string> dotnet = http.GetStringAsync("http://dot.net");
Task<string> bing = http.GetStringAsync("http://www.bing.com");
Task<string> ms = http.GetStringAsync("http://microsoft.com");

await foreach (Task<string> t in Task.WhenEach(bing, dotnet, ms))
{
    Console.WriteLine(t.Result);
}
Prioritized unbounded channel
The System.Threading.Channels namespace lets you create first-in-first-out (FIFO) channels using the CreateBounded and CreateUnbounded methods. With FIFO channels, elements are read from the channel in the order they were written to it. In .NET 9, the new CreateUnboundedPrioritized method has been added, which orders the elements such that the next element read from the channel is the one deemed to be most important, according to either Comparer<T>.Default or a custom IComparer<T>.
The following example uses the new method to create a channel that outputs the numbers 1 through 5 in order, even though they're written to the channel in a different order.
Channel<int> c = Channel.CreateUnboundedPrioritized<int>();

await c.Writer.WriteAsync(1);
await c.Writer.WriteAsync(5);
await c.Writer.WriteAsync(2);
await c.Writer.WriteAsync(4);
await c.Writer.WriteAsync(3);
c.Writer.Complete();

while (await c.Reader.WaitToReadAsync())
{
    while (c.Reader.TryRead(out int item))
    {
        Console.Write($"{item} ");
    }
}
// Output: 1 2 3 4 5
Interlocked.CompareExchange for more types
In previous versions of .NET, Interlocked.Exchange and Interlocked.CompareExchange had overloads for working with int, uint, long, ulong, nint, nuint, float, double, and object, as well as a generic overload for working with any reference type T. In .NET 9, there are new overloads for atomically working with byte, sbyte, short, and ushort. Also, the constraint on the generic Interlocked.Exchange<T> and Interlocked.CompareExchange<T> overloads has been removed, so those methods are no longer limited to reference types. They can now work with any primitive type, which includes all of the aforementioned types plus bool and char, as well as any enum type.
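For example, a short sketch of the widened surface area:

```csharp
using System;
using System.Threading;

// Atomic operations on small integer types are new in .NET 9.
byte flagByte = 0;
Interlocked.Exchange(ref flagByte, 1);
Interlocked.CompareExchange(ref flagByte, 2, 1);
Console.WriteLine(flagByte); // 2

// The now-unconstrained generic overloads accept enums and other primitives.
DayOfWeek day = DayOfWeek.Monday;
Interlocked.CompareExchange(ref day, DayOfWeek.Tuesday, DayOfWeek.Monday);
Console.WriteLine(day); // Tuesday
```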