Distributed Cache Capabilities - Data Storage
In this first installment we are going to cover the data storage attribute and its capabilities. Each one has a description and a generic guideline based on the experience of our implementations. I have also added a weight; it may look arbitrary, but it is based on the average importance across our projects.
Capability: .NET Types support
Description: Does the cache support .NET types natively? Some caches accept any data type, but if they are not .NET-aware they will incur a performance penalty during the serialization process.
Weight: 5
Guideline: If your clients are going to be mainly .NET, you should consider a cache that supports .NET natively. Even if only 50% of your clients use .NET, we recommend selecting a cache that can support it, as the .NET conversion is the most expensive when compared against native or Java objects.
Capability: .NET Session state support
Description: Some caches have native support for web session state; they may also ship a session state provider that you can plug into your web application, saving development time.
Weight: 4
Guideline: This is an extremely useful feature if the cache is going to be used as distributed session state storage, as it provides an API you can reference in your project, saving a lot of plumbing code. Distributing the session state also removes the dependency on load balancer affinity (sticky sessions) for web applications. A sketch of the underlying idea follows.
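To illustrate the idea rather than any vendor's provider, here is a minimal Java sketch of distributed session state: the session bag lives in the shared cache keyed by session id, so any server in the farm can serve the next request. The CacheClient interface and the key format are assumptions for the example, not a real API.

```java
import java.io.Serializable;
import java.util.HashMap;

public class DistributedSessionStore {
    // Hypothetical generic cache client; real vendors expose their own API.
    interface CacheClient {
        void put(String key, Serializable value);
        Object get(String key);
    }

    private final CacheClient cache;

    public DistributedSessionStore(CacheClient cache) {
        this.cache = cache;
    }

    // Load the session bag by id; any node in the farm sees the same data,
    // so no load balancer affinity (sticky sessions) is needed.
    @SuppressWarnings("unchecked")
    public HashMap<String, Serializable> load(String sessionId) {
        Object bag = cache.get("session:" + sessionId);
        return bag != null ? (HashMap<String, Serializable>) bag : new HashMap<>();
    }

    // Write the whole bag back at the end of the request.
    public void save(String sessionId, HashMap<String, Serializable> bag) {
        cache.put("session:" + sessionId, bag);
    }
}
```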
Capability: .NET View state support
Description: Web page view state can take extra processing time; some caches address this by integrating a view state provider that handles, caches and distributes these raw types.
Weight: 1
Guideline: Although it can be useful in certain scenarios, this can be simulated with a standard string entry in the cache, as shown in the sketch below. Do not weight your selection decision on this factor, as it is a replaceable feature that won't require a lot of code.
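As a quick illustration of the guideline above, here is a minimal Java sketch of simulating view state caching with plain string entries. The CacheClient interface, the key format and the Base64 encoding are assumptions for the example, not any specific cache's API.

```java
import java.util.Base64;
import java.util.UUID;

public class ViewStateCache {
    // Hypothetical string-based cache client.
    interface CacheClient {
        void put(String key, String value);
        String get(String key);
    }

    private final CacheClient cache;

    public ViewStateCache(CacheClient cache) {
        this.cache = cache;
    }

    // Store the serialized view state and return the key the page keeps in a hidden field.
    public String save(byte[] rawViewState) {
        String key = "viewstate:" + UUID.randomUUID();
        cache.put(key, Base64.getEncoder().encodeToString(rawViewState));
        return key;
    }

    // Fetch the view state back on postback using the key from the hidden field.
    public byte[] load(String key) {
        String encoded = cache.get(key);
        return encoded == null ? null : Base64.getDecoder().decode(encoded);
    }
}
```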
Capability: Raw types (native)
Description: This capability provides support for raw types, usually binary arrays that can store any type of content. Some caches have storage limitations and will not allow this kind of write.
Weight: 2
Guideline: Consider native storage when you need to store heterogeneous data types, for example mixing .NET objects, Java objects, pictures, videos or circular arrays; see the sketch below.
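Here is a minimal sketch of what a raw (binary) store looks like from the client side, assuming a hypothetical byte[]-based API backed by an in-memory map so it runs standalone; the interface and keys are illustrative only.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class RawTypeStoreExample {
    // Hypothetical binary cache API; a raw store accepts any content reduced to bytes.
    interface BinaryCache {
        void put(String key, byte[] value);
        byte[] get(String key);
    }

    // In-memory stand-in so the sketch runs without a cache server.
    static class InMemoryBinaryCache implements BinaryCache {
        private final Map<String, byte[]> store = new HashMap<>();
        public void put(String key, byte[] value) { store.put(key, value); }
        public byte[] get(String key) { return store.get(key); }
    }

    public static void main(String[] args) {
        BinaryCache cache = new InMemoryBinaryCache();

        // Heterogeneous content shares the same store once it is reduced to bytes.
        cache.put("text:greeting", "hello".getBytes(StandardCharsets.UTF_8));
        cache.put("image:logo", new byte[] {(byte) 0x89, 'P', 'N', 'G'}); // e.g. picture bytes

        System.out.println(new String(cache.get("text:greeting"), StandardCharsets.UTF_8));
    }
}
```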
Capability: Java types support
Description: Some caches support native Java objects; this allows clients to access the objects without an intermediate transformation, increasing read/write performance.
Weight: 5
Guideline: If your clients are going to be mainly Java, then a cache with this capability can make a real difference. Java objects are very lightweight, but there is still some processing involved when using a native store rather than a Java-aware one. A sketch of a Java-aware put/get follows.
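To show what "no intermediate transformation" means on the client side, here is a minimal Java sketch of putting and getting a plain Serializable object through a hypothetical Java-aware cache API; the ObjectCache interface and the Customer type are assumptions for the example.

```java
import java.io.Serializable;

public class JavaTypeExample {
    // Hypothetical object cache API; a Java-aware cache takes the object as-is.
    interface ObjectCache {
        void put(String key, Serializable value);
        Object get(String key);
    }

    static class Customer implements Serializable {
        private static final long serialVersionUID = 1L;
        final String id;
        final String name;
        Customer(String id, String name) { this.id = id; this.name = name; }
    }

    static void demo(ObjectCache cache) {
        cache.put("customer:42", new Customer("42", "Ada"));
        // The client gets the Java object back without converting through another format.
        Customer c = (Customer) cache.get("customer:42");
        System.out.println(c.name);
    }
}
```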
Capability: Maximum storage size
Description: Each cache item may have a maximum storage size, which can limit the type of information that can be stored in it. Another aspect of maximum storage size is the total cache size, usually related to the CPU bitness.
Weight: 4
Guideline: Some caches offer ultra-fast read/write capabilities, and this is usually tied to size constraints. If you expect to store less than 1 MB per item, go for the fast option, as the allocation is very efficient; note, however, that this type of cache usually implements native storage, which can affect performance if you use .NET/Java objects. On the other hand, if you need to store large objects, make sure the cache can support them. A size-guard sketch follows.
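One practical consequence of a per-item limit is guarding writes on the client side. A minimal Java sketch, assuming a hypothetical 1 MB item limit and a byte[]-based client; both the constant and the interface are illustrative, not a vendor's documented maximum.

```java
public class ItemSizeGuard {
    // Assumed per-item limit; check your vendor's documented maximum.
    static final int MAX_ITEM_BYTES = 1024 * 1024;

    // Hypothetical binary cache API.
    interface BinaryCache {
        void put(String key, byte[] value);
    }

    // Reject oversized items before they reach the cache server.
    static void putChecked(BinaryCache cache, String key, byte[] value) {
        if (value.length > MAX_ITEM_BYTES) {
            throw new IllegalArgumentException(
                "Item '" + key + "' is " + value.length + " bytes, over the per-item limit");
        }
        cache.put(key, value);
    }
}
```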
Capability: Managed memory storage
Description: Caches written in .NET will usually use a managed heap to store the cache items. These objects are governed by a garbage collector, which can affect the total cache size.
Weight: 2
Guideline: This can be a good capability if you are using .NET clients and .NET types; the garbage collector is designed to understand these objects and is very efficient at doing so. If you are going to use other types of objects, the collector will use the pinning model but relies on the cache to apply memory pressure for those objects (this can lead to fragmentation).
Capability: Native memory storage
Description: Caches written in native code will use the default heap or the C runtime heap. The cache manages its own memory and depends on its own ability to avoid fragmentation.
Weight: 2
Guideline: If your clients are going to be native, or you have a scenario with mixed technologies, this can improve performance; keep in mind, though, that the cache vendor has to manage its own memory, which requires skills that some companies have not mastered. Using a managed memory manager such as .NET's or Java's is a good alternative, as both technologies are very mature in this area.
Capability: Load balancing
Description: Some caches will load balance the storage automatically between several cache instances; this is usually achieved using a balancer in conjunction with a router. When a request arrives, the router points to the right cache instance, where the data is located.
Weight: 3
Guideline: Load balancing technologies can be very good at reducing the cache load in certain scenarios; they are usually implemented in conjunction with data partitioning, which can achieve similar results. If you are going to use a partitioning topology then this feature is not really important; only consider it if you are planning to use replication or mirrored topologies. A sketch of the key-routing idea follows.
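Here is a minimal Java sketch of the routing idea described above: the router hashes the key and forwards the request to the instance that owns it. The endpoint names and the simple modulo scheme are illustrative; real products use their own routing tables or consistent hashing.

```java
import java.util.List;

public class CacheKeyRouter {
    private final List<String> instances; // e.g. host:port endpoints

    public CacheKeyRouter(List<String> instances) {
        this.instances = instances;
    }

    // Pick the cache instance that should hold this key.
    public String route(String key) {
        int bucket = Math.floorMod(key.hashCode(), instances.size());
        return instances.get(bucket);
    }

    public static void main(String[] args) {
        CacheKeyRouter router = new CacheKeyRouter(
            List.of("cache-a:22233", "cache-b:22233", "cache-c:22233"));
        System.out.println(router.route("customer:42")); // deterministic target for this key
    }
}
```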
The average weight for this category is 3.11.