Low Level Considerations for VS of the Future (an old memo)

I wrote this a long time ago.  It's very interesting to me how applicable this is today, and how it is not very specific to Visual Studio at all...

Low Level Considerations for VS of the Future
Rico Mariani, Sept 12, 2007

Introduction

I’ve been giving much thought to what enabling steps we have to take to make our VS12+ experience something great.  I will not be trying to talk about specific features at all; I’m not sure we are anywhere near the point where we could talk about such things.  But I do consider the prototype we’ve seen to be broadly representative of the underlying needs of whatever experience we ultimately create.

So without further ado, here are some areas that will need consideration with information as to what I think we need to do, broadly, and why.

Memory Usage Reduction

Although we can expect tremendous gains in the overall processor cycles available to us within the next decade, we aren’t expecting similar gains in memory availability – neither L2/L0 cache capacity nor total bandwidth.  Project sizes, file sizes, and code complexity are generally going up, but our ability to store these structures is not increasing.
To get past this, and to enable the needed parallelism, we must dramatically reduce our in-memory footprint for all key data structures.  Everything must be “virtualized” so that only what we need is resident.  In an IDE world with huge solutions in many windows, vast amounts of the content must be unrealized.  Closing is unnecessary because things that are not displayed have very little cost.
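
To make “virtualized” concrete, here is a minimal sketch in C++ of the kind of structure this implies: a text buffer that keeps only recently touched pages resident and reloads everything else from disk on demand.  The names here (PagedTextBuffer, the page size, the LRU budget) are invented for illustration – this is not actual VS code.

    // Hypothetical sketch: a document buffer that keeps only hot pages resident.
    // Anything not recently touched is dropped and re-read from disk on demand.
    #include <cstddef>
    #include <cstdint>
    #include <fstream>
    #include <list>
    #include <string>
    #include <unordered_map>

    class PagedTextBuffer {
    public:
        explicit PagedTextBuffer(std::string path, size_t maxResidentPages = 64)
            : path_(std::move(path)), maxResidentPages_(maxResidentPages) {}

        // Returns one page of the file, loading it on demand and evicting the
        // least recently used page when the resident budget is exceeded.
        std::string Page(uint64_t pageIndex) {
            auto it = pages_.find(pageIndex);
            if (it != pages_.end()) {
                Touch(pageIndex);
                return it->second;
            }
            std::ifstream file(path_, std::ios::binary);
            file.seekg(static_cast<std::streamoff>(pageIndex * kPageSize), std::ios::beg);
            std::string page(kPageSize, '\0');
            file.read(&page[0], kPageSize);
            page.resize(static_cast<size_t>(file.gcount()));

            pages_[pageIndex] = page;
            lru_.push_front(pageIndex);
            if (pages_.size() > maxResidentPages_) {   // evict the coldest page
                pages_.erase(lru_.back());
                lru_.pop_back();
            }
            return page;
        }

    private:
        static constexpr size_t kPageSize = 4096;

        void Touch(uint64_t pageIndex) {               // move the page to the front of the LRU list
            lru_.remove(pageIndex);
            lru_.push_front(pageIndex);
        }

        std::string path_;
        size_t maxResidentPages_;
        std::unordered_map<uint64_t, std::string> pages_;  // only the resident pages live in memory
        std::list<uint64_t> lru_;                           // most recently used first
    };

The point is that the memory cost of an open document is proportional to what is actually being looked at, not to the size of the file or the number of windows.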

Investment -> Ongoing memory cost reduction

Locality

In addition to having less overall memory usage, our system must also have very dense data structures.  Our current system trends in the “forest of pointers” direction, with in-memory representations that are not only larger than the on-disk representations but also comparatively rich in pointers and light on values.

More value-oriented data structures with clear access semantics (see below) will give us the experience we need.  The trend of expanding data into “pre-computed” forms will be less favorable than keeping highly normalized forms, because extra computation will be comparatively cheaper than expanding the data and consuming memory.  Memory is the new disk.
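
As a rough illustration of the difference, here is a token stream written both ways: first as a pointer-rich node graph, then as a dense table of small value records over a shared character pool.  The types below are invented for this memo, not real VS structures.

    // "Forest of pointers": every token is a separate heap allocation, and every
    // field that could have been a value is a pointer to yet another allocation.
    #include <cstddef>
    #include <cstdint>
    #include <string>
    #include <string_view>
    #include <vector>

    struct TokenNode {
        std::string* text;      // separately allocated text
        TokenNode*   next;      // intrusive list pointer
        TokenNode*   parent;    // more pointers, more cache misses
        int          kind;
    };

    // Value-dense alternative: one contiguous array of small value records plus a
    // shared character pool.  Tokens are identified by index, not by address.
    struct Token {
        uint32_t textStart;     // offset into the character pool
        uint16_t textLength;
        uint16_t kind;
    };                          // 8 bytes per token, no per-token allocation

    struct TokenTable {
        std::vector<char>  charPool;   // all token text, stored once, contiguously
        std::vector<Token> tokens;     // dense array: excellent locality when scanning

        std::string_view Text(size_t i) const {
            const Token& t = tokens[i];
            return { charPool.data() + t.textStart, t.textLength };
        }
    };

Scanning the second form touches a predictable, contiguous stream of memory; scanning the first chases pointers all over the heap.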

Investment -> Drive density into key data structures; use value-rich types

Transactions

Key data structures, like documents, need the notion of transactions to express units of work.  This is important for the many cases where rollback is critical.  There need not be anything “magic” about this – it doesn’t have to be transacted memory; it’s merely transaction support in the API.  To help resolve parallelism concerns and disconnected-data concerns, the notion that reads and writes might fail is important, and at that point atomicity of operations is crucial.  Transacted documents may or may not imply locking or versioning – they don’t have to.  See the next topic.
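
A sketch of what such an API might look like – no transactional memory, just explicit transaction support layered on an ordinary document, where an uncommitted transaction rolls back in its destructor and a committed one broadcasts its “scope of change” to listeners.  TransactedDocument and ChangeScope are hypothetical names, not an existing VS API.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <stdexcept>
    #include <string>
    #include <vector>

    struct ChangeScope {                 // span of the document touched by one transaction
        size_t start = SIZE_MAX;
        size_t end = 0;
    };

    class TransactedDocument {
    public:
        class Transaction {
        public:
            explicit Transaction(TransactedDocument& doc) : doc_(doc), snapshot_(doc.text_) {}

            // Edits accumulate the scope of change and can fail; the whole unit
            // of work is atomic - either it commits or nothing happened.
            void Replace(size_t pos, size_t len, const std::string& s) {
                if (pos + len > doc_.text_.size()) throw std::out_of_range("edit outside document");
                doc_.text_.replace(pos, len, s);
                scope_.start = std::min(scope_.start, pos);
                scope_.end   = std::max(scope_.end, pos + s.size());
            }

            void Commit() {
                committed_ = true;
                for (auto& listener : doc_.listeners_) listener(scope_);  // broadcast scope of change
            }

            ~Transaction() {
                if (!committed_) doc_.text_ = snapshot_;  // rollback: restore the pre-edit state
            }

        private:
            TransactedDocument& doc_;
            std::string snapshot_;
            ChangeScope scope_;
            bool committed_ = false;
        };

        void OnChanged(std::function<void(const ChangeScope&)> f) { listeners_.push_back(std::move(f)); }
        const std::string& Text() const { return text_; }

    private:
        std::string text_ = "int foo;";
        std::vector<std::function<void(const ChangeScope&)>> listeners_;
    };

A rename operation, for example, would open one transaction, perform all of its edits, and commit; if any edit fails, the destructor undoes everything.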

Investment -> Key data-structures gain a transacted API
Investment -> Key data-structures use transactions to communicate “scope of change” after edits.

Isolation

Having an isolation model means you can understand what happens when someone changes the data structure out from under you.  What changes will you see, and which won’t you?  How much data can you read and expect to be totally self-consistent?
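
One simple contract that answers those questions is snapshot isolation: readers grab an immutable version and are guaranteed it is fully self-consistent, while writers publish whole new versions atomically.  Sketched below with invented names; it is only one of several possible models.

    #include <memory>
    #include <mutex>
    #include <string>
    #include <vector>

    class VersionedBuffer {
    public:
        // An immutable snapshot of the buffer's lines; it never changes once taken.
        using Snapshot = std::shared_ptr<const std::vector<std::string>>;

        // Readers: take the current version once and read it freely afterwards.
        // The contract: everything within one snapshot is self-consistent.
        Snapshot Read() const {
            std::lock_guard<std::mutex> lock(mutex_);
            return current_;
        }

        // Writers: build a new version and publish it atomically.  Existing
        // snapshots are untouched, so readers never observe a half-applied edit.
        void Write(std::vector<std::string> newLines) {
            auto next = std::make_shared<const std::vector<std::string>>(std::move(newLines));
            std::lock_guard<std::mutex> lock(mutex_);
            current_ = std::move(next);
        }

    private:
        mutable std::mutex mutex_;
        Snapshot current_ = std::make_shared<const std::vector<std::string>>();
    };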

Investments -> Key data-structures must have an isolation model

A UX that is not strongly STA based

The IDE of the future must focus presentation in a single concentrated place but, in a game-like fashion, produce “display lists” of primitives, delivered to the rendering system in an “inert” fashion (i.e. as data, not as API calls), so that presentation is entirely divorced from computing the presentation and the rendering system is in fact always ready to render.

This is entirely harmonious with the fact that we want all these display lists to be virtualized.  You need not “close” windows – merely moving them out of frame is enough to reclaim the resources associated with the display.  The thousands of windows you might have cost you nothing when you are not looking at them.
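
A sketch of the inert display-list idea (DrawOp, DisplayList, and RenderQueue are made-up names): computation threads emit plain data records describing what to draw and publish them; the render thread takes the newest list whenever it is ready, with no cross-thread API calls and no STA affinity.

    #include <cstdint>
    #include <mutex>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct DrawOp {                         // plain data: no API calls, no UI-thread affinity
        enum class Kind : uint8_t { Rect, Text };
        Kind kind;
        float x, y, w, h;
        uint32_t color;
        std::string text;                   // used only when kind == Kind::Text
    };

    using DisplayList = std::vector<DrawOp>;

    class RenderQueue {
    public:
        // Any thread may publish a freshly computed display list for a window.
        void Publish(int windowId, DisplayList list) {
            std::lock_guard<std::mutex> lock(mutex_);
            latest_[windowId] = std::move(list);
        }

        // The render thread takes whatever is newest; it never waits on the
        // threads that computed the content, so it is always ready to render.
        DisplayList TakeLatest(int windowId) {
            std::lock_guard<std::mutex> lock(mutex_);
            return std::exchange(latest_[windowId], DisplayList{});
        }

    private:
        std::mutex mutex_;
        std::unordered_map<int, DisplayList> latest_;
    };

Display lists for windows that are out of frame simply stop being published, and whatever was cached for them can be discarded – which is exactly the virtualization described above.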

Investment -> A standard method for pipelining ready-to-render displays
Investment -> An STA-free programming model with non-STA OS services (e.g. the file open dialog)

Twin Asynchronous Models Pervasively

It is easiest to illustrate these two needs by example (a code sketch follows the list):

  1. Search for “foo” 
    • This can be done in a totally asynchronous fashion with different “threads” proceeding in parallel against different editor buffers or other searchable contexts.  All the “foos” light up as they are discovered.
    • Foos that are not visible need not be processed, so this sort of parallelism is “lazy”
  2. Rename “foo” to “bar”
    • We can still do this asynchronously, but now there is an expectation that we will eventually do it all, and of course we must do all the foos, not just the visible ones
    • Generally this requires progress, and a transaction
    • It can fail, or be cancelled, these are normal things
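
The sketch below shows the two models side by side (everything here is illustrative – Buffer, HighlightFoo, and RenameAll are not real APIs): the lazy search skips invisible buffers and reports hits as each task finishes, while the rename must visit every buffer, runs against a backup that stands in for a real transaction, and can be cancelled and rolled back.

    #include <atomic>
    #include <cstddef>
    #include <future>
    #include <string>
    #include <vector>

    struct Buffer { std::string text; bool visible; };

    // Model 1: lazy, read-only, per-buffer parallelism; invisible buffers are skipped.
    // (The caller must keep `buffers` alive until all the futures have completed.)
    std::vector<std::future<size_t>> HighlightFoo(std::vector<Buffer>& buffers) {
        std::vector<std::future<size_t>> pending;
        for (auto& b : buffers) {
            if (!b.visible) continue;                       // lazy: nobody is looking at this one
            pending.push_back(std::async(std::launch::async, [buf = &b] {
                size_t hits = 0;
                for (size_t p = buf->text.find("foo"); p != std::string::npos;
                     p = buf->text.find("foo", p + 3))
                    ++hits;                                 // a real IDE would light up each hit here
                return hits;
            }));
        }
        return pending;                                     // results arrive as each buffer finishes
    }

    // Model 2: asynchronous but complete; every buffer is rewritten or none are.
    bool RenameAll(std::vector<Buffer>& buffers, const std::atomic<bool>& cancelled) {
        std::vector<Buffer> backup = buffers;               // stand-in for a real transaction
        for (auto& b : buffers) {                           // all foos, visible or not
            if (cancelled) { buffers = backup; return false; }   // rollback on cancel
            for (size_t p = b.text.find("foo"); p != std::string::npos;
                 p = b.text.find("foo", p + 3))
                b.text.replace(p, 3, "bar");
        }
        return true;                                        // commit: every foo is now a bar
    }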

Both of these are cases where we can use the “coarse” parallelism model.  Of course we would also like to do fine-grained parallelism in places, and in fact searching could be amenable to this as well.  Fine-grained parallelism has the opportunity to keep locality good, because you stream over the data once rather than accessing disparate data streams all at once, but it is of course more complicated.  Enabling both of these requires the previous investments:

  • Limited memory usage in the face of many threads running
  • Ability to access the data in slices that are durable and limited in span
  • Clear boundaries for write operations that may happen while the work is going on
    • This also allows for notifications
    • This allows for a clear place to re-try in the event of deadlocks/failures
  • Isolation so that what you see when you change or update is consistent to the degree that was promised (it’s part of the contract)
  • No STA UI, so that you can present the results in parallel without any cosmic entanglements outside of the normal locking the data-structures require – no “stealth” reentrancy

Investment -> “Lazy” asynchronous display operations
Investment -> Asynchronous change operations
Investment -> Failure-tolerant retry, as in good client-server applications

Conclusion

With the above investments in place, it becomes possible to use multi-threading APIs to create high-speed experiences that allow for real-time zooming, panning, searching, background building, rich IntelliSense, and other algorithms that can exploit the coarse parallelism abundant in large solutions.  This can be done with a variety of threading models – I like data-pipelined models myself, but they aren’t necessary.  To consumers of the data, access to key data-structures increasingly looks like access to a database.

Even with little to no change in actual processor dispatch mechanisms we get these benefits:

  • The underlying data storage is enabled to take advantage of CPU resources available
  • Other factors which would make parallelism moot are mitigated

Under these circumstances a “game quality” UX is possible – that’s the vision for VS12.  To get there, we need to start thinking about these considerations now so that our designs begin to reflect our future needs.

Comments

  • Anonymous
    December 22, 2014
    The bloat of day-to-day data structures with pointers is a problem that is beginning to be felt in almost all .NET software today that is above average in complexity, and not just in giants like VS. (LINQ, with the amount of delegates and closure classes it creates, is probably one of the more common issues where it's heavily used.) I would be very curious if there's a followup with recommendations on how to build lighter-weight and more localized data structures. That would be an interesting read.

  • Anonymous
    December 25, 2014
    Yeah, it'll be nice if there's a follow up article on the "how-tos".  And has VS 2015 improved on areas that you mentioned?  Thanks for posting the memo!  Merry Christmas!