

Managing Writeback Cubes

NOTE This is part of a series of entries on the topic of Building Writeback Applications with Analysis Services.

In presenting on the topic of writeback applications with Analysis Services, a few administrative questions seem to come up again and again, and I will attempt to address them here. These tend to focus on:

  • Backup & Recovery
  • Writeback Partition Maintenance
  • Update Conflicts

Backup & Recovery

The writeback application is dependent on a number of assets. These include the user interface and any supporting components, the Analysis Services database, the source database(s), and the writeback partition table. Assuming you have the user interface and supporting components covered, I’ll focus on the AS database, source database(s), and writeback partition table, which I’ll refer to collectively as the data assets.

To recover the data assets, you need to implement standard backup procedures. If you limit your cube to a single source database and keep your writeback partition table in that database, then you can back up and recover that database without risking a loss of data consistency. That said, there is still the potential for the source database (including the writeback partition table) to become inconsistent with the data in the Analysis Services database. If you must recover either database (or both), execute a full process on the Analysis Services database following recovery to ensure the two are in sync. As Analysis Services databases associated with writeback applications tend to be relatively small, the full process should not be too burdensome.
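To make this concrete, here is a minimal sketch of the two recovery-related steps, assuming a single source database named BudgetSource (which also holds the writeback partition table), an Analysis Services database named Budget, and a trusted connection through pyodbc. All names, paths, and connection strings are placeholders for your own environment.

import pyodbc

# Placeholder connection string; BACKUP DATABASE cannot run inside a transaction,
# hence autocommit.
SQL_CONN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=master;Trusted_Connection=yes"

# 1. Back up the relational source, which also captures the writeback partition table.
with pyodbc.connect(SQL_CONN, autocommit=True) as conn:
    conn.execute("BACKUP DATABASE [BudgetSource] TO DISK = N'D:\\Backups\\BudgetSource.bak' WITH INIT")

# 2. After any recovery, bring the Analysis Services database back in sync with a
#    full process. This XMLA command can be run from an XMLA query window in SSMS
#    or submitted through any XMLA-capable client.
PROCESS_FULL_DATABASE = """
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Type>ProcessFull</Type>
  <Object>
    <DatabaseID>Budget</DatabaseID>
  </Object>
</Process>
"""
print(PROCESS_FULL_DATABASE)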

Writeback Partition Maintenance

I cannot stress this enough. LEAVE THE WRITEBACK PARTITION TABLE ALONE! If you are concerned about performance, use the MOLAP storage option for the writeback partition as this will improve query performance even when there are large numbers of entries in this table. If there has been an invalid writeback operation, consider correcting it with another writeback operation.
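As a hedged illustration of the “correct it with another writeback operation” advice, the snippet below simply builds the standard MDX UPDATE CUBE statement you would submit over your Analysis Services connection. The cube, dimension, member, and measure names are placeholders.

# Placeholder names throughout; submit over your Analysis Services connection.
CORRECTIVE_WRITEBACK = """
UPDATE CUBE [Budget]
SET ( [Scenario].[Scenario].&[Forecast],
      [Account].[Account].&[4200],
      [Measures].[Amount] ) = 125000
USE_WEIGHTED_ALLOCATION
"""
print(CORRECTIVE_WRITEBACK)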

If you do modify the data in the table – did I mention that I really think you shouldn’t do this? – then be sure to process the associated measure group once you are done to ensure it is in sync with the new data in the table. This applies whether you are employing MOLAP or ROLAP for your writeback partition.
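If you do end up in that situation, a measure-group-level full process is one way to resync. The XMLA below is a sketch with placeholder object IDs that you would run from SSMS or another XMLA-capable client.

# Placeholder object IDs; run after any manual change to the writeback table.
PROCESS_FULL_MEASURE_GROUP = """
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Type>ProcessFull</Type>
  <Object>
    <DatabaseID>Budget</DatabaseID>
    <CubeID>Budget</CubeID>
    <MeasureGroupID>Forecast</MeasureGroupID>
  </Object>
</Process>
"""
print(PROCESS_FULL_MEASURE_GROUP)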

If you are bringing additional data into your data source, such as data flowing in through an ETL process, model your data so that the incoming data is kept separate from the writeback entries. One way to do this is to have a measure group for actuals and another for forecasts, where actuals enter the database through ETL and forecasts enter through writeback. Another approach is to assign incoming data to a special scenario, e.g. an Observed, Actuals, or Historical scenario, and have writebacks take place against other scenarios.

There are many ways to ensure that new data coming into your data source does not invalidate the writeback entries. If you cannot, you may have to assume that once the data is updated (through the ETL), the writeback entries are no longer valid. In that case, you may need to delete entries from the writeback partition table (or truncate the whole table) to reset the data. If you do this, be sure to process the associated measure group in the cube to make sure it is in sync.
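Here is a minimal sketch of that reset, assuming the writeback partition is bound to a table named dbo.WriteTable_Forecast in the BudgetSource database. Confirm the actual table name and connection details for your own deployment before doing anything like this.

import pyodbc

# Placeholder connection string and table name; confirm the name of the table
# your writeback partition is actually bound to before running this.
SQL_CONN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=BudgetSource;Trusted_Connection=yes"

with pyodbc.connect(SQL_CONN, autocommit=True) as conn:
    # Reset every writeback delta; use DELETE with a WHERE clause instead if only
    # a subset of entries needs to be removed.
    conn.execute("TRUNCATE TABLE dbo.WriteTable_Forecast")

# Then reprocess the associated measure group (see the Process sketch above) so
# the cube is back in sync with the emptied table.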

Finally, please be aware that making changes to a measure group with a writeback partition may require the structure of the writeback partition table to change. Once a writeback application is in use, you must carefully consider how any changes to a measure group receiving writebacks will be implemented.

Update Conflicts

When a user writes back to a cube, the changes are held in memory as part of a connection-specific writeback cache. That cache is isolated from other users so that the associated user can perform multiple cycles of What-If analysis before publishing or discarding those changes. Only when those changes are published (committed) are they potentially exposed to other users. So, how does Analysis Services handle the situation in which two users are writing back to overlapping parts of the cube?
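As a rough sketch of that cycle, the snippet below lays out the commands a session would issue: a transaction is begun, one or more MDX UPDATE CUBE statements are run (and held in the session’s writeback cache), and then the changes are either committed (published) or rolled back (discarded). This is schematic only, and all object names are placeholders.

# Schematic only: these would all be issued on the single session doing the writeback.
BEGIN_TRAN = "<BeginTransaction xmlns='urn:schemas-microsoft-com:xml-analysis'/>"
WHAT_IF_EDIT = """
UPDATE CUBE [Budget]
SET ( [Scenario].[Scenario].&[Forecast],
      [Account].[Account].&[4200],
      [Measures].[Amount] ) = 150000
USE_EQUAL_ALLOCATION
"""
PUBLISH = "<CommitTransaction xmlns='urn:schemas-microsoft-com:xml-analysis'/>"    # push changes to the writeback table
DISCARD = "<RollbackTransaction xmlns='urn:schemas-microsoft-com:xml-analysis'/>"  # throw the session's cache away

# Begin the transaction, run any number of what-if edits (visible only to this
# session), then finish with exactly one of PUBLISH or DISCARD.
print(BEGIN_TRAN, WHAT_IF_EDIT, PUBLISH, DISCARD, sep="\n")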

In short, it doesn’t. If two users make changes at the same time, their changes are isolated from one another. If one user publishes his or her changes, there is no signal to the other user that the data he or she is working with is no longer current. If the other user then commits his or her changes, not knowing the first user has made changes, those changes overwrite the first user’s changes and become the current state of the data.

At first this sounds terrible, but let’s think about it a little further. Writeback applications are intended to be expert systems, which assumes a limited number of users with writeback permissions. Next, we can use dimension data and cell data security to control which parts of the cube individual users can write to, and in doing so ensure there are few potential overlapping writes. And finally, if we are building a custom application, we can always open a second connection and validate the state of the data prior to committal. This latter step seems a little extreme to me for most applications, but if you cannot mitigate the problem by limiting the number of writeback users and avoiding overlapping write-enabled portions of the cube, it’s another option.
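For that last option, here is a hedged sketch of the validate-before-commit check. The run_mdx function is a hypothetical helper standing in for whatever client library your custom application uses (ADOMD.NET, XMLA over HTTP, and so on), and all object names are placeholders.

# run_mdx is a hypothetical helper: assume it takes a connection string and an MDX
# query and returns the single cell value; swap in your real client call.
CHECK_MDX = """
SELECT {[Measures].[Amount]} ON COLUMNS
FROM [Budget]
WHERE ( [Scenario].[Scenario].&[Forecast], [Account].[Account].&[4200] )
"""

def safe_to_publish(run_mdx, conn_str, value_seen_at_edit_time):
    """Re-read the target cell on a second connection and only allow the commit
    if nobody has published a change since this user started editing."""
    current_value = run_mdx(conn_str, CHECK_MDX)
    return current_value == value_seen_at_edit_time

# if safe_to_publish(run_mdx, CONN_STR, 125000):
#     commit the writeback session's transaction
# else:
#     warn the user that the underlying data has changed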