Deep dive on the offline support in the managed client SDK
Last week we announced a new feature in the Azure Mobile Services SDK (managed code only for now): support for offline handling of data. Before, all table operations required an active internet connection; with this new support, the application can store table operations in a local data store and, when connected, push the changes to the mobile service (and also pull changes from the service into the local table). The local data store is defined by an interface, so you can use whatever implementation you want, but we've also released a new NuGet package with a SQLite-based implementation of the store to help you get started quickly.
New table types: IMobileServiceSyncTable and IMobileServiceSyncTable<T>
One decision we made when implementing the offline support is that we wanted developers to be aware of which data would be offline and which would be online. We considered implementing the support transparently, synchronizing data to the server whenever a connection could be established, but any "magic" we implemented would almost always be wrong for some scenario. Instead, we released an alpha version of the SDK with offline capabilities (meaning that when managing the NuGet packages you'll need to select the "Include prerelease" option in the combo box above the package names), and based on the feedback we get from all of you we'll decide where to go next (which could include implementing a full "auto-sync" framework).
In practice, that means that to use the offline feature you'll need to use a different kind of table. There are two new methods in the MobileServiceClient class: GetSyncTable(string tableName) and GetSyncTable&lt;T&gt;(), which return instances of the IMobileServiceSyncTable and IMobileServiceSyncTable&lt;T&gt; interfaces, respectively. They're in most ways very similar to the IMobileServiceTable and IMobileServiceTable&lt;T&gt; interfaces, exposing similar methods. The biggest difference is that for the local (sync) tables there are no overloads of the CRUD operations with additional query string parameters (there's no HTTP request going out for operations on those tables). Other differences include the fact that sync tables only work with entities with string ids (those with integer ids are not supported locally) and the shape of the response for insert and update operations (on regular tables the responses can be any JSON value, including arrays and primitives; for sync tables they need to be objects).
One more thing which I think is important – a few people have asked me why the interface was named sync table, and not local table. The reason is that the interface name tells us something about its behavior: we're not talking about a simple local table, but about an object which can synchronize the state of the remote table (on Azure) with the table in the local store. We'll get into more details on the synchronization later.
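Throughout this post I'll use a TodoItem class. A minimal version compatible with sync tables might look like the sketch below; the JSON property mappings follow the standard quickstart convention and are an assumption on my part, not something defined in this post. Note the string Id, since sync tables don't support integer ids.

```csharp
using Newtonsoft.Json;

// Minimal client-side entity for use with sync tables.
public class TodoItem
{
    // Sync tables only work with string ids (typically a GUID assigned
    // on insert); integer ids are not supported locally.
    public string Id { get; set; }

    [JsonProperty(PropertyName = "text")]
    public string Text { get; set; }

    [JsonProperty(PropertyName = "complete")]
    public bool Complete { get; set; }
}
```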
Let’s get coding. To start, let’s try to insert an item into a sync table to see what happens…
```csharp
var client = new MobileServiceClient(ApplicationUri, ApplicationKey);
var table = client.GetSyncTable<TodoItem>();
AddToDebug("Table: {0}", table.TableName);
var item = new TodoItem { Text = "Buy bread", Complete = false };
await table.InsertAsync(item);
AddToDebug("Inserted into local store: {0}", item.Id);
```
When we run that, it doesn’t work. Instead, we get the following exception:
```
System.InvalidOperationException: SyncContext is not yet initialized.
```
The problem is that the SDK has no idea about where to store the local data. Before using any of the sync operations, we first need to initialize the synchronization context of the client so that those operations can start working.
The synchronization context
In addition to the two new methods to get sync table instances from the client, there's a new property, SyncContext, which needs to be initialized with an instance of the actual local store where the data will be saved. Before you can use any of the local operations, the sync context in the client needs to be initialized with an IMobileServiceLocalStore object. That means that you can define whatever mechanism you want to store the local data, but the large majority of developers don't need to go into that level of detail, so we've also released an implementation of that interface based on a SQLite database. To access that store implementation (in the class MobileServiceSQLiteStore) you'll need a new NuGet package, Azure Mobile Services SQLiteStore. On to fixing the code above. When we instantiate the store, we need to define the tables which will be used to store data locally. There are two ways to do that: you can either pass a JObject instance containing the properties which will be stored to the DefineTable method, or, even easier, you can use the DefineTable&lt;T&gt; method, where you just pass the type as the generic parameter and the SDK will deal with it.
```csharp
var client = new MobileServiceClient(ApplicationUri, ApplicationKey);
var store = new MobileServiceSQLiteStore(StoreFileName);
store.DefineTable<TodoItem>();
AddToDebug("Defined table in the store");
await client.SyncContext.InitializeAsync(store);
AddToDebug("Initialized the sync context");
var table = client.GetSyncTable<TodoItem>();
AddToDebug("Table: {0}", table.TableName);
var item = new TodoItem { Text = "Buy bread", Complete = false };
await table.InsertAsync(item);
AddToDebug("Inserted into local store: {0}", item.Id);
```
Now, if you run the code above it will work – a TodoItem table will be created in the local store and an item will be added to it.
The synchronization context is mainly responsible for, well, synchronizing the data between the local database (represented by the local store) and the remote database (accessed via the Azure Mobile Service). This synchronization is done via an explicit push / pull mechanism which must be invoked by the developer – at this point we don’t have any “auto-sync” framework which will handle those calls automatically, but this feature may be implemented in a future version of the SDK.
So back to the synchronization context. Once we start calling operations on the local tables, those operations start getting queued up by the sync context. Those operations become "pending" and are persisted locally, so even if the application is closed and reopened, the list of pending operations is retained. You can check the number of pending operations by looking at the PendingOperations property in the synchronization context. As more operations are executed on the local tables, the queue will grow until there is a synchronization event (which we'll talk about more in the next section). Let's look at that property a little closer by expanding the example above and performing additional operations on the local table.
```csharp
var client = new MobileServiceClient(ApplicationUri, ApplicationKey);
var store = new MobileServiceSQLiteStore(StoreFileName);
store.DefineTable<TodoItem>();
AddToDebug("Defined table in the store");
await client.SyncContext.InitializeAsync(store);
AddToDebug("Initialized the sync context");
AddToDebug("Pending operations in the sync context queue: {0}", client.SyncContext.PendingOperations);
var table = client.GetSyncTable<TodoItem>();
AddToDebug("Table: {0}", table.TableName);
var item = new TodoItem { Text = "Buy bread", Complete = false };
await table.InsertAsync(item);
AddToDebug("Inserted into local store: {0}", item.Id);
AddToDebug("Pending operations in the sync context queue: {0}", client.SyncContext.PendingOperations);
item = new TodoItem { Text = "Buy milk", Complete = false };
await table.InsertAsync(item);
AddToDebug("Inserted another item into the local store: {0}", item.Id);
AddToDebug("Pending operations in the sync context queue: {0}", client.SyncContext.PendingOperations);
var thingsToDo = await table.Where(t => !t.Complete).Select(t => t.Text).ToListAsync();
AddToDebug("Things to do {0}", string.Join(", ", thingsToDo));
AddToDebug("Pending operations in the sync context queue: {0}", client.SyncContext.PendingOperations);
item.Complete = true;
await table.UpdateAsync(item);
AddToDebug("Updated item: {0}", item.Id);
AddToDebug("Pending operations in the sync context queue: {0}", client.SyncContext.PendingOperations);
```
As I mentioned before, the sync tables have basically the same API as "regular" (remote) tables, so queries, updates and inserts (as well as deletes, not shown above) look just like they do on regular tables. When we run the code above (assuming that the local store was empty) we'll get an output similar to the one shown below. It looks as expected – when we insert the first item the pending operation count goes to one; when we insert another item the count is incremented once more; when we read from the local table the count is not incremented – read operations are not synchronized. But there's one interesting thing which happens when we update one of the items we had just inserted: the number of operations in the queue does not change. What the current implementation of the synchronization context does is "merge" pending operations for the same item, so that during synchronization only one operation (in this case, an insert carrying the most recent value of the item) is sent to the server.
```
Defined table in the store
Initialized the sync context
Pending operations in the sync context queue: 0
Table: TodoItem
Inserted into local store: 9e61196d-55df-4869-8b30-4a4a6eb792f2
Pending operations in the sync context queue: 1
Inserted another item into the local store: c349b30d-d603-48db-8509-b2fd170f4499
Pending operations in the sync context queue: 2
Things to do Buy bread, Buy milk
Pending operations in the sync context queue: 2
Updated item: c349b30d-d603-48db-8509-b2fd170f4499
Pending operations in the sync context queue: 2
```
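The merging should also apply to other operation pairs. For example, if an item is inserted and then deleted before ever being pushed, the server has never seen that entity, so the two pending operations can cancel out. This is my reading of the collapsing behavior described above, not a documented contract, so the sketch below prints the pending count rather than asserting a particular value:

```csharp
var item = new TodoItem { Text = "Buy eggs", Complete = false };
await table.InsertAsync(item);
AddToDebug("Pending after insert: {0}", client.SyncContext.PendingOperations);

// The item was never pushed, so the pending insert and this delete
// refer to an entity the server has never seen and can be collapsed.
await table.DeleteAsync(item);
AddToDebug("Pending after delete: {0}", client.SyncContext.PendingOperations);
```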
I talked about synchronization operations without introducing them. Let’s look at them now.
Push / pull / purge
There are three basic operations which can trigger a synchronization. The simplest of all is the PushAsync method in the synchronization context. Once that method is called, the changes which were performed on the local tables are sent over to the server. In the example below, there will be one more item on the server once the call to PushAsync is completed (or maybe more, if there were other insert operations pending in the synchronization queue).
```csharp
var localTable = client.GetSyncTable<TodoItem>();
var remoteTable = client.GetTable<TodoItem>();
var remoteItems = await remoteTable
    .Select(i => i.Text)
    .ToListAsync();
AddToDebug("Items from the server: {0}", string.Join(", ", remoteItems));
var item = new TodoItem { Text = "Buy bread", Complete = false };
await localTable.InsertAsync(item);
AddToDebug("Inserted into local store: {0}", item.Id);
await client.SyncContext.PushAsync();
AddToDebug("Pushed the local changes to the server");
remoteItems = await remoteTable
    .Select(i => i.Text)
    .ToListAsync();
AddToDebug("Items from the server: {0}", string.Join(", ", remoteItems));
```
Push is executed on the whole context, not on specific tables. It's implemented this way to support relationships between entities on the client side. For example, if you have an "Order" and an "OrderItem" table, you can insert an item into the first table and, with the id of that entity, insert the child items with the appropriate foreign key. When the operations are sent to the server, they will be sent in order, so any FK relationships in the database will be satisfied.
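A sketch of that Order/OrderItem scenario; the two classes and their tables are hypothetical, invented here for illustration:

```csharp
public class Order
{
    public string Id { get; set; }
    public string Customer { get; set; }
}

public class OrderItem
{
    public string Id { get; set; }
    public string OrderId { get; set; }   // foreign key to Order on the server
    public string Product { get; set; }
}

// Assumes the store was initialized with both tables defined
// (store.DefineTable<Order>() and store.DefineTable<OrderItem>()).
var orderTable = client.GetSyncTable<Order>();
var orderItemTable = client.GetSyncTable<OrderItem>();

var order = new Order { Customer = "Contoso" };
await orderTable.InsertAsync(order);   // the local insert assigns the id

await orderItemTable.InsertAsync(new OrderItem { OrderId = order.Id, Product = "Widget" });
await orderItemTable.InsertAsync(new OrderItem { OrderId = order.Id, Product = "Gadget" });

// Push works on the whole context, so the Order insert is sent before
// the OrderItem inserts and the FK constraint on the server is satisfied.
await client.SyncContext.PushAsync();
```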
The other operation which triggers a synchronization is a call to PullAsync on the local table. That call can either pull all items from the remote table, or just a subset of them. Pulling only some of the items is often advisable, as stuffing everything from the (potentially large) remote database table into the (memory-constrained) local table can hurt performance badly. You can pass an OData-formatted query to select which items to pull from the server, or you can use the (friendlier) Linq expressions to build the query of items to be pulled.
```csharp
var localTable = client.GetSyncTable<TodoItem>();
var query = localTable.Where(t => !t.Complete);
await localTable.PullAsync(query);
var localItems = await localTable
    .Select(i => i.Text)
    .ToListAsync();
AddToDebug("Items from the server (in the local table): {0}", string.Join(", ", localItems));
```
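For reference, the same pull expressed with the OData query string mentioned above instead of a Linq expression. I'm assuming here that the PullAsync overload accepts a raw OData filter string; check the alpha package for the exact signature:

```csharp
var localTable = client.GetSyncTable<TodoItem>();
// Pull only the items which are not yet complete, using an OData filter
// (assumed equivalent to the Where(t => !t.Complete) Linq query).
await localTable.PullAsync("$filter=(complete eq false)");
```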
One important thing to notice regarding pull operations – if there are items in the pending synchronization queue, those items are first pushed over to the server, and only then does the pull operation take place. That prevents a scenario where an update is made to a local item but a pull operation overwrites the changes locally, potentially leaving the data in an inconsistent state. That's the first synchronization rule: a pull triggers a push. In the example below, the insert operation for the "Buy milk" item will first be pushed to the server, then the items will be pulled into the local table.
```csharp
await client.SyncContext.InitializeAsync(store);
AddToDebug("Initialized the sync context");
var localTable = client.GetSyncTable<TodoItem>();
var item = new TodoItem { Text = "Buy milk", Complete = false };
await localTable.InsertAsync(item);
var query = localTable.Where(t => !t.Complete);
await localTable.PullAsync(query);
var localItems = await localTable
    .Select(i => i.Text)
    .ToListAsync();
AddToDebug("Items from the server (in the local table): {0}", string.Join(", ", localItems));
```
Another operation which triggers a synchronization event is a call to PurgeAsync on the local table. Often we want to clear from the local cache data which the application doesn't need anymore. For example, in the canonical TODO app we only display the items which are not complete, so there's no need to store locally any items which have already been completed. A call to purge such items can be made as shown below.
```csharp
var localTable = client.GetSyncTable<TodoItem>();
await localTable.PurgeAsync(localTable.Where(t => t.Complete));
var query = localTable.Where(t => !t.Complete);
await localTable.PullAsync(query);
var localItems = await localTable
    .Select(i => i.Text)
    .ToListAsync();
AddToDebug("Items from the server (in the local table): {0}", string.Join(", ", localItems));
```
Notice that, just like in the pull case, a call to purge will first send any pending operations to the server (the second synchronization rule: a purge also triggers a push). This way, if we had marked an item as complete locally, we make sure that this information is on the server before we remove the item from the local store.
Handling conflict errors
Until now we've looked at synchronization scenarios where everything works fine. There are cases, however, where errors happen. If multiple sources change a single entity (such as a row in the database), you may get conflicts when a second update is attempted, since the version of the item will have changed (for more information see this document on the optimistic concurrency implementation on the server). In that case a push operation will fail. Take the code below: the item is updated in the remote table, and the same item is then updated locally; when we try to push the local update, the versions of the items will not match, so the push operation will fail and a MobileServicePushFailedException will be thrown. The exception has a list of all the errors which happened for the individual elements in the synchronization queue.
```csharp
var localTable = client.GetSyncTable<TodoItem>();
var remoteTable = client.GetTable<TodoItem>();
await localTable.PullAsync();
var firstItem = (await localTable.Take(1).ToEnumerableAsync()).FirstOrDefault();
var firstItemCopy = new TodoItem
{
    Id = firstItem.Id,
    Version = firstItem.Version,
    Text = firstItem.Text,
    Complete = firstItem.Complete
};
firstItemCopy.Text = "Modified";
await remoteTable.UpdateAsync(firstItemCopy);
AddToDebug("Updated the item on the server");
firstItem.Text = "Modified locally";
await localTable.UpdateAsync(firstItem);
AddToDebug("Updated the same item in the local table");
AddToDebug("Number of pending operations: {0}", client.SyncContext.PendingOperations);
await client.SyncContext.PushAsync();
```
There are scenarios where you want to catch and deal with synchronization conflicts in the client. You can control all the synchronization operations by implementing the IMobileServiceSyncHandler interface and passing an instance of it when initializing the context. For example, this is an implementation of a sync handler which traces all the operations which are happening.
```csharp
class MySyncHandler : IMobileServiceSyncHandler
{
    MainPage page;

    public MySyncHandler(MainPage page)
    {
        this.page = page;
    }

    public Task<JObject> ExecuteTableOperationAsync(IMobileServiceTableOperation operation)
    {
        page.AddToDebug("Executing operation '{0}' for table '{1}'", operation.Kind, operation.Table.Name);
        return operation.ExecuteAsync();
    }

    public Task OnPushCompleteAsync(MobileServicePushCompletionResult result)
    {
        page.AddToDebug("Push result: {0}", result.Status);
        foreach (var error in result.Errors)
        {
            page.AddToDebug("    Push error: {0}", error.Status);
        }

        return Task.FromResult(0);
    }
}
```
And we can use this synchronization handler by passing it to the overload of InitializeAsync in the sync context, as shown below:
```csharp
var store = new MobileServiceSQLiteStore(StoreFileName);
store.DefineTable<TodoItem>();
AddToDebug("Defined table in the store");
var syncHandler = new MySyncHandler(this);
await client.SyncContext.InitializeAsync(store, syncHandler);
AddToDebug("Initialized the sync context");
```
This handler implementation doesn't do much, but we can also catch the exception which is thrown by the client when the server returns a Precondition Failed (HTTP status code 412) and retry the call after updating the version on the client.
```csharp
class MySyncHandler : IMobileServiceSyncHandler
{
    MainPage page;
    IMobileServiceClient client;

    public MySyncHandler(IMobileServiceClient client, MainPage page)
    {
        this.client = client;
        this.page = page;
    }

    public async Task<JObject> ExecuteTableOperationAsync(IMobileServiceTableOperation operation)
    {
        JObject result = null;
        MobileServicePreconditionFailedException conflictError = null;
        do
        {
            conflictError = null; // reset so a successful retry exits the loop
            try
            {
                result = await operation.ExecuteAsync();
            }
            catch (MobileServicePreconditionFailedException e)
            {
                conflictError = e;
            }

            if (conflictError != null)
            {
                // There was a conflict on the server. Let's "fix" it by
                // forcing the client entity to win.
                JObject serverItem = conflictError.Value;

                // In most cases the server will return the server item in the
                // response body when a Precondition Failed is returned, but
                // that's not guaranteed for all backend types.
                if (serverItem == null)
                {
                    serverItem = (JObject)(await operation.Table.LookupAsync((string)operation.Item[MobileServiceSystemColumns.Id]));
                }

                // Now update the local item with the server version
                operation.Item[MobileServiceSystemColumns.Version] = serverItem[MobileServiceSystemColumns.Version];
            }
        } while (conflictError != null);

        return result;
    }

    public Task OnPushCompleteAsync(MobileServicePushCompletionResult result)
    {
        return Task.FromResult(0);
    }
}
```
And this is how we can resolve conflicts on the client. This sample shows another conflict handling policy (letting the user choose which version to keep), but the structure is similar to the one above. A final note about resolving synchronization conflicts: to use the optimistic concurrency feature (which prevents two clients from overwriting each other's modifications to the same row), you'll need to define a version column in the class used in the client.
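For reference, this is roughly what the client class looks like with that version column. The mapping to the __version system column follows the convention used by the service; the exact attribute usage here is an assumption based on the quickstart, not code from this post:

```csharp
using Newtonsoft.Json;

public class TodoItem
{
    public string Id { get; set; }

    [JsonProperty(PropertyName = "text")]
    public string Text { get; set; }

    [JsonProperty(PropertyName = "complete")]
    public bool Complete { get; set; }

    // Maps to the __version system column. Without it the server has no
    // way to detect conflicting updates, and no 412 responses are returned.
    [JsonProperty(PropertyName = "__version")]
    public string Version { get; set; }
}
```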
Advanced features
Here's some additional information about this release which I think is interesting. Unlike remote tables, local tables can store arbitrary types, including complex ones (for example, a "Person" class can have an "Address" property); when stored in the local table, the complex property will be stored as a JSON-serialized version of its value. You won't be able to query on those types (for example, list all people whose "Address.City" property is "Springfield"), but they can be stored and retrieved without any extra code.
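For example (a hypothetical pair of classes, not from the SDK):

```csharp
public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}

public class Person
{
    public string Id { get; set; }
    public string Name { get; set; }

    // Stored in the local table as the JSON-serialized form of the value.
    // It round-trips fine, but you can't query on sub-properties
    // such as Address.City.
    public Address Address { get; set; }
}
```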
When writing a sync handler (like the one used in the previous section to resolve conflicts) you can also abort the whole push operation if you hit an error after which you don't want to continue. If that's the case, you can call the AbortPush method on the IMobileServiceTableOperation instance passed to the ExecuteTableOperationAsync method.
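A sketch of a handler method that gives up on the first server-side error and stops the whole push; the "give up on any error" policy below is a placeholder, and the exception types you want to treat as fatal depend on your backend:

```csharp
public async Task<JObject> ExecuteTableOperationAsync(IMobileServiceTableOperation operation)
{
    try
    {
        return await operation.ExecuteAsync();
    }
    catch (MobileServiceInvalidOperationException)
    {
        // Placeholder policy: abort the remaining queued operations on any
        // server-side error, then rethrow so this operation is recorded as
        // failed in the push result.
        operation.AbortPush();
        throw;
    }
}
```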
One more thing: the implementation of the local store uses the SQLite database, which is an x86/ARM-only binary. The project cannot be configured as "AnyCPU"; it needs to target a specific architecture.
Wrapping up
In this release we introduced offline capabilities in the .NET SDK for Mobile Services. We released it as an alpha NuGet package so you can try it out and give us feedback on what works, what doesn't, and what we can improve. Please let us know in the comments, in our forum, or via Twitter @AzureMobile.
Comments
- Anonymous, April 08, 2014: Is there anything I need to set up or configure on SQL Azure or on the mobile service in order for offline data to work?
- Anonymous, April 09, 2014: This looks great. I ended up rolling my own offline capabilities when I started my Win 8 app in 2012 using SQLite and Tim Heuer's SQLite-NET library, but it will be nice to have a more streamlined (and standardized) approach like this for newer projects.
- Anonymous, April 10, 2014: Michael, no, the tables you create in the mobile service should work just fine. The offline data story requires tables with string ids (i.e., "old" tables with integer ids will not work), but that's the default in the portal.
- Anonymous, April 10, 2014: Thank you Carlos. Just one more question: is disconnected data available on Windows Phone too?
- Anonymous, April 11, 2014: Yes, it also works for Windows Phone. You need to have SQLite for Windows Phone installed (sqlite.org/download.html) to get it to work, though.
- Anonymous, April 17, 2014: Thank you Carlos. I am using WAMS as the backend for Sencha ExtJS applications. Will it be available for JavaScript in the next weeks? What do you suggest for offline data in a Sencha application? Thanks for all.
- Anonymous, May 03, 2014: What about scenarios where you have a single Azure mobile service serving a large number of mobile app users, each with separate client data? I am building an app with SQLite locally and Azure Mobile Services remotely, with user authentication and server scripts that limit data to a user's own data. Will this very useful offline feature work in this scenario, and if so, are there any special considerations?
- Anonymous, May 06, 2014: Excellent posting - thanks. I am trying to get this to work with Windows Phone 8.1 but I cannot find a version of SQLitePCL which will install for a WindowsPhoneApp 8.1 project. Do you know if this is available anywhere yet?
- Anonymous, May 22, 2014: Hi Dean, I'm about to build an app with exactly the same scenario: one AMS, a .NET backend service, authenticated users, UserId stored in server tables. I want to synchronize only the user's elements into SQLite for offline use, but also be able to access the full table in an online context. To do that, on the server side, I was thinking about creating one table like CustomElements which holds all the data and another like MyCustomElements which is always empty. Then I plan to modify the MyCustomElementsController so that it will get user-filtered data from the CustomElements table. On the client side, I'll create only the MyCustomElements table, which synchronizes with the same table on the server. Well, I haven't tried it yet, but I think it should work. I don't know if there's another solution, like creating a custom (non-table) controller and synchronizing a local app table with it... Does anyone know how to deal with this real-world synchronization scenario?
- Anonymous, June 15, 2014: Hi Carlos, thank you for the detailed explanation of the new features. Looking forward to the final release. Are there any plans for supporting SterlingDB? If so, an estimated date?
- Anonymous, March 30, 2015: Is there a way to know how much data we're downloading, in order to show progress bar info?