Composite Projects and Schema Compare

Several questions have been raised on the forum regarding the use of composite projects.  This post discusses composite projects in general and then describes a technique you can use with Schema Compare to synchronize composite projects with changes that have been made to the database.

Composite Projects 101

SSDT can combine a database project with one or more referenced database projects or dacpacs to describe a single composite database schema.   Using a composite project allows a large database to be broken down into more manageable chunks, allows different people or teams to have responsibility for different parts of the overall schema, and enables reuse of database object definitions in multiple databases.

Scenarios might include a central team defining a set of common types, tables and procedures that are released to other teams as a dacpac for inclusion in other databases.  Or an extended version of a database could be defined that adds extra test procedures and other objects to enable the referenced database to be tested – the extended test project is deployed while you are testing, and the referenced main database project is deployed otherwise.

First, some composition basics.  Schemas are composed by adding Database References to projects or dacpacs you wish to compose into another project.  A referenced project/dacpac can describe a fragment of schema that you want to compose or a complete database definition.

To compose projects and dacpacs into a single database, the references must be defined as 'Same database' references in the Add Database Reference dialog (below).
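
To make this concrete, here is a minimal sketch of what composition looks like at the T-SQL level.  The object names are mine, not from the sample solution, and in an SSDT project each statement would live in its own .sql file:

    -- In the referenced project (for example BaseObjects): an ordinary table definition.
    -- The table and column names here are hypothetical.
    CREATE TABLE [dbo].[ExampleLookup]
    (
        [Id]   INT           NOT NULL PRIMARY KEY,
        [Name] NVARCHAR (50) NOT NULL
    );

    -- In the referencing project (for example MainDatabase), in a separate file.
    -- Because the reference is a 'Same database' reference, the table is used with a
    -- plain two-part name, exactly as if it were defined locally.
    CREATE PROCEDURE [dbo].[GetExampleLookup]
    AS
        SELECT [Id], [Name] FROM [dbo].[ExampleLookup];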

References must be strictly hierarchical with no circular dependencies.  A referenced dacpac can be built from a project or extracted from a database.  The hierarchy of references can be arbitrarily deep, although you need to add references in each project to all its lower-level projects or dacpacs, whether directly or indirectly composed. 

In the sample CompositeDb solution shown below you can see several levels of composition.  First, a BaseObjects project and a peer CommonUtilities project are composed into the MainDatabase project.  Then the TestDatabase project composes MainDatabase and introduces some additional objects for testing purposes.  You can see how TestDatabase references BaseObjects and CommonUtilities as well as the MainDatabase project.

Not visible in the screen shot is that the references to BaseObjects from the TestDatabase and MainDatabase projects are to the dacpac generated from the BaseObjects project.  Why reference the dacpac instead of the project?  Well, if the referenced schema is large and doesn’t change frequently, then referencing the dacpac allows you to exclude the project that builds it from the normal build.  Doing that means the dacpac is not recreated and resolved with every build or clean, which can save a worthwhile amount of time.  And of course, the dacpac could have been provided by another team working from another solution, or could have been extracted from a database, in which case you would likely reference it at some appropriate share.

The project dependencies and build order can also be inspected using the Project Dependencies dialog, accessed from any project in the solution.  In the screenshot below you can see how the dependency on the BaseObjects dacpac does not get translated into a project dependency.  Of course, if you take a dependency on a dacpac then, whenever its source changes, you need to ensure the dacpac is rebuilt or refreshed before rebuilding the projects that reference it.

In each of the projects I defined a discrete SQL schema in which its objects are defined, and used the convention of grouping objects in schema-specific folders – Base in the BaseObjects project, Utilities in CommonUtilities, and Main in the MainDatabase project.  Using schemas like this isn't required, but it will be useful in a moment when we use Schema Compare; it also helps me organize the overall composite project structure, and it would work well if I wanted to give different groups responsibility for different parts of the database definition.
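
As a rough sketch of that convention (the schema names come from the sample solution; the table definition itself is hypothetical), the BaseObjects project contains something like the following, with CommonUtilities and MainDatabase doing the same for the Utilities and Main schemas:

    -- Base.sql in the BaseObjects project: the project owns the schema its objects live in.
    CREATE SCHEMA [Base];

    -- Base\ExampleBaseTable.sql: objects in the Base schema are grouped under the Base folder.
    CREATE TABLE [Base].[ExampleBaseTable]
    (
        [Id]          INT            NOT NULL PRIMARY KEY,
        [Description] NVARCHAR (100) NULL
    );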

Once you have set up the database references in this manner you can refer to objects defined in a referenced project or dacpac as if they were defined locally, and get full Intellisense support.  So in TestDatabase, for example, TestProcedure1 has full visibility of any object in MainDatabase, including objects supplied by BaseObjects or CommonUtilities.
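
As a sketch of what that looks like in practice (only the procedure name comes from the sample solution; the referenced tables and function are hypothetical), TestProcedure1 might read:

    -- TestProcedure1.sql in the TestDatabase project.  The procedure freely references
    -- objects owned by MainDatabase (Main schema), BaseObjects (Base schema) and
    -- CommonUtilities (Utilities schema), all composed into the same database.
    -- The referenced object names below are hypothetical.
    CREATE PROCEDURE [dbo].[TestProcedure1]
    AS
        SELECT b.[Id],
               [Utilities].[FormatDescription](b.[Description]) AS [Description]
        FROM   [Base].[ExampleBaseTable] AS b
        JOIN   [Main].[ExampleMainTable] AS m
            ON m.[BaseId] = b.[Id];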

To deploy a composite project you must set the Include composite objects option on the project you're deploying from. 

This is available as an Advanced option in Publish and on the Debug properties tab, and as a general option in Schema Compare.  Note that in Schema Compare the option only affects publishing to a target database.  Without this option set, the source project will be deployed without the referenced projects or dacpacs.  If you deploy to an existing database and forget to set this option, but have set options to delete objects in the target that are not in the source, you risk deleting important content from the database.  Schema Compare will highlight such delete actions very clearly, but they will be less obvious in Publish and will happen silently in Debug/Deploy.  Be careful!
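
To illustrate the risk, this is the kind of action you would see in the generated script if, for example, the TestDatabase project were published without Include composite objects but with the option to drop objects that are not in the source.  The object names follow the hypothetical examples above:

    -- Every object supplied only by the referenced projects looks missing from the source
    -- model, so the deployment wants to drop it from the target database.
    DROP TABLE [Main].[ExampleMainTable];
    DROP TABLE [Base].[ExampleBaseTable];
    DROP FUNCTION [Utilities].[FormatDescription];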

If you are referencing other projects in the same solution you can disable their automatic deployment on Debug/Deploy (F5/Ctrl+F5) with the Build > Configuration Manager dialog.  In this dialog you can uncheck Deploy for each project that you don’t intend to be deployed as a free-standing database.  Below you can see I have configured my solution to independently deploy the MainDatabase and TestDatabase projects but not the lower-level components.

Using Schema Compare with Composite Projects

Like Publish or Debug/Deploy, Schema Compare can be used to deploy the composition to the target database.  A single resolved model is created from the composition, which is then compared to the database, and the differences are presented in the results grid.  While that is straightforward, it's a little trickier when working in reverse, i.e. if you need to update the projects from the database.  The difficulty is that if Schema Compare targeted a resolved composite model it would be unclear where (i.e. into which project) new objects should be added.  At this time Schema Compare does not resolve composite projects when they are the target, so you have to compare the database to each project in turn.  However, when you compare the database to a project that describes just a part of it, Schema Compare will try to add all the other objects, which is not what you want.  So how do we fix that?

First, let’s take a look at the problem.  The screen shot below shows the Schema Compare results after comparing my TestDatabase with its corresponding TestDatabase project.  In this case I have added a new table and changed a test procedure in the database.  While the procedure change shows up clearly, the table being added (TestTable2) is lost in the noise.  Schema Compare wants to add all the objects defined in my other projects to the TestDatabase project.  If I were to accept the default actions, each of the projects in the composition would end up containing a full definition of the database.
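
The new table itself is trivial; something like the following (the columns are assumed) is what Schema Compare is offering to copy into the TestDatabase project:

    -- TestTable2 as it exists in the database but not yet in the TestDatabase project.
    CREATE TABLE [dbo].[TestTable2]
    (
        [Id]    INT            NOT NULL PRIMARY KEY,
        [Notes] NVARCHAR (200) NULL
    );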

While Schema Compare lets me exclude objects from the update, how do I quickly identify the right objects to exclude, and do this repeatedly and reliably?  Now we see the benefit of organizing the objects in each component into discrete schemas.  If I group the Schema Compare results by schema (below), objects that belong to schemas defined in the other database projects are easily seen.  This quickly highlights the candidate actions to apply to the TestDatabase project (in this case the changed and added objects in the dbo schema).  And with the objects organized into groups like this, it's a simple matter to exclude all members of a group using the context menu option on the group (also shown below).

With the extraneous objects excluded and their schemas collapsed (below) you can concentrate on the changes that apply to the test project and decide which of these you want to take.

Having set up the Schema Comparison like this, you can save the comparison definition as an .scmp file and include it in the target project.  The saved .scmp file includes the source and target schema designation, the options used and, importantly, the list of exclusions.  You will need to do the same for each project in the composition, and remember to update and save these files periodically as objects are added to the projects or dacpacs in the composition.  Double-clicking the .scmp file in each project will now open a pre-configured Schema Compare that can be used whenever you need to bring changes from the database into that project.

Comments

  • Anonymous
    June 26, 2012
    Nice article! I just wrote an article on SSDT Schema Comparison here: dattatreysindol.com/.../ssdt-schema-comparison-in-sql-server-using-sql-server-data-tools

  • Anonymous
    March 10, 2013
    Hmmm... that's a useful tip, but it's a shame that it requires the use of schemas. I'd prefer it if there were the ability to simply include composite projects in the target. Please consider that a feature request.

  • Anonymous
    May 30, 2013
    Is there a way to include composite objects in the .dacpac built by the project? I am unable to deploy my composite project from the .dacpac, as I get missing reference errors.

  • Anonymous
    November 11, 2013
    This all almost works for me, except that the changes from the composite project cannot be included in the update. They are detected, but are all greyed out and won't be included in the update script. Much like what is reported here: social.msdn.microsoft.com/.../composite-project-schema-compare-disables-changes-from-within-the-referenced-dacpac The solution there won't work though. Not sure if this is a SQL Server version issue or something else; very frustrating though, as it makes composite DBs useless at the moment.

  • Anonymous
    January 21, 2014
    Thanks very much for the interesting article. How does it work when I need pre- and post-deployment scripts to be applied (for example, over the lifecycle of the database I will have several releases which will likely need data migration etc.)? To keep these changes consistent across all the environments I would need them to be applied to the base objects. However, it looks as if only the pre/post-deployment scripts for the test/main database would be created/executed.

  • Anonymous
    May 03, 2014
    @Niclas, saying this renders it useless is an overstatement. I found it very much useful and very easy! You can work around the schema compare issue by deploying to a local DB or a DACPAC file and then comparing from there. Sure it's not perfect, but this tool is great and it helps me a lot on my project! Besides, I'm pretty sure MS will make it better in the future. Thanks so much MS for creating this amazing tool!

  • Anonymous
    September 07, 2014
    Not only is SSDT very slow and full of errors, it degrades productivity to almost zero when you have a model anything larger than a simple demo. 100+ tables, procedures, triggers etc. are unmanageable. SSDT lacks tools that help you create and check the DDL that you need to enter. You have to type everything, including all the nitty-gritty details repeated on every object, over and over yourself, and to make things even worse, you have to put all objects into separate source files, one object per file, grouped together into sub-folders per object type instead of logically grouped together per table/subject or whatever logical grouping. This makes entering and maintaining your model a real nightmare, with SSDT doing nothing to aid, if not doing everything to make it harder for the developer... And where it should be helpful when comparing models amongst each other or against a database, it has a terrible user interface that lacks any multi-selection possibilities or useful grouping, so that here again it is a major productivity killer. We used to work with Power Designer, which has a lot of features that do aid in being more productive by generating a lot - if not all - of the code from a model you create by mostly dragging, dropping and naming your objects in a graphical environment. Also PD does help in enforcing general correctness in your model, by automatically performing all sorts of logical checks on it. We decided to go with MS's SSDT because generating the change scripts from the model would be so much faster than it was from PD. How wrong we were: we've lost huge amounts of time (years!) because of all the functionality we now miss in SSDT and the many things we simply cannot do anymore. Going back to PD is however not an option anymore; our model is now locked into source files proprietary to MS's tools only. If you haven't made your mind up on a data modelling toolset yet: steer clear of SSDT!

  • Anonymous
    September 02, 2015
    Bill, do you have a sample project or solution that implements this? Perhaps using the AdventureWorks2012 and AdventureWorksDW2012 databases? That would be really helpful.

  • Anonymous
    November 03, 2015
    Is there any way to turn off the automatic "Analyzing Database Schema: .... " operation that runs when you open a database project?  I am using Visual Studio 2012 shell with SSDT.

  • Anonymous
    December 14, 2015
    Hi, I would like to know if this is still best practice? In cases where you need to add additional columns to a table from a database reference, what's the best way to handle that scenario (other than don't do it)?

  • Anonymous
    June 07, 2017
    This was an awful suggestion when it was made five years ago, and it has not improved with age.