Workflow Targeting and Classes

I set out this week trying to put together a post about designing and deploying rules and monitors that utilize the SDK data sources I talked about in the last post. Unfortunately, it's not ready yet. Among other things this week, I have been trying to write and deploy a sample management pack that demonstrates the various techniques we recommend for inserting operational data into SCOM via the SDK, but in the process I have run into a few issues that need resolving before presenting the information. I am definitely working on it and I'll get something up as soon as we work through the issues I have run into. If you need something working immediately, please contact me directly with your questions so I can better address your specific scenario.

In the meantime, I wanted to discuss classes in SCOM and how they relate to workflow targeting (take a workflow to mean a rule, monitor, or task in SCOM 2007). I think this topic is a good stepping stone for understanding the techniques I'll talk about when the aforementioned post is ready.

First, what do I mean by targeting? In MOM 2005, rules were deployed based on the rule groups they were in and those groups' associations with computer groups. Rules would be deployed irrespective of whether they were actually needed on a particular computer. The targeting mechanism in 2007 is much different and is based entirely around the class system that describes the object space. Each workflow is assigned a specific target class, and an agent receives a workflow only when it is managing objects of that particular class on its machine.

Ok, so what does that all mean? Let's start with a sample class hierarchy. First, we have a base class of all classes, System.Entity (this is the actual base class for all classes in SCOM 2007). This class is abstract, meaning that there cannot be an instance of just System.Entity. Next, suppose we have a class called Microsoft.SqlServer (note this is not the actual class hierarchy we will ship; it is only for illustrative purposes). This class is not abstract and defines all the key properties that identify a SQL Server. Key properties are the properties that uniquely identify an instance in an enterprise; for a SQL Server this would be a combination of the server name and the name of the computer the server is on. Next, there is a class Microsoft.SqlServer.2005 which derives from Microsoft.SqlServer, adding properties specific to SQL Server 2005, but it adds no key properties (and in fact cannot add any). This means that a SQL Server 2005 in your enterprise would be both a Microsoft.SqlServer AND a Microsoft.SqlServer.2005, and the object that represents it would be indistinguishable from an identity perspective (i.e. it's the same SQL Server). Lastly, SQL Servers can't exist by themselves, so we add a System.Computer class to the mix that derives from System.Entity. We now have all the classes defined that we need to talk about our first workflow, discovery.
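
To make this concrete, here is a rough sketch of what these class definitions could look like in management pack XML. The names follow the illustrative hierarchy above (not an actual shipped management pack), the System! prefix stands for a reference alias to the System library declared in the MP manifest, and the hosting relationship that ties a SQL Server to its computer (which is what brings the computer name into the SQL Server's identity) would be declared separately.

    <ClassTypes>
      <!-- Non-abstract class defining the key property that identifies a SQL Server -->
      <ClassType ID="Microsoft.SqlServer" Accessibility="Public" Abstract="false"
                 Base="System!System.Entity" Hosted="true" Singleton="false">
        <Property ID="ServerName" Type="string" Key="true" />
      </ClassType>
      <!-- Derived class: adds 2005-specific properties, but no new key properties -->
      <ClassType ID="Microsoft.SqlServer.2005" Accessibility="Public" Abstract="false"
                 Base="Microsoft.SqlServer" Hosted="true" Singleton="false">
        <Property ID="Edition" Type="string" Key="false" />
      </ClassType>
    </ClassTypes>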

Let's assume we already have a computer discovered in our enterprise, Computer1. In order to discover a SQL Server, we need two things:

  1. We need to define a discovery rule that can discover a SQL Server.
  2. We need to deploy and run that rule.

In order to make our rule deploy and run, we'll need to target it at a type that gets discovered before SQL Server, in our case System.Computer. If we targeted a discovery rule at the very type it discovers, we'd have a classic chicken-and-egg problem on our hands. When we target our discovery rule at System.Computer, the configuration service knows that there is a Computer1 in the enterprise that is running and being managed by an agent, so it will deploy any workflow targeted at System.Computer, including our discovery rule, to that machine and in turn execute the rule. Once the rule executes it will submit new discovery data to our system and the SQL Server will appear; let's call the server Sql1. Our SQL Server 2005 discovery rule can be targeted at System.Computer, or we could actually target it at Microsoft.SqlServer, since that class will already be discovered by the aforementioned rule. This illustrates a waterfall approach to discovery, which is the recommended way discovery is done in the system. There needs to be a "seed" discovered object that is leveraged for further discovery, which can in turn generate subsequent "seeds" for related objects. In SCOM 2007 we jump-start the system by pre-discovering the primary management server (also a computer) and allowing manual computer discovery and agent deployment, which jump-starts discovery on those machines.
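
As a sketch, a discovery rule targeted at the computer class could look roughly like the fragment below. The names again follow the illustrative hierarchy, and the data source configuration (here assumed to be the Windows registry discovery provider, though a script-based probe works just as well) is elided.

    <Discovery ID="Microsoft.SqlServer.Discovery" Enabled="true" Target="System.Computer">
      <Category>Discovery</Category>
      <DiscoveryTypes>
        <!-- Declares which class this workflow is able to discover -->
        <DiscoveryClass TypeID="Microsoft.SqlServer" />
      </DiscoveryTypes>
      <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.RegistryDiscoveryProvider">
        <!-- registry keys to probe and the class/property mappings elided -->
      </DataSource>
    </Discovery>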

This example also illustrates the workflow targeting and deployment mechanism in SCOM 2007. When objects are discovered in an enterprise, they are all discovered and identified as instances of particular classes. In the previous example, Computer1 is both a System.Entity and a System.Computer. Sql1 is a System.Entity, Microsoft.SqlServer and Microsoft.SqlServer.2005. We maintain this state in the configuration service and deploy workflows to agents that are managing these instances, based on the types of instances they are managing. This ensures that workflows get deployed and executed on the agents that need them, with no need to manage targeting.

Another example of targeting would be with a task. Let's say we have a task that restarts SQL Server; this task can run against any Microsoft.SqlServer. Maybe we have another task that disables a specific SQL 2005 feature, which only makes sense to run against objects that are in fact SQL Server 2005 instances. The first task would be targeted at Microsoft.SqlServer and the second at Microsoft.SqlServer.2005. If you select a SQL Server object in the SCOM 2007 UI that is not a SQL Server 2005 but instead a SQL Server 2000, the 2005-specific task will not be available. If you try to run it anyway via the SDK or the command shell, it will fail because the instance you are trying to run it against isn't the right class and doesn't understand the task. The first task, however, restarting the service, will run against both the SQL 2000 and 2005 instances and be available for both in the UI.
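
In management pack XML the two tasks might be declared roughly as follows. This is only a sketch; the IDs are made up and the command configuration inside the System.CommandExecuter write actions is elided.

    <Task ID="Microsoft.SqlServer.Restart.Task" Accessibility="Public" Enabled="true"
          Target="Microsoft.SqlServer" Timeout="300" Remotable="true">
      <Category>Maintenance</Category>
      <WriteAction ID="WA" TypeID="System!System.CommandExecuter">
        <!-- command line that restarts the SQL Server service elided -->
      </WriteAction>
    </Task>

    <Task ID="Microsoft.SqlServer.2005.DisableFeature.Task" Accessibility="Public" Enabled="true"
          Target="Microsoft.SqlServer.2005" Timeout="300" Remotable="true">
      <Category>Maintenance</Category>
      <WriteAction ID="WA" TypeID="System!System.CommandExecuter">
        <!-- 2005-specific command line elided -->
      </WriteAction>
    </Task>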

I hope this helps make a bit more sense out of the new class system and leveraging it for targeting. Like with anything else, there are edge cases, more complicated hierarchies and other considerations when designing and targeting workflows, but from a "pure" modeling perspective, this should give you an idea as to how things work.

Comments

  • Anonymous
    October 11, 2006
    Hi Jakub, thanks for this post, it really helped me understand more of SCOM. I would like to ask a bit more about the target classes. Is it possible to define custom target classes? And how do I create them? (From the authoring console, or do I need to write an XML MP?) As I understand it, the operator's console shows the icon of each entity based on its class; for example, Microsoft.SQLServer will show a DB icon in the console. Now, is it possible to have my custom icons attached to custom-defined classes? Say, for example, I have a Demo.AppX class for which I would like to show an X icon. How do I define that? Thanks, regards, Suresh

  • Anonymous
    October 12, 2006
    Yes, declaring custom classes is very much possible. You can do this via the authoring console or by creating your own MP using the Xml editor of your choice. I have heard that the authoring console experience in Beta 2 is not ideal, but it is being worked on. For the time being, it may be most beneficial to simply create an Xml management pack directly. In terms of the images, two things need to happen in your management pack to get the desired behavior. First, you have to define the image in the management pack, then define an image reference pointing that image to a particular class. If you export the Microsoft.Windows.Image.Library management pack, it should give you an idea of what this looks like. You need the following. In the PresentationTypes section, you need an Images section, where you place your image:

      <Image ID="WindowsOS16" Category="u16x16Icon" Accessibility="Public">
        <ImageData>binary image data</ImageData>
      </Image>

    Then in the Presentation section, you need a sub-section called ImageReferences where you will define an image reference mapping your image to whichever class you want:

      <ImageReference ElementID="Windows!Microsoft.Windows.OperatingSystem" ImageID="WindowsOS16" />

  • Anonymous
    October 12, 2006
    Thanks Jakub, it is really a great feature to have custom images. But that XML editing looks a bit tedious; you have to get the binary data of the images and paste it in from the XML editor. Is there any suggestion for an easier way to get this done? Can Visual Studio or some other tool help me do RAD of MPs (XML)?

  • Anonymous
    October 13, 2006
    I believe the tool we provided with MOM 2005 will produce the same xml format for images that is required for 2007. I am trying to track down the tool we use and see if I can post it here, although it might not be ready yet. I would fully expect this to be part of the authoring console experience when that is complete.

  • Anonymous
    October 16, 2006
    Looks like the tool should be available to download along with the RC1 bits when they are available. It will be part of the authoring console experience.

  • Anonymous
    October 16, 2006
    Thanks for the update Jakub. I'm eagerly waiting for the RC1, to try this out :-D

  • Anonymous
    April 19, 2007
    Hi, I have a question. I'm looking for a way to discover and dynamically create objects. I know I need to add some classes in the XML file; then do I need to use the SDK to realize it? And it seems that I need to add a special type of rule to a common object to discover it? Thanks in advance.

  • Anonymous
    April 19, 2007
    You should try to write a discovery rule to discover your object. One common way that people discover objects is using scripts. If you can tell me more about what you are discovering, I could probably help out more.

  • Anonymous
    April 19, 2007
    Thanks for your reply. For example, I have a CPU class for a computer and I know the actual number of CPUs on the machine; I then want to create that many CPU objects during discovery. I can't use scripts because I haven't found any help on them so far, is that right?

  • Anonymous
    April 23, 2007
    Sorry for the delay. Take a look at www.authormps.com. There aren't samples yet, but there should be shortly. A coworker of mine runs that site.
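
    In the meantime, here is a rough sketch of what a script-based discovery for something like a CPU class can look like, assuming the discovery is targeted at the Windows computer class. The Demo.Cpu class, its DeviceId property, and the hardcoded instance count are made up for illustration; a real script would query WMI or similar.

      <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.TimedScript.DiscoveryProvider">
        <IntervalSeconds>3600</IntervalSeconds>
        <SyncTime />
        <ScriptName>DiscoverCpus.vbs</ScriptName>
        <Arguments>$MPElement$ $Target/Id$ $Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Arguments>
        <ScriptBody><![CDATA[
          Dim oAPI, oDiscovery, oInst, i
          Set oAPI = CreateObject("MOM.ScriptAPI")
          ' The first two arguments identify the discovery and the target instance
          Set oDiscovery = oAPI.CreateDiscoveryData(0, WScript.Arguments(0), WScript.Arguments(1))
          For i = 0 To 1  ' pretend we found two CPUs
            Set oInst = oDiscovery.CreateClassInstance("$MPElement[Name='Demo.Cpu']$")
            ' Host computer key property so the new instance gets hosted correctly
            Call oInst.AddProperty("$MPElement[Name='Windows!Microsoft.Windows.Computer']/PrincipalName$", WScript.Arguments(2))
            Call oInst.AddProperty("$MPElement[Name='Demo.Cpu']/DeviceId$", "CPU" & i)
            Call oDiscovery.AddInstance(oInst)
          Next
          Call oAPI.Return(oDiscovery)
        ]]></ScriptBody>
        <TimeoutSeconds>300</TimeoutSeconds>
      </DataSource>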

  • Anonymous
    April 23, 2007
    OK, thanks :)

  • Anonymous
    July 23, 2007
    Hello! I'm working on a management pack and trying to insert discovered classes. The classes form a tree hosting hierarchy (the root hosts child1, child1 hosts child2, etc.), but unfortunately I'm running into issues with it. In the console I see only the elements of the first and second levels. The elements are discovered using a script, and the script returns correct data (I tried a simple example with only three hardcoded objects as well). It seems like I'm missing something. Should I perhaps use groups instead? Or should I use other relationships in this case? Any help is appreciated.

  • Anonymous
    July 23, 2007
    Or maybe it was something with the email server... Anyway, just a couple of things about the management pack:

  1. The root class has Microsoft.Windows.LocalApplication as a base class.
  2. All child classes have Microsoft.Windows.ApplicationComponent as a base class.
  3. I use hosting relationships, so no relationship objects are created in the script.
  4. All classes except the root class have key properties.

  Am I on the right track? Thanks in advance.

  • Anonymous
    July 24, 2007
    Try jakubo@microsoft.com. What you wrote sounds fine.

  • Anonymous
    July 24, 2007
    I sent you an email with the test management pack file attached.

  • Anonymous
    October 31, 2008
    How can I understand what happens behind the scenes for any rule/alert/discovery? In MOM you can pick a rule, look at the provider, and check the criteria and responses to know what happens in the background, such as what event ID is looked for in what log, or what script is run at what interval, etc. How can we look at what is occurring in the background for a sealed MP?

  • Anonymous
    October 31, 2008
    You can do the same in SCOM. You can either go to the UI and find the rule/monitor/discovery you are interested in, or you can export the management pack that contains it and look at the XML.

  • Anonymous
    December 18, 2008
    Hi Jakub, hopefully you can advise? I produced a MOM 2005 management pack that contained groups of performance metrics for Windows 2000, XP and 2003. Each group was set to a different sample period, i.e. 5 mins, 10 mins, etc. I then created computer groups and linked them to the relevant rule group, so I could move a computer into the Windows2000_5mins group and it would deploy the performance rules at that sample period. I've been trying to replicate this approach in SCOM 2007 and have got as far as creating my management pack and targeting it at the Windows 2003 class, but I'm struggling to see how I can restrict it to a specific subset of Windows 2003 computers, and how I can maintain multiple copies of the rules at different sample periods within the same MP. Any help you can give would be much appreciated. Rob

  • Anonymous
    December 19, 2008
    You want to look into overrides. If you make the sample interval an overridable parameter, you can change the interval for certain groups of computers. You can also enable and disable the various rules within various contexts using overrides.
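
    As a rough sketch (the rule, group, and parameter names here are made up, and this assumes the rule exposes IntervalSeconds as an overridable parameter), the overrides could look something like this in the MP XML:

      <Overrides>
        <!-- Change the sample interval for members of one group -->
        <RuleConfigurationOverride ID="Demo.Override.Interval5Mins" Enforced="false"
            Rule="Demo.Perf.Collection.Rule" Parameter="IntervalSeconds"
            Context="Demo.Windows2003.5Mins.Group">
          <Value>300</Value>
        </RuleConfigurationOverride>
        <!-- Disable the same rule entirely for another group -->
        <RulePropertyOverride ID="Demo.Override.DisableFor10MinsGroup" Enforced="false"
            Rule="Demo.Perf.Collection.Rule" Property="Enabled"
            Context="Demo.Windows2003.10Mins.Group">
          <Value>false</Value>
        </RulePropertyOverride>
      </Overrides>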
