Archive for May 2008
As your enterprise grows, it becomes increasingly difficult to individually manage the resources in your inventory. Compatible groups mitigate this issue by providing several features and functions that can be applied to all members of the group:
- measurement data and graph averaging
- group operation execution and scheduling
- aggregate updates to plugin configurations (connection properties)
However, there still exists the problem of how to efficiently maintain your groups and their memberships. Let's say you have a few dozen JBoss AS instances in inventory and you want to group them by the cluster each belongs to.
First of all, how do you even know how many clusters you have, and what each is identified by? This information is needed so you know how many compatible groups to create, and how to name each of them.
More generally, even if the cluster names were readily available, how do you find the information about which resources belong to which cluster? In well managed environments, the cluster information might be recorded in some kind of spreadsheet or well-formed XML file, or may be auto-generated into one of those formats by running some type of reporting tool against an internal database where this information is kept up-to-date.
In the best of circumstances, you still have a lot of manual, error prone work to do. You would need to take the information from your external system, manually create the groups, and manually update the membership of each.
OK, so maybe you take the blue pill and try to convince yourself this isn't so bad; it might take a few hours to set up, but once it's done it's done…right? Well…no. What happens when your cluster changes? What if you have an environment that dynamically reprovisions generic machines as needed based on incoming load to each cluster? After a few provisioning iterations, the set of machines representing each cluster might be completely different from what they were a few days ago. So now you have the additional, still error prone, and even more difficult manual task of updating each of your groups to reflect this.
Granted, the above was a slightly contrived example to emphasize how daunting of a task this could be, but you’d be surprised at how often I see customers with pseudo-dynamic setups like this. Their provisioning might be fairly manual, and their changes might only occur once a week or so, but keeping this information up-to-date in multiple systems is a pain point for them nonetheless.
DynaGroups to the rescue. This construct, which has existed since the 0.1 version of the RHQ platform, tries to eliminate all of these pain points by making the process fully automated and self-updating.
Assuming you're using the default mechanism within JBoss to set up your clusters – namely, JGroups – you can write an RHQ plugin to inject the name of the JGroups partition into the discovered resource. You might decide that it works best for you as a measurement trait, or maybe you want to put it inside the connection properties. It doesn't really matter where it goes; DynaGroups can handle both scenarios equally well.
Let's assume you exposed it as a trait called 'partitionName'. You would then create a group definition with the following expression set:

resource.type.name = JBossAS Server
groupby resource.trait[partitionName]
That's it. It's that simple. There's no trick. When you click the calculate button, the DynaGroups engine will inspect the above set, find the groupby expression, and automatically create one resource group for every unique value of partitionName that it can find across all resources currently in your inventory – this means one group for each of your clusters.
After that's done, it will walk each group and requery the inventory to find any JBossAS Server instances that match the partitionName that group represents, then add them to the group automatically. Oh, and get this: the name of each resource group will contain the cluster identifier / partition name that each resource member in it shares – neat, huh?
And this doesn’t just work when you’re calculating group memberships for the first time. If your inventory ever changes – new resources added, old resources deleted, resources reprovisioned and now belong to a different cluster – all you have to do is go back to the group definition and click the calculate button again. The DynaGroups engine will take care of everything necessary to create new resource groups for new clusters, delete groups for cluster names that are no longer in use, and update the memberships of each of the existing groups according to the new partition information it finds.
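The recalculation described above boils down to a pivot: map each resource to its trait value, then build one group per unique value. Here is a minimal, self-contained sketch of that idea in plain Java – an illustration only, with hypothetical resource and cluster names, not the actual DynaGroups engine code:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class GroupByTraitSketch {
    // Pivot a resource -> 'partitionName' trait mapping into one group per
    // unique trait value. Rerunning this after the inventory changes yields
    // fresh groups; stale cluster names simply never reappear in the result.
    public static Map<String, List<String>> recalculate(Map<String, String> resourceToPartition) {
        Map<String, List<String>> groups = new TreeMap<>();
        for (Map.Entry<String, String> e : resourceToPartition.entrySet()) {
            groups.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
        }
        return groups;
    }

    public static void main(String[] args) {
        Map<String, String> inventory = new LinkedHashMap<>();
        inventory.put("as-node-1", "ClusterA");
        inventory.put("as-node-2", "ClusterA");
        inventory.put("as-node-3", "ClusterB");
        System.out.println(recalculate(inventory));
        // {ClusterA=[as-node-1, as-node-2], ClusterB=[as-node-3]}
    }
}
```

Recomputing the whole mapping on each click, rather than patching groups incrementally, is what makes the "just press calculate again" workflow possible.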
Amazingly, this is just one of many, many things DynaGroups can do. Stay tuned to future articles highlighting other nifty ways this construct can be used to help you more efficiently manage your inventory.
A recent chat with a colleague reminded me today how important it is to clearly distinguish between what's in your enterprise and what's in your inventory. There's no RHQ dictionary yet, so until one exists, the following entries will have to do:
- Enterprise – refers to all the physical machines connected by wires and power cords, installed in the racks in your data center, or plugged into the wall under your feet
- Inventory – refers to the list of logical “resources” discovered by your RHQ infrastructure via some plugin
When you fire up the web console and login, you need to keep in mind that what you’re viewing is an abstracted layer. The inventory represents the information your RHQ plugins collected and sent back up to the server. So when you want to make a change, you have to decide whether you mean to make that change on the physical or logical level.
For instance, if you just want to suppress the information that RHQ discovered (likely because it found and auto-imported much more than you need to manage/monitor right now), then your inventory – not your enterprise – is what you want to change. From the resource browser, regardless of whether you’re looking at the platforms, servers, or services tab, there is an “uninventory” button at the bottom.
Clicking it tells RHQ to remove everything it knows about that resource (and all of its child resources) from its datastore. You're effectively telling RHQ that you don't want to manage this resource anymore. As a consequence, you will also lose any and all audit trails for that resource (and its children). Audit items could be anything from the results of operations you performed against it to the list of alerts that fired because the resource met some trigger condition. Don't forget, audit items also include the entire set of configuration changes you've made to these resources since they've been in inventory.
On the other hand, sometimes you actually want to make a change to your physical enterprise, whether it be adding some new user to an existing Postgres database, or uninstalling an old enterprise/web application archive (ear/war) from a JBoss Application Server. In both cases, you want to go to the inventory tab of the parent of the resource you want to manipulate.
To delete an item from your physical enterprise, simply select one of the child resources from the tabular set and click "delete". This sends a request down to the agent managing that resource, which performs the necessary operations to remove that item from your enterprise. This, in turn, also removes the logical resource from your inventory, but that's really just a convenience: RHQ knows that if the delete succeeds, the resource no longer exists, so there's nothing left to manage/monitor about it.
Adding a new item to your enterprise is just as simple. At the bottom of the table you’ll see a combobox labeled “Create New”. It will be populated with all of the resource types the RHQ plugin managing this parent resource knows how to physically create in your enterprise. Select one of them, click the button labeled “Add”, and follow the various steps on the subsequent pages.
One last reminder…
I can't emphasize enough how important it is to keep these two concepts separate. One deals with adding / removing meta-information from the RHQ datastore; the other is basically a primitive form of provisioning. If you accidentally delete a physical entity when you only meant to uninventory its logical resource, don't bother asking for help on any forum, because there's nothing that can be done. The product did what you asked it to do; your data is gone. But that's OK, because you religiously keep backups…right?
Sometimes one of the most difficult things to do as a plugin developer is to bring some semblance of order to the wide variety of features and functions that exist in software today. What may seem like a perfectly logical way to capture that information to you may seem utterly backwards to someone else. And the reason should come as no surprise – we all think about things differently.
So what's the trick to developing a single solution that is going to make everyone happy? Well, unfortunately, there isn't one. However, there are some steps you can take to try to mitigate this mismatch of thought and keep most people satisfied in the end.
Do your research, and seek input from others
This is crucial. Do not let developer ego get in the way of your plugin reaching a large audience quickly. Designing and developing an RHQ plugin is not some esoteric exercise in coming up with the purest design or most fascinatingly abstract model; it should be an exercise in pragmatism. You need to balance the desire to come up with an infinitely generic solution against one that accurately models the system for as many realistic uses of it as possible.
Sometimes the software you’re writing your plugin for has only a small number of options. In this case, you might get lucky and be able to find the information you’re looking for by doing your homework and perusing the product’s documentation. For instance, some application might have a completely “portable installation” where all of its executables, configuration files, temporary files, etc are placed beneath a single root directory. Here, you might only need to prompt the user for that root in order to automatically discover everything else about it in a simple, consistent way.
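For a portable installation like the one just described, the discovery logic can be very small: prompt for the root, then derive everything else from the product's documented layout. The sketch below illustrates this under an assumed layout (bin/, conf/, log/ are hypothetical subdirectory names, not from any particular product):

```java
import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;

public class PortableInstallDiscovery {
    // Given only the install root the user supplied, derive the rest of the
    // layout from known relative paths. The subdirectory names used here
    // (bin, conf, log) are assumptions for illustration; a real plugin would
    // use whatever layout the managed product actually documents.
    public static Map<String, File> discover(File installRoot) {
        Map<String, File> layout = new LinkedHashMap<>();
        layout.put("executables", new File(installRoot, "bin"));
        layout.put("configuration", new File(installRoot, "conf"));
        layout.put("logs", new File(installRoot, "log"));
        return layout;
    }

    public static void main(String[] args) {
        // Single user-supplied input; everything else is derived from it.
        Map<String, File> layout = discover(new File("/opt/myapp"));
        layout.forEach((role, dir) -> System.out.println(role + " -> " + dir.getPath()));
    }
}
```

The payoff is consistency: every discovered resource of this type is described the same way, from one piece of user input.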
However, there are plenty of pieces of software out there that aren't so cut-and-dried. If they have tons of startup options, or have dozens of ways of being installed and configured, you might need to sleuth around public message boards or other help sites in order to gain the necessary insight to help focus your development efforts. For example, some application servers have a variety of ways you can configure how they bind to your network adapters: one or two different command line options, a configuration file parameter, a runtime service, etc. Supporting all of these, in all of their various flavors, across all the various versions of the software might be overly time-consuming and probably not even worth it for the first version of your plugin. Instead, focusing your development on the one or two most common ways will give you an excellent return on your time investment, helping to support a large number of clients now, while not necessarily discounting support for the less common bind methods in the future. Which brings me to my next point…
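To make the "one or two most common ways" idea concrete, here is a sketch that resolves a bind address by checking only a command-line option and a single configuration property, deliberately ignoring rarer mechanisms. The `-b` flag mirrors the familiar JBoss AS convention, but the property name and the fallback default below are assumptions for illustration:

```java
import java.util.Properties;

public class BindAddressResolver {
    // Deliberately support only the two most common configuration paths:
    //   1. a '-b <address>' command-line option (the JBoss AS convention),
    //   2. a 'bind.address' property from a config file (name assumed here).
    // Rarer bind mechanisms are explicitly out of scope for a first version.
    public static String resolve(String[] commandLine, Properties config) {
        for (int i = 0; i < commandLine.length - 1; i++) {
            if ("-b".equals(commandLine[i])) {
                return commandLine[i + 1]; // command line wins over config file
            }
        }
        return config.getProperty("bind.address", "127.0.0.1"); // assumed default
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("bind.address", "10.0.0.5");
        System.out.println(resolve(new String[] {"-b", "192.168.1.10"}, conf)); // 192.168.1.10
        System.out.println(resolve(new String[] {}, conf));                     // 10.0.0.5
    }
}
```

Shipping this, then adding the less common mechanisms in later releases, is exactly the trade-off the paragraph above recommends.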
Design for today, not tomorrow
Software-based systems are changing all the time. Most often this is through a series of patches or, of greater consequence, an upgrade to a new minor or major version. If you attempt to write solutions to handle every conceivable modification to the software your plugin is supposed to be managing, this moving target will cause your design to remain in a constant state of flux and your plugin to remain unfinished…indefinitely.
For example, if you’re writing a plugin for JBoss AS 4.x, stick to the features and functions of JBoss AS 4.x – don’t try to design your plugin to support what may or may not be in, say, JBoss AS 5.x. Quite often different major versions of products will not only look substantially different but also operate quite differently behind the scenes:
- each version might support different specs – thus requiring different plugin code to support different service contracts
- configuration file structure and/or content is changed – thus requiring new parsing code and a different interpretation of it
- service deprecation, interface changes, and new features – thus requiring the handling code to be denser and more complex to handle the various permutations
So you really need to make the conscious decision from the start to support a specific version range of the software in question; doing this will help keep the plugin code smaller, more maintainable, and will – most importantly – get it done faster. If you absolutely need to support a wide range, but the different versions of the software are so disparate you can’t quickly or easily find a reasonable common ground, there is nothing stopping you from writing two plugins. Each one would manage and monitor a mutually exclusive slice of the entire range. For instance, to complement the example above, you would write your JBoss AS 4.x plugin separately from your JBoss AS 5.x plugin.
The key thing about all of this is to find that healthy balance between form and function that will help you get your plugin out of development and into your (or your customers’) RHQ infrastructure quickly.