Downtime is costly to the business. As developers, avoiding it benefits us too, both in efficiency and in personal well-being. For example, when a change to a shared environment would normally require downtime, being able to apply it live gives me my freedom back: I don’t have to ask for a window or wait until night to do it.

With the introduction of Automation Scripts, most of the business logic and front-end changes we need to push to production nowadays can be done without downtime. These include:

  • Automation Script
  • Escalation
  • Application Design
  • Conditions
  • Workflows

However, Database Configuration changes still need Admin Mode or a restart. 

In recent years, many of us have switched to DBC scripts to deploy changes. This approach takes more time to prepare than other methods, such as using Migration Manager or making the changes by hand, but it has proven very reliable and allows faster deployment with much less risk.

Then many of us probably realized that, for small changes, we can run the DBC script directly while the system is live. Even so, we still need a quick restart afterwards, whether it’s a small environment that restarts in 5 minutes or a massive cluster that needs 30. A restart is downtime, and any deployment that involves downtime is treated differently, with days or weeks of planning and rounds of approval and review.

For development, a colleague showed me a trick: instead of restarting, we can simply turn Admin Mode on and then off again. As part of this process, Maximo’s cache is refreshed and the changes take effect. This works quite well in some cases. However, it is still downtime and can’t be used in Production, and on a big cluster turning on Admin Mode often takes more time than a restart.

Another colleague hinted at a different method, and this is what I ended up with. I have been using it for a while now and can report that it is quite useful. Not only has my productivity improved, it has also proven valuable the few times I didn’t have to approach the cloud vendor to ask for downtime or a restart.

The approach is very simple: when a change requires a restart, I script it using DBC. If the change is small, I can get away with using Update/Insert SQL directly against the configuration tables, such as the ones below (a small example follows the list):

  • MAXATTRIBUTE/MAXATTRIBUTECFG
  • MAXOBJECT/MAXOBJECTCFG
  • SYNONYMDOMAIN
  • MAXLOOKUPMAP
  • Etc.
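
For example, a small tweak like changing a synonym description can be shipped as a single DBC freeform statement. Everything in the sketch below (script name, description, and the SQL itself) is made up for illustration, so adapt it to your own change:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE script SYSTEM "script.dtd">
    <script author="dev" scriptname="V1000_99">
      <description>Example only: adjust a SYNONYMDOMAIN description</description>
      <statements>
        <freeform description="Update WOSTATUS synonym description">
          <sql target="all">
            update synonymdomain set description = 'Waiting on Approval'
            where domainid = 'WOSTATUS' and value = 'WAPPR' and maxvalue = 'WAPPR';
          </sql>
        </freeform>
      </statements>
    </script>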

Next, I create a super complex automation script named refreshmaxcache (with no launch point), shown below:
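
Here is a sketch of what the script body can look like, assuming Python (Jython) as the script language and relying on the reloadMaximoCache() method of psdi.server.MXServer to rebuild the server-side caches; verify that the call behaves as expected on your Maximo version before depending on it:

    # refreshmaxcache - automation script with no launch point.
    # Assumption: MXServer.reloadMaximoCache() reloads the server caches
    # (data dictionary, maxvars, domains, ...); confirm on your version.
    from psdi.server import MXServer

    MXServer.getMXServer().reloadMaximoCache()

    # When invoked through the /oslc/script endpoint, setting responseBody
    # returns this text in the HTTP response so the caller gets confirmation.
    responseBody = "Maximo cache refresh triggered"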

That’s it. Every time you deploy a change, all you need to do is call the script API endpoint below to refresh the configuration cache:

https://[MAXIMO_ROOT]/maximo/oslc/script/refreshmaxcache
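
For example, after the DBC or SQL change has been applied, the refresh can be triggered from a deployment pipeline or a workstation. The snippet below is a hypothetical invocation: the host name and API key are placeholders, and the apikey header assumes an API key has been configured (older environments can use basic authentication instead):

    import requests

    # Placeholders: replace with your own Maximo host and API key.
    MAXIMO_ROOT = "https://maximo.example.com"
    API_KEY = "<your-api-key>"

    # Call the refreshmaxcache script endpoint once the change is applied.
    resp = requests.get(
        MAXIMO_ROOT + "/maximo/oslc/script/refreshmaxcache",
        headers={"apikey": API_KEY},
        timeout=60,
    )
    print(resp.status_code, resp.text)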

Note: this is not a bulletproof approach officially endorsed by IBM. If you use it in Production, make sure you understand the change and its impact. I only use it for small changes in areas where there is little or no risk of users writing data while the change is being applied. For a major deployment, for example a change to the WORKORDER table, it’s a bad idea to apply it during business hours. For non-production, I don’t see much risk involved.

A man who doesn’t work at night is a happy person.