When I tried to set up Cognos 11 to send notifications via Gmail, it failed because Google blocked the access as coming from a less secure app. Even when I allowed less secure apps and tried again, it still failed because Google automatically turned the setting off again. So I had to create an App Password for my Gmail account to make it work, by following the steps below:
Step 1: Configure Gmail account
Log in to your Gmail account, go to the “Manage your Google Account” page, then go to the “Security” section
Enable 2-Step Verification
Once 2-Step Verification is enabled, the App Passwords option will be visible under the 2-Step verification option
Click on “App Passwords” to generate a new one.
On the next page, choose the “Mail” app and “Other (Custom Name)” in the Select Device drop-down
On the next page, enter “Cognos” for the name of the app, then click on “GENERATE”
On the next page, copy the password, paste it into Notepad, then click Done
Step 2: Configure Cognos:
Open “IBM Cognos Analytics” > “IBM Cognos Configuration” (not the one with the same name under “Framework Manager”)
Open “Notification” on the left-side Explorer bar, then enter the configuration as follows:
SMTP Mail Server: smtp.gmail.com:465
Account and Password:
User ID: <account_name>@gmail.com
Password: <generated app password from the previous step>
Default Sender: <account_name>@gmail.com
SSL Encryption Enabled: True
Click OK, then save the settings.
Right-click “Notification” on the left Explorer menu again, and click on “Test” to check if the connection is working.
After the SMTP connection has been tested successfully, restart the Cognos service for the change to take effect before sending reports from Cognos via email.
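If the connection test fails, it can also help to verify the app password independently of Cognos. Below is a minimal sketch in Python that tries the same settings (smtp.gmail.com, port 465, implicit SSL); the account name and app password are placeholders:

import smtplib
from email.mime.text import MIMEText

# Placeholders - use your own Gmail address and the app password generated in Step 1
account = "account_name@gmail.com"
app_password = "xxxxxxxxxxxxxxxx"

msg = MIMEText("Test message sent with a Gmail app password.")
msg["Subject"] = "SMTP test"
msg["From"] = account
msg["To"] = account

# Port 465 uses implicit SSL, matching the "SSL Encryption Enabled: True" setting above
with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
    server.login(account, app_password)
    server.send_message(msg)
print("Sent OK - the app password works")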
Occasionally, Maximo became unavailable for a short period of 5-10 minutes. Alarms were raised, the IT help desk was called, and the issue got escalated to the Maximo specialist (you). You logged into the server, checked the log file, and found a Java Out-of-Memory (OOM) issue. Not a big deal: the server usually restarted itself and became available again soon after. You reported back to the business and closed the issue. Does that scenario sound familiar to you?
If such an issue has only occurred on your system once, it was probably treated as a simple problem. But since you had to search the web for a solution and ended up here reading this article, it has probably occurred more than once, and the business requires it to be treated as a critical incident. As the Maximo specialist, you’ll need to dig deeper to report the root cause of the issue and provide a fix to prevent it from occurring again. Analysing low-level Java issues is not an easy task, and this post describes my process for dealing with them.
Websphere Dump Files
By default, when an OutOfMemory issue occurs, Websphere produces a bunch of dump files in the [WAS_HOME]/profiles/<ProfileName>/ folder. These files can include:
Javacore.[timestamp].txt: contains high-level details of the JVM when it crashed, which makes it the first place to look in a general JVM crash scenario. However, when I already know it is an OutOfMemory issue, I generally ignore this file.
Heapdump.[timestamp].phd: this is the dump from the JVM’s heap memory. For an OOM issue, this contains the key data we can analyse to get some further details.
Core.[timestamp].dmp: this is a native memory dump. I get these because I work with Maximo running on Windows most of the time; a different operating system, such as Linux, might produce a different file. I often ignore this file and delete it from the server as soon as I find there is no need for it. However, in certain scenarios, we can get some information from it to help our analysis, as demonstrated in a scenario described later in this article.
IBM Heap Analyzer and Windows Debugger
In general, with an OOM issue, if it is a one-off instance, we’ll want to identify (if possible) what consumed all the JVM memory. If it is a recurring issue, there is likely a memory leak, in which case we’ll need to identify the leak suspects. To analyse the heap dump (PHD file), there are many heap analyzer tools available; I use the Heap Analyzer provided with the IBM Support Assistant Workbench.
To read Windows dump files (DMP files), I use the Windows Debugger tool (WinDbg) that comes with Windows 10. Below are some examples of crashes I had to troubleshoot earlier; hopefully they give you some general ideas on how to deal with such problems.
Case 1 – Server crashed due to loading bad data with MXLoader
A core dump occurred on the Integration JVM of an otherwise stable system. The issue was escalated to me from level 2 support. Using Heap Analyzer, I could see Maximo was trying to load 1.6 GB of data into memory, which equalled 68% of the allocated heap size of this JVM. There was also a java.lang.StackOverflowError object which consumed 20% of the heap space.
This obviously looked weird, but I couldn’t figure out what the problem was. So I reported back to the support engineer, together with some information I could find in SystemOut.log: immediately before the crash occurred, the system status looked good (memory consumption was low), and there was a high level of activity by a specific user. The support engineer picked up the phone to talk with the user and found the issue was caused by him trying to load some bad data via MXLoader. The solution included some further training on data loading for this user, and some tightening of Maximo integration/performance settings.
Case 2 – Server crashed due to DbConnectionWatchDog
Several core dumps occurred within a short period. The customer was not aware of the unavailability as the system was load balanced. Nevertheless, alarms were sent to our support team, and it was treated as a critical incident. When the heap dump was opened in Heap Analyzer, it showed a single string buffer and char[] object consuming 40% of the JVM’s heap space.
In this instance, since it was a single string object, I opened the core dump file with WinDbg and viewed the content of this string using the “du” command on the memory address of the char[] object (Figure 3). From the value shown, it looked like a ton of error messages related to DbConnectionWatchDog had been appended to this string buffer. It was me who, a few days earlier, had switched on the DbConnWatchDog on this system to troubleshoot some database connection leaks and deadlocks. In this case, Maximo’s out-of-the-box DbConnWatchDog was itself faulty and caused the problem, so I had to switch it off.
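For reference, the WinDbg command is just “du” followed by the address of the char[] data, which you can take from Heap Analyzer. The address below is made up for illustration; the L specifier limits how many characters are printed:

0:000> du 0x000000071a2b3c40 L1000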
Case 3 – Server crashed due to memory leak
A system consistently threw OutOfMemory errors and core dumped on its two UI JVMs every 2-3 weeks. Heap Analyzer almost always showed a leak suspect with links to a WebClientSessions object. The log file also showed an unusually high number of WebClientSessions created versus the number of logged-in users. We knew that this customer has a group of users who always open multiple browser tabs to use many Maximo screens at the same time, but that should not create such a disproportionately high number of WebClientSessions. Anyhow, we could not find out what caused it.
Figure 4: Memory leak suspect links to a WebClientSessionFactory object
During the whole time troubleshooting the issue, we maintained a channel with the IBM support team to seek additional help. With their suggestions, we switched on various log settings to monitor the issue. The UI logging confirmed that WebClientSessions always got created when a user logged in, but never got disposed. In other words, the total number of WebClientSessions kept growing, and after a period of use, it would consume all the JVM heap space and cause the OutOfMemory crash.
Some frantic, random searching led me to an article by Chon Neth, author of the MaximoTimes blog, mentioning that the memory-to-memory replication setting in Websphere could cause similar behaviour. I quickly checked and confirmed this setting was enabled on this system. Memory-to-memory replication is a high-availability feature available in Websphere, but it is not supported by Maximo. So we turned this setting off, and the problem disappeared.
Figure 5: SystemOut.log showed a high number of WebClientSessions vs. number of logged in users
Conclusion
Identifying the root cause of a JVM Out-of-Memory issue is not always straightforward. Most of the time, the root cause is found with a good deal of luck involved. By having the right tools and approaches, and by coordinating closely with internal and external teams, we can improve our chances of solving the problem. I hope that by sharing my approach, I can help some of you out there when dealing with such issues.
This post includes some of my notes on using DBC for the deployment of Maximo configuration. In case you wonder why DBC: the short answer is that if you’re happy with whatever method you’re currently using to deploy configuration, whether it is manual or via Migration Manager, you can ignore this post. But if you’re looking for a way to streamline the development process for a large team by collaborating and source-controlling with Git, or if you want to fully automate the deployment process, DBC is the way to go.
IBM has been using DBC scripts for a long time, but only recently did they publish a reference guide so that third-party consultants like us can use them. DBC script can be used to automate most of the common configuration for Maximo. It has standard commands to create/modify common low-level objects like tables, indexes, domains, etc. For many other configurations that don’t have a specific DBC command, we can still handle the deployment using the <freeform> or <insert> statements to put anything into the Maximo DB (a small sketch follows below). After that are some specific notes on certain types of changes:
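As a quick illustration, a minimal DBC script wrapping free-form SQL might look like the sketch below. The script name, description, and SQL are made up for the example; check IBM’s DBC reference guide for the exact syntax:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE script SYSTEM "script.dtd">
<script author="your_name" scriptname="V1000_01">
  <description>Example: run free-form SQL against the Maximo DB</description>
  <statements>
    <freeform description="Update a sample system property default">
      <sql target="all">update maxprop set defaultvalue = '1' where propname = 'mxe.example.property'</sql>
    </freeform>
  </statements>
</script>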
DB Configuration and System Objects:
Operations to add/modify many low-level objects like tables, views, maxvars, etc. are available as DBC commands. However, manually writing all of the scripts can be laborious. We can instead make the changes from Maximo’s front end, then generate a DBC script for the changes using the ScriptBuilder.bat tool (found under tools/maximo/internal). Simply add the objects you want to generate a script for, then choose File > Generate Script. The script file will be created in the same folder.
Application Design
The standard method of exporting/importing XML files using App Designer is simple enough and suitable for version control. However, if we want to fully automate the deployment process (for CI/CD), we can export the changes to a DBC script using the mxdiff.bat tool (found under tools/maximo/screen-upgrade). For example, if we add a new column to the List tab of the Work Order Tracking app, we can export the XML files of the before and after versions of the app, copy the two files into the screen-upgrade folder, and execute a command like the one below:
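I don’t have the exact syntax at hand, so treat this as a hypothetical invocation (the file names are placeholders) and run mxdiff.bat without arguments to see the real usage; the idea is to pass the before file, the after file, and the output script:

mxdiff.bat WOTRACK_before.xml WOTRACK_after.xml WOTRACK.mxs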
It will produce the script as shown in the image below. (Do note that the extension for changes in app layout design should be .mxs instead of .dbc)
Automation Script
For simple manual deployment, I still prefer to use the Import/Export function as it is very convenient. Note that the permission to see the Import/Export buttons is not granted to maxadmin by default, so you have to grant it to the maxadmin security group first.
However, if we need to generate DBC for automated deployment, we can use the following approach. First, create an automation script called GENDBC with the source code below:
Now, whenever we need to generate a DBC file for an automation script, execute the GENDBC tool above by calling it from a browser:
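Automation scripts can be invoked over HTTP through Maximo’s OSLC script endpoint, so the call looks something like the line below. The host name and the query parameter are placeholders; pass whatever inputs your GENDBC script expects:

https://<maximo_host>/maximo/oslc/script/GENDBC?script=MYSCRIPT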
The output DBC file will be created in the /script folder under your Integration Global Directory (specified in the mxe.int.globaldir system property).
Note: I recently found out this approach doesn’t work with an Oracle database; it gave me the error below. In the project I worked on, we used a tool created by someone else which I can’t share here. If you’re using Oracle, you can try the tool created by Jason @ Sharptree.
Invalid column type: getString/getNString not implemented for class oracle.jdbc.driver.T4CBlobAccessor
Integration Artifacts
To generate DBC script for integration artifacts such as Object Structure, JSONMAP, Publish Channel etc., we can also use the GENDBC tool mentioned above. For example:
To extract Object Structure, run the script with the following parameters:
The output files for Object Structures, Publish Channels, and Enterprise Services will be in the [GlobalDir]/mea folder, and the output for JSONMAP will be in the [GlobalDir]/jsonmap folder.
Other configurations
For many other configurations such as escalations, messages, workflows, etc., there is no standard DBC command to create or modify the objects. However, all such configurations are stored inside Maximo’s database, and if we can export and then import the correct data into the target environment, it works well (some objects require a Maximo restart to refresh the cache). The easiest method is to use the geninsertdbc.bat tool. We simply give it a table name and a where clause, and it will generate the data found as DBC insert statements.
For example, to export all rows of the table MAXINTOBJECT for the object structure ZZWO, we can run the command below:
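From memory, the invocation follows the pattern below (-t for the table, -w for the where clause, -f for the output file name). The flag syntax is an assumption, so run geninsertdbc.bat without arguments to confirm the usage on your version:

geninsertdbc.bat -tMAXINTOBJECT -w"INTOBJECTNAME = 'ZZWO'" -fZZWO_MAXINTOBJECT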
Note: This tool has one problem. It generates Null values as empty strings, which can cause errors in logic that requires the value to be Null, such as when using mbo.isNull(“FieldName”). I found it worked most of the time for me, but it did cause me some headaches in a few instances. To fix it, we can delete the offending lines from the generated DBC script or add another UPDATE SQL statement to correct the values, as in the example below. I now only use this tool for simple configurations. For more complex configuration data, I use Oracle SQL Developer or SQL Server Management Studio to generate the INSERT statements instead.
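For example, if a generated insert put an empty string into a column that should be Null, a follow-up statement along these lines (the table and column here are illustrative) restores the value:

update maxintobject set description = null where intobjectname = 'ZZWO' and description = '';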
The main tables that contain configuration for some common objects are listed below:
Escalation: ESCALATION, ESCREFPOINT
Cron Task: CRONTASKDEF, CRONTASKINSTANCE
Workflow: WFPROCESS, WFNODE, WFASSIGNMENT
Saved Query: QUERY
Start Center Template: SCTEMPLATE
Note: for Start Centers and Result Sets to be displayed correctly, there are other dependent objects that need to be migrated, such as Object Structures, security permissions, etc.
Simple stuff, but a few people have asked me this same question, so here is how to create an automation script to send email from Maximo:
1 – Create a Communication Template:
Template ID: MY_COMM_TEMPLATE
Description: Test Communication Template
Applies To: ASSET
Send From: <your_email@address> (Note: to make this work, you must have set up SMTP and be able to send email from Maximo first)
Subject and message: as shown below
In the “Recipients” tab, add an Email recipient pointing to your own email:
2 – Create an Automation Script
Create an autoscript with an Object Launch Point on the “ASSET” object, on the Save (Update) event; choose Language = Python and copy/paste a script along the lines of the sample below:
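This is a minimal sketch of the idea, assuming the communication template from step 1 and the standard sendMessage method on the COMMTEMPLATE Mbo:

# Object launch point on ASSET, Save/Update event; "mbo" is the asset being saved
from psdi.server import MXServer

if mbo.getString("STATUS") == "INACTIVE":
    # Find the communication template created in step 1
    ctSet = MXServer.getMXServer().getMboSet("COMMTEMPLATE", mbo.getUserInfo())
    ctSet.setWhere("TEMPLATEID = 'MY_COMM_TEMPLATE'")
    ct = ctSet.getMbo(0)
    if ct is not None:
        # Send the email, using the asset as the source of the template's substitution variables
        ct.sendMessage(mbo, mbo)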
3 – Test sending email
Open an asset, then change its status to INACTIVE; you should receive a notification in your email inbox:
I didn’t know about this new feature in Maximo 7.6 until today. Here is the problem: a user reported that he was unable to log in to Maximo, getting the error “BMXAA7901E – You cannot log in at this time”. Both the Maximo admin and I could log in using the same user ID and password without any problem. After some investigation, it turned out that the user’s IP address had been blocked.
This is a new feature in Maximo 7.6 as described by IBM here and here by Mark Robbins.
What is interesting is that, looking at Maximo’s default settings, an IP will only be blocked if there are more than 50 failed login attempts within less than 30 seconds. So it shouldn’t be possible for a normal user to be blocked by this mechanism.
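(If I remember correctly, these thresholds are controlled by the mxe.sec.IPblock.num and mxe.sec.IPblock.sec system properties, so they can be tuned if the defaults don’t suit your environment.)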
It turned out that, in my case, an integration service being developed was sending failed OSLC login attempts using this same account. This caused both the user account and the IP address to be blocked. The Maximo admin had removed the block on the user account only and reset the password. So, on the face of it, everything looked good: both he and I could log in using the account, but the user still could not.
So next time you have a similar symptom, it is better to check whether any IP has been blocked by using the new “Manage Blocked IP Addresses” action menu in the Users application, as shown in the screenshot below:
I recently had to upgrade a pretty complex system. The original environment included Maximo and ICD, plus two large customization packages, one extending the other (let’s call them package XXX, extended by package YYY). The target system is the latest Maximo + ICD 7.6.1, plus four add-ons which include Oil & Gas and Utilities.
The customization was written by three different third parties over a long period of time, and the source code was lost. This posed some challenges in preserving the customization, and I had to spend a bit of time figuring it out. Below are some of the gotchas I learnt from the project:
Problem 1: Ensure customization is preserved after the upgrade
After reviewing the SMP folder, I found about 300 extended Java class files, but the product.xml files only covered about 20-30% of them; worse, some of the data was not even up to date. After an initial attempt to correct these files, I decided to simply ignore them and build new product.xml files from scratch. Below are some of the key steps I had to take:
List all extended Java class files found in the SMP folder (using the command: dir /s /b > list.txt) and put them in an Excel sheet
Use SQL to search the DB for any usage of custom Java code (see the SQL sketch after this list) and put the results into another Excel sheet
Match the two sheets above to identify any missing files
Use a decompiler to view the code of each file to determine the original class it extends and the type of the class (which I categorised as: Action, Bean, Cron, Event, Field, Object, Util)
For each of the class types, use Excel to generate an XML element as in the following examples:
After generating those elements, I put them together into newly created XML files.
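For the SQL search mentioned in the list above, a sweep along the lines of the sketch below works. The tables are the usual places where Java class names are stored, and ‘custom.%’ is a placeholder for the root package of your customization (the string concatenation is written for Oracle/DB2; use + on SQL Server):

select 'MAXOBJECT' as source, objectname as name, classname
  from maxobject where classname like 'custom.%'
union all
select 'MAXATTRIBUTE', objectname || '.' || attributename, classname
  from maxattribute where classname like 'custom.%'
union all
select 'MAXSERVICE', servicename, classname
  from maxservice where classname like 'custom.%'
union all
select 'CRONTASKDEF', crontaskname, classname
  from crontaskdef where classname like 'custom.%';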
Problem 2: Manipulating the chain of extension without having the Java source code:
The updatedb process took 6 hours, and after it finished, I could start Maximo. However, I freaked out when I looked into the MboSet classes used by various objects. For example, in the WorkOrder object, the class used was psdi.pluss.app.workorder.PlusSWOSet. Initially, I thought the custom classes had been wiped out by the upgrade. But after some investigation, I realised that, due to the upgrade and the installation of new add-ons, Maximo had updated the classes (by means of binary code injection) to modify the chain of extension like this:
After spending some time reading various IBM tech notes, I learnt that, in the past, we needed to create a file named exactly “a_customer.xml” to maintain metadata about customization. In newer versions (I’m not exactly sure from which version, probably 7.5), we actually SHOULDN’T name it “a_customer.xml”, because that name puts the file at the top of the list (in alphabetical order), and thus it becomes the first product to extend the core Maximo classes. For example, if you only have core Maximo, the Oil & Gas add-on, and your custom package, and you name your file a_customer.xml, then since the O&G package name is plusg, the extension chain becomes: PlusGWOSet > CUSTOMWOSet > WOSet
If I want my custom class to be the last one, extending everything else, I should name the file z_customer.xml, or anything that comes last in alphabetical order. So I named the product XML files for the two custom packages z1_XXX and z2_YYY.
For some unknown reason, using just the file name didn’t give me the desired outcome (probably due to some undocumented logic), so I had to use the <depends> tag inside the two product XML files. From my experiments, when the <depends> tag is present, the updatedb process ignores the file-name rule, which means it no longer matters whether you name the file a_customer or z_customer. The classes in your package will extend those of all the packages listed in the <depends> tag.
To illustrate this point, below is the <depends> tag of my original z2_YYY.xml file:
It means the YYY package will extend the XXX package, and then a bunch of packages which are included in the ICD product.
I updated the <depends> tag of the z2_YYY.xml file as follows:
Notice that I inserted three other packages before z1_XXX. After I updated the files, I ran the updatedb process again. (Even if there’s no update to apply, updatedb will still update the Java class files; with newer Maximo versions, you can run updatedblitepreprocessor.bat to do the same.) With this, the updatedb process displayed the product installation order as below:
Checking the class files, I got the extension in the desired order:
One mistake I made during this exercise caused Maximo to fail when opening an app, and crashed Eclipse when I tried to decompile the class file. It turned out I had used the wrong XML tag for the class type, i.e. I should have used the <Mbo…> and <MboSet…> tags for object classes rather than the <Class…> tag.
Another note: due to some bugs or unknown logic, I had to play around a little with the products listed in the <depends> tag to get to the desired order, as it doesn’t seem to work exactly as documented.
I am a freelance Maximo consultant based in Melbourne. If you enjoy reading my blog, please connect with me on LinkedIn to get updates on new posts. If you or your company need any professional assistance, please leave me a message, I'll call you back.