
Mess around with Azure: Migrate Maximo to Cloud

Last week, while attending a call to discuss an integration interface between Maximo and an Azure SQL database, the other consultant mentioned a few Azure terms like “blob storage”, and I realized I didn’t know anything about Azure, despite the fact that the cloud platform has been adopted by a large number of the clients I work with. So today, I decided to play around with it a bit by trying to migrate a Maximo instance from my local VM to Azure.
 
Before I go into the technical details, for those of you non-technical readers, I’d like to sum up the whole experience I had with this platform in one word: magical. That’s it. I went to the website, signed up for a trial account, followed some wizards and clicked on a few buttons and menus that looked like the features I needed, and boom, I had a running Maximo instance on the cloud in just a few hours. No training, no muss, no fuss, everything just works. Not much else to say. I’m in love now. I think I will spend the next couple of weeks, months or even years learning more about Microsoft stuff.

Below are the steps that I took:
 
1.1. Sign up for a trial account. After creating the account, I was given $260 AUD to spend on paid services.
 
1.2. I went straight to the portal and tried to create a new VM. Cheap as I am, I went for the lowest option: 1 CPU + 3.5GB RAM. I know that to install Maximo I’ll need at least 8GB, but in this case I just wanted to see if I could fit Windows, WebSphere and Maximo into a box this size.
1.3. Next page, I gave the VM a 256GB SSD data disk
1.4. For network setting, I gave it a Public IP as I want to access Maximo from public Internet
1.5. On the next few pages, I left the default settings and proceeded to create the VM. After about 5 minutes, the VM was created. I clicked on “Connect”, and it downloaded an RDP shortcut which opens the Remote Desktop app on my laptop pre-populated with the IP address of the machine. I struggled at this step a bit because I forgot that I had created a Windows user account in the first step; I tried to use my Microsoft account instead and it failed. After a few failed attempts and a quick Google search, I remembered the details, entered MaxDM76AzureUser as the account name, and could log in with that. The machine is extremely fast and responsive; it feels like working on a local server rather than a remote session on the cloud (now I remember how frustrated I was having to deal with the sluggish Maximo SaaS Flex DEV server). Downloading files gave me a rate of 50-70MB/s (that’s megabytes per second, not Mbps).
 
<This screen was captured after the fact as I forgot to take a screenshot of this step>
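
For reference, the portal steps in section 1 above roughly correspond to a single Azure CLI call like the sketch below. This is an assumption on my part, not what I actually ran: the resource names are illustrative, and Standard_DS1_v2 is my guess at the 1 CPU / 3.5GB tier.

  az vm create ^
    --resource-group MaximoRG ^
    --name MaximoVM ^
    --image Win2016Datacenter ^
    --size Standard_DS1_v2 ^
    --data-disk-sizes-gb 256 ^
    --admin-username MaxDM76AzureUser ^
    --admin-password "<strong password>"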
2.1 Next step, I tried to create an SQL DB. I put it in the same Resource Group as the Maximo App Server VM because, naturally, I think all resources for an application should be put in one Resource Group, as the name suggests. To be honest, I still have no idea what it really is. Since the “Server” drop-down was empty, I had to click on “Create New” and create a new server. I don’t think we can remote into this one like a VM; it’s just a logical name used to manage an SQL instance.
  
2.2 On the Networking page, I chose Public Endpoint as the Connectivity Method because I thought I would need to access the DB from SQL Studio on my laptop, and ticked “Allow Azure services and resources to access this server” because I believed this option would let Maximo on the App Server VM access the DB. It turned out I couldn’t figure out how to access the DB using SQL Studio on my laptop (more on this later), but the second option works exactly as it is supposed to.
2.3 I kept the default settings on the next few pages, then proceeded to create the DB.
2.4 I attempted to connect to the DB using SQL Studio on my laptop, but it didn’t work. However, when I installed SQL Studio on the Azure VM, it worked on the first try.
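
In hindsight, the laptop connection most likely failed because Azure SQL also has a server-level firewall, and my laptop’s public IP was never added to it. I didn’t verify this at the time, but if that’s right, a rule like the sketch below (resource names illustrative) should open it up:

  az sql server firewall-rule create ^
    --resource-group MaximoRG ^
    --server maxdm7609srv ^
    --name AllowMyLaptop ^
    --start-ip-address <your public IP> --end-ip-address <your public IP>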
3.1 Now that I had a server and a DB, I proceeded with the Maximo migration. I backed up the SQL DB in my VM to a DMP file, uploaded it to the Azure VM, and tried to restore it to Azure SQL. It turned out on-premises SQL Server is different from Azure SQL, and I couldn’t do that. A quick Google search revealed that I had to export the DB into a *.bacpac file, then import it into Azure.
3.2 The whole export/import of the bacpac file was straightforward, with a minimum number of clicks and no confusion. After it was done, I realized the import process had restored the DB from my VM, so I ended up with two DBs and had to delete the empty one I created in the earlier step.
3.3 I struggled a bit trying to create a new ‘maximo’ user. It turned out the way security logins and users are managed in Azure SQL is a bit different, so after a bit of googling, I found some scripts I could use. Here are the notes I wrote down for this step:
– Right-click the “Databases” folder to open a new query window (this runs against the master database):
  CREATE LOGIN maximo WITH PASSWORD = 'SuperSecret!';
– Right-click the ‘maxdm7609’ database to open a new query window:
  CREATE USER maximo FOR LOGIN maximo WITH DEFAULT_SCHEMA = dbo;
  ALTER ROLE db_owner ADD MEMBER maximo;
4.1 The next step was to install WebSphere. A quick check before installing showed I had 2.3GB RAM left; I thought I could give WebSphere 1GB and Maximo 1GB, so I proceeded with the installation.
4.2 It turned out that although I could install WebSphere successfully, the automated process to prepare the environment for Maximo failed with an Out-of-Memory error. I know we could manually configure WebSphere and make it work, but what’s the point in wasting time doing that? So I uninstalled WebSphere, upgraded the VM to the next option up, which has 2 CPUs and 8GB RAM, then attempted to install WebSphere again, which worked this time.
 
4.3 I uploaded a pre-built maximo.ear file from my laptop to the Azure VM, deployed it to WebSphere, then updated the connection string to point to the Azure SQL DB. The connection string is almost exactly the same as for connecting Maximo to a local SQL DB, and it worked as soon as I started MXServer after deployment.
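
For reference, the relevant maximo.properties entries look roughly like the sketch below. This is a hedged reconstruction rather than a copy of my actual file: the server and database names are illustrative, and Azure SQL requires encryption on the connection.

  mxe.db.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
  mxe.db.url=jdbc:sqlserver://maxdm7609srv.database.windows.net:1433;databaseName=maxdm7609;encrypt=true
  mxe.db.user=maximo
  mxe.db.password=SuperSecret!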
(Installing WebSphere and deploying Maximo are not the topic of this post, so I won’t go into the details of those steps here.)
 
5.1 Now that I had Maximo running on the VM, I wanted to access it from the web, so I went to the VM’s Networking page and opened port 80 inbound for the VM, but it still didn’t work. After a bit of fussing around, I remembered I had to open the port on the VM’s Windows Firewall as well. After opening port 80 on both firewalls, it worked like a charm, and I could access Maximo from my laptop using the VM’s public IP.
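
Both openings can also be scripted, as in the hedged sketch below (resource names illustrative): the first command adds the inbound rule on the Azure side, the second one on the Windows Firewall inside the VM.

  az vm open-port --resource-group MaximoRG --name MaximoVM --port 80
  netsh advfirewall firewall add rule name="Maximo HTTP" dir=in action=allow protocol=TCP localport=80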
5.2 I wanted a DNS name to access it as well, so in the VM’s IP settings, I set the IP to Static (which I think prevents it from changing even if I shut down the VM), and put in a DNS name label.
5.3 Right after I clicked Save, I could access Maximo using the DNS alias. Mission accomplished.
I have had my fair share of experience with various software and platforms, ranging from great to horrible. I can live with bad software, but it irritates me to the bone; I feel good when working with good software. In this case, I think Microsoft has done everything right in designing the Azure platform, and I felt nothing but joy throughout this whole learning process. My next step is probably to recommend Maximo on Azure to anyone I meet who is considering moving to the cloud.

BMXAA4017E Error when running custom code

The “Object cannot be saved” error

A client hired me to do a bit of housecleaning and provide minor enhancements for their Maximo system. It had some pieces of buggy Java code, and I took the opportunity to de-customize them by rewriting the logic with automation scripts. One piece of logic worked well in the Development environment; however, when deployed to Production, it did not work.

The automation script is triggered when the user saves a record, but it executes on the post-commit launch point, so it does not raise any error on the frontend. However, there is an error, “BMXAA4017E object cannot be saved with the data that was provided”, in the SystemOut.log file. The error came from the mbo.checkQualifiedRestriction method.

Error BMXAA4017E object cannot be saved with the data that was provided

Background Information

The BMXAA4017E error is a common one. It often occurs because of a QUALIFIED rule set up in a security restriction to prevent a user from interacting with data he/she should not have access to.

However, this client has an interesting environment: four satellite Maximo servers located in Antarctica are synchronized to a central server via satellite link using a data sync solution provided by SRO. Although the number of active users is low, they have 500-600 users across nearly 20 logical sites, and there are many data restriction rules configured at various levels: global, site, security group etc.

This issue sent a chill down my spine during the deployment to Production because I simply couldn’t think of where to start looking. Just as I was about to declare it a failed deployment, a random thought came up that helped me solve the problem.

Problem and Solution

In short, the problem is this: to update a record, I fetched the MboSet from the database using the MXServer.getMboSet() method. I passed the SystemUserInfo to this method as I always do, thinking it is linked to the MAXADMIN account and therefore has ultimate access rights. Below is the code:
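
(The original code is a screenshot, so here is a minimal sketch of the offending pattern instead; the object name and the update itself are illustrative.)

  from psdi.server import MXServer

  server = MXServer.getMXServer()
  # Problem: the system user's profile belongs to no security group,
  # so a QUALIFIED data restriction can fail the save with BMXAA4017E
  woSet = server.getMboSet("WORKORDER", server.getSystemUserInfo())
  woSet.setWhere("WONUM = '1000'")
  wo = woSet.moveFirst()
  if wo is not None:
      wo.setValue("DESCRIPTION", "Updated from script")
      woSet.save()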

The getSystemUserInfo() method is the cause of the problem

That is where the problem comes from: the “SystemUser” profile does not belong to any security group, nor is it listed in any security restriction rules. One of those rules must have failed and caused the error. To fix it, I changed the code to use the current logged-in user’s security profile as follows:
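
(Again, a sketch rather than the original screenshot; the only change that matters is whose UserInfo is passed in.)

  # Fix: use the security profile of the user who triggered the script
  woSet = MXServer.getMXServer().getMboSet("WORKORDER", mbo.getUserInfo())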

Changing to getUserInfo fixed the issue

To be honest, I still haven’t figured out why the same code using SystemUserInfo works in the DEV and TEST environments but not in PROD. But as an engineer, not a scientist, I guess it is OK not to know the why sometimes. As long as it works, everyone is happy 😊

Setup Cognos 11 to send email with Gmail (2020)

When I tried to set up Cognos 11 to send notifications via Gmail, it failed because Google blocked access from unsecured apps. Even when I turned this protection off and tried again, it still failed because Google automatically turned the setting back on. So I had to create an App Password for my Gmail account to make it work, following the steps below:
Step 1: Configure Gmail account
  • Log in to my Gmail account, go to the “Manage your Google Account” page, then go to the “Security” section
  • Enable 2-Step Verification
  • Once 2-Step Verification is enabled, the App Passwords option will be visible under the 2-Step verification option
 
 

  • Click on “App Passwords” to generate a new one. 
  • On the next page, choose “Mail” app, and “Other (Custom Name)” in the Select Device drop-down
  • On the next page, enter “Cognos” for the name of the app, then click on “GENERATE”
  • On the next page, copy the password, paste it to Notepad, then click on Done
 
 
Step 2: Configure Cognos:
  • Open “IBM Cognos Analytics” > “IBM Cognos Configuration” (not the one with the same name under “Framework Manager”)
  • Open “Notification” on the left side Explorer bar, then enter the configuration as follows:

    • SMTP Mail Server: smtp.gmail.com:465
    • Account and Password:
      • User ID: <account_name>@gmail.com
      • Password: <Generated app password from the previous step>
    • Default Sender: <account_name>@gmail.com
    • SSL Encryption Enabled: True
  • Click OK, then save the settings.
  • Right-click “Notification” on the left Explorer menu again, and click on “Test” to check if the connection is working.
  • Even after the SMTP connection has been tested successfully, we have to restart the Cognos service for the change to take effect before Cognos can send reports via email.
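
If the Cognos test fails, it helps to verify the app password outside of Cognos first. Here is a small Python sketch I would use for that; the account name is illustrative, and the port and SSL setting mirror the Cognos configuration above.

  import smtplib
  from email.mime.text import MIMEText

  msg = MIMEText("Test message from the Cognos SMTP account")
  msg["Subject"] = "SMTP test"
  msg["From"] = "account_name@gmail.com"
  msg["To"] = "account_name@gmail.com"

  # Port 465 uses implicit SSL, matching "SSL Encryption Enabled: True"
  with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
      server.login("account_name@gmail.com", "generated-app-password")
      server.send_message(msg)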

How to troubleshoot Maximo JVM Out-of-Memory error with Heap Analyzer?

The OutOfMemory Error

Occasionally, Maximo became unavailable for a short period of 5-10 minutes. Alarms were raised, the IT help desk was called, and the issue got escalated to the Maximo specialist (you). You logged into the server, checked the log file, and found a Java Out-of-Memory (OOM) issue. Not a big deal: the server usually restarted itself and became available again soon after. You reported back to the business and closed the issue. Does that scenario sound familiar to you?

If such an issue has occurred to your system only once, it was probably treated as a simple problem. But since you had to search for a solution on the web and ended up here reading this article, it has probably occurred more than once, and the business requires it to be treated as a critical incident. As the Maximo specialist, you’ll need to dig deeper to report the root cause of the issue and provide a fix to prevent it from occurring again. Analyzing low-level Java issues is not an easy task, and this post describes my process for dealing with them.

WebSphere Dump Files

By default, when an OutOfMemory issue occurs, WebSphere produces a bunch of dump files in the [WAS_HOME]/profiles/<ProfileName>/ folder. These files can include:

  • Javacore.[timestamp].txt: contains high-level details of the JVM when it crashed, which should be the first place to look in a general JVM crash scenario. However, if I already know it is an OutOfMemory issue, I generally ignore this file.
  • Heapdump.[timestamp].phd: this is the dump of the JVM’s heap memory. For an OOM issue, it contains the key data we can analyse to get further details.
  • Core.[timestamp].dmp: this is a native memory dump. I get these because I mostly work with Maximo running on Windows; a different operating system, such as Linux, might produce a different file. I often ignore and delete this file from the server as soon as I find there is no need for it. However, in certain scenarios we can get information from it to help our analysis, as demonstrated in a scenario described later in this article.

IBM Heap Analyzer and Windows Debugger

In general, with an OOM issue, if it is a one-off instance, we’ll want to identify (if possible) what consumed all the JVM memory. If it is a recurring issue, there is likely a memory leak, in which case we’ll need to identify the leak suspects. To analyse the heap dump (PHD file), there are many heap analyzer tools available; I use the Heap Analyzer provided with the IBM Support Assistant Workbench.

To read Windows dump files (DMP files), I use the Windows Debugger tool (WinDbg) that comes with Windows 10. Below are some examples of crashes I had to troubleshoot; hopefully they give you some general ideas on how to deal with such problems.

Case 1 – Server crashed due to loading bad data with MXLoader

A core dump occurred on the Integration JVM of an otherwise stable system. The issue was escalated to me from level 2 support. Using Heap Analyzer, I could see Maximo was trying to load 1.6GB of data into memory, which equals 68% of the allocated heap size for this JVM. There was also a java.lang.StackOverflowError object which consumed 20% of the heap space.

This obviously looked weird, but I couldn’t figure out what the problem was. So I reported it back to the support engineer, together with some information I could find in SystemOut.log: immediately before the crash occurred, the system status looked good (memory consumption was low), and there was a high level of activity by a specific user. The support engineer picked up the phone to talk with the guy, and found the issue was caused by him trying to load some bad data via MXLoader. The solution included some further training on data loading for this user, and some tightening of Maximo’s integration/performance settings.

Figure 1: Analyzer shows a huge 1.6GB SqlServer TDSPacket object

Case 2 – Server crashed due to DbConnectionWatchDog

Several core dumps occurred within a short period. The customer was not aware of the unavailability as the system is load balanced. Nevertheless, alarms were sent to our support team, and it was treated as a critical incident. When the heap dump was opened in Heap Analyzer, it showed a single string buffer and char[] object consuming 40% of the JVM’s heap space.

Figure 2: Analyzer shows a char[] object that consumed 1.6GB of heap space

In this instance, since it was a single string object, I opened the core dump file in WinDbg and viewed the content of this string using the “du” command on the memory address of the char[] object (Figure 3). From the value shown, it looked like a ton of error messages related to DbConnectionWatchDog had been added to this string buffer. It was me who, a few days earlier, had switched on the DbConnWatchDog on this system to troubleshoot some database connection leaks and deadlocks. In this case, Maximo’s out-of-the-box DbConnWatchDog was itself faulty and caused the problem, so I had to switch it off.

Figure 3: Using the du command in WinDbg to show the content of a memory address
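
For reference, the WinDbg command is just du followed by the address of the char[] data; the address below is illustrative, and the L specifier extends the dump beyond the default length.

  0:000> du 000001e5`7a3b2c40 L200

du interprets the memory as Unicode (UTF-16) characters, which is also how the JVM stores char[] contents, so the string is readable as-is.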

Case 3 – Server crashed due to memory leak

A system consistently threw OutOfMemory errors and core dumped on its two UI JVMs every 2-3 weeks. Heap Analyzer almost always showed a leak suspect with links to a WebClientSessions object. The log file also showed an unusually high number of WebClientSessions created compared to the number of logged-in users. We knew this customer has a group of users who always open multiple browser tabs to use many Maximo screens at the same time, but that should not create such a disproportionately high number of WebClientSessions. Anyhow, we could not find out what caused it.

Figure 4: Memory leak suspect links to a WebClientSessionFactory object

During the whole time troubleshooting the issue, we maintained a channel with the IBM support team to seek additional help. At their suggestion, we switched on various log settings to monitor the issue. The UI logging confirmed that WebClientSessions were always created when a user logged in, but never disposed. In other words, the total number of WebClientSessions kept growing, and after a period of use it would consume all the JVM heap space and cause the OutOfMemory crash.

Some frantic, random searching led me to an article by Chon Neth, author of the MaximoTimes blog, mentioning that the memory-to-memory replication setting in WebSphere could cause similar behaviour. I quickly checked and confirmed this setting was enabled in this system. Memory-to-memory replication is a high-availability feature of WebSphere, but it is not supported by Maximo. So we turned it off, and the problem disappeared.

Figure 5: SystemOut.log showed a high number of WebClientSessions vs. number of logged in users

Conclusion

Identifying the root cause of a JVM Out-of-Memory issue is not always straightforward, and most of the time, finding it involves a fair amount of luck. But with the right tools and approaches, and close coordination with internal and external teams, we can improve our chances of solving the problem. I hope that sharing my approach helps some of you out there when dealing with such issues.

How to generate DBC Script for Maximo? (2023)

This post includes some of my notes on using DBC for the deployment of Maximo configuration. In case you wonder why DBC: the short answer is, if you’re happy with whatever method you’re currently using to deploy configuration, whether manual or using Migration Manager, ignore this post. But if you’re looking for a way to streamline the development process for a large team by collaborating and source-controlling with Git, or if you want to fully automate the deployment process, DBC is the way to go.

IBM has been using DBC scripts for a long time, but only recently did they publish a reference guide so that third-party consultants like us can use them. DBC script can be used to automate most of the common configurations for Maximo. It has standard commands to create/modify common low-level objects like tables, indexes, domains etc. For the many other configurations that don’t have a specific DBC command, we can still handle the deployment by using the <freeform> or <insert> statement to put anything into the Maximo DB. Below are some specific notes on certain types of changes:

DB Configuration and System Objects

Operations to add/modify many low-level objects like tables, views, maxvars… are available as DBC commands. However, manually writing all the scripts can be laborious. We can instead make the changes from Maximo’s front end, then generate a DBC script for them using the ScriptBuilder.bat tool (found under tools/maximo/internal). Simply add the objects you want to generate a script for, then choose File > Generate Script. The script file will be created in the same folder:

Application Design

The standard method of exporting/importing XML files using App Designer is simple enough and suitable for version control. However, if we want to fully automate the deployment process (for CI/CD), we can export the changes to a DBC script using the mxdiff.bat tool (found under tools/maximo/screen-upgrade). For example, if we add a new column to the List tab of the Work Order Tracking app, we can export the XML files of the before and after versions of the app, copy the two files into the screen-upgrade folder, and execute this command:

mxdiff.bat -bWOTRACKOLD.XML -mWOTRACKNEW.XML -t001.mxs

It will produce the script as shown in the image below. (Do note that the extension for changes in app layout design should be .mxs instead of .dbc)

Automation Script

For simple manual deployment, I still prefer the Import/Export function as it is very convenient. Note that the permission to see the Import/Export buttons is not granted by default; you have to grant it to the MAXADMIN security group first.

However, if we need to generate DBC for automated deployment, we can use the following approach. First, create an automation script called GENDBC with the source code below:

Now, whenever we need to generate a DBC file for an automation script, execute the GENDBC tool above by calling it from a browser:

https://[MAXIMO_HOST]/maximo/oslc/script/gendbc?source=SCRIPT&name=[YOUR_SCRIPT_NAME]

The output DBC file will be created in the /script folder, under your Integration Global Directory (specified in the mxe.int.globaldir system property)
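
For example, the tool can be invoked with curl, as in the sketch below; the host, credentials, and script name MYSCRIPT are illustrative.

  curl -u maxadmin:maxadmin "https://maximo.example.com/maximo/oslc/script/gendbc?source=SCRIPT&name=MYSCRIPT"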

Note: I recently found out this approach doesn’t work with an Oracle database. It gave me the error below. In the project I worked on, we used a tool created by someone else which I can’t share here. If you’re using Oracle, you can try the tool created by Jason @ Sharptree.

Invalid column type: getString/getNString not implemented for class oracle.jdbc.driver.T4CBlobAccessor

Integration Artifacts

To generate DBC script for integration artifacts such as Object Structure, JSONMAP, Publish Channel etc., we can also use the GENDBC tool mentioned above. For example:

  • To extract Object Structure, run the script with the following parameters:
https://[MAXIMO_HOST]/maximo/oslc/script/gendbc?source=MEA&name=[OBJECT_STRUCTURE_NAME]
  • To extract Publish Channel:
https://[MAXIMO_HOST]/maximo/oslc/script/gendbc?source=MEA&pubChannel=[PUBLISH_CHANNEL_NAME]
  • To extract Enterprise Service:
https://[MAXIMO_HOST]/maximo/oslc/script/gendbc?source=MEA&entService=[ENT_SERVICE_NAME]
  • To extract JSON Mapping:
https://[MAXIMO_HOST]/maximo/oslc/script/gendbc?source=JSONMAP&name=[JSONMAP_NAME]

The output files for Object Structure, Publish Channel, and Enterprise Service will be in the [GlobalDir]/mea folder, and the output for JSONMAP will be in the [GlobalDir]/jsonmap folder.

Other configurations

For many other configurations such as escalations, messages, workflows etc., there is no standard DBC command to create or modify the objects. However, all such configurations are stored inside Maximo’s database, and if we can export and then import the correct data to the target environment, it works well (some objects will require a Maximo restart to refresh the cache). The easiest method is to use the geninsertdbc.bat tool: we simply give it a table name and a WHERE clause, and it will generate the data found as DBC insert statements.

For example, to export all rows of the table MAXINTOBJECT for the object structure ZZWO, we can run the command below:

geninsertdbc.bat -tMAXINTOBJECT -w"INTOBJECTNAME='ZZWO'" -fOUTPUT

The output file will look like this:

Note: This tool has one problem. It generates NULL values as empty strings, which can cause errors in logic that requires the value to be NULL, such as mbo.isNull(“FieldName”). I found it worked most of the time for me, but it did cause me some headaches in a few instances. To fix it, we can delete these lines from the generated DBC script or add another UPDATE SQL statement to correct it. I now use this tool only for simple configurations; for more complex configuration data, I use Oracle SQL Developer or SQL Server Management Studio to generate the INSERT statements instead.
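
As an illustration of the SQL fix, a corrective statement like the sketch below could be run after the insert (the table is from the example above, but the column is illustrative):

  UPDATE MAXINTOBJECT SET DESCRIPTION = NULL WHERE DESCRIPTION = '';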

The main tables that contain the configuration for some common objects are listed below:

  • Escalation: ESCALATION, ESCREFPOINT
  • Cron Task: CRONTASKDEF, CRONTASKINSTANCE
  • Workflow: WFPROCESS, WFNODE, WFASSIGNMENT
  • Saved Query: QUERY
  • Start Center Template: SCTEMPLATE

Note: for Start Centers and result sets to be displayed correctly, other dependent objects need to be migrated as well, such as object structures, security permissions etc.

Send email from automation script

Simple stuff, but a few people have asked me this same question, so here is how to create an automation script that sends email from Maximo:

1 – Create a Communication Template:

  • Template ID: MY_COMM_TEMPLATE
  • Description: Test Communication Template
  • Applies To: ASSET
  • Send From: <your_email@address> (Note: to make this work, you must have SMTP set up and be able to send email from Maximo first)
  • Subject and message: as shown below
  • In the “Recipients” tab, add an Email recipient pointing to your own email:

2 – Create an Automation Script

Create an autoscript with an Object Launch Point on the “ASSET” object for the SAVE (Update) event, choose Language = Python, and copy/paste the following sample script:
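
(The original sample is a screenshot, so here is a minimal sketch of an equivalent script. It assumes the communication template from step 1; the status check is illustrative.)

  from psdi.server import MXServer

  # Only send when the asset has been set to INACTIVE (illustrative condition)
  if mbo.getString("STATUS") == "INACTIVE":
      ctSet = MXServer.getMXServer().getMboSet("COMMTEMPLATE", mbo.getUserInfo())
      ctSet.setWhere("TEMPLATEID = 'MY_COMM_TEMPLATE'")
      ct = ctSet.moveFirst()
      if ct is not None:
          # sendMessage builds the email from the template and this asset record
          ct.sendMessage(mbo, mbo)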

3 – Test sending email

Open an asset and change its status to INACTIVE; you should receive a notification in your email inbox:
