
How to modify (almost) any Maximo data with no database access

As Maximo consultants, we often walk into an engagement where the client gives us MAXADMIN access to the system, but access to the database is an absolute no-no. This is usually the case with companies that have a clear separation between the App Admin and DB Admin roles. It is also one of the key restrictions of Maximo as a Service.
 
If you have done a bit of admin and config work, you will surely understand the limitation of having no database access. It is like working with your hands tied. Luckily, we can use MXLoader to query and update almost any data table in Maximo. Below is an example of how to do it.
 
Let's say we're working with Maximo SaaS, and IBM only gives us front-end admin access with no access to the back-end. Sometimes Maximo gets stuck while applying a DB config change, and we cannot turn off Admin Mode. This is usually not a big problem when we have full DB access: we simply update the CONFIGURING flag in the MAXVARS table to 0. However, with Maximo on the cloud, the standard procedure is to raise a support ticket asking IBM to update the flag, and that can take a few days to resolve. It's a horrible situation, since we cannot do anything while Maximo is in Admin Mode.
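(For reference, with direct database access the fix is a single statement along the lines of UPDATE MAXVARS SET VARVALUE = '0' WHERE VARNAME = 'CONFIGURING'.)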
 
Without DB access, we can update the MAXVARS table with MXLoader using the steps below:
 
1 – Create an Object Structure for the MAXVARS table:
  • OS Name: DBMAXVARS
  • Consumed By: Integration
  • Add a new row, set Object Name: MAXVARS
 
2 – In MXLoader, add a new 'People' sheet:
 
 
 
With the new 'People' sheet added, replace the 'MXPERSON' object structure with 'DBMAXVARS', and the 'PERSON' object with the 'MAXVARS' object. To identify which columns are available in the MAXVARS table, we can open the 'DBMAXVARS' object structure and open the 'Exclude/Include Fields' dialog from the Select Action menu. In this case, I only need two fields: VARNAME and VARVALUE.
 
3 – Set the operation method to 'Query' and the where clause to VARNAME = 'CONFIGURING', then execute the query by clicking the 'Run' button to retrieve the record.
 
 
 
4 – Once we have the record for the 'CONFIGURING' variable, set VARVALUE to 0, change the operation method to 'Sync-Change', and execute the operation. We will now be able to turn off Admin Mode.
 
 
 
Some other examples of using MXLoader to update system tables include:
 
  • MAXMENU – To move/add an app to a different module menu
  • APPFIELDDEFAULTS – To set a default value when inserting new
    record
  • MAXSESSION – To release user sessions stuck in Admin Mode that prevent the user from logging in
  • MAXDOMAIN – To create a new Synonym Domain

In case we do not have permission to use the Object Structures application, another method is to create a new single-page application with Application Designer to expose the object's data on the GUI. It achieves the same result and only takes a few minutes to do. However, I much prefer MXLoader, as we can use it to bulk insert/update normal application data, and it doesn't leave behind a bunch of junk apps that are quite difficult to remove.
 
The policy of restricting DB access is usually there for a good reason, and MXLoader is an extremely powerful tool: it makes it all too easy to update the wrong data or the wrong environment. So please be very careful when using it, and always keep a backup of the before and after versions of the data set you work on.

[Update 2023]: An easier and more powerful way to achieve this task is to use an API automation script.

OutOfMemory error when deploying Maximo in Websphere

For every client I work with, I keep a copy of the SMP folder if I can. Whenever I need a standard demo instance with the same configuration as the client's, I simply deploy it on my local Websphere and Oracle DB.

My current Websphere version is 8.5.5.3, and I had problems whenever I deployed a Maximo 7.6 instance with lots of add-ons. The last few times, I gave up the attempt as I didn't really need it.

Recently, as I was working with Maximo Flex (on the cloud), a local instance became necessary, because Maximo on the cloud has lots of limitations on what you can do. So when I hit the same problem deploying Maximo, I had to investigate and fix the issue.

OutOfMemory when deploying big Maximo.ear file to Websphere

It turned out to be an OutOfMemory issue during the deployment process, and it took me quite a bit of time to fix, so I'm leaving a note here as I'm sure many others will have similar problems.

  • PROBLEM: when deploying Maximo 7.6 with many add-ons, Websphere either keeps running indefinitely or stops with a failure.
  • TROUBLESHOOT: if it takes more than an hour, I restart the system and re-deploy Maximo. In many cases, it then immediately returns a failure. If it returns a failure, I look at the SystemOut.log file under [Websphere Home]/AppServer/profiles/ctgDmgr01/logs/dmgr/
  • SOLUTION: if the problem is an OutOfMemory error, it can be tricky as there are many different processes in Websphere using Java. I ended up employing the spray and pray approach:
    • Increase the heap size for the WAS admin console: edit wsadmin.bat under [Websphere Home]/AppServer/bin and update this parameter: PERFJAVAOPTION=-Xms512m -Xmx1024m -Xquickstart
    • Increase the application server heap size: in the Websphere admin console, go to Servers (left side menu) -> Server Types -> WebSphere Application Servers -> [MXServer] -> Java Process Management (link on the right side) -> Process Definition -> Java Virtual Machine. Set both Initial and Maximum heap size to 2048 MB.
    • Increase the deployment manager heap size: in the Websphere admin console, go to System Administration (left side menu) -> Deployment manager -> Java and Process Management -> Process Definition -> Java Virtual Machine. Set both Initial and Maximum heap size to 2048 MB.
    • Increase the heap for the EJB deploy tool: edit ejbdeploy.bat under [Websphere Home]/AppServer/deploytool/itp and update this parameter: EJBDEPLOY_JVM_HEAP=-Xms512m -Xmx1024m

With these updates, I managed to deploy the EAR file. But Maximo and Websphere alone consume 6GB of my laptop's memory, so I had to remind myself to reduce the settings back to around 1024MB each to reclaim some valuable real estate.

Solving problems with the UpdateDB process when installing or upgrading Maximo

When upgrading Maximo or installing new add-ons or fix packs, new source files are copied to the SMP folder, including Java classes and DBC (database configuration) script files. After that, the installer runs the UpdateDB process to update the database, runs the BuildMaximoEar process to build the EAR file, and then deploys the EAR file to Websphere.
 
The DBC script files contain incremental changes to the Maximo database: they modify the GUI, update data, and alter DB configuration objects. Most of the problems you get when installing fix packs or upgrading Maximo come from the UpdateDB process, which executes these DBC files in a set order.

Thus, if you have a problem with this process, you can follow the general steps below to troubleshoot and fix it:

 
  • When you have an error with the UpdateDB process, note down the name of the log file that contains the error message. It has this format: [Process Name][YYYYMMDDhhmiss].log
  • If the problem occurred while running the installation program and you don't know the log file name, go to the [SMP]/maximo/tools/maximo folder and run the updatedb.bat (.sh) tool again. It will produce the same error.
  • Open the log file, located under the [SMP]/maximo/tools/maximo/log folder, to get the detailed error message.
  • Fix the problem, then run the UpdateDB tool again from the command line. It will continue from the last success point.
 
The following is an example of a problem I ran into when trying to create a demo database from an SMP copy I had received from my client just a few hours earlier. When installing a demo instance by executing maxinst.bat, Maximo creates a standard initial database by running a bunch of SQL statements from [SMP]/maximo/tools/maximo/en/maxdemo.ora for Oracle (or maxdemo.db2 and maxdemo.sqs for DB2 and SQL Server respectively). Then it calls the UpdateDB process to apply incremental changes to this database for the add-ons and fix packs already installed in the SMP folder.
 
During this process, I got the following error:
 
 
To get more details about the error, I located the Updatedb20180712173523.log file in the folder [SMP]/maximo/tools/maximo/log. It gives more details about the problem:
 
 
The problem occurred while it tried to execute the V7110_03 script file for the Oil & Gas add-on. In this case, scrolling up several lines quickly reveals the cause: an ORA-01000 'maximum open cursors exceeded' error. Basically, I forgot to set the open_cursors parameter to 1000 as recommended by IBM, leaving it at the default value of 300 when creating the new database. All I have to do in this case is increase it to 1000, then run the UpdateDB tool again. With a real production database, sometimes I have to increase it to 5000 or 10000 in order to apply some fix packs.
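(On Oracle, for example, the parameter can be raised with a statement like ALTER SYSTEM SET open_cursors = 1000 SCOPE=BOTH, assuming the instance uses an spfile.)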
 
In many cases, the problem cannot be identified by just looking at this log file, so we have to scroll up a bit further to identify which statement is causing the issue. Then we will have to open the DBC script file, which can be found under [SMP]/maximo/tools/maximo/en/[add-on folder]/[dbc script file]. In this case, it is …/en/oilandgas/V7110_03.ora
 
 
 
There are a number of script file types:
 
  • Files with .ora, .sqs, and .db2 extensions are normal SQL files for Oracle, SQL Server, and DB2 respectively. You can run the SQL statements from these files directly against the database to check for problems.
  • Files with the .msg extension are used to add or modify messages. You normally don't have a problem with these.
  • Files with .mxs and .dbc extensions are used to modify application GUI design and to apply data updates or database configuration changes.
 
Most of the problems come from .dbc files. To understand DBC scripts and identify what each statement does, you can refer to the DBC XML Format Technical Reference.
 
Some common problems we have when running UpdateDB include:
 
  • Error when creating a unique index: in this case, we have to query and update the data to remove or fix records with duplicate keys.
  • Error when creating existing objects: most of the time, we can drop the existing object, as a new one will be created by the script anyway.
  • Missing script files, or script files that don't follow the naming convention, which requires an ordered number. If a script file is missing, you can create an empty file by copying an existing dbc file and renaming it to fill the missing number. As for files that don't follow the naming convention: I got a java.lang.ClassCastException a few times when running UpdateDB with the Linear add-on; it turned out there is a file named V7500_linear.dbc where UpdateDB expects a number after V7500, so all I had to do was rename it to V7500_02.dbc and it didn't complain any more.
  • DB operation fails due to maximum open_cursors or processes exceeded (in Oracle): increase the open_cursors and processes parameters and run UpdateDB again.
  • Java NullPointerException: this one is quite difficult to figure out. We have to look at the DBC file to understand what it is trying to do. This is often caused by bad data, such as orphan records referencing a non-existent parent.
  • Errors with certain DB operations, such as being unable to re-create the Workorder table after dropping it (when there's a change to the table structure): I'm not exactly sure what causes this, but I found the problem goes away if I run the UpdateDB process again (and again). I only attempt a fix if the problem persists after the third run.
 
I hope this helps you when the going gets tough.
 

How to bulk upload images via the Integration Framework?

In the previous post, I provided an example of how we can customise an Object Structure to enable import/export of binary data via the MIF. That approach required Java customisation. From Maximo version 7.6, the automation scripting framework has been greatly extended to support integration, so we can now enable import/export of binary data with a simple automation script. Below is an example of how we can configure Maximo 7.6 to bulk upload images to the Item Master application:

Step 1: Add a new Object Structure

  • In the Object Structures application, open and duplicate the MXITEM object structure to create a new one. Name it MXLITEM.
  • Under “Source Objects for MXITEM”, delete all child objects, and add a new child object IMGLIB as shown below

Step 2: Add an Integration Automation Script

  • In the Automation Script application, choose Action > Create > Script for Integration
  • In the pop-up, enter the following details:
    • Select "Object Structure", and choose "MXLITEM" for Object Structure
    • Select "Inbound Processing"
    • Language: Python
    • Copy/paste the piece of code below into the Source Code area.

Step 3: Upload images to Item Master

Download the Excel VBA tool from this GitHub repo and use it to bulk upload images to Item Master:

  • Update the Settings sheet with the URL and the username/password to connect to Maximo
  • Put all the image files in the same folder as the Excel file
  • Update the Data sheet with the list of item numbers and the corresponding image file names.
  • Click on “Upload Images”

Notes & Updates

Note 1: Be careful with photos taken with newer cameras; the files usually have high resolution and can be quite big. I've seen an eager team of engineers upload images for all the assets and inventory items in a power plant, and overnight the database grew from 1GB to 20GB. So make sure you resize the images before uploading.

One quick and simple method in Windows is to select multiple files, right-click, and choose Send To > Mail Recipient. Windows will show a pop-up offering to resize the files; choose the smallest size (640×480). Windows will resize the files and attach them to an Outlook message; in Outlook, select all the files and copy/paste them to a different folder. These files will be much smaller than the original, full-resolution versions.

Update Aug/2023: I have posted a new article on how to automatically rescale images to a smaller size.

Creating a high-performance service using MaximoCache

Sometimes we need to build custom services that run when Maximo starts. We can extend the psdi.server.AppService class and register it with MXServer by inserting a new entry into the MAXSERVICE table. If the service executes slow-running queries, it is a good idea to cache the data in memory to improve performance. We can implement the MaximoCache interface for this purpose: the service is initialized when MXServer starts, and all the data it requires is pre-loaded into JVM memory. When the service is called, it only uses cached data, providing an instant response and a much better user experience.
 
Below are the steps to create a sample service that loads all Locations' descriptions into memory. The service provides a function to check whether an input string matches a location's description. We will call this check when the user enters an Item's description, and it will throw an error whenever the input matches the description of any existing Location. This is not a very realistic use case, but for the sake of simplicity, I hope it gives you an idea of how this can be implemented.

1. Create a LocationCache class implementing the psdi.mbo.MaximoCache interface (a sketch follows this list):
  • For the getName() function, return a name for the cache, such as "LOCATION_CACHE"
  • Create a loadData() method to query and load all locations' descriptions into a HashSet
  • Call the loadData() method in the init(), reload(), and reload(String) functions overridden from the MaximoCache interface
  • Create a function isLocationExist(String) to check whether an input string matches a description contained in the HashSet

2. Create a LocationCheck class extending the psdi.server.AppService class. Override the init() method to initialize a new LocationCache object and add it to the MXServer's cache, as in the sketch below.
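A sketch of the service class; addToMaximoCache(...) is the registration call I believe psdi.server.MXServer provides, but verify it against your version's API:

    import java.rmi.RemoteException;

    import psdi.server.AppService;
    import psdi.server.MXServer;

    public class LocationCheck extends AppService {

        public LocationCheck() throws RemoteException {
            super();
        }

        // Called by MXServer when the service starts
        public void init() {
            try {
                super.init();
                LocationCache cache = new LocationCache();
                cache.init(); // pre-load all location descriptions
                MXServer.getMXServer().addToMaximoCache(cache); // assumed registration method
            } catch (Exception e) {
                // in real code, log and handle the failure properly
            }
        }
    }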

 
3. Insert a new entry into the MAXSERVICE table to register the service with MXServer so that it is initialized when Maximo starts. A sample SQL statement to insert the record follows.
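This sample assumes a 7.6-style schema; the class name is mine, and you should verify the MAXSERVICE column list against your own system before running it:

    INSERT INTO MAXSERVICE (SERVICENAME, DESCRIPTION, CLASSNAME, SINGLETON, LOADSEQUENCE, MAXSERVICEID)
    SELECT 'LOCATIONCHECK', 'Location description check service', 'cust.app.service.LocationCheck', 1, 1000, MAX(MAXSERVICEID) + 1
    FROM MAXSERVICE;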

4. Extend the LocationSet class to override the fireEventsAfterDBCommit() method so that it reloads the cache whenever a change to location data is committed. This ensures that the information stored in the cache is always in sync with the latest update.
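A sketch of the extended set class; the constructor and fireEventsAfterDBCommit() signatures follow the usual 7.6 patterns, but verify them against your version:

    import java.rmi.RemoteException;

    import psdi.mbo.MboServerInterface;
    import psdi.util.MXException;

    public class LocationSet extends psdi.app.location.LocationSet {

        public LocationSet(MboServerInterface ms) throws RemoteException {
            super(ms);
        }

        // Reload the cache after location changes are committed,
        // so the cached descriptions always match the database
        public void fireEventsAfterDBCommit() throws MXException, RemoteException {
            super.fireEventsAfterDBCommit();
            LocationCache cache = LocationCache.getInstance();
            if (cache != null) {
                cache.reload();
            }
        }
    }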

5. Create a FldDescription class overriding the validate() method, and associate it with the Item.Description field. In the validate() method, we check whether the description entered matches a location's description stored in the cache, and throw an error dialog if there is a match.
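A sketch of the field validation class; the 'item'/'descmatcheslocation' message group and key are hypothetical and must exist in your message catalogue:

    import java.rmi.RemoteException;

    import psdi.mbo.MboValue;
    import psdi.mbo.MboValueAdapter;
    import psdi.util.MXApplicationException;
    import psdi.util.MXException;

    public class FldDescription extends MboValueAdapter {

        public FldDescription(MboValue mbv) {
            super(mbv);
        }

        public void validate() throws MXException, RemoteException {
            super.validate();
            String desc = getMboValue().getString();
            LocationCache cache = LocationCache.getInstance();
            if (cache != null && cache.isLocationExist(desc)) {
                // hypothetical message key - define it in the Database Configuration messages
                throw new MXApplicationException("item", "descmatcheslocation");
            }
        }
    }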

You can view or download the full sample code from this GitHub repository: link

After deploying the code, associating the LocationSet class with the Location object, and associating the FldDescription class with the Description field of the Item object, test it: open an Item and enter a description that matches the description of a location. It should throw an error message as in the image below:

To check that the cache is reloaded properly, create a new location or update an existing one, then enter that location's description as an item's description; it should also throw the error message.

MboSet performance, Memory Cache, and DB Call

 

I recently had to look at ways to improve the performance of a custom-built operation in Maximo. Essentially, it is one of the many validation operations that take place after a user uploads a PO with a few hundred thousand lines. The team here had already built a high-performance engine to handle the process, but even with it, this particular operation still took around 15-17 milliseconds per line, which was too slow for their standard. Imagine processing 200,000 PO lines: it would take nearly an hour for this operation alone. With a few dozen such operations to execute, plus standard Maximo operations like status changes or saves, the whole process takes many hours.
 
At the millisecond scale, many assumptions and standard recommendations for improving performance no longer hold; in some instances, following them actually makes things slower. For example, many Maximo blogs suggest setting a MboSet to DISCARDABLE to reduce memory usage, but if the MboSet is re-used repeatedly, it will have to query the DB again and again, which can be costly. In such a case, not setting the MboSet to DISCARDABLE allows Maximo to cache the data in memory, which is faster.
 
Usually, the database query is the most expensive part of an operation, and in Maximo that revolves around the getMboSet method. A better insight into how it works helps us devise the best strategy for writing high-performing code. In this post, I'll list the findings of some experiments I did around MboSet; hopefully they will save you some time if you have to deal with the same problem.

1. mboSet.count() versus getSize() versus isEmpty()
 
If we retrieve a MboSet via a relationship, such as itemSet = asset.getMboSet("ALL_ITEM"), and we need to check whether the result set is empty, we can use itemSet.count(), .getSize(), or .isEmpty(). The differences between them are:
  • count() will fire off a "Select count(*) From ITEM Where…" query; isEmpty() will fire off a "Select * From ITEM Where…" query.
  • getSize() never fires a DB query. It only returns the size of the mboSet as initialised in memory. Thus, right after itemSet = asset.getMboSet("ALL_ITEM"), calling cnt = itemSet.getSize() gives cnt = 0. After itemSet.moveFirst() is called, getSize() returns cnt = 2 (because Maximo initialises one object ahead), and after we have looped through the whole itemSet using itemSet.moveNext(), itemSet.getSize() returns the actual size of the whole set.
  • isEmpty(), like the moveFirst(), moveNext(), and getMbo(i) functions, will fire a "Select * From ITEM Where…" query if the itemSet has never been initialised. But once the mboSet is initialised, subsequent calls to those functions just access the objects in memory and don't trigger another DB query (see the sketch below).
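A quick sketch of that behaviour, with the query (if any) triggered by each call noted in the comments. This is a fragment, assuming a method that declares throws MXException, RemoteException:

    MboSetRemote itemSet = asset.getMboSet("ALL_ITEM"); // no DB query yet

    itemSet.getSize();   // 0 - nothing initialised in memory, no DB query
    itemSet.count();     // fires "select count(*) from item where ..."
    itemSet.moveFirst(); // fires "select * from item where ..." and initialises the set
    itemSet.getSize();   // 2 - the current object plus the one fetched ahead

    // loop through the whole set
    for (MboRemote item = itemSet.moveFirst(); item != null; item = itemSet.moveNext()) {
        // ... process item ...
    }
    itemSet.getSize();   // now returns the actual size of the whole set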

Consider a piece of code along these lines:
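    // a representative loop, calling count() as the loop condition
    for (int i = 0; i < mboSet.count(); i++) {
        MboRemote mbo = mboSet.getMbo(i);
        // ... process mbo ...
    }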

With this code, the mboSet.count() method fires off a "Select count(*)" query. Then, when the getMbo() method is called for the first time, it fires off a "Select * From…" query to initialise the mboSet. Although the Select count(*) costs much less inside the DB, from the Maximo JVM on a small table it takes about the same amount of time (a few milliseconds) to execute mboSet.count() as mboSet.getMbo(i). As such, the piece of code above takes about twice as long to execute as a loop that doesn't use the count() method. This is consistent with the recommendation by Bruno in his blog.

However, considering a scenario when our logic only needs to check if a result set returned is empty or not, with a massive table, calling count() can be much faster than calling isEmpty().

2. Retrieving a MboSet via an MXServer object versus via a relationship

When we retrieve a mboSet via a relationship, the data is cached in Maximo's JVM memory. If the same relationship on the same mbo object is used again, it doesn't fire another DB query, which improves performance significantly. However, if we retrieve the mboSet via mxserver.getMboSet() instead, it will query the DB every time, which can hurt performance when the same mboSet could have been re-used (i.e. when there is no change to the underlying query). In that case, it is advisable to retrieve the data via a relationship. Be aware, though, that if the same relationship is used with a different where clause, Maximo has to call the DB to execute a new query, so there is no performance gain.
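In code, the two retrieval paths look like this (a fragment, assuming a method that declares throws MXException, RemoteException; "ASSET_ITEM" stands in for any relationship name):

    // via MXServer - fires a DB query on every call
    MboSetRemote itemSet1 = MXServer.getMXServer().getMboSet("ITEM", asset.getUserInfo());

    // via relationship - cached on the Mbo, so reusing the same relationship
    // (with the same where clause) returns in-memory data without a new query
    MboSetRemote itemSet2 = asset.getMboSet("ASSET_ITEM");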

That said, you may not want to define a permanent relationship on the object for every specific operation you need. In this case, we can define a temporary relationship in our code, as in the following sample:

itemSet = asset.getMboSet("$TEMP_RELATIONSHIP", "ITEM", "status = 'ACTIVE'")

This will add a new temporary relationship during runtime, and as long as the where clause doesn’t change, the itemSet data is cached and thus the next time you call the getMboSet using this temporary relationship, data will be accessed from memory instead of being queried from the DB.

Sometimes you will want to shorten your code using dot notation in the mbo.getString(), getInt(), or getBoolean() methods. Here too, Maximo tries to use cached data instead of querying the database. Consider a piece of code along these lines:
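    // a representative snippet - dot notation traverses the relationship in one call
    if ("ACTIVE".equals(asset.getString("ASSET_ITEM.STATUS"))) {
        desc = asset.getString("ASSET_ITEM.DESCRIPTION");
        orderunit = asset.getString("ASSET_ITEM.ORDERUNIT");
    }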

The first getString statement initialises the mboSet retrieved by the "ASSET_ITEM" relationship, so the subsequent getString statements don't query the DB again. In terms of performance, it is therefore similar to the following piece of code:

item = asset.getMboSet("ASSET_ITEM").getMbo(0);
if ("ACTIVE".equals(item.getString("STATUS"))) {
    desc = item.getString("DESCRIPTION");
    orderunit = item.getString("ORDERUNIT");
}

Using a relationship helps cache the data in memory, but if the data is only used once, all it does is increase memory usage, which can degrade performance rather than improve it. So consider whether the data will be re-used; if not, setting the mboSet to DISCARDABLE ensures the memory is freed after use. My recommendation for dealing with performance issues is to always run your own tests and make sure the right strategy is used for your scenario.
