Category: Maximo

The performance effect of the Download button in Maximo

Introduction

If you have been working with Maximo for a while, you already know about the Download button on the top left of every table in Maximo. With one click, it exports everything displayed in the table into an Excel file, which is great for further data analysis and reporting. It is so simple and convenient, right? Not quite.

Performance Degradation

The danger with the Download button is that, because it is so convenient, everyone uses it for every data requirement. One frequent problem is users asking us to add more columns to the List tab of key applications like Work Order Tracking. Most of the time, the Maximo administrator will comply with such requests in a blink (another side effect of Maximo being so easy to customize). Often, many of those columns are retrieved via relationships. One additional column usually does not make much difference, but with a high number of records and growing user activity, it creates a snowball effect that degrades overall system performance, and people start complaining that Maximo is slow.

The real problem with the Download button, however, is that by default there is no limit set for it. When Maximo is first implemented, it usually works great. After several years, the amount of data grows, people start using this method to download data for various reporting requirements, and the Download button begins to significantly affect system performance.

Measure the impact

For a client I recently worked with, many users frequently used the Download button to extract large amounts of data (e.g. all work orders in a year) to create their own reports in Excel. This put a tremendous amount of stress on the servers.

First, we must realize that the output of this “Download” function is actually an XML file (although the extension is XLS), and Maximo consumes a lot of processing power and memory to generate it. To understand how this affects the server, I did a small test by setting up a copy of the client’s database on a local VM. I opened the Work Order Tracking application and tried to download all active work orders (15k records). It took around 15 minutes to generate and download the file. Then I tried to download all work orders reported in the last year (both closed and active), which took 50 minutes. During the whole process, the VM’s CPU and memory utilization was saturated.

CPU and Memory usage while Maximo is processing the download request
CPU and Memory usage after the processing completed

Crashing the server

To test the worst-case scenario, I opened the SR application, clicked on “All Records”, then clicked on the Download button to download all 600 thousand records. Users can easily make this “mistake”, and once they have clicked on the Download button, there is no option for them to cancel the process.

At first, the process saturated CPU and memory utilization for more than an hour; after that, the session expired. However, in the background, the SQL Server process continued running, consuming 30-40% CPU for the rest of the day. I left it running for about 6-7 hours until I got fed up and had to restart SQL Server to kill the process. Theoretically, since I was the only user in the system, when the process reached the maximum JVM heap size, the Garbage Collector would try to clean up the MBOs it had already processed and free up some memory.

However, in a production environment, as happened to our client, if the server load is high, the Garbage Collector sometimes cannot free up memory quickly enough, resulting in an OutOfMemory error that crashes the server.

What can we do about this problem?

To reduce the impact of the Download function, we should limit the number of records a user can download by setting a value for the webclient.maxdownloadrows system property. There are already some IBM tech notes that talk about it.

The next question is, once we have set a limit, what alternatives do users have to download the data they need? I can think of a few, such as building a simple BIRT report, which allows users to choose XLS as the output format, or setting up the Application Export function with a flat-file output format. My favourite option, though, is the “Create Report” function. By default, when we create and run a report in “Preview” mode, it exports the same columns displayed on the List tab of the application, and we can then use “Export Data” from that report. The process takes a few clicks, but processing time is usually less than one minute instead of 10 or 20 minutes. That’s a quick win. Also, once users get used to it, they can extract any data they like, which means less work for the Maximo administrator.

How to generate INSERT script for Maximo data with SQL Server?

When dealing with a large amount of data or migrating workflow configuration, we might need to generate and use INSERT script. This article describes how to do it with SQL Server database.

Why do we need to generate INSERT script?

When deploying changes, the preferred method is DBC script. There are different tools to generate DBC scripts for many types of configurations. However, in certain cases, such as when migrating workflow configuration, we still need to use SQL script.

For deploying a data package, it is good practice to script the data and deploy it as part of the DBC deployment process. It makes it easier to track the changes with a version control system such as Git or SVN.

As an example, in a recent project, the users used Maximo in DEV to enter and validate the data. After they finished, I extracted the data using the method described in this post and loaded it to Production as part of the DBC deployment process.

Did you know that the UpdateDB or RunScriptFile tool can deploy SQL files directly? It is not necessary to wrap SQL statements in a DBC file with the FREEFORM tag.

What is the limitation with SQL Server Management Studio?

With an Oracle database, generating INSERT statements is quite simple with SQL Developer: all we need to do is right-click on a table, then choose Export Data > Insert. With SQL Server, however, a similar function is buried deep in the menus and is hard to find. There are at least two other features with a similar purpose in the same menu, which causes confusion and makes people think this is not possible with SQL Server. These functions don’t work for this requirement. They are:

  • Right-click on the table, then choose Script Table As > INSERT to
  • Right-click on the database, then choose Tasks > Export Data

For this requirement, we need to right-click on the database (not table), then choose Tasks > Generate Scripts…

However, this approach has some limitations:

  • It does not let you define a WHERE clause; you have to generate the script for the whole table.
  • With Maximo, all tables have a ROWSTAMP column, which you need to exclude. Otherwise, the generated INSERT script does not work.

How to generate INSERT Script?

To address the limitations described above, we can create a VIEW or a staging TABLE from the source table. This allows us to set a WHERE clause to filter the data we want to extract and to exclude the ROWSTAMP column. A VIEW is more convenient in a development project because we can define rules to manipulate the data, such as shifting the ID fields to a new range to avoid conflicts. However, note that older versions of SQL Server have a bug where the wizard cannot generate scripts correctly in Data Only mode from a view; in that case, we can use a staging table instead.

To provide an example, below are the detailed steps to generate INSERT script for all “Fire Extinguisher” records in the ASSET table:

Step 1: Create a temporary table:

  • Use one of the two SQL statements below to get a comma-separated list of column names from the table. The first query reads INFORMATION_SCHEMA and works for any SQL Server database, but you will need SELECT permission on INFORMATION_SCHEMA. The second query uses the MAXATTRIBUTE table and only works for a Maximo database.
SELECT string_agg(column_name, ', ') 
WITHIN GROUP (ORDER BY column_name) 
FROM information_schema.columns 
WHERE table_name = 'asset' AND column_name != 'rowstamp'
SELECT string_agg(attributename, ', ') 
WITHIN GROUP (ORDER BY attributeno) 
FROM maxattribute WHERE objectname = 'asset' 
AND persistent = 1
  • Copy/paste the list of columns into the SELECT ... INTO statement below, then execute it to create the staging table. In this step, we can also transform or remap any ID fields (such as ASSETID and ASSETUID in this example) to avoid conflicts.
DROP TABLE IF EXISTS tmp_asset;

SELECT [Copy/Paste the column names here]
INTO tmp_asset
FROM asset
WHERE description = 'Fire Extinguisher';

Step 2: Generate INSERT Script from the temp table:

  • Right-click on the database, choose Tasks > Generate Scripts…
  • In the Generate Scripts wizard, choose “Select specific database objects”, then select the tmp_asset table we created in the step above.
  • On the next page, “Set Scripting Options”, click the “Advanced” button and, under the General group, change the “Types of data to script” option from “Schema only” to “Data only”.
  • Back on the “Set Scripting Options” page, choose either to save the output as a SQL file or to open it in a new query window.

What are the alternative methods?

If the method described in this post does not work for you, two alternative methods you can try are

How to integrate ChatGPT with Maximo?

Ever since ChatGPT’s release, I’ve been contemplating how to leverage large language models (LLM) to enhance legacy applications like Maximo. Given the ability to engage in a conversation with the machine, an obvious application is to facilitate easy access to information through semantic search in the Q&A format. To allow a generic LLM model to respond to inquiries about proprietary data, my initial thought is fine-tuning. However, this approach comes with several challenges including complexity and cost.

A more practical approach is to index organisational data and store it in a vector database. For instance, attachments (doclinks) can be divided into chunks, indexed, and kept in a vector database. When asked a question, the application would retrieve the most relevant pieces of information and feed them to an LLM as context. This enables the model to provide answers with actual details obtained from the application. The key advantages of this approach include:

  • Low cost
  • Realtime data access
  • Traceability
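
To make this retrieval approach more concrete, here is a rough, self-contained Python sketch of the chunk-index-retrieve flow described above; it is not code from an actual implementation. The embed() function is a toy stand-in for a real embedding model or vector database, and the chunk size and prompt format are arbitrary choices:

# Minimal sketch of retrieval-augmented Q&A over attachment text.
# embed() is a toy stand-in for a real embedding model / vector database.
import math
import re
from collections import Counter

def chunk(text, size=500):
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    """Toy embedding: bag-of-words term frequencies (replace with a real model)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 1. Index: chunk the doclinks text and store (chunk, vector) pairs.
documents = ["...text extracted from Maximo attachments (doclinks)..."]
index = [(c, embed(c)) for doc in documents for c in chunk(doc)]

# 2. Retrieve: find the chunks most similar to the question.
def retrieve(question, top_k=3):
    qv = embed(question)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [c for c, _ in ranked[:top_k]]

# 3. Build the prompt: feed the retrieved chunks to the LLM as context.
question = "What is the isolation procedure for pump P-101?"
context = "\n---\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the LLM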

Last month, OpenAI introduced the function calling feature to its API, providing ChatGPT with an additional means of accessing application data. By furnishing it with a list of callable functions, ChatGPT can determine whether to answer a question directly or execute a function to retrieve relevant data before responding. This powerful feature has generated some buzz in the development community. After trying it out, I was too excited to ignore it, so I developed an experimental Chrome extension that lets us talk with Maximo. If you’d like to give it a try, you can find it on the Chrome Web Store under the name MaxQA.
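
The extension itself runs as JavaScript in the browser, but the function-calling loop can be sketched in Python with the 2023-era (pre-1.0) openai SDK. The Maximo host, the apikey header, and the mxasset query below are illustrative assumptions, not MaxQA’s actual code:

# Sketch of the function-calling loop: ChatGPT decides whether to call a
# Maximo query function, the result is fed back, and it answers with real data.
import json
import openai   # pre-1.0 SDK, e.g. openai==0.27.x
import requests

openai.api_key = "YOUR_OPENAI_API_KEY"
MAXIMO = "https://yourmaximo.example.com/maximo"   # hypothetical host

def get_asset_details(assetnum):
    """Query the mxasset object structure through the Maximo REST/OSLC API."""
    resp = requests.get(
        MAXIMO + "/oslc/os/mxasset",
        params={"oslc.where": 'assetnum="%s"' % assetnum,
                "oslc.select": "assetnum,description,status", "lean": 1},
        headers={"apikey": "YOUR_MAXIMO_API_KEY"})
    return resp.text

functions = [{
    "name": "get_asset_details",
    "description": "Get details of an asset record by AssetNum",
    "parameters": {
        "type": "object",
        "properties": {"assetnum": {"type": "string"}},
        "required": ["assetnum"],
    },
}]

messages = [{"role": "user", "content": "What is the status of asset 11450?"}]
reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages,
    functions=functions, function_call="auto")["choices"][0]["message"]

if reply.get("function_call"):
    # The model asked us to run a function; execute it and send the data back.
    args = json.loads(reply["function_call"]["arguments"])
    result = get_asset_details(args["assetnum"])
    messages += [reply, {"role": "function", "name": "get_asset_details", "content": result}]
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613", messages=messages)["choices"][0]["message"]

print(reply["content"])

The key point is the second call: the data returned by Maximo is appended as a function message, and the model composes its final answer from it.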

How it works:

  • This tool is purely client-based, meaning there is no server involved. It directly talks with Maximo and OpenAI. To use it, you will need to provide your own OpenAI API key.
  • I have defined several basic functions that OpenAI can call. They work with Maximo out of the box. 
  • You can define new functions or customize existing ones to allow it to answer questions specific to your Maximo instance. To do this, right-click on the extension’s icon and open the extension’s Options page.
You can add your own functions for ChatGPT to query Maximo and answer your questions
  • The app uses OpenAI’s “gpt-3.5-turbo-0613” model, which is essentially ChatGPT 3.5. As a result, you can ask it any question. For general inquiries, it will respond like ChatGPT 3.5. However, if you ask a Maximo-specific question, OpenAI will direct the app to execute the appropriate function and provide the necessary input parameters. The data returned by Maximo is fed back to OpenAI, which then generates an answer based on that data.

sequence of integration between OpenAI ChatGPT and Maximo

Through this exercise, I have gained a few insights:

  • Hallucination: while the inclusion of actual data reduces the likelihood of hallucination, there are still occasional instances where it provides convincing but false answers. We can address this with prompting techniques, such as instructing it not to make up an answer if it does not know one. Nonetheless, this remains an unsolved problem with this new technology.
  • Fuzzy logic: consistent formatting of answers is not guaranteed for identical questions asked multiple times. This can be considered unacceptable in an industrial setting.
  • The 4k token limit: the API’s 4k token limit proved to be quite restrictive for the results of certain queries. The screenshot below is a response file that’s almost hitting the limit. The file contains about 10k characters.
a file with 10k characters, which nearly reaches the 4k token limit
  • The importance of descriptions: a more detailed description improves the model’s accuracy when selecting which function to call. For instance, for the function that provides asset details, I initially described it as “Get details of an asset record by AssetNum”. OpenAI correctly called this function when asked “What is the status of asset 11450?”, but it couldn’t answer the question “Are there any active work orders on this asset?” until I updated the description to “Get details of an asset record by AssetNum. The result will show a list of all open/active work orders on this asset and a list of spare parts”, after which it answered correctly.

In conclusion, despite several limitations, I believe integrating LLM with an enterprise application offers immense potential in addressing various use cases. I am eager to hear your thoughts on potential applications or suggestions for new experiments. Please feel free to share your thoughts in the comments. Your input is greatly appreciated.

The curious case of the MIA work orders?

F5 - Redirect users to a maintenance page

Working in IT, we deal with strange issues all the time. However, every once in a while, something would come up that leaves us scratching our heads for days. One such issue happened to us a few years back. It came back to me recently and this time, I thought to myself I should note it down.

Summary

  • Maximo – TechnologyOne integration error. Work orders went missing.
  • There were no traces of the problem; everything appeared to be working fine.
  • The problem was caused by the F5 load balancer returning a maintenance page with an HTTP 200 status code, which led Maximo to believe the outbound message had been received successfully by WebMethods.

The mysterious missing work orders

The issue was first reported to us when a user raised a ticket about missing work orders in TechnologyOne, the Finance Management System used by our client. Without the work orders created in TechOne, users couldn’t report actual labour time or other costs, so this was considered a high-priority issue.

F5 maintenance page for integration should not have HTTP 200 OK status

Integration Background

TechOne is integrated with Maximo using WebMethods, an enterprise integration platform. Unlike with direct integration, these types of problems are usually easy to deal with when an enterprise integration tool is used: we simply look at the transaction log, identify the failed transactions and what caused them, fix the issue, and then resubmit the messages. All good integration tools have these fundamental capabilities.

In this case, we looked at WebMethods’ transaction history and couldn’t find any trace of the missing work orders. We also spent quite some time digging through the log files of each server in the cluster but couldn’t find anything relevant. Of course that was the case: if there had been an error, it would have been picked up, and the system would have raised alarms and email notifications through the several overlapping monitoring channels we had set up for this client.

Clueless

On the other hand, when we looked at Maximo’s Message Tracking and log files, everything looked normal, with work orders published to WebMethods correctly and without interruption. In other words, Maximo said it had sent the messages, while WebMethods said it never received anything. This left us in limbo for a few days. And of course, when we had no clue, we did what we application people do best: we blamed the network guys.

The network team couldn’t find anything strange in their logs either, so we let the issue slip for a few days without any real progress. During this time, users kept reporting new missing work orders, not knowing that I wasn’t really doing any troubleshooting work; I was staring at the screen mindlessly all day long.

Light at the end of the tunnel

Then, of course, when you stare at something long enough, the problem reveals itself. With enough work orders reported, it became clear that updates only went missing during a window between 9 and 11 PM, regardless of the type of work order or the data entered. When this pattern was mentioned, it didn’t take long for someone to point out that this is usually the time when IT does its Windows patching.

When a server is being updated, IT sets the F5 load balancer to redirect any user request to a “Site Under Maintenance” page, which makes sense for a normal user accessing the service via a browser. The problem is that when Maximo published an integration message to WebMethods, it received the same web page in return. That by itself is fine, since Maximo doesn’t process the response content, but the response came back with an HTTP 200 status, which is not fine in this case. Because of the HTTP 200 OK status, Maximo assumed the message had been accepted by WebMethods and marked it as a successful delivery. WebMethods, on the other hand, never received the message.

Lesson Learned

The recommendation in this case is to set the status of the maintenance page to something other than HTTP 2xx. When Maximo receives a status other than 2xx, it marks the message as a delivery failure. This means the administrator will be notified if monitoring is set up, and the failed message will be listed as an error and can be resubmitted using the Message Reprocessing application.

Due to the complex communication chain involved, I never heard back from the F5 team on what exactly was done to rectify the issue. However, from a quick search, it looks like it can be achieved easily by updating the rule in F5.

The same issue recently came back to me, so I had to note it down in my list of common issues with load balancers. I also think it is fun enough to deserve a separate post. This is a lengthy story; if you made it this far, I hope it will at least be useful to you at some point.

How to deploy change for Maximo without downtime?

Downtime is costly to the business. As developers, avoiding it gives us a ton of benefits, both in terms of efficiency and personal well-being. For example, when I can push a change to a shared environment without downtime, I have my freedom back, since I don’t have to ask for a window or wait until night to do it.

With the introduction of Automation Script, most of the business logic and front-end changes we need to push to production nowadays can be done without downtime. Some of them are:

  • Automation Script
  • Escalation
  • Application Design
  • Conditions
  • Workflows

However, Database Configuration changes still need Admin Mode or a restart. 

In recent years, many of us have switched to DBC scripts to deploy changes. This approach takes more time to prepare than other methods such as using Migration Manager or doing it by hand, but it has proven to be very reliable and allows faster deployment with much less risk.

Many of us have probably also realized that, for small changes, we can run a DBC script directly while the system is live. But after that, we still need a quick restart. It doesn’t matter whether it’s a small environment that takes 5 minutes to restart or a massive cluster that needs 30 minutes: a restart is downtime, and any deployment that involves downtime is treated differently, with days or weeks of planning and rounds of approval and review.

For development, a colleague showed me a trick: instead of a restart, we can simply turn Admin Mode on and off. As part of this process, Maximo’s cache is refreshed and the changes take effect. This works quite well in some cases. However, it is still downtime and can’t be used for Production; on a big cluster, turning on Admin Mode often takes more time than a restart.

Another colleague hinted at a different method, and this is what I ended up with. I have been using it for a while now and can report that it is quite useful. Not only has my productivity improved, but it has also proven valuable a few times when I didn’t have to approach the cloud vendor to ask for downtime or a restart.

The approach is very simple: when I have a change that would normally require a restart, I script it using DBC. If the change is small, I can get away with using UPDATE/INSERT SQL statements directly against the configuration tables, such as:

  • MAXATTRIBUTE/MAXATTRIBUTECFG
  • MAXOBJECT/MAXOBJECTCFG
  • SYNONYMDOMAIN
  • MAXLOOKUPMAP
  • Etc.

Next, I create a “super complex” automation script named refreshmaxcache (with no launch point), shown below:
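
Below is a minimal Jython sketch of what such a script can look like. It assumes the reloadMaximoCache() method is available on MXServer in your version and that the implicit responseBody variable is returned to the caller, so treat it as a starting point rather than a definitive implementation:

# refreshmaxcache - API automation script with no launch point.
# Reloads Maximo's in-memory caches so that direct changes to the
# configuration tables take effect without a restart.
from psdi.server import MXServer

MXServer.getMXServer().reloadMaximoCache()

# Whatever is placed in responseBody is sent back to the browser.
responseBody = "Maximo cache refreshed"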

That’s it. Every time you deploy a change, all you need to do is call the API script using the following URL to refresh the configuration:

https://[MAXIMO_ROOT]/maximo/oslc/script/refreshmaxcache

Note: this is not a bulletproof approach, and it is not officially recommended by IBM. As such, if you use it for Production, make sure you understand the change and its impact. I only use it for small changes in areas where there is little or no risk of users writing data while the change is being applied. For a major deployment, for example a change to the WORKORDER table, it’s a bad idea to apply it during business hours. For non-production environments, I don’t see much risk involved.

A man who doesn’t work at night is a happy person.

How to run SQL query in Maximo without database access?

With the introduction of the Maximo Application Suite, I have had to deal with more and more Maximo environments in the cloud. This often means there is no access to the back end, such as the database or the WebSphere/OpenShift console. Sometimes, to troubleshoot issues, it is critical to be able to run queries on the database. In this post, I will introduce a new approach to accessing the database using an automation script.

Since Maximo version 7.6.0.9, we can build custom APIs using automation scripts. This is a powerful feature, yet it looks to be underutilized by the community.

The first obvious use case is that it gives us the freedom to build any API we want without being restricted by the limitations of the Maximo Integration Framework. For example, we can create an API that returns data in CSV or binary format, or use it to upload data and bypass the business layer.

Since it allows us to use the browser to interact with an automation script, and the script framework has access to all Java functions of the MBO layer, we can exploit it to execute all sorts of weird operations simply by calling a URL of this form:

https://[MAXIMO_ROOT]/maximo/oslc/script/[SCRIPT_NAME]
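
As a trivial illustration (not from the original post), an API script can be as small as a couple of lines. The script name helloapi here is made up, and the sketch assumes the implicit responseBody variable is what gets returned to the browser:

# HELLOAPI - automation script with no launch point, called via
# /maximo/oslc/script/helloapi
from psdi.server import MXServer

# The content of responseBody becomes the HTTP response.
responseBody = "Hello from Maximo, server time is " + str(MXServer.getMXServer().getDate())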

In an article I posted a few days ago, I used an API script to call a Java function that refreshes the Maximo cache and avoids a restart. Using the same approach, we can do database configuration and deployment without downtime.

In this post, I’ll demonstrate how we can use an API script to run SELECT, UPDATE, and DELETE SQL statements against the Maximo database without direct DB access. This can come in handy when DB access is restricted. Of course, we could use MXLoader to achieve a similar result, but this method is a lot more convenient.

Creating an API script is very simple: we just need to create a script without a launch point, then call it from the browser using the URL format shown above.

If you’re already logged in and have a session, that is all it takes. Otherwise, to authenticate the request, you can pass in username and password parameters as you normally would when calling the REST API:

https://[MAXIMO_ROOT]/maximo/oslc/script/[SCRIPT_NAME]?_lid=[USERNAME]&_lpwd=[PASSWORD]

To run a SELECT query on the database, I created a script named RUNSQL with the code below:
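
Below is a hedged Jython sketch of the general idea behind such a script: read the method and sql query parameters from the implicit request variable, borrow a JDBC connection through Maximo’s DBManager, and either return the result set as CSV or execute the update. The specific calls used here (getQueryParam, getDBManager, getConnection, freeConnection) are assumptions that may differ between Maximo versions, so adapt it to your environment rather than treating it as a drop-in script:

# RUNSQL - API automation script with no launch point.
# Usage: /maximo/oslc/script/runsql?method=SELECT&sql=...
from psdi.server import MXServer

mx = MXServer.getMXServer()
method = request.getQueryParam("method")      # implicit request variable
sql = request.getQueryParam("sql")

conKey = mx.getSystemUserInfo().getConnectionKey()
con = mx.getDBManager().getConnection(conKey)
try:
    stmt = con.createStatement()
    if method and method.upper() == "SELECT":
        rs = stmt.executeQuery(sql)
        meta = rs.getMetaData()
        cols = range(1, meta.getColumnCount() + 1)
        lines = [",".join([meta.getColumnName(i) for i in cols])]
        while rs.next():
            lines.append(",".join([str(rs.getString(i)) for i in cols]))
        rs.close()
        responseBody = "\n".join(lines)        # returned to the browser as CSV
    else:
        count = stmt.executeUpdate(sql)        # INSERT / UPDATE / DELETE
        con.commit()
        responseBody = "Rows affected: " + str(count)
    stmt.close()
finally:
    mx.getDBManager().freeConnection(conKey)

Needless to say, a script like this effectively gives full database access to anyone allowed to call it, so restrict who can execute it (or remove it) outside of troubleshooting sessions.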

To run a query with the script, I type the SQL statement directly into the sql parameter of the URL, as below:

https://[MAXIMO_URL]/maximo/oslc/script/runsql?method=SELECT&sql=SELECT top 10 assetnum,description,status FROM asset WHERE status = 'OPERATING'

In this case, the data is returned to the browser in the CSV format.

To execute a query that does not return data (INSERT/UPDATE/DELETE):

https://[MAXIMO_URL]/maximo/oslc/script/runsql?method=DELETE&sql=DELETE maxsession WHERE userid = 'maxadmin'

Note: I have tested this against SQL Server only; I haven’t had a chance to test it against DB2 or Oracle databases.
