This post contains a few of my own notes on setting up Maximo with a load balancer, which I learnt the hard way:
Property mxe.system.useLoadBalancer
This setting should be set to 1. If it is not enabled, Maximo treats the load balancer's IP address as the client's and blocks it when the number of requests exceeds a certain threshold (by default, 50 requests per 3 seconds). For more details about the IP blocking function, refer to this previous post: Maximo 7.6 feature – Denial of Service attack
Load Balancer session timeout
I recently worked with a client who set up Citrix NetScaler as the load balancer (LB) for Maximo. Its default session timeout is 2 minutes. An LB should have session stickiness, meaning it keeps forwarding all requests from one user to the same Maximo application server throughout the user's session. When the LB session times out while the Maximo session is still active, a new request from the same user will be directed to a different application server that doesn't manage the active Maximo session, forcing the user back to the login screen.
To fix this, we need to increase the LB session timeout so that it is longer than Maximo's session timeout (30 minutes by default).
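For reference, Maximo's session timeout is the standard servlet timeout in the web.xml of the maximouiweb module (a sketch of the relevant fragment; the value is in minutes):

```xml
<!-- maximouiweb.war/WEB-INF/web.xml : Maximo UI session timeout, in minutes -->
<session-config>
    <session-timeout>30</session-timeout>
</session-config>
```

Whatever value you set here, make the LB session timeout larger.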
Please note: if you have LDAP integration, the LTPA token is a different login session maintained by WebSphere and is not related to this. The LTPA token expires after a fixed duration from login regardless of whether the user is active, which can cause users to be randomly logged out. Thus, I usually set the LTPA token timeout to a very high value (e.g. 10–15 hours).
Property mxe.int.webappurl
If Maximo is only accessible via the LB's URL, we should update the few webappurl properties to point to that URL. If the values set there are wrong, incorrect URLs are returned in integration responses, which in turn can lead to error BMXAA5798E when deploying web services or generating XML schemas. We can set mxe.int.verifywebappurl to 0 to avoid the issue when generating XML schemas.
WebSphere rewriting port 80/443 to application server ports 9080/9443
If the load balancer is set up to distribute load directly to the application servers without an HTTP web server sitting in between, you can hit this issue. To fix it, for each application server (JVM), add the two Web Container custom properties below and set them to true:
com.ibm.ws.webcontainer.extractHostHeaderPort
trusthostheaderport
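If you prefer scripting over clicking through the console, the same change can be made with a wsadmin Jython script. This is a sketch, and the node name `node01` and server name `MXServer1` are placeholders for illustration; substitute your own:

```python
# wsadmin -lang jython sketch: add the two Web Container custom properties
# to one application server. Node/server names below are placeholders.
server = AdminConfig.getid('/Node:node01/Server:MXServer1/')
webContainer = AdminConfig.list('WebContainer', server)
for name in ['com.ibm.ws.webcontainer.extractHostHeaderPort',
             'trusthostheaderport']:
    AdminConfig.create('Property', webContainer,
                       [['name', name], ['value', 'true']])
AdminConfig.save()
```

Repeat for each JVM in the cluster, then synchronize the nodes and restart.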
Missing integration message due to Load Balancer’s Maintenance Page
During a maintenance window of the external application's servers (e.g. software patching), if the load balancer is set to display a maintenance page and the HTTP response status code of that page is 2xx, any integration messages Maximo sends to the server via the LB will be marked as successful while the target system never receives them. This can cause missing data in the target system. Refer to this post for more details: The curious case of the MIA work orders
I sometimes have issues with the message engine not running. Usually, I'll just restart the whole system and hope that the problem goes away.
If that doesn't work, in most cases the problem is a corrupted file store used by the message engine, and the common suggestion on the Internet is to delete these files, which seems to work fine.
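In case it helps, the file store is usually kept under the profile's filestores directory. The commands below are a sketch; the paths are common defaults and placeholders, so verify them against your message engine's file store settings before deleting anything:

```shell
# Stop the application server hosting the message engine first.
# The profile path below is a placeholder -- adjust to your install.
WAS_PROFILE=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01
rm -rf "$WAS_PROFILE/filestores/com.ibm.ws.sib/"*
# A fresh file store is created when the message engine starts again.
```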
Sometimes, when the message engine uses a database store, I have had a very similar issue. I found it quite hard to pin down the exact root cause, so I chose the easier path: delete the whole message engine, create a new one, and give the data store a new schema name. This ensures new tables are created when the message engine is initialized for the first time.
Creating a new message engine and re-assigning bus destinations usually takes less than 5 minutes, and it is a lot easier than troubleshooting to find the root cause of the issue.
Most Maximo settings or Java code can be deployed by copying the files directly to the installed folder in WebSphere without having to rebuild and redeploy the application. However, web.xml doesn't work that way. Sometimes we need to update this file to increase the timeout setting or to enable/disable LDAP integration.
Sure, we can directly modify the file in WebSphere without redeployment, but we will also have to update the file in a few temporary folders, a process I find quite tedious.
To avoid having to rebuild and redeploy the whole maximo.ear file, which can take a lot of time, we can just redeploy the single web.xml file instead. Below is the process:
Update the web.xml file with new settings
Log in to the WebSphere console, open Applications > Application Types > WebSphere enterprise applications, select the "MAXIMO" application by ticking the checkbox next to it, then click the Update button
In “Application update options“, select “Replace or add a single file” option
In the textbox below “Specify the relative path….“, specify: maximouiweb.war/WEB-INF/web.xml
In the “Specify the path to the file“, choose “Local file system“, and click on “Choose file” to browse and select the updated web.xml file, click Next.
Click OK on the next screen to deploy. Click Save when the deployment process is completed.
Wait for a minute for the new settings to be propagated to all nodes, then restart Maximo.
I want to post a simple JSON message to an external system and do not want to add any external library to Maximo as it would require a restart.
In the past, I used the Java HTTPClient library that comes with Maximo, but it would require half a page of boilerplate Jython code. Recently, I found a simpler solution below.
Step 1
First, I use webhook.site as a mock service for testing. Go to webhook.site; it will give us a unique URL to send requests to:
Step 2
Go to the End Points application in Maximo to create an HTTP endpoint
End Point Name: ZZWEBHOOK
Handler: HTTP
Headers: Content-Type: application/json
URL: <copy/paste the Unique URL from webhook>
HttpMethod: POST
Step 3
To ensure it's working, use the Test button: enter some simple text, then click "TEST". Maximo should show a successful result, and Webhook should show a new request received. (Note: if it gives an error related to SSL, it's because WebSphere doesn't trust the target website. You'll need to add the certificate of the target website to the WebSphere trust store.)
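The certificate can be added from the console (Security > SSL certificate and key management > Key stores and certificates > NodeDefaultTrustStore > Signer certificates > Retrieve from port), or with keytool directly against the trust store file. The command below is a sketch; the paths, alias, and password are placeholders (WebAS is WebSphere's default keystore password, which your cell may have changed):

```shell
# Placeholder paths/alias -- adjust to your cell's trust store.
keytool -importcert -alias webhook.site \
  -file webhook-site.cer \
  -keystore /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/config/cells/cell01/trust.p12 \
  -storetype PKCS12 -storepass WebAS
```

Restart the JVMs afterwards so the new signer certificate is picked up.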
Step 4
To send a request, we only need two lines of code, as follows:
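In an automation script (Jython), the two essential lines are the last two below: look up the handler of the endpoint, then invoke it with a payload. This is a sketch assuming the ZZWEBHOOK endpoint created above; the JSON payload is just an example:

```python
# Jython automation script inside Maximo -- post a JSON payload
# through the ZZWEBHOOK endpoint created above (payload is an example).
from java.util import HashMap
from java.lang import String
from psdi.iface.router import Router

payload = '{"wonum": "1001", "status": "COMP"}'
handler = Router.getHandler("ZZWEBHOOK")
responseBytes = handler.invoke(HashMap(), String(payload).getBytes("UTF-8"))
```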
One of our clients undertook a massive IT transformation program which involved switching to a new financial management system, upgrading and rebuilding a plethora of interfaces among several systems, both internal and external to the business. Kronos (now UKG) was chosen to replace an old timesheet software and there was the need to integrate it with other systems such as Maximo and TechnologyOne.
WebMethods was used as the integration tool for this IT ecosystem. This was my first experience working with Kronos. The project took almost two years to finish. As always, when dealing with something new, I had quite a bit of fun and pain during this project. As it is now approaching the final stage, I think I should write down what I have learned. Hopefully, it will be useful for people out there doing a similar task.
The REST API
Kronos provides a fairly good reference source for its REST API. In theory, it offers the advantage of supporting real-time integration and enabling seamless workflows. However, we don't have such a requirement in this project, and the REST API has two major limitations.
API throttling limit: it restricts the number of calls you can make based on the license purchased.
Designed for internal use: it is obvious to me that the API was built for internal use by the application itself; it is not built for external integration.
No one told us about this when we first started. As a result, we hit several major obstacles along the way:
First, most API calls need to be method-specific. For example, cost centre requests need to be either Create New, Update, or Move; there is no Sync or Merge operation. The Update and Move requests accept Kronos' Internal ID only, so before sending one, we need to send another request first to retrieve the Internal ID of the record.
Cost Center has a simple structure with a few data fields. However, to get it to work, we had to build some complex logic to query Kronos to check whether the record (and its parent) exists, in order to send the appropriate Create New, Update, or Move request.
This is not a major problem until the API limit is added to the equation. If Kronos receives more than a certain number of requests over a given period, it stops processing further requests; in other words, the whole integration system is out of service. We had to build a caching mechanism to pre-load and refresh the data at a suitable time so that the number of requests sent to Kronos is kept to a minimum. This adds a lot of complexity to an otherwise simple interface.
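The caching idea itself is generic: keep a local snapshot of the Kronos lookup data and only refresh it after a time-to-live, so repeated interface runs don't burn through the call quota. A minimal sketch (the `fetch_cost_centers` function and its data are hypothetical stand-ins for the real Kronos call):

```python
import time

class ThrottledCache:
    """Cache a remote lookup and refresh it at most once per `ttl` seconds."""
    def __init__(self, fetch, ttl=3600):
        self.fetch = fetch      # function that performs the real API call
        self.ttl = ttl
        self.data = None
        self.loaded_at = 0.0
        self.calls = 0          # counts real API calls made

    def get(self):
        # Refresh only when the cache is empty or older than the TTL.
        if self.data is None or time.time() - self.loaded_at > self.ttl:
            self.data = self.fetch()
            self.loaded_at = time.time()
            self.calls += 1
        return self.data

# Hypothetical stand-in for a Kronos cost-centre lookup.
def fetch_cost_centers():
    return {"CC-100": "Operations", "CC-200": "Maintenance"}

cache = ThrottledCache(fetch_cost_centers, ttl=3600)
cache.get(); cache.get(); cache.get()   # three lookups...
# ...but only one real API call within the TTL window (cache.calls == 1)
```

The trade-off is staleness: the refresh interval has to be tuned to how quickly the source data changes.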
With a more complex data structure, such as the employee master data, the REST API makes it impossible to build an interface robust enough for a large-scale, high-volume system. We had to build complex business logic in WebMethods to handle all sorts of scenarios and exceptions that could occur. The process to create a new employee record can result in more than a dozen different requests: checking existing data, looking up the Internal IDs of different profiles (such as Security, Schedule, Timesheet, Holiday, and Pay Calculation profiles), sending the Create New/Update requests in the correct order, and handling exceptions and rolling back properly if one request fails.
The Report API
Kronos provides a REST API to execute reports. Besides the out-of-the-box reports, it is possible to build custom reports for the API too. This is useful for alleviating some of the problems with the API throttling limit.
For example, we have an interface that sends the organisation hierarchy (departments and job positions) to Kronos as cost centres. The source system, TechnologyOne in this case, periodically exports its whole data set to a CSV file. We only need to query Kronos to determine whether each record exists, to send either a Create or an Update request; if a record has changed, we need to send an Update and/or a Move request. In this case, we used the Report API to retrieve the full set of cost centres in one single call rather than making thousands of individual cost centre detail requests.
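The per-record decision then boils down to comparing each incoming CSV row against the one-call report snapshot. A sketch with made-up field names (`code`, `name`, `parent`) and an illustrative snapshot shape:

```python
# Decide the Kronos request type for one incoming cost centre by
# comparing it against a snapshot fetched with one Report API call.
# Field names and the snapshot shape are illustrative assumptions.

def classify(incoming, snapshot):
    """Return 'create', 'move', 'update' or 'skip' for one record."""
    existing = snapshot.get(incoming["code"])
    if existing is None:
        return "create"
    if existing["parent"] != incoming["parent"]:
        return "move"                  # re-parented in the hierarchy
    if existing["name"] != incoming["name"]:
        return "update"
    return "skip"                      # unchanged, no API call needed

snapshot = {  # one Report API call instead of thousands of detail calls
    "CC-100": {"name": "Operations", "parent": "ROOT"},
    "CC-200": {"name": "Maintenance", "parent": "ROOT"},
}
print(classify({"code": "CC-300", "name": "Fleet", "parent": "ROOT"}, snapshot))          # create
print(classify({"code": "CC-200", "name": "Maintenance", "parent": "CC-100"}, snapshot))  # move
```

Records classified as "skip" cost no API calls at all, which is where most of the quota saving comes from.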
The Import API
The Import API turned out to be the best way to send data to Kronos. We learnt this the hard way. It has some minor limitations, such as:
Some APIs use description to identify a record instead of an ID
The documentation is sometimes inaccurate.
However, the Import API provides some powerful capabilities for sending external data to Kronos:
Supports bulk upload operations
Automatically matches existing records – no need to query for Internal IDs
Supports a "merge" operation – automatically decides whether to create or update depending on whether the record already exists
Since import is an asynchronous operation, the time it takes to process inbound data depends on the volume. We needed to build a custom response handler that continuously checks with Kronos after a delivery to retrieve the status of the import job and handle the Success or Failure result. This custom response handling takes some extra effort to build, but it is reusable for all import endpoints.
As an example, the employee interface above at some point became too complex and a maintenance nightmare for us. We rebuilt it from scratch using the Import API, and we were glad we did. It greatly simplified the interface, and the business is now very confident in its robustness.
Conclusion
If I had to build a new integration interface with Kronos now, then for retrieving data from Kronos and sending it to another system, I would start with Reports (via the Report API) to identify new updates, then use the REST API to retrieve details of individual records if required. For sending data to Kronos, I would look at the Import API first, and only go for the REST API if the Import API cannot do what I want, and only if the request is simple and low volume.
I am a freelance Maximo consultant based in Melbourne. If you enjoy reading my blog, please connect with me on LinkedIn to get updates on new posts. If you or your company need any professional assistance, please leave me a message, I'll call you back.