
How to use a custom endpoint for dynamic delivery?

The challenges

Enterprise application integration needs to be fast and fault-tolerant. As a result, when it comes to integration with Maximo, we almost always opt for asynchronous, near real-time message delivery. This method of delivery has certain challenges, which are not a big problem when using an external enterprise integration tool; there are dozens of these tools available on the market.

However, running an enterprise application integration system is expensive, so many smaller companies opt for direct integration. The Maximo Integration Framework offers an arsenal of tools to handle all kinds of requirements. Yet the framework is still lacking in two major areas:

  • Data mapping
  • Dynamic delivery

Previously, I provided an example of how to use XSLT. In this article, I will discuss how to address the dynamic delivery requirement.

Dynamic delivery with a custom automation script endpoint

When is dynamic delivery required?

In many cases, we need to build an integration interface with one or more of the following requirements:

  • Different URL for each Add/Update/Delete action: this is a common pattern for REST APIs nowadays. For example, the ServiceNow API has three different URL formats to Add, Update, and Cancel a Service Order.
  • The target application needs an Internal ID for the Update request: this is also a common pattern. The Maximo OSLC REST API itself is an example.
  • The need for retrying and reprocessing failed deliveries: message delivery is expected to fail due to many external factors. Thus, this is a mandatory requirement for almost all integration interfaces.

In general, the logic to address the above requirements should be handled during delivery, not during the publishing and transformation of the integration message.

Why use this approach?

When the external system requires an Internal ID for the Update action, one common strategy is to store the ID in the EXTERNALREFID field. For example, when a new PO is created, it is published to an external ERP system. Upon successful delivery, the receiving system responds with the Internal ID of the newly created record. We can store this value in the PO’s EXTERNALREFID field or a custom field. The next time the PO needs to be published, if this field has a value, we know the PO has already been created in the ERP and will therefore send an Update request instead of a Create New request.

But this strategy often does not work well with asynchronous integration. For example, if the first request to create a new PO takes a while to process due to the external system being busy, a subsequent update to the PO will send another Create New PO request, resulting in duplicated POs in the target system.

However, if we handle the logic to determine whether to send a Create New or an Update during delivery, this is not an issue. If the first request fails, the second request will still be evaluated as a Create New. If both messages fail and are later reprocessed successfully, the first one that reaches the ERP will be delivered as a Create New, and the second one as an Update.

Two key benefits of this approach are:

  • No need for manual intervention
  • Can use continuous outbound queue instead of sequential queue

Real-world Implementation

Over the last 2-3 years, I have built a few interfaces using this approach, and so far they have been working well. The latest one was an interface between Maximo and MYOB Greentree ERP. In this case, Maximo handles the procurement process and Greentree handles the payables process. The functionality of the Greentree API is somewhat limited: it does not support a Sync operation and requires an Internal ID for both the PO and the POLINE records. To address this requirement, the interface was implemented with the following key points:

  • Data mapping is handled by XSLT
  • Delivery is handled by an automation script endpoint:
    • On delivery, the endpoint queries Greentree to identify the Internal ID and status of the PO and PO Lines. If the PO already exists, it decides whether to Update or Add New PO lines depending on the status of each line.
    • The original payload is manipulated during delivery to match the current status of the PO in Greentree. This means that if a delivery fails, the retry will come up with a new payload matching the status of the record in Greentree at that point in time.
  • An autoscript crontask is added to send error alerts to the user in Excel format, but it only sends an email when there is at least one error. The email is delivered directly to the user because most of the errors come from a missing or incorrect GL Account.

Automation Script example

To implement the requirement above, we can create a custom endpoint with an automation script. The endpoint can be associated with a publish channel and external system like any other standard endpoint. That means the interface is fully compatible with other MIF functionalities, including:

  • Message Queues
  • Message Tracking
  • Message Reprocessing

Below is an example piece of Python code for the endpoint:
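This is a minimal sketch of what such a script could look like; the target URL is a placeholder, and it assumes the published message arrives as text in the implicit requestData variable:

from java.io import OutputStreamWriter
from java.net import URL

# Hypothetical target URL - replace with the real system's address
TARGET_URL = "https://example.com/api/serviceorder"

conn = URL(TARGET_URL).openConnection()
conn.setRequestMethod("POST")
conn.setRequestProperty("Content-Type", "application/xml")
conn.setDoOutput(True)

# Write the published message (implicit variable requestData) to the request body
writer = OutputStreamWriter(conn.getOutputStream(), "UTF-8")
writer.write(str(requestData))
writer.close()

# Any exception raised here is treated as a failed delivery, and the
# message is returned to the error queue for reprocessing
responseCode = conn.getResponseCode()
if responseCode < 200 or responseCode >= 300:
    raise Exception("Delivery failed with HTTP " + str(responseCode))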

The basic example above does a similar job to a standard HTTP endpoint:

  • The published message is passed to the script in the implicit variable requestData
  • Any failure or exception at any line of the code is treated as a failed delivery:
    • The message is put back into the error queue.
    • The error is captured and can be seen in the Message Reprocessing application.

However, by doing this, we have full control over what happens during delivery, such as:

  • Query external system to determine if a record already exists, and if yes, get the Internal ID
  • Raise an exception to tell Maximo that the delivery has failed. This can be quite handy in certain scenarios, such as when an external system rejects a message but still responds with an HTTP 200 code and only provides an error message inside the XML response body.

Below is an example piece of code which does the following:

  • Query an external system to identify if a PO record exists
  • If the record exists, update the XML payload with the Internal ID before delivering it to the target system as an Update request instead of a Create.
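The sketch below illustrates the pattern; the PO lookup URL, the XML element names (PONUM, InternalID), and the response format are all assumptions and must be adapted to the actual target API:

import re
from java.io import BufferedReader, InputStreamReader, OutputStreamWriter
from java.net import URL

BASE_URL = "https://example.com/api/po"  # hypothetical target API

def httpGet(url):
    # Return the response body as a string, or None if the record is not found
    conn = URL(url).openConnection()
    conn.setRequestMethod("GET")
    if conn.getResponseCode() != 200:
        return None
    reader = BufferedReader(InputStreamReader(conn.getInputStream(), "UTF-8"))
    lines = []
    line = reader.readLine()
    while line is not None:
        lines.append(line)
        line = reader.readLine()
    reader.close()
    return "".join(lines)

payload = str(requestData)

# Extract the PO number from the outbound payload
match = re.search(r"<PONUM>(.*?)</PONUM>", payload)
if not match:
    raise Exception("PONUM not found in payload")
ponum = match.group(1)

# Query the external system to check if the PO already exists
existing = httpGet(BASE_URL + "?ponum=" + ponum)

if existing:
    # Record exists: inject the Internal ID and deliver as an Update
    idMatch = re.search(r"<InternalID>(.*?)</InternalID>", existing)
    payload = payload.replace("<PO>", "<PO><InternalID>" + idMatch.group(1) + "</InternalID>")
    method = "PUT"
else:
    # Record does not exist: deliver as a Create New
    method = "POST"

conn = URL(BASE_URL).openConnection()
conn.setRequestMethod(method)
conn.setRequestProperty("Content-Type", "application/xml")
conn.setDoOutput(True)
writer = OutputStreamWriter(conn.getOutputStream(), "UTF-8")
writer.write(payload)
writer.close()

if conn.getResponseCode() not in (200, 201):
    raise Exception("Delivery failed with HTTP " + str(conn.getResponseCode()))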

How to remove HTML tags in the Long Description field

Introduction

The Long Description field contains data in rich text format. When sending this data to an external system, the special characters in the HTML tags often cause trouble for the integration interface. It can also result in unreadable text displayed in the receiving application. This post provides instructions on how to easily strip the rich text formatting from Maximo’s Long Description field, keeping only the plain text.

Common problems with Long Description

The Long Description field is used by many key objects such as Service Request, Purchase Order, and Work Log. In Maximo, Service Requests and Work Log Details are perhaps the most common places where this field is used.

When raising a ticket or recording some logs, users often copy information from other sources such as email. This does not usually cause much of a problem in the Maximo front-end, because most of the formatting copied from standard applications like Word, Outlook, or the browser is retained in this field.

However, when it comes to integration, it is a different story. For system integration, there are two common problems with the Long Description data due to the HTML tags of the rich text format:

  • Many integration tools have trouble escaping the special characters contained in the field data, which often results in integration failures. Even if the tool handles these special characters gracefully, the extra characters added can exceed the field length limit of the receiving application.
  • The external application may not support this format; as such, the rich text content is displayed as-is with a lot of HTML tags, making it unreadable to the users.

In many cases, retaining the formatting of the text is not desirable, and the customer might prefer to keep the data as plain text. In such cases, disabling the Rich Text Editor is a better solution. However, if we want to retain the formatting and only remove it when sending the data to an external system, the following section describes how to achieve it.

How to strip rich-text tags from Long Description?

Requirement

Below is an example of an integration interface between IBM Maximo and Gentrack Unify CRM in a water utility. When carrying out maintenance work, if there are delays, the field workers put the Work Order on Hold and enter the details of the delay as a Work Log entry. This information must be sent to Unify CRM so that, if the customer calls, the call center staff can explain why the work has not been completed.

Set up a simple interface

To provide a simplified example, in a vanilla Maximo instance, we will set up the following integration objects:

  • Object Structure: ZZWORKLOG – make sure to include the LONGDESCRIPTION field
  • Publish Channel: ZZWORKLOG – make sure to enable event listener and message tracking
  • External System: WEBHOOK – I added an HTTP endpoint on webhook.site for testing

Sample text with richtext format

To test the code, we will create new Work Log entries using the same text with some formatting and hyperlinks as follows:

XML Output without stripping

Without any customisation, Maximo will publish an XML message to Webhook as shown below. As you can see, it contains a lot of HTML tags, and the special characters in those tags have, in turn, been escaped to be compatible with the XML format.

Add User Exit logic with Automation Script

For this requirement, we can use the utility class psdi.util.HTML from the Maximo library, which is available out-of-the-box. To strip the rich text tags from the Long Description field before sending data to an external system, we can create a User Exit processing logic with Automation Script as follows:

  • From the Automation Scripts application, choose Actions > Create Script for Integration
  • Choose the Publish Channel option, select the ZZWORKLOG publish channel
  • Choose User Exit option
  • Choose Before External Exit
  • Language: Python
  • Source Code:
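A minimal sketch is shown below; it assumes the user exit script receives the outbound message in the implicit irData variable (a StructureData object) and that LONGDESCRIPTION is included in the object structure:

from psdi.util import HTML

# Read the long description from the outbound integration message
longDesc = irData.getCurrentData("LONGDESCRIPTION")
if longDesc:
    # toPlainText strips the rich text (HTML) tags, keeping only the plain text
    irData.setCurrentData("LONGDESCRIPTION", HTML.toPlainText(longDesc))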

XML Output after removing formatting

If we create a new Work Log entry using the same text above, the XML output will contain only plain text, as shown below. The text is much more readable now, and it still contains the hyperlinks, which can be an important piece of information.

Other Notes

There is another frequent problem when sending Long Description data to an external system: it often exceeds the field length limit of the receiving system. While you are at it, double-check the length limit of the receiving field. With the Description and Long Description fields, it is a good idea to always truncate the value to fit the target field. In the case of the Long Description, we might even want to split the long text into multiple records to avoid integration failures in the future.

If we want to strip the rich text formatting when it is first entered in Maximo, so that only the plain text content is saved to the database, we can use the same toPlainText function in an automation script when saving the record.
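As a rough illustration, an object launch point script on the Work Log’s save event could look like the sketch below; DESCRIPTION_LONGDESCRIPTION is the usual alias for the long description attribute, but verify it for your object:

from psdi.util import HTML

# mbo is the implicit variable for the record being saved (e.g. WORKLOG)
longDesc = mbo.getString("DESCRIPTION_LONGDESCRIPTION")
if longDesc:
    mbo.setValue("DESCRIPTION_LONGDESCRIPTION", HTML.toPlainText(longDesc))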

The performance effect of the Download button in Maximo

Introduction

If you have been working with Maximo for a while, you already know about the Download button at the top left of every table in Maximo. With one click, it exports everything displayed in the table into an Excel file, which is great for further data analysis and reporting. It is so simple and convenient, right? Not quite.

Performance Degradation

The danger with the Download button is that, since it is so convenient, everyone uses it for every data requirement. One frequent problem is that users keep asking us to add more columns to the List tab of key applications like Work Order Tracking. Most of the time, the Maximo administrator will comply with such requests in a blink (another problem of Maximo being too easy to customize). Often, many of those columns are retrieved via relationships. One additional column usually does not make much difference, but with a high number of records, and as the amount of user activity increases, it creates a snowball effect that degrades overall system performance, and people start complaining about Maximo being slow.

However, the real problem with the Download button is that, by default, there is no limit set on it. Usually, when Maximo is first implemented, it works great. After several years, the amount of data grows, people start using this method to download data for various reporting requirements, and the Download button can significantly affect system performance.

Measure the impact

For a client I recently worked with, many users frequently used the Download button to extract large amounts of data (e.g. all work orders in one year) to create their own reports in Excel. This put a tremendous amount of stress on the servers.

First, we must realize that the output of this “Download” function is an XML file (although the extension is XLS), and Maximo consumes a lot of processing power and memory to generate it. To fully understand how it affects the server, I ran a small test by setting up a copy of the client’s database on a local VM. I opened the Work Order Tracking app and tried to download all active work orders (15k records). It took around 15 minutes to generate and download the file. Then I tried to download all work orders reported in the last year (both closed and active work orders). It took 50 minutes, and during this whole time, the VM’s CPU and memory utilization was saturated.

CPU and Memory usage while Maximo is processing the download request
CPU and Memory usage after the processing completed

Crashing the server

To test the worst-case scenario, I opened the SR application, clicked on “All Records”, then clicked on the Download button to download all 600 thousand records. Users can easily make this “mistake”, and once they have clicked the Download button, there is no option for them to cancel the process.

At first, the process saturated CPU and memory utilization for more than an hour; after that, the session expired. However, in the background, the SQL Server process continued running, consuming 30-40% CPU for the entire day. I left it running for about 6-7 hours until I got fed up and had to restart SQL Server to kill the process. Theoretically, since I was the only user in the system, when the process was running and reached the maximum JVM heap size, the Garbage Collector would try to clean up the MBOs it had already used and free up some memory.

However, in a production environment, as happened to our client, if the server load is high, the Garbage Collector sometimes cannot free up memory quickly enough, resulting in an OutOfMemory error that crashes the server.

What can we do about this problem?

To reduce the impact of the Download function, we should set a limit on the number of records the user can download by setting a value for the webclient.maxdownloadrows system property. There are already some IBM tech notes that discuss it.

However, the next question is: once we have set a limit, what is the alternative method for the user to download the data they need? I can think of a few methods, such as building a simple BIRT report, which allows the users to choose XLS as the output format. We can also set up the Application Export function with a flat file output format. But my favourite option is the “Create Report” function. By default, when we create and run a report in “Preview” mode, it exports the exact same columns shown on the List tab of an application, and we can then “Export Data” from that report. The process takes a few clicks, but the processing time is usually less than one minute compared to 10 or 20 minutes. That’s a quick win. Also, once the users get used to it, they can extract any data they like, which means less work for the Maximo administrator.

How to generate INSERT script for Maximo data with SQL Server?

When dealing with a large amount of data or migrating workflow configuration, we might need to generate and use an INSERT script. This article describes how to do it with a SQL Server database.

Why do we need to generate INSERT script?

When deploying changes, the preferred method is a DBC script. There are different tools to generate DBC scripts for many types of configuration. However, in certain cases, such as when migrating workflow configuration, we still need to use SQL scripts.

For deploying a data package, it is good practice to script and deploy the change as part of the DBC deployment process. It is also easier to track the changes with a version control system such as Git or SVN.

As an example, in a recent project, the users used Maximo in DEV to enter and validate the data. After they finished, I extracted the data using the method described in this post and loaded it to Production as part of the DBC deployment process.

Did you know the UpdateDB or RunScriptFile tool can deploy SQL files directly? It is not required to put SQL statements into a DBC file with the FREEFORM tag.

What is the limitation with SQL Server Management Studio?

With an Oracle database, generating INSERT statements is quite simple with SQL Developer: all we need to do is right-click on a table, then choose Export Data > Insert. However, with SQL Server, a similar function is buried deep in the drop-down menus and is hard to find. There are at least two other features with a similar purpose in the same menus, which causes confusion and makes people think this is not possible with SQL Server. The following functions do not work for this requirement:

  • Right-click on the table, then choose Script Table As > INSERT to
  • Right-click on the database, then choose Tasks > Export Data

For this requirement, we need to right-click on the database (not table), then choose Tasks > Generate Scripts…

However, this approach has some limitations:

  • It does not let you define a WHERE clause; you’ll need to generate the script for the whole table.
  • With Maximo, all tables have a ROWSTAMP field, which you need to exclude. Otherwise, the INSERT script does not work.

How to generate INSERT Script?

To address the limitations described above, we can create a VIEW or a staging TABLE from the source table. This allows us to set a WHERE clause to filter the data we want to extract and to exclude the ROWSTAMP column. A VIEW is more convenient in a development project because we can define any rule to manipulate the data, such as shifting an ID field to a new range to avoid conflicts. However, note that older versions of SQL Server have a bug that prevents the script from being generated correctly in Data Only mode. In that case, we can use a staging table instead.

To provide an example, below are the detailed steps to generate INSERT script for all “Fire Extinguisher” records in the ASSET table:

Step 1: Create a temporary table:

  • Use one of the two SQL statements below to get a comma-separated list of column names from the table. The first query works for any generic SQL database, but you will need SELECT permission on the Information_Schema. The second query uses the MAXATTRIBUTE table and only works for a Maximo database.
SELECT string_agg(column_name, ', ')
WITHIN GROUP (ORDER BY column_name)
FROM information_schema.columns
WHERE table_name = 'asset' AND column_name != 'rowstamp';

SELECT string_agg(attributename, ', ')
WITHIN GROUP (ORDER BY attributeno)
FROM maxattribute
WHERE objectname = 'asset' AND persistent = 1;
  • Copy/paste the list of columns into the SELECT ... INTO statement below, then execute it to create the table. In this step, we can also transform or map any ID fields (such as ASSETID and ASSETUID in this example) to avoid conflicts.
DROP TABLE IF EXISTS tmp_asset;

SELECT [Copy/Paste the column names here]
INTO tmp_asset
FROM asset
WHERE description = 'Fire Extinguisher';

Step 2: Generate INSERT Script from the temp table:

  • Right-click on the database, choose Tasks > Generate Scripts…
  • In the Generate Scripts wizard, choose “Select specific database objects”, then select the tmp_asset table we created in the step above.
  • On the next page, “Set Scripting Options”, click the “Advanced” button and, under the General group, change the “Types of data to script” option from Schema Only to Data Only
  • Back on the Set Scripting Options page, choose either to save the output as a SQL file or to open it in a new query window.

What are the alternative methods?

If the method described in this post does not work for you, two alternative methods you can try are:

Some problems with authentication in MAS and how to work around

The new Maximo Application Suite is still new to me, and it took me a bit of time to get through some of the basics. This post contains a few quick notes; hopefully, they can save you a bit of time when facing the same problems. This is purely personal observation, and the solutions are based on trial and error. I haven’t dug through the official documentation to get a good understanding of how it all works, so please don’t quote me if the information provided here is incomplete or doesn’t work for you.

Slow Login Process

In Maximo 7, opening the Maximo URL and getting past the login screen (with a saved password) usually takes 2-3 seconds. With MAS, however, the login screen is much slower to load, likely due to the SAML integration, which we cannot do much about. After logging in, the Suite Navigator screen also takes a long time to load. The whole process takes about 15-20 seconds.

Once the screen is loaded, I reckon, 99% of the time, people will go to the Manage application (which is the old Maximo app).

To access Maximo faster, after logging in, I bookmarked the Manage screen. The next time, I can access Manage using the bookmark. Although I still have to log in, it skips the Suite Navigator screen and takes me straight to the Manage application, saving me about 10 seconds.

REST API URL

In Maximo 7.6.x, to retrieve a list of assets, I can send a request like the one below:

https://[MAXIMO_ROOT]/maximo/oslc/os/mxasset?oslc.select=assetnum,location,description,status&oslc.pageSize=10&lean=1

It will result in something like this:

However, in MAS (Maximo 8.x), when I sent a similar request using Postman with the same URL format, I got an HTML response with the error message: “You need to enable JavaScript to run this app.”

As it turns out, with MAS, to access the same API endpoint, we need to change the /oslc/ part of the URL to /api/ as follows:

https://[MAS_MANAGE_ROOT]/maximo/api/os/mxasset?oslc.select=assetnum,location,description,status&oslc.pageSize=10&lean=1

API Authentication

It appears to me that, with Postman, neither the native authentication nor the LDAP authentication method works with MAS. The only method that seems to work is using an API key, sent in the apikey header as follows:
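To illustrate, here is a quick sketch using plain Python with the requests library (outside Maximo); the host and key values are placeholders:

import requests

url = "https://[MAS_MANAGE_ROOT]/maximo/api/os/mxasset"
headers = {"apikey": "<your-api-key>"}  # the key is sent in the apikey header
params = {
    "oslc.select": "assetnum,location,description,status",
    "oslc.pageSize": "10",
    "lean": "1",
}

response = requests.get(url, headers=headers, params=params)
print(response.status_code)
print(response.json())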

On a side note, I noticed that, with the new API Keys application, anyone who has access to it can see and use the API key of any other user, including the maxadmin account. This seems like a security concern to me: once someone has access to the keys, they can use integration tools like MXLoader to manipulate data while pretending to be someone else.

How to suppress attribute logic with the NOSETVALUE flag?

Introduction to the NOSETVALUE flag

Over the last 5 or 6 years, I haven’t had to write any new custom Java classes; most requirements today can be met without Java coding. Despite that, some logic is hard to achieve using automation scripts. In this post, I’ll talk about using the NOSETVALUE access modifier to deal with one such scenario.

According to the MboConstants API documentation, NOSETVALUE is “Similar to the READONLY, except that no exception is thrown and the setValue() calls are suppressed”. This means that when a field has the NOSETVALUE flag, it is not editable, but it also doesn’t throw an error if other logic tries to update it.

Requirement

In many places, when a field is updated, Maximo will update some other fields. We want to retain the standard logic, except for a certain field that we don’t want it to overwrite. Below are two examples I have come across:

  • With the HSE add-on, when a user updates the priority of a work order, the target dates are updated according to the prioritization matrix. The client does not want the target dates to be overwritten.
  • When assigning a workgroup to a work order, Maximo automatically populates the Lead field. However, the client wants this mandatory field to be left blank to force the user to specify the correct person.

Solution

For the requirement above, in Java, we could store the original value of the affected field in a variable, call the super.action() function to run the out-of-the-box Maximo logic, then set the field back to the original value.

However, with an automation script, we do not have such control. To prevent Maximo from overwriting a field, we can switch on the NOSETVALUE flag on the affected field before the action method of the trigger field executes. After that, we need to switch it off to make the field editable again.

The execution order of attribute events is as follows:

  1. Java – validate
  2. Autoscript – validate
  3. Java – action
  4. Autoscript – action

The out-of-the-box logic that updates other fields is executed in step #3. Thus, with an automation script, we can switch on the NOSETVALUE flag at step #2, then release it at step #4.

Implementation

To give an example, if we want to prevent Maximo from overwriting the Lead field when the WORK GROUP (PERSONGROUP) field is specified, we can create the automation script below:

  • Script Name: WO_PERSONGROUP
  • Launch Point 1:
    • Launch Point: WORKGROUP_VALIDATE
    • Object: WORKORDER
    • Attribute: PERSONGROUP
    • Events: Validate
  • Launch Point 2:
    • Launch Point: WORKGROUP_ACTION
    • Object: WORKORDER
    • Attribute: PERSONGROUP
    • Events: Action
  • Code:
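A minimal sketch of the script is shown below; it relies on the implicit launchPoint variable to tell which launch point fired, and on mbo being the current work order:

from psdi.mbo import MboConstants

if launchPoint == "WORKGROUP_VALIDATE":
    # Validate event: lock the LEAD field before the Java action logic runs
    mbo.setFieldFlag("LEAD", MboConstants.NOSETVALUE, True)
elif launchPoint == "WORKGROUP_ACTION":
    # Action event: release the lock so LEAD is editable again
    mbo.setFieldFlag("LEAD", MboConstants.NOSETVALUE, False)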

Since the script is very simple, I combined the two launch points into one script. This means that when the PERSONGROUP field is updated, the script is called twice. The first execution, triggered by the Validate event, locks the LEAD field. After the out-of-the-box Java logic has executed, the second execution, triggered by the Action event, releases the lock to make the field editable again.

Happy coding.
