Working in IT, we deal with strange issues all the time. However, every once in a while, something comes up that leaves us scratching our heads for days. One such issue happened to us a few years back. It came back to me recently, and this time I thought I should note it down.
Summary
Maximo – TechnologyOne integration error. Work orders went missing.
There were no traces of the problem; everything appeared to be working fine.
The root cause was the F5 Load Balancer returning a maintenance page with an HTTP 200 status code, which led Maximo to believe the outbound message had been received successfully by WebMethods.
The mysterious missing work orders
The issue was first reported to us when a user raised a ticket about missing work orders in TechnologyOne, the finance management system used by our client. Without work orders created in TechOne, users cannot report actual labour time or other costs, so this was treated as a high-priority issue.
Integration Background
TechOne is integrated with Maximo using WebMethods, an enterprise integration platform. Unlike with direct integration, these types of problems are usually easy to deal with when an enterprise integration tool is used: we look at the transaction log, identify the failed transactions and their cause, fix the issue, and then resubmit the message. All good integration tools have these fundamental capabilities.
In this case, we looked at WebMethods' transaction history and couldn't find any trace of the missing work orders. We also spent quite some time digging through the log files of each server in the cluster but couldn't find anything relevant. Of course we couldn't: if there had been an error, it would have been picked up, and the system would have raised alarms and email notifications through the several overlapping monitoring channels we had set up for this client.
Clueless
On the other hand, when we looked at Maximo's Message Tracking and log files, everything looked normal, with work orders published to WebMethods correctly and without interruption. In other words, Maximo said it had sent the message, while WebMethods said it never received anything. This left us in limbo for a few days. And of course, when we had no clue, we did what application people do best: we blamed the network guys.
The network team couldn't find anything strange in their logs either. So the issue slipped for a few days without any real progress. Meanwhile, users kept reporting new missing work orders, not knowing that I wasn't really doing any troubleshooting; I was just staring at the screen all day long.
Light at the end of the tunnel
Then, of course, when you stare at something long enough, the problem reveals itself. With enough work orders reported, it became clear that updates only went missing between 9 and 11 PM, regardless of the type of work order or the data entered. When this pattern was mentioned, it didn't take long for someone to point out that this is usually the time when IT does its Windows patching.
When a server is being patched, IT sets the F5 Load Balancer to redirect any user requests to a "Site Under Maintenance" page, which makes sense for a normal user accessing the service via a browser. The problem is that when Maximo published an integration message to WebMethods, it received the same web page. That in itself is fine, because Maximo doesn't process the response body. However, the response came back with an HTTP 200 status, which is not fine in this case. Since it was an HTTP 200 OK, Maximo assumed the message had been accepted by WebMethods and marked it as a successful delivery. WebMethods, on the other hand, never received the message.
Lesson Learned
The recommendation in this case is to set the status of the maintenance page to something other than HTTP 2xx. When Maximo receives a non-2xx status, it marks the message as a delivery failure, which means the administrator will be notified if monitoring is set up, and the failed message will be listed as an error that can be resubmitted using the Message Reprocessing app.
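To see why the status code matters, here is a minimal sketch of the kind of check an outbound delivery handler performs, written with a plain HttpURLConnection rather than Maximo's actual endpoint handler (the URL is made up):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class OutboundDelivery {

        // Hypothetical address of the WebMethods listener behind the F5 virtual server.
        static final String ENDPOINT = "https://integration.example.com/invoke/receiveWorkOrder";

        public static void send(String payload) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(payload.getBytes(StandardCharsets.UTF_8));
            }

            int status = conn.getResponseCode();
            // A maintenance page served with HTTP 200 sails straight through this check,
            // which is exactly how the work orders went "missing". A 503 from the load
            // balancer would make the same call fail loudly and leave the message in error.
            if (status < 200 || status >= 300) {
                throw new IOException("Delivery failed with HTTP status " + status);
            }
        }
    }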
Due to the complex communication chain involved, I never heard back from the F5 team on what exactly was done to rectify the issue. However, from a quick search, it looks like it can be achieved easily by updating the rule in F5.
This same issue recently came back to me, so I added it to my list of common issues with load balancers. I think it is also interesting enough to deserve a separate post. It is a lengthy story; if you made it this far, I hope it will be useful to you at some point.
I needed to send an external system a file import request. The external system would take some time to process the file before the import result could be queried; making a status query immediately after the import request would always return an "import process is still running" response. It's best to wait a few seconds before making the first attempt to query the import status.
It took quite a bit of time searching the web for a "wait" or "sleep" function. Some posts suggested writing a Java service; some recommended complex processes or an external library.
The easiest method I finally settled on is to use Repeat, as follows:
Essentially, the flow repeats once with a 5-second repeat interval before moving on to the next step (Main Mapping). The repeat loop does nothing other than write a line to the server log to make troubleshooting a bit easier.
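For comparison, if you did go down the Java service route that some of those posts suggested, the body would be little more than a sleep call. A minimal sketch (the service name is arbitrary; Designer generates the wrapping class, and the imports go into the shared source area):

    import com.wm.data.IData;
    import com.wm.app.b2b.server.ServiceException;

    // Pause for 5 seconds, then let the flow continue - the same effect as the
    // Repeat step configured with a count of 1 and a 5-second repeat interval.
    public static final void sleepFiveSeconds(IData pipeline) throws ServiceException {
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new ServiceException(e);
        }
    }

I still prefer the Repeat step, since it keeps everything visible in the flow and avoids maintaining custom Java code.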
One of our clients undertook a massive IT transformation program which involved switching to a new financial management system and upgrading or rebuilding a plethora of interfaces among several systems, both internal and external to the business. Kronos (now UKG) was chosen to replace an old timesheet application, and it needed to be integrated with other systems such as Maximo and TechnologyOne.
WebMethods was used as the integration tool for this IT ecosystem. This was my first experience working with Kronos. The project took almost two years to finish, and as always when dealing with something new, I had quite a bit of fun and pain along the way. As it is approaching the final stage now, I think I should write down what I have learned. Hopefully, it will be useful for people out there doing a similar task.
The REST API
Kronos provides a fairly good reference source for its REST API. In theory, it offers the advantage of supporting real-time integration and enabling a seamless workflow. However, we didn't have such a requirement in this project, and the REST API comes with two major limitations:
API throttling limit: it restricts the number of calls you can make based on the license purchased.
Designed for internal use: it is obvious to me that the API was built for the application's own internal use, not for external integration.
No one told us about this when we first started. As a result, we hit several major obstacles along the way:
First, most API calls need to be operation-specific. For example, Cost Center requests need to be either Create New, Update, or Move; there is no Sync or Merge operation. The Update and Move requests accept Kronos' Internal ID only, so before sending an Update or Move request, we need to send another request first to retrieve the Internal ID of the record.
Cost Center has a simple structure with only a few data fields. Even so, to get it to work, we had to build complex logic that queries Kronos to check whether the record (and its parent) exists before sending the appropriate Create New, Update, or Move request.
This is not a major problem until the API limit is added to the equation. If Kronos receives more than a certain number of requests over a given period, it stops processing further requests; in other words, the whole integration is effectively out of service. We had to build a caching mechanism to pre-load and refresh the data at suitable times so that the number of requests sent to Kronos was kept to a minimum. This added a lot of complexity to an otherwise simple interface.
With a more complex data structure such as the Employee master data, the REST API alone makes it impossible to build an interface robust enough for a large-scale, high-volume system. We had to build complex business logic in WebMethods to handle all sorts of scenarios and exceptions. Creating a new employee record can result in more than a dozen different requests: checking existing data, looking up the Internal IDs of various profiles (Security, Schedule, Timesheet, Holiday, and Pay Calculation), sending the Create New/Update requests in the correct order, and handling exceptions and rolling back if one request fails.
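To make the pattern concrete, below is a stripped-down sketch of the lookup-then-dispatch logic for the cost centre case described above. The client interface and its methods are placeholders standing in for the vendor's REST calls, not real endpoint names:

    import java.util.Optional;

    public class CostCenterSync {

        // Placeholder for whatever HTTP client wraps the vendor's REST API.
        interface KronosClient {
            Optional<Long> findInternalId(String costCenterCode); // extra lookup call
            Optional<Long> findParentId(String parentCode);       // another lookup call
            void create(String code, String name, long parentId);
            void update(long internalId, String name);
            void move(long internalId, long newParentId);
        }

        // Even the "simple" cost centre interface costs three or four calls per record:
        // look up the record, look up its parent, then pick Create New, Update, or Move.
        static void sync(KronosClient kronos, String code, String name, String parentCode) {
            long parentId = kronos.findParentId(parentCode)
                    .orElseThrow(() -> new IllegalStateException("Parent not loaded yet: " + parentCode));

            Optional<Long> existing = kronos.findInternalId(code);
            if (existing.isEmpty()) {
                kronos.create(code, name, parentId);
            } else {
                long id = existing.get();
                kronos.update(id, name);
                // A separate Move request is needed whenever the parent has changed;
                // shown unconditionally here to keep the sketch short.
                kronos.move(id, parentId);
            }
        }
    }

Multiply those per-record lookups by thousands of records and the throttling limit is hit very quickly, which is why the caching mentioned above became necessary.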
The Report API
Kronos provides a REST API to execute reports. Besides the out-of-the-box reports, it is possible to expose custom reports through this API too. This is useful for alleviating some of the problems with the API throttling limit.
For example, we have an interface that sends the organisation hierarchy (departments and job positions) to Kronos as Cost Centers. The source system, TechnologyOne in this case, periodically exports its whole data set to a CSV file. For each record, we only need to know whether it already exists in Kronos to decide between a Create and an Update request, and whether a Move request is also required. In this case, we used the Report API to retrieve the full set of cost centres in one single call rather than making thousands of individual cost centre detail requests.
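In code terms, the report result becomes an in-memory index that answers the "does it exist?" and "what is its Internal ID?" questions locally. A sketch, with the report and its columns invented for illustration:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class CostCenterCache {

        record ReportRow(String code, String name, long internalId, long parentId) {}

        // Placeholder for a single call to a custom "all cost centres" report.
        interface ReportApi {
            List<ReportRow> runCostCenterReport();
        }

        // One report call replaces thousands of per-record detail requests:
        // build the index once, then look up records locally while processing the CSV.
        static Map<String, ReportRow> loadCache(ReportApi api) {
            Map<String, ReportRow> byCode = new HashMap<>();
            for (ReportRow row : api.runCostCenterReport()) {
                byCode.put(row.code(), row);
            }
            return byCode;
        }
    }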
The Import API
The Import API turned out to be the best way to send data to Kronos. We learnt that the hard way. It has some minor limitations, such as:
Some endpoints use the description to identify a record instead of an ID
The documentation is sometimes inaccurate
However, the Import API provides some powerful capabilities for sending external data to Kronos:
Supports bulk upload operations
Automatically matches existing records – no need to query for the Internal ID
Supports a "merge" operation – automatically decides whether to create or update depending on whether the record already exists
Since this is an asynchronous operation and the time it takes to process inbound data depends on the volume, we needed to build a custom response handler that keeps checking with Kronos after each delivery to retrieve the status of the import job and handle the success or failure result. This custom response handling takes some extra effort to build, but it is reusable for all import endpoints.
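The response handler boils down to a polling loop. A simplified sketch, with the status values and client interface assumed rather than taken from the vendor documentation:

    public class ImportJobPoller {

        enum JobStatus { RUNNING, COMPLETED, FAILED }

        // Placeholder for the call that queries an import job's status.
        interface ImportApi {
            JobStatus getJobStatus(String jobId);
        }

        // Poll after each delivery until the job finishes or we give up.
        static JobStatus waitForResult(ImportApi api, String jobId) throws InterruptedException {
            int maxAttempts = 60;      // roughly 5 minutes with a 5-second interval
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                Thread.sleep(5_000);   // give the import time to run before (re)checking
                JobStatus status = api.getJobStatus(jobId);
                if (status != JobStatus.RUNNING) {
                    return status;     // COMPLETED or FAILED: hand over to success/error handling
                }
            }
            throw new IllegalStateException("Import job " + jobId + " still running after " + maxAttempts + " checks");
        }
    }

In our case, this handler sits in WebMethods and is shared by all the import interfaces.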
As an example, the Employee interface mentioned above eventually became too complex and a maintenance nightmare for us. We rebuilt it from scratch using the Import API and were glad that we did: it greatly simplified the interface, and the business is now very confident in its robustness.
Conclusion
If I had to build a new integration interface with Kronos now: for retrieving data from Kronos and sending it to another system, I would start with reports (via the Report API) to identify new updates, then use the REST API to retrieve the details of individual records only if required. For sending data to Kronos, I would look at the Import API first and go for the REST API only if the Import API cannot do what I want, and only if the requests are simple and low-volume.
In WebMethods, the most basic way to write a string "IN" operator is to use Branch as follows:
Another way to reduce the number of lines of code is by combining the conditions using "OR":
These approaches work well if the number of options is small or if the variable name is short. If there are more than a few options, or the variable name is very long, the code looks messy and becomes difficult to maintain. For example:
To implement the string "IN" logic, we can use /^value$/. For example, the code below evaluates to true if $input is exactly one, two, or three:
To implement the string "CONTAINS" logic, we can use /value/. For example, the code below returns true if the $input string contains one:
With regex, it is also simple to check whether the variable contains one of several values:
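These patterns behave the same way in plain Java, which makes it easy to sanity-check them before pasting them into a Branch label. The sample values below are arbitrary:

    import java.util.regex.Pattern;

    public class RegexInDemo {
        public static void main(String[] args) {
            // String "IN": anchored alternation, true only for an exact match.
            // (Java's matches() already anchors the whole string; ^ and $ mirror the Branch label.)
            boolean isIn = "two".matches("^(one|two|three)$");

            // String "CONTAINS": an unanchored search for the value anywhere in the input.
            boolean contains = Pattern.compile("one").matcher("someone else").find();

            // "CONTAINS one of several values".
            boolean containsAny = Pattern.compile("one|two|three").matcher("word twenty-two").find();

            System.out.println(isIn + " " + contains + " " + containsAny); // prints: true true true
        }
    }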
My favourite Martin Fowler quote is "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." This trick helps me keep most of my code fitting on a single screen.
When writing a piece of software, we are in total control of the quality of the product. With integration, many elements are not under our control. The network and firewalls are usually managed by IT. With external systems, we often don't know how they work, or we aren't given access. Yet any change to these elements can cause our interfaces to fail.
For synchronous interfaces, where the user receives instant feedback after each action (e.g. Maximo GIS integration), we don't usually need to set up alarms. For asynchronous interfaces, which don't give instant feedback, failures usually go unnoticed. In many cases, we only find out about a failure after it has caused some major problem.
A good interface must provide an adequate ability to handle failures, and in the case of async integration, proper alarms and reports should be set up so that failures are captured and handled proactively by the system administrators.
On the one hand, it is bad to have no monitoring. On the other hand, too many alarms are even worse: the recipients end up ignoring them completely, including the critical ones. This is usually seen in larger organisations. Many readers of this blog won't be surprised to open the Message Reprocessing app and find thousands of unprocessed errors in there, likely accumulated and left undealt with for years.
It is hard to create a perfect design from day one and build an interface that works well in the first release. There are many kinds of problems an external system can throw at us, and it is not easy to envision all possible failure scenarios. As such, we should expect and plan for an intensive monitoring and stabilizing period of one to a few weeks after the first release.
As a rule of thumb, an interface should always be monitored and raise alarms when a failure occurs. It should also allow resubmission or reprocessing of a failed message. More importantly, there shouldn't be more than a few alarms raised per day on average from each interface, no matter how critical or high-volume it is. If there are more, the alarms become too noisy and people start ignoring them; in that case, there must be recurring patterns, each of which should be treated as a systemic issue, and the interface should be rebuilt or updated to handle them.
It is easier said than done, and every interface is a continuous learning and improvement process for me. Below are some examples of interfaces I built or dealt with recently. I hope you find them entertaining to read.
Case #1: Integration of Intelligent Transport System to Maximo
An infrastructure construction company built and is now operating a freeway in Sydney. They use Maximo to manage maintenance works on civil infrastructure assets. An external provider (Kapsch) provided toll point equipment and a traffic monitoring system. Device status and maintenance work from this system are exported daily as CSV files and sent to Maximo via SFTP. On the Maximo side, the CSV files are imported using a few automation scripts triggered by a cron task.
The main goal of the interface is to maintain a consolidated database of all assets and maintenance activities in Maximo. It is a non-critical integration because even if it stops working for a day or two, it won't cause a business disruption. However, occasionally Kapsch stopped exporting CSV files for various reasons, and the problem would only be discovered after a while, for example when someone tried to look up a work order and couldn't find it, or when a month-end report looked off. Since we didn't have any access to the traffic monitoring system managed by Kapsch, Maximo had to handle the monitoring and alarms for this integration.
The difficulty in this case is that when the interface on Kapsch's side fails, it doesn't send Maximo anything: there is no import, and thus no error or fault for Maximo to raise an alarm on. The solution we came up with was a custom logging table in which each import run is written as an entry with some basic statistics, including start time, end time, total records processed, and the number of records that failed. The statistics are displayed on the Start Center.
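The logging itself is only a few lines. Below is a rough sketch using Maximo's MBO API; the custom object (ZZIMPORTLOG) and its attribute names are invented for illustration, and the real names depend on how the logging table is set up in Database Configuration:

    import java.rmi.RemoteException;
    import java.util.Date;

    import psdi.mbo.MboRemote;
    import psdi.mbo.MboSetRemote;
    import psdi.security.UserInfo;
    import psdi.server.MXServer;
    import psdi.util.MXException;

    public class ImportRunLogger {

        // Write one row per import run so the Start Center can report on it.
        public static void logRun(UserInfo userInfo, Date start, Date end,
                                  int processed, int failed)
                throws MXException, RemoteException {
            MboSetRemote logSet = MXServer.getMXServer().getMboSet("ZZIMPORTLOG", userInfo);
            MboRemote entry = logSet.add();
            entry.setValue("STARTTIME", start);
            entry.setValue("ENDTIME", end);
            entry.setValue("RECORDSPROCESSED", processed);
            entry.setValue("RECORDSFAILED", failed);
            logSet.save();
        }
    }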
For alarms, since this integration is non-critical, an escalation is set up to check whether there has been a new import within the last 24 hours; if not, Maximo sends an email to me and the people involved. There are actually a few different interfaces in this integration, such as the device list and preventive maintenance work coming from TrafficCom, and corrective work on faults coming from JIRA. Thus, when a system stopped running for a planned or unplanned reason, I would sometimes receive multiple emails for a couple of days in a row, which was too much. So I tweaked it further to send only one email on the first day when one or more interfaces stop working, and a reminder a week later if the issue has not been rectified. After the initial fine-tuning period, the support teams on both Kapsch's and Maximo's sides were added to the recipient list, and after almost two years the integration has been running satisfactorily. There have been a few occasions when files were not received on the Maximo side, and the support people involved were always informed and able to take corrective action before the end users could notice.
Case #2: Integration of CRM and Maximo
A water utility in Queensland uses Maximo for managing infrastructure assets and for tracking and dispatching work to field crews. When a customer calls to request a new connection or report a problem, the details are entered into a CRM system by the company's call centre. The request is then sent to Maximo as a new SR and turned into work orders. When a work order is scheduled and a crew has been dispatched, these status updates are sent back to CRM. At any time, if the customer calls to check on the status of the request, the call centre should be able to answer by looking up the ticket in CRM alone. Certain types of problems, such as major leaks or water quality issues, have high priority, and some have SLAs with response times measured in minutes. As such, this integration is highly critical.
WebMethods is used as the middleware for this integration, and as part of sending a new SR from CRM to Maximo, the service address also needs to be cross-checked against ArcGIS for verification and standardisation. As you can see, there are multiple points of failure in this integration.
This integration was built several years ago, and some level of alarming had been set up in CRM at a few points with a high risk of failure, such as when a Service Order is created but not picked up by WebMethods, or picked up but not sent to Maximo. Despite this, the interface had issues every few weeks, and thus needed to be rebuilt. In addition to the existing alarms coming from CRM, several new alarm points were added in Maximo and WebMethods:
When WM couldn’t talk with CRM to retrieve a new Service Order
When WM couldn’t send a status update back to CRM
When WM couldn’t talk to Maximo
When Maximo couldn’t publish messages to WM
These apply to individual messages coming in and out of Maximo and CRM, and any failure results in an email sent to the developer and the support team.
In the first few days after this new interface was released to Production, the team received a few hundred alarms each day. My capacity to troubleshoot was about a dozen of those alarms a day, so instead of trying to solve them one by one, we tried to identify all the recurring patterns and address them by modifying the interface design or the business process, or by fixing bad data. A great deal of time was also spent on improving the alarms themselves: for each type of issue, a detailed error message, and in many cases the content of the XML message itself, is attached to the email alarm. A "fix patch" was released to Production about two weeks after the first release, and after that the integration only produced a few alarms per month. In most cases, the support person can tell the cause of the problem just by looking at the email, before even logging in to the client's environment. After a year, every failure point we envisioned, no matter how unlikely, has failed and raised an alarm at least once, and the support team has always been on top of it. I'm glad we put in all that monitoring in the first place, and as a result, I haven't heard of any issue that wasn't fixed before the end users became aware of it.
Case #3: Interface with medium criticality/frequency
Of the two examples above, one is low frequency and low criticality; the other is high frequency and high criticality. Most interfaces sit somewhere in the middle of that spectrum. Interfaces that are highly critical but don't run frequently, or don't need a short response time, can also be put into this category. In such cases, we might not need to send individual alarms in real time. Even an experienced developer cannot troubleshoot more than a few issues per day, and as a rule of thumb, if an interface raises more than a few alarms per day, it is too much. As developers, if we can't handle more than a few alarms a day ourselves, we shouldn't do that to the support team by sending them alarms all day long. For the utility company mentioned above, when WebMethods was first deployed, the WM developer configured a twice-daily report listing all failed transactions from the last 12 hours. Thus, for most interfaces, we don't need to set up any specific alarms: if there are a few failures, they show up in the report and are looked at by technical support at noon or at the end of the day. This appears to work well, even for some critical interfaces such as bank transfer orders or invoice payments.
Case #4: Recurring failures resulting in too many alarms
For the integrations mentioned in #1 and #2, the key to getting them to work satisfactorily is to spend some time after the first release monitoring the interfaces and fine-tuning both the interface itself and the alarms. It is important that alarms are raised when failures occur, but it is also important that there aren't too many of them. Not only will people ignore alarms if they receive too many, it also becomes hard to tell the critical issues apart from the less important ones. From my experience, dealing with those noisy alarms is easy. Most of the time, the alarms come from a few recurring failures. When people first look at it, they can easily be overwhelmed by the sheer number of issues and feel reluctant to deal with them. The strategy is to simply work through each alarm or failure one by one, carefully documenting the error and the solution for each in an Excel spreadsheet. Usually, after going through a few issues, a few recurring patterns can be identified; by addressing those patterns, most of the noise disappears.
Example: a water utility in Melbourne uses an external asset register system, and the asset data is synchronised to Maximo in near real time. The interface produced almost 1 GB of SystemOut.log each day, rendering the logs useless. I looked at the errors and documented them one by one. After about two hours, it was clear that 80% of them came from locations missing in Maximo: when the interface creates new assets under these locations, Maximo writes a lot of error traces to the SystemOut.log file. I did a quick scan, wrote down all of the missing locations, and quickly added them to Maximo using MXLoader. After that, the number of errors dropped significantly. By occasionally checking the log files over the following few days, I was able to list all the missing locations (there were about 30 of them) and remove all the errors caused by this. The remaining errors found in the log files were easily handled separately. Only after that did some genuinely critical issues come onto the business's radar.
Manipulating strings is probably the most frequent operation we need to perform when transforming data, so I'd like to talk a bit about string concatenation in WebMethods. The most basic way to add two strings together is to use the pub.string.concat service in the WmPublic package, as shown below:
Image 01. pub.string.concat service
However, this approach is very limited. For example, to build a unique ID string consisting of a prefix, an auto-ID number, and a suffix with separators in between, such as WO-10012-CM, we would need at least four concat calls and a number of temporary variables. That's crazy.
An alternative approach is to use variable substitution when assigning a value to a variable, as shown below. In this case, we can build a new string from an unlimited number of variables in just one line of code.
Image 02. using variables substitution
This approach doesn’t work if one of the input variables can have a Null value. In that case, it will give us an unwanted result as shown below:
Image 03. Bad result when input is Null
With the Tundra library, we have the much more flexible tundra.string.concatenate service. It allows us to join an unlimited number of strings in one line and, at the same time, offers other capabilities such as adding a separator.
Image 04. Use tundra.string.concatenate
In the example above, I can achieve the same result in one line of code, and the tundra service is smart enough not to add the second separator when the $suffix variable is Null.
However, beware of an annoying WebMethods bug: there is no way to set the order of the input variables. When you make an update to one input – in the example below, I changed the input variable str2 to map it to a different variable – it moves to the bottom of the list and messes up the final output.
Image 05. Last updated variable is moved to the bottom
A workaround is to update the other variables manually to correct the order, but from my experience, anything with more than three inputs becomes a maintenance nightmare.
Image 06. A “quick” update messed up the order of concatenation
The more robust approach is to write our own concatenate service as part of a common service package. It takes a bit of time, but on a big project it's worth the effort to build a common service library that can be reused across the entire environment. Ten minutes spent building this service can save the whole team countless hours of development and maintenance.
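Such a service is only a few lines of Java. A sketch, written as a Designer Java service body with assumed parameter names (strings, separator, and result) rather than any existing WmPublic signature; taking the inputs as a single string list also sidesteps the input-ordering problem described above:

    import com.wm.data.IData;
    import com.wm.data.IDataCursor;
    import com.wm.data.IDataUtil;
    import com.wm.app.b2b.server.ServiceException;

    // Joins a string list with an optional separator, skipping null or empty items
    // so a missing $suffix never produces "null" or a dangling separator.
    public static final void concatenate(IData pipeline) throws ServiceException {
        IDataCursor cursor = pipeline.getCursor();
        try {
            String[] strings = (String[]) IDataUtil.get(cursor, "strings");
            String separator = IDataUtil.getString(cursor, "separator");
            StringBuilder result = new StringBuilder();
            if (strings != null) {
                for (String s : strings) {
                    if (s == null || s.isEmpty()) continue;
                    if (result.length() > 0 && separator != null) result.append(separator);
                    result.append(s);
                }
            }
            IDataUtil.put(cursor, "result", result.toString());
        } finally {
            cursor.destroy();
        }
    }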
Edit: After reading this post, Lachlan Dowding, the author of the Tundra library, told me we can use a String List to work around the WebMethods bug on the ordering of input strings when using Tundra's concatenate.
I am a freelance Maximo consultant based in Melbourne. If you enjoy reading my blog, please connect with me on LinkedIn to get updates on new posts. If you or your company need any professional assistance, please leave me a message and I'll call you back.