Category: SQL Server

My failed attempt to get Maximo to work with Azure SQL database

Recently, I started playing with Azure by attempting to migrate a Maximo instance from my local VM to the Azure platform. Although the attempt was a success, I didn't realize at first that SQL Server and Azure SQL are different databases (or, more correctly, two different versions of the same database). There were a few issues during the process, but I figured out how to work around them and got Maximo running on an Azure VM and Azure SQL. After sharing the result on LinkedIn, there were some comments that Maximo couldn't be installed on Azure SQL and that IBM doesn't support it, so I spent a bit more time digging and thought I should share the details and some of my opinions on this matter.

First, let us be clear: Azure is a big cloud platform which offers many different services. I'm not a cloud expert, but from what I understand, we are talking about two main services here:

  • Azure VM: this is IaaS, and it lets us run a virtual machine in the cloud. You can run many different Windows or Linux versions on it, and it is transparent to standard business applications such as Maximo. In other words, there shouldn't be any issue running Maximo on an Azure VM as long as you stick to the OS versions supported by Maximo. For example, the Maximo 7.6.1.1 compatibility matrix lists Microsoft Hyper-V 2012 and 2012 R2 as supported, and with them Windows Server 2012 and 2016 as supported guest OSes. In fact, many companies, large and small, are running Maximo on Azure VMs.
  • Azure SQL: this is a cloud version of SQL Server. It has the core engine of the SQL Server database, but certain aspects of it have been modified to work in a cloud environment and to support multi-tenancy. The current version of Maximo doesn't support Azure SQL, and IBM doesn't have any plan to support it in the near future with Maximo 7.6.1.x or 8 (according to their 2020 roadmap).

Even with that knowledge, I was still curious whether I could make it work from a technical perspective, so I did a few experiments:

A – Installing Maximo 7.6.1 from scratch on Azure VM and Azure SQL:

People mentioned that Maximo cannot be installed on Azure SQL, so I did just that, to find out what exactly the issue is (and, secretly, I hoped I could fix it). I followed the standard installation process and managed to install Websphere, prepare the environment, and install the Maximo SMP.

After the SMP was installed, I couldn't get the configuration tool to connect to the Azure DB because it couldn't resolve the DB alias provided by Azure.

I attempted to execute this step manually from the command line by setting up the maximo.properties file and then running the maxinst.bat tool. However, the DBC script failed when it ran an undocumented DBCC 'CALLFULLTEXT' command.
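For reference, the database-related entries in maximo.properties look roughly like this; the property names are the standard Maximo ones, but the server name, database name, user, and password below are placeholders, not the values I actually used:

  # Placeholder values only; adjust the server, database name, user, and password.
  mxe.db.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
  mxe.db.url=jdbc:sqlserver://<yourserver>.database.windows.net:1433;databaseName=maximo;encrypt=true
  mxe.db.user=maximo
  mxe.db.password=SuperSecret!
  mxe.db.schemaowner=dbo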

To be honest, installing a fresh Maximo DB on Azure SQL shouldn't be a problem from a technical perspective, because we can always install it on an on-premises SQL Server and then move it to Azure SQL (using the process I described in my previous post). Nevertheless, this attempt is definitely a failure.

B – Upgrading Maximo from 7.6.0.6 to 7.6.1

It is critical to be able to upgrade and install updates, fix packs, or industry add-ons. From a technical standpoint, all of these activities are essentially the same process: they involve running a large number of DBC scripts, which mostly consist of SQL commands. I wanted to see if this works on Azure SQL.

I started out with a simple task first: upgrading from Maximo 7.6.0.6 to 7.6.1. To do this, I simply replaced the existing 7.6.0.6 SMP folder installed previously with a new 7.6.1 SMP folder, then ran the updatedb tool. On the first run, I had several issues related to an incompatible collation between "Latin1_General_CI_AS" (the default setting of on-premises SQL Server) and "SQL_Latin1_General_CP1_CI_AS" (the default collation of Azure SQL databases, which can be changed, and the ONLY supported collation of master and tempdb).

Changing a database collation is a complex job which involves updating the value at all three levels: database, table, and column. To do it, we have to drop all indexes and constraints on the tables, run the update script, then re-create the indexes. Since I only wanted to find out whether an upgrade is possible, I tried again with a fresh demo DB created with the "SQL_Latin1_General_CP1_CI_AS" collation. The upgrade was successful this time, so that is a small success.
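For anyone who wants to check or reproduce this, here is a minimal sketch (the database, table, and column names are illustrative, not the exact ones from my environment):

  -- Check the server and database collation:
  SELECT SERVERPROPERTY('Collation') AS server_collation,
         DATABASEPROPERTYEX('maxdm7609', 'Collation') AS database_collation;

  -- On Azure SQL, a new database can be created with an explicit collation up front:
  CREATE DATABASE maxdemo761 COLLATE SQL_Latin1_General_CP1_CI_AS;

  -- Fixing an existing database is the painful part: each column keeps its old collation
  -- and has to be altered one by one, after dropping any index or constraint that uses it:
  ALTER TABLE dbo.workorder
        ALTER COLUMN description NVARCHAR(100) COLLATE SQL_Latin1_General_CP1_CI_AS;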

C – Install Add-ons

Next, I tried installing some serious industry add-ons (Scheduler 7.6.7, Oil and Gas 7.6.1, Utilities 7.6.0.4, Control Desk 7.6.1) on top of the working instance from the previous step. These are probably the most complex add-ons available for Maximo; if they worked, there shouldn't be any problem getting any future upgrade or update to work.

The process failed when it tried to run some scripts belonging to the ICD (Control Desk) add-on. This time, the issue was a script trying to access the spt_values object in the master DB, which will never be accessible in a multi-tenant cloud environment. There are a few posts on the web showing how to work around this issue, but I stopped at this point, and that was the end of my attempt. There's no point spending too much time getting something to work knowing it is not supported by IBM (unless some clients are willing to pay for my time to do that in the future).
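I didn't keep the exact statement from the ICD script, but the pattern that breaks is typically a query that uses master..spt_values as a cheap numbers table, and the usual workaround posted online is to generate the numbers locally. A rough sketch:

  -- Fails on Azure SQL: cross-database references to master are not supported.
  SELECT number FROM master..spt_values WHERE type = 'P' AND number < 100;

  -- A drop-in style replacement that generates the numbers locally:
  SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS number
  FROM sys.all_objects;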

Conclusion: the result of this exercise confirms what others have reported: Maximo doesn't work with Azure SQL Database. Although the collation issue can be worked around (with a lot of effort), there is more than one issue with DBC scripts requiring access to the master DB, which is not possible in Azure SQL. People with extensive knowledge of DBC scripting and custom DBC handlers might be able to modify the problem scripts to get it to work, but that is definitely not supported by IBM, and in my opinion the effort cannot be justified, as support for Azure SQL is not on the roadmap. At this stage, our only option is to use the on-premises DB version installed in the cloud as a separate DB server.

Mess around with Azure: Migrate Maximo to Cloud

Last week, on a call to discuss an integration interface between Maximo and an Azure SQL database, the other consultant mentioned a few Azure terms like "blob storage", and I realized I didn't know anything about Azure, despite the fact that the platform has been adopted by a large number of the clients I work with. So today, I decided to play around with it a bit by trying to migrate a Maximo instance from my local VM to Azure.
 
Before I go into the technical details, for the non-technical readers, I'd like to sum up my whole experience with this platform in one word: magical. That's it. I went to the website, signed up for a trial account, followed some wizards, clicked on a few buttons and menus that looked like the features I needed, and boom, I had a running Maximo instance in the cloud in just a few hours. No training, no muss, no fuss; everything just works. Not much else to say. I'm in love now, and I think I will spend the next couple of weeks, months, or even years learning more about Microsoft's stuff.

Below are the steps that I took:
 
1.1. Sign up for a trial account. After creating the account, I was given $260 AUD to spend on paid services.
 
1.2. I went straight to the portal and tried to create a new VM. Being cheap, I went for the lowest option with 1 CPU and 3.5GB RAM. I know that to install Maximo I'll need at least 8GB, but in this case I just wanted to see if I could fit Windows, Websphere, and Maximo into a box this size.
1.3. On the next page, I gave the VM a 256GB SSD data disk.
1.4. For the network settings, I gave it a public IP, as I wanted to access Maximo from the public Internet.
1.5. On the next few pages, I left the default settings and proceeded to create the VM. After about 5 minutes, the VM was created. I clicked on "Connect", and it downloaded an RDP shortcut which opened the Remote Desktop app on my laptop, pre-populated with the IP address of the machine. I struggled a bit at this step because I forgot that I had created a Windows user account in the first step, so I tried to use my Microsoft account instead and it failed. After a few failed attempts and a quick Google search, I remembered the details, entered MaxDM76AzureUser as the account name, and could log in with that. The machine is extremely fast and responsive; it feels like working on a local server rather than a remote session in the cloud (now I remember how frustrated I was having to deal with the sluggish Maximo SaaS Flex DEV server). Downloading files gave me a rate of 50-70MB/s (that's megabytes per second, not Mbps).
 
2.1 Next, I tried to create an SQL DB. I put it in the same Resource Group as the Maximo app server VM because, naturally, I figured all resources for an application should go into one Resource Group, as the name suggests (to be honest, I still have no real idea what it is). Since the "Server" drop-down was empty, I had to click on "Create New" and create a new server. I don't think we can remote into this one like a VM; it's just a logical name used to manage an SQL instance.
  
2.2 On the Networking page, I chose Public Endpoint as the connectivity method because I thought I would need to access the DB from SQL Server Management Studio (SSMS) on my laptop, and I enabled "Allow Azure services and resources to access this server" because I figured this option would let Maximo on the app server VM access the DB. It turned out I couldn't figure out how to access the DB from SSMS on my laptop (more on this later), but the second option worked exactly as it was supposed to.
2.3 I left the default settings on the next few pages and proceeded to create the DB.
2.4 I attempted to connect to the DB using SSMS on my laptop, but it didn't work. However, I installed SSMS on the Azure VM and it worked on the first try.
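A likely explanation (which I didn't verify at the time) is that the Azure SQL server-level firewall had no rule for my laptop's public IP. Such a rule can be added in the portal, or with T-SQL against the logical server's master database; the rule name and IP range below are placeholders:

  -- Run against the logical server's master database; name and IP range are placeholders.
  EXECUTE sp_set_firewall_rule
          @name = N'HomeLaptop',
          @start_ip_address = '203.0.113.10',
          @end_ip_address = '203.0.113.10';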
3.1 Now that I had a server and a DB, I proceeded with the Maximo migration. I backed up the SQL DB in my local VM to a backup file, uploaded it to the Azure VM, and tried to restore it to Azure SQL. It turned out that on-premises SQL Server is different from Azure SQL, and I couldn't do that. A quick Google search revealed that I had to export the DB into a *.bacpac file and then import it into Azure.
3.2 The whole bacpac export/import was straightforward, with a minimal number of clicks and no confusion. After it was done, I realized the import process had restored the DB from my VM as a new database, so I ended up with two DBs and had to delete the empty one I created in the earlier step.
3.3 I struggled a bit trying to create a new 'maximo' user. It turned out the way security logins and users are managed in Azure SQL is a bit different, so after a bit of googling, I found some scripts I could use. Here are the notes I wrote down for this step:
– Open a new query window connected to the master database (I did this by right-clicking the "Databases" folder); in Azure SQL the login has to be created there:
  CREATE LOGIN maximo WITH PASSWORD = 'SuperSecret!';
– Open a new query window against the 'maxdm7609' database (right-click it) and create the user there, then make it a db_owner:
  CREATE USER maximo FOR LOGIN maximo WITH DEFAULT_SCHEMA = dbo;
  ALTER ROLE db_owner ADD MEMBER maximo;
4.1 The next step was to install Websphere. A quick check before installing showed I had 2.3GB RAM left; I thought I could make Websphere take 1GB and Maximo 1GB, so I proceeded with the installation.
4.2 It turned out that although I could install Websphere successfully, the automated process to prepare the environment for Maximo failed due to an out-of-memory error. I know we could configure Websphere manually and make it work, but what's the point in wasting time doing that? So I uninstalled Websphere, upgraded the VM to the next-lowest option, which has 2 CPUs and 8GB RAM, then attempted to install Websphere again, and it worked this time.
 
4.3 I uploaded a pre-built maximo.ear file from my laptop to the Azure VM, deployed it to Websphere, then updated the connection string to point to the Azure SQL DB. The connection string is almost exactly the same as when connecting Maximo to a local SQL DB, and it worked immediately when I started MXServer after deployment.
(Installing Websphere and deploying Maximo is not the topic of this post, so I'm not going to cover the details of those steps here.)
 
5.1 Now that I had Maximo running on the VM, I wanted to access it from the web, so I went to the VM's Networking page and opened port 80 inbound for the VM, but it still didn't work. After a bit of fussing around, I remembered I had to open the port on the VM's Windows Firewall as well. After opening port 80 on both firewalls, it worked like a charm: I could access Maximo from my laptop using the VM's public IP.
5.2 I also wanted a DNS name to access it, so in the VM's IP settings, I set the IP to Static (which I believe prevents it from changing even if I shut down the VM) and entered a DNS name label.
5.3 Right after I clicked Save, I could access Maximo using the DNS alias. Mission accomplished.
I have had my fair share of experience with various software and platforms, ranging from great and good to bad and horrible. I can live with bad software, but it irritates me to the bone; I feel good when playing with good software. In this case, I think Microsoft has done everything right in designing the Azure platform, and I felt nothing but joy throughout this whole learning process. My next step will probably be to recommend putting Maximo on Azure to anyone I meet who is considering moving to the cloud.

Convert Australian GDA94 Easting/Northing values to Latitude/Longitude in SQL Server

As I played around with the Cognos Analytics map controls, it appeared to me that they only understand Latitude/Longitude values, but the sample dataset I have is in UTM format. So I had to find a way to convert Easting/Northing values to Lat/Long. Since I use SQL Server, I first tried creating a column of the "Geometry" data type from the Easting/Northing values and then converting it to the "Geography" data type (Lat/Long), but that did not seem to work for me.
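To illustrate the problem with made-up MGA Zone 56 numbers (not my actual data): SQL Server treats the SRID as metadata only and has no built-in reprojection between coordinate systems, so going from geometry to geography just re-labels the same numbers, which geography then rejects:

  -- Approximate GDA94 / MGA Zone 56 easting/northing, SRID 28356:
  DECLARE @utm geometry = geometry::Point(334600, 6252300, 28356);

  -- There is no ST_Transform equivalent; re-labelling the point as SRID 4326 does not
  -- convert the values, and this call fails because 334600/6252300 are not valid
  -- longitude/latitude numbers:
  SELECT geography::STGeomFromText(@utm.STAsText(), 4326);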

After some scrambling on the web, the SQL functions below seem to work for converting Easting/Northing values (on the Australian GDA94, Zone 56 standard). The key to getting them to work is setting the correct datum details in the constants section. A quick test of these functions does seem to give me the correct Lat/Long coordinates for the Sydney Harbour Bridge:

Coordinates of the Sydney Harbour Bridge used for testing

Calling the utm2lat and utm2lon functions to get the result
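As a rough illustration only (the parameter order of easting then northing is my assumption, and the coordinates are approximate), the test call looks something like this:

  -- Approximate MGA Zone 56 coordinates for the Sydney Harbour Bridge;
  -- expected output is roughly latitude -33.85 and longitude 151.21.
  SELECT dbo.utm2lat(334600, 6252300) AS latitude,
         dbo.utm2lon(334600, 6252300) AS longitude;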

The two pieces of T-SQL code below are the same; the only difference is that one returns the Latitude and the other returns the Longitude value.

Maximo with Oracle’s InMemory (Part 2) – Huge Performance Gain

Last week, I played around with Oracle's new toy: the InMemory feature available in the Enterprise Edition. Although it made Maximo run 1.25x faster, it didn't meet my expectation of a 2x to 5x improvement. This has bothered me for the whole week, and I kept thinking about it.

If you've read my previous blog post, the one thing I pointed out which could lead to the lack of performance improvement is that I ran the test on a tiny demo database. It has only a few hundred assets and less than a thousand work orders, so no heavy process or poorly written query could make the database even one second slower. This week, I set out to do a more elaborate test with a setup that looks more like a real production environment.
At first, I set up my system as follows:

  • For Maximo: v7.5 with several industry solutions and add-ons. The data includes 4M work order records, 200k Assets & Locations, 200k Issue Transactions, and 1M Labor Transactions. Total size of the DB is more than 50GB.
  • For DB Server
    • HP EliteBook 840: Core i7-6600U 2.7 GHz, 16GB RAM, 512GB SSD
    • Oracle 12c Enterprise: 3GB allocated to PGA, 11GB allocated to SGA (which includes 6GB allocated to the InMemory data store)
  • For App Server: VirtualBox running on a macOS host: Core i5 2.3 GHz, 16GB RAM, 512GB SSD
    • VBox configuration: Windows 2012 x64, 12GB RAM
    • Websphere: Cluster with 3 JVMs x 3GB each.

I had one problem though: the two machines were connected wirelessly through a home Wi-Fi router, which gave a latency of between 5ms and 20ms, sometimes spiking to 100ms or even timing out. This resulted in the system running with unpredictable performance. In practice, the network connection between the application server and the DB server usually has a latency of less than 1ms. So I abandoned the idea of running the system on two separate boxes and returned to a configuration with everything running on one laptop, as follows:

  • Database: PGA – 3G, SGA – 8G (5G of which is InMemory)
  • Only one MXServer JVM with 3 GB

I intended to test and compare the following operations:

  • Run a Saved Query on Work Order Tracking app
  • Generate 7 PM Work orders at once
  • Change status of 27 Work orders at once
  • Run a simple report created with the "Create Report" function, which joins two tables: Work Order and Asset
  • Run a more complex report which joins a few tables, including Work Order, Actual Labor, and Actual Material

For each of the operations above, I ran the test three times to get the average execution time (I ignored the first run, as it is usually longer since the data hasn't been populated and cached in memory yet). Below is the result:

For running the saved query, I don't understand why it took longer (the result was consistent, not random).

For other expensive operations, Maximo ran 2-3 times faster.

For the more complex report, it never finished. I tried several times, and in the longest attempt I waited 50 minutes before killing the process (I checked, and Oracle was still processing, not hung). Even after turning on InMemory, it took more than 20 minutes, and then I killed it as I was too tired to wait.

For the simple report, it was 5 times faster. If we pay close attention here, the BIRT report window took more than a second just to open, which means the DB query returned its result almost instantly. This makes me believe we could get a 10-15x performance gain for reports that take several minutes to run.

Since I had to abandon my original setup with two laptops, and also because I was too lazy to build a new report that would take around 5-20 minutes to run in this environment, I couldn't test a heavier load scenario with several expensive read/write operations, such as PM generation, WO status changes, and several slow reports running at once. This kind of load is what we normally see in production, and it usually slows the system down significantly because operations and queries have to wait for each other to release resource locks. But this is exactly the situation where InMemory will make a huge difference.

Anyway, even with this result I am happy, and from now on I will be all out selling "memory upgrades" to every Maximo user I meet.


An afterthought note: I used to consider DB2 a second choice for Maximo due to the difficulty of finding a good DB2 DBA, and thus usually only suggested it to small clients with a low budget. But now, considering the significant cost of the Enterprise editions of Oracle or SQL Server if users want to implement this feature, the DB2 license bundled with Maximo for free is an attractive option. For large enterprise clients, I guess I will now present DB2 as an option for them to consider too.

Test Oracle InMemory Database with Maximo

For the last few years, SAP has been pushing hard on its HANA in-memory data platform, and everybody talks about it. To me that makes sense, because SAP's ERP is such a huge system, usually used by very large enterprises, and it is both data intensive and mission critical.
 
 
Maximo, on the other hand, is usually much less data intensive, and most of the clients I worked with in Vietnam have small systems with databases of less than 10-20GB. Thus, I believed an in-memory database was not a big deal for Maximo users. But I recently moved to Australia and got a chance to work with a much bigger client. Their Maximo runs on a cluster of more than two dozen JVMs, yet somehow it is still a bit slow considering the number of active users they have. I suspect (since I don't have visibility into their DB server) that the bottleneck in this case is the database. Besides the standard suggestions of looking at disk storage/SAN, network, memory allocation, etc., I also mentioned that they could consider implementing InMemory. Then I realized I had never seen it implemented with Maximo; it would be a huge embarrassment if they looked at it and found out that it doesn't work.
 
This week I had some free time, so I decided to play around with an InMemory database for Maximo to (1) confirm whether it is possible and (2) see if it gives any real performance gain for Maximo.
 
Here is my system configuration:
 
  • Host laptop: Core i7-6600U – 2 cores x 2.6 GHz, 16GB RAM, 500GB SSD, Windows 7
  • VMware: I gave it 10GB RAM, 4 cores, Windows 7
  • Database: Oracle 12c R2 x64 Enterprise Edition. I gave it 1GB for PGA and 4GB for SGA (of which 2GB is given to the INMEMORY store)
  • Maximo 7.6.0.0 – demo database. I set up only one MXServer and gave it a 2GB heap size
 
 
 
As you can see, this is very different from a real production environment, and thus the results of this test may not reflect what you would find if it were implemented in production. Some key differences between this setup and a real-world production system include:
 
  • The standard Maximo demo database is super small, less than 500MB; thus, InMemory may not lead to any improvement at all.
  • The Maximo application and database are deployed on the same box, so there is no network latency between the app server and the DB server, as is usually seen in production environments where they sit in two different subnets, possibly with a firewall in between, which makes data retrieval much more expensive.
  • There is only one session instead of the hundreds, if not thousands, of concurrent sessions seen in production.
  • Production DB servers usually use superfast storage systems through SAN/RAID configurations, so InMemory may not improve processing time as much in that case.
 
 
 
As mentioned, the Maximo demo system is very small and most operations complete almost instantly (in less than 0.5 seconds). Thus, I decided to test two operations:
 
  • Generating Work Order from PM
  • Changing Work Order status
 
The data I chose is the job plan "PMBULKTR – Bulk Trailer PM Servicing", which applies to 5 trailers with asset numbers "44416x" (belonging to the FLEET site). The reason I chose this job plan is that it has 70 tasks. It is a common problem for Maximo users that changing the status of a work package with a large number of child WOs or tasks is very resource intensive and can take a long time to complete. I have seen many cases, such as in plant turnarounds, where changing WO status took so long that the browser session timed out. So I carried out a test following these steps: select the 5 PMs and generate WOs all at once; do it three times and measure how long it takes to complete. After I got the 15 new WOs, I moved to the Work Order Tracking app, selected all 15 of them, then changed the status to WAPPR, then to APPR, then to WAPPR again, measuring the time taken each time.
 
After I finished timing the above steps, I turned on INMEMORY for the whole MAXIMO tablespace by issuing the command:
  ALTER TABLESPACE maximo DEFAULT INMEMORY;
I then restarted the system and ran the same tests again to measure execution time with INMEMORY enabled.
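A couple of quick sanity checks (standard dictionary views, shown here only as a sketch) help confirm the setting took effect and that the tables were actually populated after the restart:

  -- Run as a DBA user:
  SELECT tablespace_name, def_inmemory FROM dba_tablespaces;
  SELECT segment_name, populate_status, bytes_not_populated FROM v$im_segments;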
 
My expectation for this configuration was a 2x to 5x improvement in processing speed. The reason is that when we built an add-on for inventory cataloguing, there was a process that compared the technical specification of one item with the specs of thousands of other items to find similarities. This process took a long time to execute. The developer then cached the data in memory and ran the operation against the cached data only, which resulted in a boost of more than 100x in performance: instead of hours, it took only a few seconds to complete. Thus, I thought an expectation of a 2x to 5x performance gain in this case was reasonable.
 
Below is the result I got from the test:
 
 
For each operation, I got only about a 20% reduction in execution time. I tried a few other quick tests with other processes while switching the INMEMORY feature on and off, and the results were consistent. Obviously, with this result, my expectation of a 2x-5x improvement is proven to be unrealistic.
 
Thinking of putting this into practice: if your organization already has Oracle Enterprise Edition, setting it up would be really simple. Throw in some extra bars of memory and select a few key tables that get queried the most, such as the work-order-related tables, to populate into the INMEMORY area. If that gives you a 20% reduction in execution time, I believe it is still very attractive to consider.
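As a rough sketch of what that selective population could look like (the schema and table names are typical Maximo ones, used purely for illustration):

  -- Populate only the hottest tables instead of the whole tablespace:
  ALTER TABLE maximo.workorder   INMEMORY PRIORITY HIGH;
  ALTER TABLE maximo.labtrans    INMEMORY PRIORITY HIGH;
  ALTER TABLE maximo.matusetrans INMEMORY PRIORITY HIGH;

  -- Anything rarely read in bulk can be kept out of the in-memory column store:
  ALTER TABLE maximo.maxsession NO INMEMORY;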
 
As for Microsoft SQL Server, from what I found on the web, we need SQL Server 2016 or 2017 Enterprise Edition, which works with Maximo (I tested installing Maximo 7.6.0.9 on SQL Server 2017 and it works well). The free Developer or Evaluation editions can also be used for testing at home. However, the configuration steps seem to be more complex for Maximo, as you have to review the data structure and modify or exclude columns with specific data types such as XML and BLOB. Thus, if my company were using Maximo with SQL Server, I would think twice about implementing the in-memory feature.
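For comparison, here is a minimal sketch of what the SQL Server side (In-Memory OLTP, i.e. memory-optimized tables) involves; the filegroup, file path, and table definition are illustrative only, and existing tables cannot be converted in place, they have to be re-created:

  -- The database first needs a memory-optimized filegroup and container:
  ALTER DATABASE maximo ADD FILEGROUP maximo_mod CONTAINS MEMORY_OPTIMIZED_DATA;
  ALTER DATABASE maximo ADD FILE
        (NAME = 'maximo_mod', FILENAME = 'C:\Data\maximo_mod')
        TO FILEGROUP maximo_mod;

  -- Each hot table then has to be re-created as memory-optimized, after dealing
  -- with unsupported column types; a simplified example:
  CREATE TABLE dbo.workorder_hot
  (
      workorderid BIGINT       NOT NULL PRIMARY KEY NONCLUSTERED,
      wonum       NVARCHAR(25) NOT NULL,
      status      NVARCHAR(20) NOT NULL
  )
  WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);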
 
(Note: I don't provide detailed steps for configuring the INMEMORY feature in Oracle 12c here because it is super simple: all you have to do is set the INMEMORY_SIZE parameter to something other than 0, such as 2G or 4G. Make sure your SGA is larger than this, e.g. if INMEMORY_SIZE is 2G, the SGA should be 3G or more; otherwise, you will not be able to start the database service. Once INMEMORY_SIZE is allocated, issue the ALTER TABLESPACE or ALTER TABLE command to enable and populate the tablespace or table into the INMEMORY area. Also make sure that you have Oracle 12c Enterprise Edition.)
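For completeness, the whole Oracle setup described in that note fits in a few statements (the sizes are just examples, and the instance has to be restarted after changing the parameters):

  -- Requires Enterprise Edition; INMEMORY_SIZE is carved out of the SGA.
  ALTER SYSTEM SET inmemory_size = 2G SCOPE = SPFILE;
  ALTER SYSTEM SET sga_target    = 4G SCOPE = SPFILE;
  -- ... restart the instance, then mark what should be populated:
  ALTER TABLESPACE maximo DEFAULT INMEMORY;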