Author: Viet Tran

Auto-resize images when uploading attachments with Automation Script

Problem with large images

In recent years, there has been a surge in the adoption of mobile solutions for Maximo. For many companies, the use of mobile apps is no longer restricted to the work execution process: raising service requests and carrying out field inspections on a phone have become mainstream. These use cases often involve uploading many photos taken directly on high-resolution phone cameras, which drives up demand for attachment storage. The time and bandwidth required to view large files over a mobile network are also a concern.

Approaches

Often, a high-resolution photo is not needed, and we want to resize the file to address this problem. Unfortunately, Maximo doesn’t support this functionality out of the box.

Asking the end-user to resize large photos before uploading is not practical. It is our job to make things easier for the user, not harder. I have seen clients take different approaches to keeping file sizes small, but they often involve Java customization, which I don’t like.

The best approach is to resize the photo in the mobile application before uploading. But this depends on whether the mobile solution supports that functionality or can be customized to do it.

Auto-resize images when uploading with Automation Script

The simplest solution I have is to use an automation script to resize a photo when uploading. All we have to do is create an Automation script on the Save event of the “DOCLINKS” object with the bit of code below:
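A minimal Jython sketch of such a script follows. It runs only inside Maximo’s automation-script engine (it uses Java’s imaging classes through Jython), and the details are assumptions to adjust for your environment: the object launch point on DOCLINKS for the Save (Add) event, the `URLTYPE`/`URLNAME` attributes, the 1200-pixel threshold, and the handled extensions.

```python
# Jython sketch -- runs only inside Maximo's automation-script engine,
# where the implicit variable `mbo` is the DOCLINKS record being saved.
from java.io import File
from java.awt import Image
from java.awt.image import BufferedImage
from javax.imageio import ImageIO

MAX_WIDTH = 1200  # assumed target width; adjust to taste

# Only process file-based attachments that look like photos
if mbo.getString("URLTYPE") == "FILE":
    path = mbo.getString("URLNAME")
    if path and path.lower().endswith((".jpg", ".jpeg", ".png")):
        f = File(path)
        if f.exists():
            img = ImageIO.read(f)
            if img is not None and img.getWidth() > MAX_WIDTH:
                # Scale down, preserving the aspect ratio
                h = img.getHeight() * MAX_WIDTH / img.getWidth()
                scaled = img.getScaledInstance(MAX_WIDTH, h, Image.SCALE_SMOOTH)
                out = BufferedImage(MAX_WIDTH, h, BufferedImage.TYPE_INT_RGB)
                g = out.getGraphics()
                g.drawImage(scaled, 0, 0, None)
                g.dispose()
                fmt = "png" if path.lower().endswith(".png") else "jpg"
                ImageIO.write(out, fmt, f)  # overwrite the original file
```

Note that resizing only kicks in for images wider than the threshold, so small files and non-image attachments pass through untouched.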

Hope this helps.

How to integrate ChatGPT with Maximo?

Ever since ChatGPT’s release, I’ve been contemplating how to leverage large language models (LLMs) to enhance legacy applications like Maximo. Given the ability to engage in a conversation with the machine, an obvious application is to facilitate easy access to information through semantic search in a Q&A format. To allow a generic LLM to respond to inquiries about proprietary data, my initial thought was fine-tuning. However, this approach comes with several challenges, including complexity and cost.

A more practical approach is to index organisational data and store it in a vector database. For instance, attachments (doclinks) can be divided into chunks, indexed, and kept in a vector database. When asked a question, the application would retrieve the most relevant pieces of information and feed them to an LLM as context. This enables the model to provide answers with actual details obtained from the application. The key advantages of this approach include:

  • Low cost
  • Realtime data access
  • Traceability
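To illustrate the retrieval step described above, here is a minimal, self-contained sketch. The bag-of-words “embedding” is a toy stand-in for a real embedding model and vector database, and the sample chunks are invented:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would call an
    # embedding model and store the vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=2):
    # Return the k chunks most similar to the question
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Pump P-101 requires monthly lubrication of the drive bearing.",
    "The annual maintenance budget was approved in March.",
    "Replace the mechanical seal on pump P-101 every two years.",
]
context = retrieve("How often should pump P-101 be lubricated?", chunks, k=1)
# The retrieved chunk is fed to the LLM as context alongside the question:
prompt = "Answer using this context:\n" + "\n".join(context)
```

The same shape scales up: chunk the doclinks, embed each chunk once, and at question time embed the question and take the nearest chunks as context.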

Last month, OpenAI introduced the function calling feature to its API, providing ChatGPT with an additional means of accessing application data. By furnishing it with a list of callable functions, ChatGPT can decide whether to answer a question directly or execute a function to retrieve relevant data before responding. This powerful feature has generated some buzz in the development community. After trying it out, I was too excited to ignore it. As a result, I developed an experimental Chrome extension that enables us to talk with Maximo. If you’d like to give it a try, you can find it on the Chrome Web Store under the name MaxQA.

How it works:

  • This tool is purely client-based, meaning there is no server involved. It directly talks with Maximo and OpenAI. To use it, you will need to provide your own OpenAI API key.
  • I have defined several basic functions that OpenAI can call. They work with Maximo out of the box. 
  • You can define new functions or customize existing ones to allow it to answer questions specific to your Maximo instance. To do this, right-click on the extension’s icon and open the extension’s Options page.
You can define your own functions for ChatGPT to query Maximo and answer your questions
  • The app uses OpenAI’s “gpt-3.5-turbo-0613” model, which is essentially ChatGPT 3.5. As a result, you can ask it any question. For general inquiries, it responds like ChatGPT 3.5. However, if you ask a Maximo-specific question, OpenAI directs the app to execute the appropriate function and provides the necessary input parameters. The data returned from Maximo is fed back to OpenAI, which then generates an answer based on that data.
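The dispatch loop described above can be sketched as follows. The function name, schema, and stubbed Maximo lookup are illustrative assumptions, not MaxQA’s actual definitions; the message shape matches what the “0613” chat models return when they choose to call a function:

```python
import json

# Hypothetical function schema passed to the chat completions API
functions = [{
    "name": "get_asset",
    "description": "Get details of an asset record by AssetNum.",
    "parameters": {
        "type": "object",
        "properties": {"assetnum": {"type": "string"}},
        "required": ["assetnum"],
    },
}]

def get_asset(assetnum):
    # Placeholder: a real implementation would query Maximo's REST/OSLC API
    return {"assetnum": assetnum, "status": "OPERATING"}

DISPATCH = {"get_asset": get_asset}

def handle(message):
    """Run the function the model chose, or pass through a plain answer."""
    call = message.get("function_call")
    if not call:
        return message.get("content")  # model answered directly
    args = json.loads(call["arguments"])
    result = DISPATCH[call["name"]](**args)
    # In the real loop, this JSON goes back to OpenAI as a "function"
    # role message so the model can phrase the final answer.
    return json.dumps(result)

# Simulated model response asking the app to run get_asset for asset 11450:
reply = handle({"function_call": {"name": "get_asset",
                                  "arguments": '{"assetnum": "11450"}'}})
```

In the extension, `reply` would be appended to the conversation and sent back to OpenAI for the final, natural-language answer.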

Sequence of integration between OpenAI ChatGPT and Maximo

Through this exercise, I have gained a few insights:

  • Hallucination: while the inclusion of actual data reduces the likelihood of hallucination, there are still occasional instances where it provides convincing false answers. We can address this with prompting techniques, such as instructing it not to make up an answer when it does not know one. Nonetheless, this remains an unsolved problem with this new technology.
  • Fuzzy logic: consistent formatting of answers is not guaranteed for identical questions asked multiple times. This can be considered unacceptable in an industrial setting.
  • The 4k token limit: the API’s 4k token limit proved to be quite restrictive for the results of certain queries. The screenshot below is a response file that’s almost hitting the limit. The file contains about 10k characters.
A file with 10k characters, which nearly reaches the 4k token limit
  • The importance of descriptions: a more detailed description improves the model’s accuracy when selecting which function to call. For instance, I initially described the asset-details function as “Get details of an asset record by AssetNum”. OpenAI correctly called this function when asked “What is the status of asset 11450?”, but it couldn’t answer “Are there any active work orders on this asset?”. Once I updated the description to “Get details of an asset record by AssetNum. The result will show a list of all open/active work orders on this asset and a list of spare parts”, it was able to answer correctly.

In conclusion, despite several limitations, I believe integrating LLMs with enterprise applications offers immense potential in addressing various use cases. I am eager to hear your thoughts on potential applications or suggestions for new experiments. Please feel free to share your thoughts in the comments. Your input is greatly appreciated.
