Four best practices to ensure successful delivery of an RPA project
By: Trevor Cheung
When was the last time you had to send an email chaser? If it wasn’t just minutes before you started reading this article, I’m guessing it was no later than the start of your workday.
Chasing our colleagues and stakeholders for information is a time-consuming and repetitive task which we are all familiar with — but what if it didn’t have to be? In this blog post, I’d like to share some of my experience in delivering a robotic process automation use case that enabled our team to go from spending four to five hours a week on sending emails, to just 10 minutes. In addition to freeing our staff to support higher value work, we can also send out fresh reminders on an as-needed basis.
Streamlining Operations For Our Treasury & Markets Business
I see my role in our bank’s Treasury & Markets Operations (TMO) team as a problem solver. When our team lead came to us with the challenge of implementing use case automation solutions, I was excited to explore new transformation opportunities.
We quickly identified a particular task as an opportunity to implement an end-to-end RPA solution — a process we refer to internally as ’email confirmation chasers’.
In our team’s support of the bank’s Treasury & Markets business, there are repetitive tasks such as the weekly follow-up emails we call confirmation chasers. Every week, a member of our TMO team spends up to five hours preparing the required documents for relationship managers, who then follow up with their respective customers for signature. In addition to collating information and an aging analysis table, the team member must also source and attach the appropriate files to the email and encrypt the email with a password.
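To make the shape of this task concrete, here is a minimal Python sketch of how a chaser email could be assembled from an aging analysis. The function name, field names and recipient are hypothetical, and the team’s actual RPA tool (and the file-attachment and password-encryption steps) are not shown.

```python
from email.message import EmailMessage

def build_chaser_email(recipient, pending_trades):
    """Compose one confirmation-chaser email.

    pending_trades is a list of (trade_id, days_outstanding) pairs,
    i.e. a simplified aging analysis.
    """
    msg = EmailMessage()
    msg["To"] = recipient
    msg["Subject"] = f"Outstanding confirmations: {len(pending_trades)} pending"
    rows = "\n".join(
        f"{trade_id}: {days} days outstanding"
        for trade_id, days in pending_trades
    )
    msg.set_content("Please follow up with your customers on:\n" + rows)
    return msg
```

In a real run, the RPA flow would populate `pending_trades` from system exports, attach the sourced documents, and hand the message to the bank’s mail gateway.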
But, why was RPA the best automation option for this particular process?
Figure 1: The Treasury & Markets Operations team’s process flow for preparing documentation chasers
Identifying RPA as the Right Solution For Automated Email Chasers
While the conventional means of automation is to enhance established systems, our Technology & Operations teams in DBS HK found that this required a lot of time and resources. Not only did we have to spend man-hours testing new features, but regression testing was also required to ensure that existing functionalities were not impacted.
For this particular use case, our team was required to perform many reconciliation tasks for systems which did not have system interfaces. For example, when extracting information from system A, the team had to first send the data to a reconciliation platform before feeding the data back into system B. On top of this, there were particular repositories that simply consisted of spreadsheets which were manually updated by users. To automate these existing processes by building system interfaces — as opposed to an RPA solution — would have been very challenging and might even have required more time and resources than building an entirely new system altogether.
In comparison, RPA would be a leaner and more cost-effective solution for the following reasons:
· Does not require changes to any of the established systems, which means there would be no need for regression testing
· Requires minimal infrastructure to start with
· No interface needs to be built — engineers can download existing data files from the established systems for general use
· Comparatively lighter testing, as there’s no need to set up a dedicated UAT environment or design and input test cases
Given that the required tasks sitting between our systems were rule-based (ie given condition A, we do task B; given condition B, we do task C) and that we did not have to employ too many advanced techniques such as artificial intelligence, RPA was the most viable automation solution.
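As a rough illustration of what “rule-based” means here, the conditions and their tasks can be written down as a small lookup table that a bot evaluates per record. The field and action names below are hypothetical, not taken from our actual system.

```python
# A tiny rule table: each entry pairs a condition with the task it triggers.
# Field names ("status", "days_outstanding") and actions are illustrative only.
RULES = [
    (lambda r: r["status"] == "unconfirmed", "send_chaser"),
    (lambda r: r["days_outstanding"] > 30, "escalate"),
]

def apply_rules(record):
    """Return the tasks triggered for one record, in rule order."""
    return [action for condition, action in RULES if condition(record)]
```

Because every decision is an explicit entry in the table, adding or retiring a risk-control step is a one-line change rather than a code rewrite, which matches how we provisioned for changing requirements.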
Many of the manual processes within our TMO team are highly specialized — including the email confirmation chaser task that we’ve successfully automated. When relaying the user requirements to the tech team, we had to balance several key considerations. Firstly, processes, terms and regulations that may have been second nature to our operations staff performing this task for several years, might be alien to external parties. What’s more, we had to collectively find a way to translate adherence to these regulations into code.
Secondly, while certain manual processes may have proven effective, replicating these processes verbatim into code could result in headaches for our teams down the line. The more complex the logic was for the RPA, the more difficult the system could be to debug when faced with an issue. These were key lessons learned for our team and invaluable insights as we look to deliver more automation, AI and ML projects in the future.
Tips for translating user requirements into code
Getting all parties to understand the user requirements was a key part of our journey. This was especially true given that T&M is a highly specialized area within the banking and financial services industry. During the project, we observed that the best practice was not to lift and shift the existing manual process into code, but rather work with all parties to re-engineer the end-to-end solution. Here are four of our key takeaways.
1) Practice close collaboration: Our operations team worked closely with the technology folks and the RPA vendor, so that everyone could be well-versed with the processes. With a high standard of operational risk management to adhere to, our operations team needed to take a leading role in ensuring that risk controls were upheld throughout. To meet changing business needs, we made provision for risk control requirements and procedural steps to be added, removed, or changed as needed.
2) Understand the process: Due to the specialized nature of the bank’s Treasury & Markets business (from regulatory risk & control measures to informal team practices), our operations team had to explain processes in detail to our technology counterparts. This required translating terminology into layman terms — and proved to be a bigger challenge than expected.
When carrying out a manual task, we have a general sense of the sequence. For example, tasks A, B, C and D must be done in order to achieve the outcome E. However, the actual execution may deviate here and there, though still reaching E at the end of the day. The quality of E would depend on one’s knowledge, experience and judgement. This could include our operations staff leveraging visual cues such as color coding for information processing. In an automated & systemized approach, visual cues become irrelevant, and the quality of E is guaranteed by a strict set of rules and logic.
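One way to picture this shift is a fixed pipeline in which tasks A to D run in a strict order and each step enforces explicit rules, so the quality of the outcome E no longer depends on individual judgement or visual cues. The sketch below is illustrative; the record format and rules are assumptions, not our production logic.

```python
def extract(raw):
    # Task A: parse raw export lines into [trade_id, days] records.
    return [line.split(",") for line in raw.strip().splitlines()]

def validate(records):
    # Task B: strict rules replace human visual cues such as colour coding --
    # a record either passes the explicit checks or is dropped.
    return [r for r in records if len(r) == 2 and r[1].isdigit()]

def enrich(records):
    # Task C: classify each record into an explicit aging bucket.
    return [(tid, int(d), "overdue" if int(d) > 7 else "current")
            for tid, d in records]

def count_overdue(records):
    # Task D: the outcome E is fully determined by the rules above.
    return sum(1 for _, _, bucket in records if bucket == "overdue")
```

Running the steps out of order, or skipping one, is impossible by construction, which is exactly the guarantee the manual process lacked.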
3) Re-engineer and optimize the process: While the RPA system would have been able to run the previous manual process in no time at all, the complex logic required to re-create the manual way of doing things would create its own set of challenges. In scenarios with more complex logic, it becomes more difficult to debug systems. Furthermore, if the employees maintaining the RPA were to leave, their replacements may have a challenging time understanding how it works. We therefore prioritized optimizing flows before we began drafting specific user requirements. Throughout the re-engineering process, our team simplified processes as much as they could before translating them into code for the system.
Similarly, we identified that if we used the existing online system inquiry for RPA to ‘read’ a particular data point, there was a particular risk of misreading through optical character recognition (OCR). Likewise, any changes to the online system inquiry’s interface would cause the RPA to malfunction. We found an alternative data source in the form of a report which could be downloaded from the system. While additional mappings and logic were required, it alleviated the risk of errors.
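The downloaded-report approach boils down to a deterministic column mapping instead of screen reading. A minimal sketch, assuming the report exports as CSV; the column headers and target field names here are hypothetical.

```python
import csv
import io

# Hypothetical mapping from the downloaded report's column headers to the
# field names the chaser logic expects. If the report layout changes, the
# failure is an obvious KeyError here -- not a silent OCR misread.
COLUMN_MAP = {"Trade Ref": "trade_id", "Ageing (days)": "days_outstanding"}

def load_report(csv_text):
    """Parse the downloaded report and keep only the mapped columns."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {COLUMN_MAP[col]: row[col] for col in COLUMN_MAP}
        for row in reader
    ]
```

This is the extra mapping and logic mentioned above: a little more code up front in exchange for removing the OCR and screen-layout risks entirely.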
4) Investigate unexpected results: When encountering unexpected results from the system, there are three probable sources:
i. The data points are not reflective of the real-life situation;
ii. The system codes are not correctly written — ie they do not faithfully follow the user requirements; or
iii. The user requirements do not reflect what the users are trying to achieve, feature incorrect assumptions, or cover an insufficient number of potential scenarios.
It would be prudent to start looking into these areas when you encounter bugs. If the system features complex logic and the root cause is related to points ii or iii, the problem may take longer to assess. What’s more, with a set of complex logic, it can become harder to visualize and form a holistic view of what the system is trying to achieve. Had we looked to lift and shift the existing manual process, the likelihood of encountering bugs would have been much higher.
Results and What’s Next
Since we started using RPA to send automated confirmation chasers in 2020, we’ve saved the man-hours equivalent to one full-time employee. Upon successful implementation, we shared the automation tool with our DBS TMO counterparts in other countries — some of whom are now engaging in proofs of concept. Going forward, we will explore how we can leverage AI solutions for tasks that require more complex interactions with systems. This could include technologies such as deep learning, optical character recognition (OCR) and natural language processing (NLP) to streamline existing manual processes.
Trevor has worked in Operations for T&M for nearly 20 years. Having had a lifelong passion for science and technology, he participated in the Bank’s AWS DeepRacer competition in 2020 where he used AI/ML to program a model racecar. One of the areas he is currently interested in is blockchain and its impact on the way we work.