Part 2: Put Context around Critical Documents with IBM Datacap and Blue Prism RPA

In Part 1 of this blog series, I looked at the complexity of process change: specifically, RPA, IBM Datacap OCR, and the need to build processes that empower an organization's business units while also demonstrating a return on investment (ROI). I also outlined key considerations when choosing the right RPA platform, considerations that still apply even though MagicLamp is a Blue Prism partner. Finally, I described a scenario that would show the combined power of RPA and IBM Datacap.

In this post, we begin to put the pieces together and show how Datacap's enhanced classification techniques can put context around these documents. Datacap will tell Blue Prism what type of document the Blue Prism robot is working with, and based on that type, the robot can then route the document appropriately.

Datacap Classification Design: To assist Blue Prism, I have designed a very lightweight Datacap classification application that will be able to tell Blue Prism the type of document it is processing. IBM Datacap is an advanced capture and imaging platform that is designed to allow organizations to configure any number of applications on it. In this case, my application is going to be called Blue Prism. This Blue Prism application must perform the following functions:

  • Consume any documents given to it, either through a file drop or through a web service request using Datacap WTM
  • Perform OCR on each document
  • Once OCR has completed, apply classification techniques to each document to determine what type of document it is
  • If the document type is ascertained, set its Document Type property
  • If Datacap is unable to determine the document type, send the document to Datacap's manual classification user interface, where an operator manually selects the desired document type from a drop-down list
  • Once all of the documents have been classified, hand this information back to Blue Prism, where a Blue Prism robot can take it and create a work item out of it
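The first step above, consuming documents from a file drop, can be sketched in a few lines. This is an illustrative stand-in, not Datacap's actual ingestion code: the folder names and file patterns are assumptions, and a real Datacap application configures its hot folders per application.

```python
import shutil
from pathlib import Path

def collect_documents(drop_dir: Path, batch_dir: Path,
                      patterns=("*.tif", "*.pdf")) -> list[Path]:
    """Move any newly dropped documents into a batch folder and return them."""
    batch_dir.mkdir(parents=True, exist_ok=True)
    picked_up = []
    for pattern in patterns:
        # Sort for a stable pickup order across polling runs
        for doc in sorted(drop_dir.glob(pattern)):
            target = batch_dir / doc.name
            shutil.move(str(doc), str(target))
            picked_up.append(target)
    return picked_up
```

A scheduler (or, in the web-service case, a request handler) would call this on an interval and then hand the batch to OCR.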

Datacap Classification Self-Learning: Datacap, through its fingerprinting technology, has the capability to continually self-learn. A nice description of Datacap fingerprint learning and matching can be found here. To ensure that our application has the ability to easily grow and learn over time, I have constructed it using this modular approach:

  • The first page of every document is assigned a page type of Main Page; all other pages in the document, though OCR'd, are marked as Trailing Pages
  • Each Main Page in the document is then sent through Datacap's Fingerprinting module, where Datacap determines whether the page matches a fingerprint within its fingerprint repository
  • If a match is made, the Document Type associated with that fingerprint is automatically assigned to the document
  • If the fingerprint module is unable to match the page against its fingerprint repository, the document is declared unmatchable and drops down to the next level of classification, which in this case is keyword lookup
  • During the keyword lookup process, Datacap searches the entire document for specific words stored in a keyword dictionary organized by Document Type. If a keyword is located on the document, the Document Type associated with that keyword dictionary is automatically assigned to the document
  • If, at this point, the document is still declared unmatchable, Datacap sends it through to its manual classification process, where an operator is presented with the Main Page of the document and must select the desired Document Type from a preconfigured drop-down list
  • Now, with all documents classified, Datacap adds new fingerprints to its fingerprint repository for any documents that have not yet been fingerprinted. Along with the fingerprint, the associated Document Type is saved as well
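The cascade above can be sketched as plain functions. This is a minimal illustration of the control flow only: the fingerprint matcher is stubbed as a hash lookup, whereas Datacap's real fingerprinting compares page layout geometry, and the dictionary structures are assumptions.

```python
MANUAL = "Unmatchable"  # Sentinel: document must go to the manual classification queue

def classify(main_page_text: str, fingerprints: dict, keywords: dict) -> str:
    # Level 1: fingerprint match (stubbed here as an exact-text lookup)
    doc_type = fingerprints.get(hash(main_page_text))
    if doc_type:
        return doc_type
    # Level 2: keyword lookup across the document, organized by Document Type
    for candidate_type, words in keywords.items():
        if any(word in main_page_text for word in words):
            return candidate_type
    # Level 3: fall through to manual classification
    return MANUAL

def learn(main_page_text: str, doc_type: str, fingerprints: dict) -> None:
    """After classification, store a new fingerprint with its Document Type."""
    fingerprints.setdefault(hash(main_page_text), doc_type)
```

The self-learning payoff is visible in the shape of the code: once `learn` has run for a document, the next similar document short-circuits at level 1 and never reaches keyword lookup or the manual queue.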

The value of this approach is that, over time, as the fingerprint repository grows, more and more documents of previously unmatchable types will be matched by fingerprint, and fewer and fewer will drop down into the other methods of classification. Additionally, since I have designed the classification process using a modular approach, supporting new documents and document types becomes a breeze, and maintainability improves because things are broken down into logical units of work.

Blue Prism RPA: With our Datacap application now designed and well underway, it's time to focus our efforts on the design and configuration of our Blue Prism application. As mentioned in my previous post, one of the key success factors for any RPA solution is to construct workflows that each perform a single unit of work. To do this, I have constructed three separate workflows:

  • Workflow #1: Workflow #1 kicks off the entire business process. It reads documents from a network folder and delivers them to our Datacap Blue Prism classification application. The workflow is connected to a Blue Prism Scheduler, which triggers it to run every 30 minutes; once triggered, the workflow runs for 28 minutes. I have constructed the workflow this way because I want to keep my Blue Prism log at a reasonable size. Blue Prism has very robust logging and audit capabilities, and it's best to keep this in mind when designing workflows.
  • Workflow #2: Workflow #2 also has a defined purpose. Its sole job is to respond to any finished jobs from the Datacap Blue Prism classification application. This workflow takes the JSON response generated by Datacap and converts it into a Blue Prism work item. Each work item has a key, which in this case is the filename of the document, and a Tag, which is the Document Type. Exposing these values makes it very easy for a Blue Prism operator using the Blue Prism Control Center to see the number of work items in the queue, their types, and their statuses.
  • Workflow #3: The third and final workflow in this series is the traffic cop. It reads the work items from the queue and, because it can now distinguish the Document Type, it can perform tasks based on context, which is valuable when determining what must happen to a particular document from a business perspective.
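Workflows #2 and #3 can be sketched together. Note the hedges: the JSON field names below (`documents`, `filename`, `documentType`) are assumptions about the Datacap response, not its documented shape, and the work-item dictionary and routing table are illustrative stand-ins for Blue Prism's actual queue objects.

```python
import json

def to_work_items(datacap_json: str) -> list[dict]:
    """Workflow #2: convert a finished-batch response into queue items
    (key = filename, tag = Document Type), as described above."""
    batch = json.loads(datacap_json)
    return [{"key": doc["filename"], "tag": doc["documentType"], "status": "Pending"}
            for doc in batch["documents"]]

def route(item: dict) -> str:
    """Workflow #3, the traffic cop: choose a downstream action from the tag."""
    routes = {"Invoice": "AP system", "Contract": "Legal review"}
    # Anything without a configured route falls back to manual triage
    return routes.get(item["tag"], "Manual triage")
```

Because the Document Type travels with the work item as its Tag, the traffic cop never has to reopen the document to decide where it goes.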

It's important to note that, for demonstration purposes, our workflow simply delivers the physical document to another application for additional processing. Although this is a very simple task, Blue Prism can also perform very complex processes. Regardless of the need, having the context of the document is vital; what a workflow then needs to accomplish from a business perspective merely comes down to requirements.

In the final entry in this blog series, Part 3 will focus on the solution implementation itself. I will show pre- and post-states and include some images of the Blue Prism workflows in action as well.
