I have a question regarding JMS performance. I have a service layer developed as an OSGi bundle that contains my services. This bundle is packaged into an OSGi application along with other OSGi bundles (other OSGi services, persistence, an OSGi web bundle).
My application communicates with some legacy systems that send large amounts of data through an external WebSphereMQ server, so my application needs either an MDB or a wired POJO in SCA that can consume messages. My idea was to have an MDB consume the message and then call a service in the OSGi service bundle (using SCA wiring).
The messages are quite large: a single message can be up to 50-100 MB, and they are logically segmented into 4 MB segments on the WebSphereMQ queue.
I have not built a test scenario yet, but are there any performance implications in this particular setup?
I'm especially concerned because SCA service invocation uses pass-by-reference semantics, therefore effectively copying my (huge) byte message on each method invocation.
The OSGi service does nothing special with this huge message; it only stores it into a database using JPA 2 persistence.
I'd like to use SCA because I like the way it wires OSGi to the outside world, but I might hit a performance wall in this particular scenario.
Could someone give some hints/tips on how to solve this effectively without using a separate persistence unit and a separate (stateless EJB) service?
Pinned topic: SCA&OSGi performance with large JMS messages
This question has been answered.
Updated on 2012-05-10T15:22:23Z by Miljenko
Re: SCA&OSGi performance with large JMS messages (2012-04-13T12:45:59Z)

Hi Miljenko,
You said pass-by-reference, but I assume you meant pass-by-value. You are right that pass-by-reference from SCA into OSGi applications is a known requirement, but I can't think of any help you'd get from JEE either. binding.jms to an OSGi application is the best solution I can think of. Can you elaborate on your message reconstruction and persistence? It isn't clear yet which transform will be the bottleneck; it depends on how you reconstruct the message, serialize it, persist it with JPA, and redrive the business logic once you have the whole message. If efficiency really is that important, perhaps MQ or the database could reconstruct the message and persist it more efficiently than involving WAS in the reconstruction. Once the message is in its correct form, you could then drive the business logic once. It's hard to know from your quick description whether that would help or not.
Re: SCA&OSGi performance with large JMS messages (2012-04-15T21:07:36Z)
- SteveKinder 110000HUHT
Thanks for the reply, yes, you're right, I meant pass-by-value.
The rationale behind this is that we're exchanging files via a WebSphereMQ queue, and such files can be large and contain sensitive data that should not be serialized to disk (basically, we're using MQ to exchange files).
Basically, your point is that if we stick with the scenario I described, it makes no difference whether we implement an MDB or an SCA component with binding.jms: if we want to call the common OSGi service layer, the operation will include message deserialization and copying when calling the OSGi service to store it, all in one XA transaction.
The only way around it would be a separate persistence project (a classic JEE EJB-JAR), so the message is never copied. Am I right?
The idea was to dump the whole file as a BLOB into a database for later processing, and to decouple message reception from message processing (poison-message handling is not that relevant here, because we raise a flag if anything unexpected happens during file processing). So we receive a file and dump it into the database in one XA transaction. Message processing, on the other hand, involves a couple of potentially intensive processing steps, hence the idea of separating the two.
We did not define this integration model; it was imposed on us as mandatory. But we might work around it if we store the message somewhere temporarily, or introduce a notification mechanism for message reception, so that the SCA component simply notifies the OSGi service that a message has arrived and should be processed.
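The persist-then-notify split described above can be sketched in plain Java. This is a minimal sketch under stated assumptions: the class and method names are hypothetical, a ConcurrentHashMap stands in for the JPA-backed BLOB table, and in the real application onFileReceived would run inside the XA transaction that also consumes the MQ message.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical receiver: persists the raw file, then notifies processing with just an ID. */
class FileReceiver {
    // Stand-in for the JPA-managed BLOB table and a status column.
    private final Map<String, byte[]> blobStore = new ConcurrentHashMap<>();
    private final Map<String, String> status = new ConcurrentHashMap<>();

    /** Would run inside the XA transaction that also acknowledges the MQ message. */
    String onFileReceived(byte[] payload) {
        String id = UUID.randomUUID().toString();
        blobStore.put(id, payload);   // dump the whole file as a BLOB
        status.put(id, "RECEIVED");   // mark it for the decoupled processor
        return id;                    // only this short ID crosses the SCA/OSGi boundary
    }

    /** Decoupled processing step: loads the BLOB by ID, flags failures instead of redelivering. */
    void process(String id) {
        byte[] payload = blobStore.get(id);
        try {
            // ... potentially intensive processing steps on payload ...
            status.put(id, "PROCESSED");
        } catch (RuntimeException e) {
            status.put(id, "FAILED");  // "raise a flag" rather than poison the queue
        }
    }

    String statusOf(String id) { return status.get(id); }
    byte[] load(String id)     { return blobStore.get(id); }
}
```

The design choice here is that reception commits quickly in its own transaction, and the expensive steps run later against the persisted copy.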
Re: SCA&OSGi performance with large JMS messages (2012-04-16T11:06:36Z)
- Miljenko 060000KUD7
One more thing to explain about the messages received through the WebSphereMQ queue: one physical message is sent as one or more logical messages. The other party in the integration uses logical message partitioning via the WebSphereMQ-specific flags CMQC.MQMF_SEGMENT and CMQC.MQMF_LAST_SEGMENT, and depending on whether the get-message option CMQC.MQGMO_COMPLETE_MSG is used, the message is received either at once or segment by segment. So message segmentation is already implemented for us, and the message is reassembled at the WebSphereMQ level.
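For illustration only, what the queue manager effectively does when CMQC.MQGMO_COMPLETE_MSG is set can be modeled in plain Java; the application never writes this itself, since WebSphereMQ reassembles the segments before delivery (the class and method names below are hypothetical).

```java
import java.io.ByteArrayOutputStream;
import java.util.List;

/** Models queue-manager-side reassembly of logical segments into one physical message. */
class SegmentReassembler {
    /** Concatenates the 4 MB segments in order, as MQGMO_COMPLETE_MSG would deliver them. */
    static byte[] reassemble(List<byte[]> segments) {
        ByteArrayOutputStream whole = new ByteArrayOutputStream();
        for (byte[] segment : segments) {
            // the final segment is the one that carried MQMF_LAST_SEGMENT
            whole.write(segment, 0, segment.length);
        }
        return whole.toByteArray();
    }
}
```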
Re: SCA&OSGi performance with large JMS messages (2012-04-23T20:47:04Z, accepted answer)
- SteveKinder 110000HUHT
Right. It sounds like the message itself should not be part of the service interface; a request identifier that relates to the BLOB in the database seems like a better approach. Then you are not pushing the large message itself through the service frameworks, exposing it to several copies, but rather just the request itself. The OSGi business logic, once driven, can then load just what it needs from the database using the identifier, and the pass-by-reference versus pass-by-value performance difference is not much of an issue with this design.
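A minimal sketch of that contract, with hypothetical interface and method names (not an actual SCA interface from the thread): only the request identifier crosses the SCA/OSGi boundary, so the pass-by-value copy made by the invocation is a short String rather than a 50-100 MB byte array.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical OSGi service contract: only a request identifier crosses the SCA wire. */
interface FileProcessingService {
    void processRequest(String requestId); // a short String is copied, not the 50-100 MB payload
}

/** Minimal stand-in implementation that records which requests it was asked to process. */
class RecordingProcessor implements FileProcessingService {
    final List<String> processed = new ArrayList<>();

    @Override
    public void processRequest(String requestId) {
        // A real implementation would load the BLOB for requestId via JPA here
        // and run the intensive processing steps against it.
        processed.add(requestId);
    }
}
```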