Some BPM platforms from other software vendors using the XPDL standard are being phased out. Now that BPMN 2.0 also allows execution, I built a converter that takes any XPDL file (1.0 or 2.1) and converts it to BPMN 2.0, including gateways, subprocesses, data, and lanes. I also handle some vendor-specific extensions (forms, scripts).
I used JAXB on the standard schemas for the processing.
Then I load the resulting BPMN 2.0 in the Rational Software Architect BPMN Editor or IBM Business Process Designer.
The following image shows some of the correspondence between the process definition standards; however, implementing that correspondence requires real Java programming. XSLT is not enough.
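To give a flavor of the kind of correspondence involved, here is a minimal sketch in plain Java of a name-level mapping table between XPDL and BPMN 2.0 elements. This is not the actual converter, which works on JAXB object trees generated from the standard schemas; the element pairs chosen here are illustrative simplifications, since the real mapping is contextual.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch: a name-level correspondence between XPDL and BPMN 2.0
// elements. The real converter works on JAXB object trees generated from
// the standard schemas; this table only illustrates the mapping idea.
public class XpdlToBpmnNames {

    private static final Map<String, String> MAPPING = new LinkedHashMap<>();
    static {
        // Illustrative correspondences (simplified; the real mapping is contextual)
        MAPPING.put("WorkflowProcess", "process");
        MAPPING.put("Activity", "task");
        MAPPING.put("ActivitySet", "subProcess");
        MAPPING.put("Transition", "sequenceFlow");
        MAPPING.put("Route", "exclusiveGateway");
        MAPPING.put("Pool", "participant");
        MAPPING.put("Lane", "lane");
        MAPPING.put("DataField", "dataObject");
    }

    /** Returns the BPMN 2.0 element name for an XPDL element, or null if unmapped. */
    public static String toBpmn(String xpdlElement) {
        return MAPPING.get(xpdlElement);
    }

    public static void main(String[] args) {
        for (Map.Entry<String, String> e : MAPPING.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```

In the real converter the decision is rarely a straight table lookup: an XPDL Route activity, for example, maps to different BPMN gateway types depending on its transition restrictions.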
Here are the URLs to the references I use in my book.
Wikipedia Enterprise Architecture, http://en.wikipedia.org/wiki/Enterprise_Architecture
The Open Group, http://www.opengroup.org/.
Zachman Framework, http://www.zachmaninternational.com/index.php/the-zachman-framework
Telemanagement Forum eTOM standard,
APQC portal process classification framework, http://www.apqc.org/portal/apqc/site/?path=/research/pcf/index.html .
IBM Component Business Modeling,
Telco Application Map Standard from Telemanagement Forum, http://www.tmforum.org/page33552.aspx .
OGSI Globus Grid Services Toolkit, http://www.globus.org/toolkit/ .
IBM SOA Foundation: An architectural introduction and overview 12,
RFC 3444, titled "On the Difference between Information Models and Data Models," http://tools.ietf.org/html/rfc3444 , states that the main purpose of an IM is to model managed objects at a conceptual level, independent of any specific implementations.
BPMN specification 1.1, http://www.bpmn.org/Documents/BPMN%201-1%20Specification.pdf .
Open SOA Collaboration, http://www.osoa.org
IBM CICS transactional server, http://www-01.ibm.com/software/htp/cics/.
Forum eTOM reference, http://www.tmforum.org/browse.aspx?catID=1647 .
APQC portal process classification framework, http://www.apqc.org/portal/apqc/site/?path=/research/pcf/index.html .
BPMN Private and Abstract processes page 13 and Figure 7.1 and 7.2 of
XPDL presentation, www.xpdl.org/tdocs/200809_KMWorld/200809_SJ04_XPDL_BPMN.ppt .
WebSphere Business Modeler XML schema reference,
U.S. Census bureau statistics, http://www.census.gov/csd/susb/susb06.htm
IBM announcement letters, http://www-01.ibm.com/common/ssi/index.wss .
U.S. Federal Data Reference Model, http://www.whitehouse.gov/omb/assets/egov_docs/DRM_2_0_Final.pdf
IBM Financial Services models, https://www-03.ibm.com/industries/financialservices/doc/content/bin/fss_bdw_gim_0306.pdf .
Telemanagement Forum Information Framework (SID), http://www.tmforum.org/DocumentsInformation/1696/home.html .
RDF Vocabulary Description Language, http://www.w3.org/TR/rdf-schema/ .
Rational Fabric tooling for UML to OWL, http://www.ibm.com/developerworks/rational/downloads/08/rsa_webmodtool/index.html .
URI RFC standard, http://www.rfc-editor.org/rfc/rfc3305.txt .
ISO 20022 Universal financial industry message scheme,
XML Linking Language (XLink), http://www.w3.org/TR/xlink/ .
Example of IBM Sec Filing using XBRL , http://www.sec.gov/Archives/edgar/data/51143/000110465908071167/ibm-20081028.xml
IBM Master Data Management, http://www-01.ibm.com/software/data/ips/products/masterdata/ .
Relationships in WebSphere, http://www.ibm.com/developerworks/websphere/library/techarticles/0605_lainwala/605_lainwala.html
Services Data Object standard, http://www.osoa.org/display/Main/Service+Data+Objects+Home .
Introduction to Service Data Objects, http://www.ibm.com/developerworks/java/library/j-sdo/
Adaptive Business Objects, http://www.research.ibm.com/people/p/prabir/ABO.pdf .
W3C Web Services home, http://www.w3.org/2002/ws/Activity
Erich Gamma et al., Design Patterns: Elements of Reusable Object-Oriented Software, (Addison-Wesley, 1995)
WSDL standard, http://www.w3.org/TR/wsdl
WS-I Basic Profile, http://www.ws-i.org/Profiles/BasicProfile-1.0-2004-04-16.html .
WS-I Attachment profile, http://www.ws-i.org/Profiles/AttachmentsProfile-1.0.html
Open SOA SCA C++ binding, http://www.osoa.org/download/attachments/28/SCA_ClientAndImplementationModel_Cpp_V09.pdf?version=1 .
OMG’s CORBA Interface Definition Language, http://www.omg.org/cgi-bin/doc?formal/02-06-39
Obtaining the WSDL for a PHP SCA component offering, http://www.php.net/manual/en/SCA.examples.obtaining-wsdl.php .
Open SOA SCA with PHP, http://www.osoa.org/display/PHP/SCA+with+PHP
Fielding’s dissertation on Restful services, http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm .
IETF HTTP 1.1 Hypertext Transfer Protocol standard, http://tools.ietf.org/html/rfc2616
ObjectWeb fractal model, http://fractal.objectweb.org/documentation.html .
Definition of rete: synonym of plexus, http://wordnetweb.princeton.edu/perl/webwn?s=rete
Telemanagement Forum MTOSI standard, http://www.tmforum.org/mTOPMTOSIDocuments/2320/home.html .
Open SOA SCA Java Connector Architecture binding, http://www.osoa.org/download/attachments/35/SCA_JCABindings_V1_00.pdf?version=2
WebSphere Dynamic Process Edition, http://www-01.ibm.com/software/integration/wdpe/ .
Zapthink SOA Software Forms an ESB Federation, http://www.zapthink.com/news.html?id=1949
WebSphere DataPower SOA Appliances, http://www-01.ibm.com/software/integration/datapower/ .
WebSphere Business Services Fabric, http://www-01.ibm.com/software/integration/wbsf/index.html
OWL Web Ontology Language, http://www.w3.org/TR/owl-features/ .
ILOG JRules, http://www.ilog.com/products/jrules/
JSR 94 Rules engine API, http://jcp.org/aboutJava/communityprocess/final/jsr094/index.html .
WebSphere Infocenter description of the Work Area Service, http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/topic/com.ibm.websphere.express.doc/info/exp/workarea/concepts/cwa_overview.html .
Building an aggregation function using WebSphere ESB, http://www.ibm.com/developerworks/websphere/library/techarticles/0708_butek/0708_butek.html
IA12: WebSphere Message Brokers for z/OS - CICSRequest node, http://www-.ibm.com/support/docview.wss?rs=171&uid=swg24006950&loc=en_US&cs=utf-8&lang=en .
Australian Government Standard Business Reporting
Common Information Model for Energy and Utilities, http://cimug.ucaiug.org/default.aspx .
ACORD Insurance Data Standard, http://www.acord.org/home/
Telemanagement Forum clickable business process framework, http://www.tmforum.org/BusinessProcess-Framework/6775/home.html .
WS-BPEL 2.0 Standard, http://docs.oasis-open.org/wsbpel/2.0/wsbpel-v2.0.html
Open SOA SCA standard, http://www.osoa.org/display/Main/Home.
WS-BusinessActivity standard, http://docs.oasis-open.org/ws-tx/wstx-wsba-1.1-spec-errata-os.pdf
IBM WebSphere Business Service Fabric Modeling Tool, http://www.ibm.com/developerworks/rational/downloads/8/rsa_webmodtool/index.html .
WebSphere Registry and Repository Impact analysis,
Web Services Reliable Messaging, http://docs.oasis-open.org/ws-rx/wsrm/200608 .
Web Services Business Activity (WS-BusinessActivity), http://xml.coverpages.org/wstx-wsba-1.1-rddl.html
WebSphere Service Registry and Repository V6.0 documentation,
WebSphere Business Process Engine API,
ITCAM for SOA,
User Defined XPath Function (UDXF) in WebSphere Business Monitor, http://www.ibm.com/developerworks/library/i-bam617/
OASIS WSFED technical committee, http://www.oasisopen.org/committees/documents.php?wg_abbrev=wsfed .
Liberty Alliance ID-FF, http://www.projectliberty.org/resource_center/specifications/liberty_alliance_id_ff_1_2_specifications
Understanding SOA Security Design and Implementation, http://www.redbooks.ibm.com/abstracts/sg247310.html
WebSphere Process Server end-to-end security tutorial,
Audit trail in WebSphere Process Server, http://publib.boulder.ibm.com/infocenter/dmndhelp/v6r1mx/index.jsp?topic=/com.ibm.websphere.bpc.612.doc/doc/bpc/rg5attbl.html .
CICS Transaction Server example of a signed SOAP message,
fiammante 100000A8UA Tags:  business_process_manageme... soa best_practices dynamic agility bpm 1 Comment 3,251 Visits
As an IT architect currently working in the Enterprise Architecture, Business Process Management, and Service-Oriented Architecture domains, I sometimes feel like both a scout and an archeologist. I am from the generation that used punch cards, punched tapes, and manual entry of machine code with bit-entry switches, but also FORTRAN. Everything we wondered about in the early days of programming has become natural and easy. However, it is essential to avoid the pitfalls; as I like to say, good architects have scars, but they must learn from the difficult cases. Being on a team delivering advanced projects into production with customers, we faced the necessity of delivering value with implementations that are better, faster, more manageable, and more flexible than the previous ones.

I just completed writing this book, in which I capture this experience and these lessons learned. Its publication is due this summer by IBM Press and Pearson. I start with a focused enterprise architecture approach, then look at methods to deliver variability, first addressing variability in information models, then in service models and process models. I follow with what I call the enterprise expansion joint, covering enterprise service buses and additional practices for performing integration in a flexible way. I complete the book with the tooling for the life cycle, management, and monitoring.

This was an interesting effort, and I developed all the models shown with the appropriate tools. The writing experience itself was also interesting, especially keeping up the motivation. Communication is essential, and a good picture is often worth a thousand words; I created quite a number of pictures for the book, including 3D pictures created with Google SketchUp. Here is one example, showing the mnemonic I created to remember the enterprise architecture layers: Business, Application and Services, Information, and Infrastructure.

To create such pictures, you just need to create the planes with your usual graphing tool and export them as PNG files that you can then import into Google SketchUp and position, with views and scenes of the same composition. In addition to such pictures, I added model examples and code examples to the text, all inspired by real projects but of course written new by me for this book. I really hope that my readers will get immediate value for their projects from this small brick I am adding to the IT industry.
There are many ways to implement polymorphic or variable information, and relevant supporting technologies exist.
Here is a list of ways to support polymorphic information:
When business analysts model business processes, they tend to capture the sequence of tasks and events without trying to structure them into patterns. For example, sequential workflows that usually happen between different actors are mixed with very dynamic interactions, such as screen navigations with lots of back and forth, or context-switch actions. Event-driven reactive processing is mixed with proactive service sequence handling. The risk of such a mix is facing modeling limitations and leading the implementation team to the wrong technology selection, with an induced higher cost. The following picture describes the essential patterns and the corresponding technologies.
To differentiate Service-Oriented Architecture from Event-Driven Architecture, the classical Gang of Four work on design patterns can be used, as described in the following picture.
InfoQ has just published a review of my book together with a Q&A. Here is the link to the interview and book excerpt:
InfoQ article link.
I wish you happy holidays with your family and all the best for 2010.
My book got a five-star review from http://books.dzone.com/reviews/dynamic-soa-and-bpm-best
Managing the complexity of business processes
Enterprises embarking on the business process management journey must ensure that they keep the gains that business processes provide by controlling the cost of their life cycle. Particularly, a process quality approach is essential to enable low-cost changes. This article should interest business architects, and IT managers and architects as it discusses an approach to controlling the development and maintenance efforts for business processes by limiting their complexity.
See the rest of the article on SOA Magazine.
Variable services, or service polymorphism, in the context of object-oriented programming, is the ability of one type, A, to appear as and be used like another type, B.
When applied to service invocation, we are talking about the ability to invoke one single service facade F, but actually invoke other services A, B, C, and so on, without the caller being aware.
This may look simple to object-oriented practitioners, but the Web Services Interoperability (WS-I) standard states that "Operation name overloading in a wsdl:portType is disallowed by the Profile".
The consequence is that we have to find service variability techniques that use Enterprise Service Bus mediations or other dynamic endpoint resolutions to solve the problem.
This enables the consumer to interact consistently with a generic service, and abstracts the many possible implementations, or specific services, which may ultimately handle the request.
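Since WSDL forbids operation overloading, the polymorphism has to happen behind the facade. Here is a minimal sketch of the idea in plain Java; the routing key and service names are illustrative assumptions, and in a real deployment the routing table would be an ESB mediation or a dynamic endpoint lookup rather than an in-memory map.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of service polymorphism behind a single facade F: the consumer
// always calls the same generic operation; a mediation-style router picks
// the concrete provider (A, B, C...) from the message content.
interface OrderService {
    String process(String orderType, String payload);
}

class BroadbandOrderService implements OrderService {
    public String process(String orderType, String payload) {
        return "broadband handled: " + payload;
    }
}

class MobileOrderService implements OrderService {
    public String process(String orderType, String payload) {
        return "mobile handled: " + payload;
    }
}

public class OrderServiceFacade implements OrderService {
    // Routing table: in an ESB this would be a mediation / registry lookup.
    private final Map<String, OrderService> routes = new HashMap<>();

    public OrderServiceFacade() {
        routes.put("broadband", new BroadbandOrderService());
        routes.put("mobile", new MobileOrderService());
    }

    public String process(String orderType, String payload) {
        OrderService target = routes.get(orderType);
        if (target == null) {
            throw new IllegalArgumentException("no provider for " + orderType);
        }
        return target.process(orderType, payload);
    }
}
```

The consumer only ever sees the facade's single signature; adding a new concrete provider changes the routing table, not the contract.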
In my book I have a full chapter on service variability, and my colleague Scott Glen has written detailed articles on IBM developerWorks addressing ways to implement service polymorphism.
It has been a long week since my last entry ;-) , as publishing a book uses a lot of bandwidth.
Before going into variability, I want to explain the issue leading to variability.
We often forget that services expose information that is carried by processes. Thus if the information evolves in its structure, the change propagates to services and processes. Let me give you an example that I experienced in a project: a product catalog may contain attributes for its products. Whether or not these new attributes were represented as new columns in a database, they ended up being new tags in a schema, such as <TV_Channels> for a video-on-demand product. Because the schema changed, the services accessing the product catalog changed signature, and the business processes using these services had to be regenerated and tested. Two product attribute changes per week were occurring, and the system test for the affected processes took two weeks on average per process. This ended up being a catch-22.
That being said, I now need to answer the following questions: how can I model the information to prevent structural changes, and how can I evaluate the testing effort of a business process?
On the first question, I wrote a full chapter on various techniques in my book, including xsd:any, which I already mentioned in this blog, but the one we used in this specific case is the CharacteristicValue and CharacteristicSpec pattern from the Telemanagement Forum SID model for telecom operators. This model defines a characteristic specification that describes the attribute with metadata, including the allowed values, the type, and the validity dates. The characteristic value itself has a link to the specification, so that a common specification can be used for many values.
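To make the pattern concrete, here is a minimal sketch of the two classes in Java. The field names are my own simplification for illustration, not the normative SID definitions: the specification carries the metadata (type, allowed values, validity dates), and each value links back to its specification.

```java
import java.util.Arrays;
import java.util.List;

// Simplified sketch of the SID CharacteristicSpec / CharacteristicValue
// pattern: a new product attribute becomes a new value instance, not a new
// schema tag, so service signatures stay stable.
class CharacteristicSpec {
    final String name;                // e.g. "TV_Channels"
    final String valueType;           // e.g. "string"
    final List<String> allowedValues; // empty list means unconstrained
    final String validFrom;           // validity dates, kept as strings here
    final String validTo;

    CharacteristicSpec(String name, String valueType,
                       List<String> allowedValues,
                       String validFrom, String validTo) {
        this.name = name;
        this.valueType = valueType;
        this.allowedValues = allowedValues;
        this.validFrom = validFrom;
        this.validTo = validTo;
    }
}

class CharacteristicValue {
    final CharacteristicSpec spec; // link to the shared specification
    final String value;

    CharacteristicValue(CharacteristicSpec spec, String value) {
        this.spec = spec;
        this.value = value;
    }

    boolean isValid() {
        return spec.allowedValues.isEmpty()
                || spec.allowedValues.contains(value);
    }
}

public class CharacteristicDemo {
    public static void main(String[] args) {
        CharacteristicSpec channels = new CharacteristicSpec(
                "TV_Channels", "string",
                Arrays.asList("100", "200", "300"),
                "2009-01-01", "2010-12-31");
        CharacteristicValue v = new CharacteristicValue(channels, "200");
        System.out.println(v.spec.name + "=" + v.value
                + " valid=" + v.isValid());
    }
}
```

The point of the pattern is visible in the demo: adding a new attribute means creating a new specification and new values, with no change to the message schema or the service signatures.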
On the second question, experience shows that the creation, change, and test effort of processes is roughly proportional to the number of arcs (connections) in a given business process. Even if you only change a small aspect, you will need to test all internal variations. It is quite common to have two to four person-hours of effort per arc in the process.
The following picture shows that with only 3 tasks and 5 nodes in a process you can have 10 arcs, so you may expect 5 days of testing.
This somehow relates to the cyclomatic complexity used in software development test evaluation.
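The arithmetic behind that estimate can be written down directly. The two-to-four person-hours per arc and the eight-hour working day are the assumptions from the text; the upper bound of four hours per arc reproduces the five-day figure for the ten-arc example.

```java
// Sketch of the rule of thumb: test effort is roughly proportional to the
// number of arcs. With 10 arcs at 4 person-hours per arc and 8-hour days,
// this gives the 5 days mentioned above.
public class ProcessTestEffort {
    static double effortDays(int arcs, double hoursPerArc) {
        double hours = arcs * hoursPerArc;
        return hours / 8.0; // 8-hour working day
    }

    public static void main(String[] args) {
        System.out.println(effortDays(10, 4.0)); // 10 arcs, upper-bound effort
    }
}
```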
Another important aspect of variability is rules and policies. A consistent enterprise approach to rules and policies requires the creation of a common business vocabulary whose content must be aligned with the concepts in the information model.
With OMG's SBVR there is now a standard for the structure of rules and policies, but not for their content, which will always be specific to an industry and/or an enterprise. The vocabulary describes the core elements of the information model, while the rules content model defines the acceptable value ranges when they are required by rules or policies.
If we now integrate this information variability with SOA and BPM, but also with rules and policies, we can have business processes whose behavior is driven by the content of information and is much less sensitive to changes. Using a business vocabulary for the rules, with human-language-like rule or policy descriptions, enables business users to manipulate the rules and shifts the changes from IT to business. In a further blog entry I plan to give real examples of such policies.
My regards to readers.
As I mention in my book, we have too often considered SOA as client/server over Web Services. We really need to think of variability as a way to enable reuse and avoid the propagation of provider changes to consumers, and vice versa. As a complement to the approach to variability I describe in my book, covering information variability, service variability, and process variability, here are two good articles on variability that an architect I work with at a large European bank just sent me, as we are working on these topics together for that bank.
I often see discussions comparing REST versus SOAP.
REST interactions are usually handled by the Web server, while by definition Web Services using SOAP are initial-requester-to-ultimate-provider interactions, and proxies and intermediate servers should be transparent. This is particularly true when using WS-Security with signature and encryption, as only the destination application should decrypt the message and verify its signature.
The following picture shows the SOAP and REST interactions in an OSI layer representation. See below this picture how to get the best of both worlds using the WS-Transfer standard.
Now there is an easy way to get the best of both worlds, which is to use the WS-Transfer standard, where the payload is resource-centric.
You then get the benefit of full protection, with end-to-end encryption and signatures that do not stop at the Web server, together with the flexibility of a resource-centric approach.
WS-Transfer is recommended by several governments and organizations, such as PEPPOL (Pan-European Public Procurement Online).
Seventh of Seven practices for Dynamic Process: Use Information & Event Centric Processes where appropriate
There are many processes that are information- or business-object-centric. In such cases the entities have their own life cycle and interact with the external world using services. Process analysts must however be careful to ensure that they are not mixing business ownership, and to correctly separate the facets that are owned by different owners in an organization. As an example, the telecom standard information model clearly differentiates the "Customer Order," owned by the customer-facing organization, from the "Service Order" and "Resource Orders," owned by other organizations in the enterprise.
The approach then analyzes the state charts for each entity.
This approach can also be taken when different organizations have to share a common entity, such as a customs manifest declaration. The public entity and its public life cycle can be passed as a State Chart XML document (the W3C SCXML standard). A reference implementation of SCXML is available from the Apache Commons SCXML project.
The following picture describes such a document being exchanged between different organizations and agencies, each with its own IT infrastructure. The only requirement is that each of those organizations is able to interpret the SCXML standard to determine how the entity should be handled. In addition, the document can carry the history of events, which enables each stakeholder to reconstruct a monitoring view.
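Conceptually, each organization only has to interpret the shared life cycle. Here is a hedged, hand-rolled sketch in plain Java of such a shared entity life cycle; it is not the Apache Commons SCXML API, and the manifest states and events are illustrative assumptions, but it shows what "interpreting the life cycle" amounts to: a transition table that tells each stakeholder which events are allowed in which state.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a shared public life cycle for a customs manifest declaration.
// In practice the life cycle would travel as an SCXML document and be
// interpreted by an SCXML engine; states and events here are illustrative.
public class ManifestLifecycle {
    // Transition table: (state, event) -> next state.
    private final Map<String, String> transitions = new HashMap<>();
    private String state = "Submitted";

    public ManifestLifecycle() {
        transitions.put("Submitted|validate", "Validated");
        transitions.put("Validated|inspect", "UnderInspection");
        transitions.put("Validated|release", "Released");
        transitions.put("UnderInspection|release", "Released");
    }

    /** Applies an event; returns true if the transition is allowed. */
    public boolean fire(String event) {
        String next = transitions.get(state + "|" + event);
        if (next == null) {
            return false; // event not allowed in the current state
        }
        state = next;
        return true;
    }

    public String state() {
        return state;
    }
}
```

Because every agency evaluates the same transition table, the entity is handled consistently across infrastructures, and a log of fired events is enough to reconstruct the monitoring view mentioned above.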
Well, here it comes again. Today I was confronted with a project where the teams did not understand the need for variability and replaced the characteristic value variability pattern from the SID telecom model with static tags for each attribute.
In the end they made their interface signatures rigid and will have to introduce new services for each different type of order or product. This is typically what happened with client-server, where changes on the server side propagated to the client side. We absolutely must avoid using that client-server-over-Web-Services approach for SOA, as it ends up propagating and amplifying the cost of changes to service consumers, whether they are processes or other applications.
SOA needs to be about business or semantic loose coupling where the interfaces can absorb variations without having to change all of the consumers that use that specific interface.
The characteristic value pattern from the SID is as follows: the value points to a specification that defines the type, name, and constraints. The information is structured into stable parts and variable parts, with the variability of the information model enabled by this pattern.
An example of such a variable message is also provided. Characteristic values can be added to describe very different aspects and do not change the structure of the messages at all, even when adding new attributes to a product, service, resource, or customer.
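Here is a hedged sketch of what such a variable message looks like from the consumer side: attributes are entries in a characteristic list, so adding a new attribute like TV_Channels adds an entry rather than a schema element, and the interface signature stays unchanged. The names used are illustrative, not the normative SID tags.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a product message whose attributes are characteristic name/value
// pairs. New attributes are new entries, not new schema elements, so the
// interface signature stays stable.
public class VariableProductMessage {
    private final Map<String, String> characteristics = new LinkedHashMap<>();

    public void addCharacteristic(String name, String value) {
        characteristics.put(name, value);
    }

    /** Generic lookup used by consumers; unknown names return null. */
    public String characteristic(String name) {
        return characteristics.get(name);
    }

    public static void main(String[] args) {
        VariableProductMessage product = new VariableProductMessage();
        product.addCharacteristic("Bandwidth", "20Mb");
        // A brand new product attribute: no schema change, no new service version.
        product.addCharacteristic("TV_Channels", "200");
        System.out.println(product.characteristic("TV_Channels"));
    }
}
```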
Have a nice day.
On the SOA front I had a very constructive discussion with Jérome Hannebelle from France Telecom/Orange (he wishes to be quoted) on a variability approach that differentiates the provider WSDLs from the consumer WSDLs. His interesting position is that, to avoid the impact of version changes, providers should expose more generic WSDLs with xsd:any for all the service message parameter branches that are subject to release variations; however, for the same service, consumers should be provided with validation WSDLs that have explicit definitions of the parameters for a given release. The provider then needs, at run time, to identify the service request version and apply the appropriate routing and handling behind the service facade. This approach is a variation of the patterns I describe in my book, where I already state that the ultimate provider's granularity and interfaces may differ from the consumer view. The implication for dependency tree management in the registry is that there must be an explicit correlation between the various consumer validation WSDLs and the provider WSDL.
On the fun side, Las Vegas is an interesting location; I flew a total of 15 minutes in the indoor skydiving tunnel. A safe way of experiencing the feeling of skydiving.
What is the right granularity for services? Well, I like to reformulate this question as: what is the manageable granularity of services? How many service methods or interfaces can we manage in an enterprise? If we take a decomposition like the APQC Process Classification Framework at the task level, which is the 4th level of decomposition, we get around 1,500 tasks for the cross-industry elements. Each task would have several interfaces or methods. Looking at other decompositions, like IBM's Component Business Modeling, which is a two-level decomposition, or the Telemanagement Forum eTOM Business Process Framework, we get an average of 7 to 10 elements at each level of decomposition. This would lead to a potential of between 10 thousand and 100 thousand interfaces at level 5 of decomposition, which I think everyone would agree is not manageable.
The implication of that simple math is that a manageable granularity lies between levels 3 and 4 of decomposition, and if you end up with finer interfaces, you need to consolidate them into more variable-payload interfaces at a higher level. It also implies that a decomposition must exist in the enterprise; otherwise there is no way to determine the decomposition level of a service.
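The simple math can be written down directly. The 7-to-10 elements per level come from the text; the uniform branching factor is of course an approximation, but it shows where the tens-of-thousands figure at level 5 comes from.

```java
// Sketch of the decomposition math: with 7 to 10 elements per level, the
// number of elements at level n is branching^n, so level 5 lands between
// roughly 17 thousand and 100 thousand interfaces.
public class GranularityMath {
    static long elementsAtLevel(int branching, int level) {
        long count = 1;
        for (int i = 0; i < level; i++) {
            count *= branching;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(elementsAtLevel(7, 5));  // 16807
        System.out.println(elementsAtLevel(10, 5)); // 100000
    }
}
```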
In addition to the above-mentioned CBM, APQC, and eTOM, a method for finding a tree decomposition from process variations is described in the article from the MIT Sloan School of Management, "A Coordination-Theoretic Approach to Understanding Process Differences."
I recently had to perform an industry-independent capacity planning exercise for a global client of an industry consortium. The application profile for the target environment was not fully described by the target client, and some assumptions had to be made.
Capacity planning and sizing is often considered as an art, in the sense that it requires experienced practitioners to create a future plan that will match the exact future requirements for the system of systems under consideration.
The sizing approach I describe here is based on factors that are publicly available and recognized by all server industry vendors, enabling both a reasonable cost evaluation and a reasonable performance evaluation.
It usually requires a precise view of the future application workloads, which can be expressed as use cases, together with precise non-functional requirements covering the runtime and non-runtime qualities of the future infrastructure. As a complement, a precise view of the as-is situation is helpful, provided that CPU usage is expressed with an identical measure for all vendors and server builds in the current infrastructure. Such vendor-fair measures are available from public benchmarking organizations such as the "Standard Performance Evaluation Corporation" (SPEC) and the "Transaction Processing Performance Council" (TPC).
An evaluation of the as-is situation based solely on the actual number of servers is far from sufficient: the results published by SPEC for the last quarter of 2012 report CPU benchmark performance values for servers ranging from 26.4 to 6,130, a 1:232 performance difference between servers. Even bigger differences in performance are reported when comparing results from 2006 to 2011, which corresponds to the build years of the servers in the current infrastructure.
In addition, no specific application workload profiles using a standard CPU consumption measure were provided by the client to the industry consortium. The only consumption factor made available is the number of users.
As an industry consortium, our best guess was then based on finding an industry-agreed benchmark that would enable us to perform a per-user sizing of the infrastructure, a benchmark composed of a variety of workloads representing an acceptable view of a complex organization.
That specific benchmark is the virtualization benchmark from SPEC, which reports a total performance and a number of tiles, each tile being an instantiation of a workload mix: 500 mail server users, a typical Web application workload, and online transaction processing composed of application server and database server transactions.
However, none of the spec.org benchmarks report total system cost. Solving that issue requires looking at the other benchmarking organization, the Transaction Processing Performance Council with its TPC-E transaction performance benchmark, and locating machines that are identical in both the virtualization benchmark and the TPC-E online transaction processing benchmark, to obtain correlation points between the two. Using that approach, we can evaluate the total system cost and match a corresponding number of users through the number of tiles supported by the systems under consideration.
The TPC-E reported results include a total system cost for all elements of the system, including CPU, DASD, and RAM. A statistical analysis of the reported costs shows that on average 38% of the total system cost is DASD, with a variance of 25%. Between 2010 and 2011, the largest reported system had a DASD space of 281 terabytes and the smallest 10 terabytes. The cost per gigabyte is between $4.30 and $5.30, due to the different mixes of hard disk drives (HDD) and solid state drives (SSD), the SSD being much faster but at a higher price. These costs per gigabyte match the numbers used in the TCO calculator available from the "Storage Networking Industry Association."
The proposed architecture has been designed to support 20,000 users per system on each of the centralized data center systems and 1,000 users on the satellite systems (as derived from the SPEC virtualization benchmark). Smaller server costs had to be extrapolated, as all of the TPC-E data is for servers that can support many more than 1,000 users. The cost per user of smaller systems is roughly $152 per user (variations are mostly due to DASD space), and with 4 servers in each satellite center serving a total of 1,000 users, this gives $38,245 per server.
Based on the total system cost ratios, the DASD space available on the centralized systems is roughly 912 terabytes. The total DASD space in the smaller sites is 80 terabytes.
The total DASD space for all systems is 992 TB.
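The figures above can be cross-checked with a small calculation; the per-server cost, user counts, and DASD figures are the ones stated in the text, and the rounding is mine.

```java
// Cross-check of the sizing figures: per-user cost of the satellite
// systems and the total DASD space across all sites.
public class SizingCheck {
    /** Cost per user for a site: total server cost divided by users served. */
    static double costPerUser(double costPerServer, int servers, int users) {
        return costPerServer * servers / users;
    }

    /** Total DASD space in terabytes across centralized and satellite sites. */
    static int totalDasdTb(int centralizedTb, int satelliteTb) {
        return centralizedTb + satelliteTb;
    }

    public static void main(String[] args) {
        // $38,245 per server, 4 servers, 1,000 users: about $153 per user,
        // close to the roughly $152 quoted above.
        System.out.println(costPerUser(38245.0, 4, 1000));
        // 912 TB centralized plus 80 TB in the satellite sites.
        System.out.println(totalDasdTb(912, 80)); // 992
    }
}
```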
The technical approach for storage in the target architecture is multi-site virtualization, as defined in the Storage Networking Industry Association article.
Additional DASD storage requirements should be implemented as extensions of storage in the cloud-in-a-box enclosures, to preserve the improved internal communication and speed available from the cloud-in-a-box setups.
The cost for additional storage is to be evaluated using the SNIA TCO calculator already mentioned above.
Since there is an expressed requirement of 2,500 Terabytes, there is a gap of roughly 1,500 Terabytes (2,500 − 992) to be added to the total system costs.
The following curve is the result of the full TPC-E result analysis correlated with the SPEC virtualization benchmark. The best fit is a logarithmic curve, but readers should note that the variation is much higher for lower numbers of users, the influence being mostly due to the DASD space of the systems included in the benchmarks.
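The log-curve fit mentioned above can be sketched with ordinary least squares on the transformed variable u = ln(users). A minimal sketch in Java; the data points below are illustrative placeholders, not the actual TPC-E/SPEC figures from the analysis:

```java
// Sketch: least-squares fit of cost-per-user = a*ln(users) + b.
public class LogFit {
    // Returns {a, b} for y = a*ln(x) + b using ordinary least squares
    // on the transformed variable u = ln(x).
    static double[] fitLog(double[] x, double[] y) {
        int n = x.length;
        double su = 0, sy = 0, suu = 0, suy = 0;
        for (int i = 0; i < n; i++) {
            double u = Math.log(x[i]);
            su += u; sy += y[i]; suu += u * u; suy += u * y[i];
        }
        double a = (n * suy - su * sy) / (n * suu - su * su);
        double b = (sy - a * su) / n;
        return new double[] {a, b};
    }

    public static void main(String[] args) {
        // Hypothetical (users, cost-per-user) points for illustration only.
        double[] users = {1000, 2000, 5000, 10000, 20000};
        double[] costPerUser = {152, 135, 118, 105, 95};
        double[] ab = fitLog(users, costPerUser);
        System.out.printf("cost/user ~= %.2f * ln(users) + %.2f%n", ab[0], ab[1]);
    }
}
```

A logarithmic fit is reasonable here because the cost per user flattens as systems grow, which matches the economy-of-scale shape described above.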
fiammante 100000A8UA 1,133 Visits
My homepage is here. It has a little cooking, the van de Graaff generator I built, a little financial derivatives and futures math, and some chemistry.
Achieve Breakthrough Business Flexibility and Agility by Integrating SOA and BPM
Practical from start to finish, Dynamic SOA and BPM squarely addresses two of the most critical challenges today’s IT executives, architects, and
I use the Business Process Modeling Notation (BPMN) categories as defined in the BPMN 1.1 standard to categorize and modularize business processes.
There are three basic sub-model categories within an end-to-end BPMN model:
These models are used to create the end-to-end monitoring model by capturing events that surface from each of the smaller modules (the wagons). There is often confusion between this monitoring model, which higher management of the enterprise requires, and the process automation provided by the next category of processes, which are smaller modules. The monitoring model can take actions based on the indicators it controls.
I often state that business processes need complexity metrics and would like to share some facts behind that.
The first aspect addresses the complexity in documenting processes and the ability of humans to handle them.
There is a limited capacity to the human brain's working memory. Quoting the Wikipedia article: "the earliest quantification of the capacity limit associated with short-term memory was the 'magical number seven' suggested by Miller in 1956"; see the Wikipedia entry on working memory for the full article.
So if you have a business process model with more than seven chunks in a single artifact, it is too complex for a normal human. The implication is that the modeler should group elements into chunks that each pertain to a different modeling artifact, e.g., a subprocess, so that the working memory of the modeler can handle the complexity.
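The seven-chunk rule lends itself to a simple automated check over a process model. A minimal sketch in Java; the artifact names, element names, and the limit of 7 are illustrative assumptions:

```java
// Sketch of a "magical number seven" check: flag any modeling artifact
// (process or subprocess) whose direct element count exceeds the
// short-term memory limit, making it a candidate for extraction into
// a subprocess.
import java.util.List;
import java.util.Map;

public class ChunkCheck {
    static final int CHUNK_LIMIT = 7; // Miller's "magical number seven"

    // Returns the names of artifacts whose direct element count exceeds the limit.
    static List<String> tooComplex(Map<String, List<String>> artifacts) {
        return artifacts.entrySet().stream()
                .filter(e -> e.getValue().size() > CHUNK_LIMIT)
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        Map<String, List<String>> model = Map.of(
            "HandleOrder", List.of("Receive", "Validate", "Price", "Reserve",
                                   "Bill", "Ship", "Notify", "Archive"), // 8 > 7
            "HandleClaim", List.of("Receive", "Assess", "Decide"));      // 3 <= 7
        System.out.println(tooComplex(model)); // → [HandleOrder]
    }
}
```

Such a check fits naturally into a modeling-tool validation step, nudging the modeler to extract an oversized artifact into a subprocess.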
The second aspect concerns business processes that are automated. Business processes are another way of expressing algorithms: they are graphs with multiple paths and touch points. When you maintain such a process and need to validate it, you have to ensure that you have tested all variations and paths. The turnaround time to fix problems that may occur needs to stay within reasonable limits, otherwise the business process approach won't provide any advantage over classical coding. This is why a complexity metric that reflects the graph complexity and the turnaround time to fix problems is necessary.
Both aspects imply that a complexity metric is defined when doing business process modeling, and that a complexity management method is defined. The method will have to give guidance for grouping elements into manageable chunks that align with the enterprise organization (as a chunk needs an owner) and that make sense from a functional standpoint.
In another blog entry I have mentioned cyclomatic complexity and control flow complexity as possible metrics. A way to manage complexity is to ensure that processes are modularized using one of the classification approaches described in the MIT Process Handbook.
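Cyclomatic complexity applies directly to a process graph since a process is a graph of nodes and sequence flows. A minimal sketch in Java of the standard formula M = E − N + 2P; the example process shape is hypothetical:

```java
// Sketch: cyclomatic complexity of a process graph, M = E - N + 2P,
// where E = edges (sequence flows), N = nodes (activities, gateways,
// events), and P = connected components (usually 1 per process).
public class Cyclomatic {
    static int cyclomatic(int edges, int nodes, int components) {
        return edges - nodes + 2 * components;
    }

    public static void main(String[] args) {
        // Hypothetical process: start event, XOR split, two parallel-path
        // tasks, XOR join, one final task, end event = 7 nodes, connected
        // by 7 sequence flows in 1 component.
        System.out.println(cyclomatic(7, 7, 1)); // → 2 (one binary decision)
    }
}
```

A value of 2 matches intuition: one XOR decision yields two independent paths to test, and each further gateway adds more, which is exactly the validation burden discussed above.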
It can easily be used to prepare maturity assessments, with the associated questionnaires.
Here is the table of contents of the normative section:
BPMM Normative Content and Structure
7.2 Maturity Level: Managed
7.3 Maturity Level: Standardized
7.4 Maturity Level: Predictable
7.5 Maturity Level: Innovating
Some of my clients are worried about the differences between services, microservices, and APIs, together with approaches like REST, and they want some guidance on what to choose and how to weigh the granularity of each option. A bit of recent history is necessary to understand the evolution from services to microservices.
In the 90s, IT systems had mostly been developed as stovepipes, and it was obvious that the Web evolution required some integration, reuse, and interoperability so that new processes and applications could cut across the business domains. Service-Oriented Architecture (SOA) was introduced at that time, looking at high-value functional reuse.
From SOA to Microservices
The SOA approach implies strong governance, which in turn implies control of the services catalog, with service life cycles that are lengthy by nature, because the reusable services were at a high granularity with variability of interfaces. The granularity of these services can be measured using a functional decomposition where the top level is the enterprise, the business domains are at level 1, process groups at level 2, processes at level 3, and services at level 4. At each level there are 7 to 10 elements, which gives a level 4 catalog of potentially 1,500 to 10,000 services. Such decompositions are available from organizations such as the APQC Process Classification Framework, the banking industry association's Service Landscape, the TeleManagement Forum Business Process Framework with its standardized interfaces and APIs, and identification methods such as IBM's SOMA.
But the new digital world requires faster responses and development, with agile methods and less constraining governance, somewhat contradictory with the enterprise-wide service identification and control approach; hence the need for microservices. These microservices are much finer grained, but they are not intended to have a complex life cycle with versions or improvements. If something different is needed, a new microservice is created.
My rules of thumb for microservices and associated components
Searching the Web, APIs appear to be a mix of services, microservices, and other components of various technologies. A good example is the ProgrammableWeb API Directory, which lists 12,000+ APIs across 26 technologies for 300+ categories/functional domains.
The net is that APIs include services or microservices, and that discussions on APIs have to go deeper into defining the approach that matches the enterprise need, particularly in terms of application delivery ecosystem and life cycle.
REST, aka "Representational State Transfer", is based on uniform resource identifiers (URIs) that identify resources. Said otherwise, resources are business entities or "data" with their access path. REST APIs are the combination of resources and verbs (GET, PUT, POST, DELETE, ...).
The granularity of a REST interaction is the depth of the URI path to access the target resource. In the figure below the top REST interaction has a depth of 2, while the bottom interaction has a depth of 4. A good practice is to keep the granularity under 3, which matches my microservice rule of focusing on a data domain and its immediate dependencies.
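The depth rule above is easy to check mechanically by counting path segments. A minimal sketch in Java; the sample URIs and the limit used for flagging are illustrative assumptions:

```java
// Sketch: measure REST granularity as URI path depth, to flag calls
// that are finer grained than the recommended limit.
import java.net.URI;

public class RestDepth {
    // Depth = number of non-empty path segments in the URI.
    static int depth(String uri) {
        String path = URI.create(uri).getPath();
        int d = 0;
        for (String seg : path.split("/")) {
            if (!seg.isEmpty()) d++;
        }
        return d;
    }

    public static void main(String[] args) {
        System.out.println(depth("https://api.example.com/customers/42"));          // → 2, fine
        System.out.println(depth("https://api.example.com/customers/42/orders/7")); // → 4, too deep
    }
}
```

Running such a check over an API catalog gives a quick inventory of which interactions reach beyond a data domain and its immediate dependencies.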
I hope this entry helps my fellow architects and developers form their own opinion on the topic and build identification and granularity evaluation approaches.
Here is the link to what Kurt Veum from the NATO Communications and Information Agency and I presented at the InterConnect conference.
The architecture is fully peer to peer, distributed and federated.
Some of the participants implement the Message Oriented Middleware (MOM) with MQSeries. On the WAN between the participants, the MOMs are connected using the WS-Transfer standard. Click here for WS-Transfer.