I recently had a case where I needed to propagate the identity of a Web Service user through the Hibernate session connecting to a SQL Server.
SQL Server has a specific EXECUTE AS USER='username' statement that you can prepend to any SQL select to impersonate the specified 'username' for that query. Of course the user must have the appropriate rights on the target database.
Fortunately Hibernate has Interceptors that you can use, among other things, to manipulate queries before they are executed, in the "onPrepareStatement" method.
From the code that creates the Hibernate configuration you add the interceptor, as in the sketch below.
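A minimal sketch of such an interceptor, assuming a hypothetical ThreadLocal that the Web Service layer populates with the authenticated user for each request (the quote doubling is a naive escape; production code should validate the name against an allow-list):

```java
import org.hibernate.EmptyInterceptor;

public class ImpersonationInterceptor extends EmptyInterceptor {

    // Hypothetical per-request holder filled by the Web Service layer
    public static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<String>();

    @Override
    public String onPrepareStatement(String sql) {
        String user = CURRENT_USER.get();
        if (user == null) {
            return sql; // no identity to propagate, leave the SQL untouched
        }
        // Prepend the SQL Server impersonation statement to the query
        return "EXECUTE AS USER='" + user.replace("'", "''") + "'; " + sql;
    }
}

// Registration where the Hibernate configuration is created:
// Configuration cfg = new Configuration().configure();
// cfg.setInterceptor(new ImpersonationInterceptor());
```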
The Model Driven Architecture artifacts, including the BPMN 2.0 Choreography models and the UML models for information and for services, are developed using Rational Software Architect and RTC.
The events and the event-driven choreography are executed with the WS-Notification engine of WebSphere Application Server.
The choreographies are monitored using IBM Business Monitor.
The architecture is fully peer to peer, distributed and federated.
Some of the participants implement Message Oriented Middleware (MOM) with MQSeries. On the WAN between the participants, the MOMs are connected using the WS-Transfer standard.
Some of my clients are worried about the differences between services, microservices and APIs, together with approaches like REST, and they want some guidance on what to choose and how to weigh the granularity of each option. A bit of recent history is necessary to understand the evolution from services to microservices.
In the 90's, IT systems had mostly been developed as stovepipes, and it was obvious that the Web evolution required some integration, reuse and interoperability so that new processes and applications could cut across the business domains. The Service Oriented Architecture was introduced at that time, looking at high-value functional reuse.
From SOA to Microservices
The SOA approach implies strong governance, which in turn implies control of the services catalog, with service life cycles that are lengthy by nature, because the reusable services were at a high granularity with variability of interfaces. The granularity of these services can be measured using a functional decomposition where the top level is the Enterprise, then the business domains at level 1, process groups at level 2, processes at level 3, and services at level 4. At each level there are 7 to 10 elements, which gives a level 4 catalog of potentially 1,500 to 10,000 services. Such decompositions are available from organizations such as the APQC Process Classification Framework, the banking industry's BIAN Service Landscape, the TeleManagement Forum Business Process Framework with its standardized interfaces and APIs, and identification methods such as IBM's SOMA.
But the new digital world requires faster responses and development, with agile methods and less constraining governance, somewhat contradictory with the enterprise-wide service identification and control approach, hence the need for microservices. These microservices are much finer grained, but they are not intended to have a complex life cycle with versions or improvements. If something different is needed, a new microservice is created.
My rules of thumb for microservices and associated components:
A microservice development is timeboxed and should be delivered within a sprint.
A microservice is developed and tested independently.
A microservice is never changed after being deployed. If new functions are required a new microservice component is created. There should not be any cost of maintenance.
The first project that uses the microservice pays for the development. Reuse is not a business justification for microservices.
Microservice granularity is contained within the intersection of a single functional domain, a single data domain and its immediate dependencies, a self-sufficient packaging, and a technology domain.
Searching the Web, APIs appear to be a mix of services, microservices and other components of various technologies. A good example is the ProgrammableWeb API Directory, which lists 12,000+ APIs across 26 technologies for 300+ categories/functional domains.
The net is that APIs include services or microservices, and that discussions on APIs have to go deeper in defining the approach that matches the enterprise need, particularly in terms of application delivery ecosystem and life cycle.
REST, aka "Representational State Transfer", is based on uniform resource identifiers (URIs) that identify resources. Said otherwise, resources are business entities or "data" with their access path. REST APIs are the combination of resources and verbs (GET, PUT, POST, DELETE, ...).
The granularity of a REST interaction is the depth of the URI path to access the target resource. In the hypothetical examples below, the first REST interaction has a depth of 2, while the second has a depth of 4. A good practice is to keep the depth under 3, which matches my microservice rule to focus on a data domain and its immediate dependencies.
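For instance, with hypothetical resource paths:

```
GET /customers/42             depth 2: a customer and its identifier
GET /customers/42/orders/7    depth 4: too deep, consider exposing a
                              dedicated /orders resource instead
```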
I hope this entry helps my fellow architects and developers form their opinion on the topic and build identification and granularity evaluation approaches.
I often see discussions comparing REST versus SOAP.
REST interactions are usually handled by the Web server, while by definition Web Services using SOAP are initial-requester-to-ultimate-provider interactions, where proxies and intermediate servers should be transparent. This is particularly true when using WS-Security with signature and encryption, as only the destination application should decrypt and verify the signature.
The following picture shows the SOAP and REST interactions on an OSI layer representation. See below this picture how to get the best of both worlds using the WS-Transfer standard.
Now there is an easy way to get the best of both worlds, which is to use the WS-Transfer standard, where the payload is resource centric.
You then get the benefit of full protection, with end-to-end encryption and signatures that do not stop at the Web server, together with the flexibility of a resource-centric approach.
WS-Transfer is recommended by several governments and organizations such as PEPPOL (Pan-European Public Procurement OnLine).
2 operations (Get and Put) to send and receive the representation of a resource
2 operations (Create and Delete) for creating and suppressing a resource and its corresponding representations
I often state that business processes need complexity metrics and would like to share some facts behind that.
The first aspect addresses the complexity in documenting processes and the ability of humans to handle them.
There is a limited capacity of the human brain's working memory. Quoting the Wikipedia article: "the earliest quantification of the capacity limit associated with short-term memory was the "magical number seven" suggested by Miller in 1956"; see the Wikipedia entry on working memory for the full article.
So if you have a business process model with more than seven chunks in a single artifact, it is too complex for a normal human. The implication is that the modeler should group elements into chunks that pertain to a different modeling artifact, e.g. a subprocess, so that the working memory of the modeler can handle the complexity.
The second aspect concerns business processes that are automated. Business processes are another form of expressing algorithms: they are graphs with multiple paths and touch points. When you maintain such a process and need to validate it, you have to ensure that you have tested all variations and paths. The turnaround time to fix problems that may occur needs to stay within reasonable limits, otherwise the business process approach won't provide any advantage over classical coding. This is why a complexity metric that reflects the graph complexity and the turnaround time to fix problems is necessary.
Both aspects imply that a complexity metric is defined when doing business process modeling, and that a complexity management method is defined. The method will have to give guidance for grouping elements into manageable chunks that align with the enterprise organization, as a chunk needs an owner, and that make sense from a functional standpoint.
In another blog entry I have mentioned cyclomatic complexity and control flow complexity as possible metrics. A way to manage complexity is to ensure that processes are modularized using one of the classification approaches described in the MIT Process Handbook.
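As a reminder, cyclomatic complexity can be computed directly from the process graph; here is a minimal sketch (the counts in main are hypothetical):

```java
public final class ProcessComplexity {

    // Cyclomatic complexity M = E - N + 2P, where E is the number of arcs,
    // N the number of nodes, and P the number of connected components
    // (1 for a single process model)
    public static int cyclomatic(int arcs, int nodes, int components) {
        return arcs - nodes + 2 * components;
    }

    public static void main(String[] args) {
        // Hypothetical process: 10 arcs, 8 nodes, one connected component
        System.out.println(cyclomatic(10, 8, 1)); // prints 4
    }
}
```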
Some BPM platforms from other software vendors using the XPDL standard are being phased out. Now that BPMN version 2.0 also allows execution, I built a converter that takes any XPDL file (1.0 or 2.1) and converts it to BPMN 2.0, including gateways, subprocesses, data, and lanes. I also handle some vendor-specific extensions (forms, scripts).
I used JAXB on the standard schemas for the processing.
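The plumbing looks roughly like the sketch below, assuming JAXB classes generated with xjc from the XPDL and BPMN 2.0 schemas; the package and file names are hypothetical and the mapping logic itself is elided:

```java
import java.io.File;
import javax.xml.bind.JAXBContext;

public class Xpdl2BpmnConverter {
    public static void main(String[] args) throws Exception {
        // Unmarshal the source XPDL document into the generated object graph
        JAXBContext xpdlCtx = JAXBContext.newInstance("org.example.xpdl21");
        Object xpdlPackage = xpdlCtx.createUnmarshaller()
                .unmarshal(new File("process.xpdl"));

        // Walk the XPDL graph and build the BPMN 2.0 graph
        // (gateways, subprocesses, data, lanes)
        Object bpmnDefinitions = mapToBpmn(xpdlPackage);

        // Marshal the result using a context for the BPMN 2.0 schema
        JAXBContext bpmnCtx = JAXBContext.newInstance("org.example.bpmn20");
        bpmnCtx.createMarshaller().marshal(bpmnDefinitions, new File("process.bpmn"));
    }

    private static Object mapToBpmn(Object xpdl) {
        throw new UnsupportedOperationException("mapping logic elided");
    }
}
```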
Then I load the resulting BPMN 2.0 in the Rational Software Architect BPMN Editor or IBM Business Process Designer.
The following image shows some of the correspondence between the process definition standards; however, that correspondence needs real Java programming. XSLT is not enough.
I recently had to perform an industry independent capacity planning for a global client of an industry consortium. The application profile for the target environment was not fully described by the target client and some assumptions had to be made.
Capacity planning and sizing is often considered as an art, in the sense that it requires experienced practitioners to create a future plan that will match the exact future requirements for the system of systems under consideration.
The sizing approach I describe here is based on factors that are publicly available and recognized by all server industry vendors, and enables both a reasonable cost and performance evaluation.
It usually requires a precise view of the future application workloads, which can be expressed as use cases together with precise non-functional requirements covering the runtime and non-runtime qualities of the future infrastructure. As a complement, a precise view of the as-is situation is helpful, provided that the CPU usage is expressed with an identical measure for all vendors and builds in the current infrastructure. These fair measures for all vendors on the market are available from public benchmarking organizations such as the "Standard Performance Evaluation Corporation" or the "Transaction Processing Performance Council".
An evaluation of the as-is situation based solely on the actual number of servers is far from sufficient: the results published for the last quarter of 2012 by the "Standard Performance Evaluation Corporation" report CPU benchmark performance values for servers from 26.4 to 6,130, a 1:232 performance difference between servers. Even bigger differences in performance are reported when comparing results from the years 2006 to 2011, which corresponds to the build dates of existing servers in the current infrastructure.
In addition, no specific application workload profiles using a standard CPU consumption measure have been expressed by the client to the industry consortium. The only consumption factor that has been made available is the number of users.
As an industry consortium, our best guess has then been based on finding an industry-agreed benchmark that would enable us to perform a per-user sizing of the infrastructure, a benchmark composed of a variety of workloads representing an acceptable view of a complex organization.
That specific benchmark is the virtualization benchmark from the "Standard Performance Evaluation Corporation", which reports a total performance and a number of tiles, each tile being an instantiation of a workload mix: 500 users of a mail server, a typical Web application workload, and online transaction processing composed of application server and database server transactions.
However, none of the spec.org benchmarks report total system cost. Solving that issue requires looking at the other benchmarking organization, the "Transaction Processing Performance Council" with its TPC-E transaction performance benchmark, and locating machines that are identical in both the virtualization benchmark and the TPC-E online transaction processing benchmark, to have correlation points between both benchmarks. Using that approach we can evaluate the total system cost and match a corresponding number of users via the number of tiles that are supported by the systems under consideration.
The TPC-E reported results include a total system cost for all elements of the system including CPU, DASD and RAM. A statistical analysis of the reported costs shows that on average 38% of the total system cost is DASD, with a variance of 25%. Between 2010 and 2011 the largest reported system has a DASD space of 281 Terabytes and the smallest 10 Terabytes. The cost per Gigabyte is between $4.30 and $5.30 due to the different mixes of hard disk drives (HDD) and solid state drives (SSD), the SSD being much faster but with a higher price. These costs per Gigabyte match the numbers used in the TCO calculator available from the "Storage Networking Industry Association".
The proposed architecture has been designed to support 20,000 users per system on each of the centralized data center systems and 1,000 users on the satellite systems (as derived from the SPEC virtualization benchmark). Smaller server costs have been extrapolated, as all of the data from TPC-E is for servers that can support much more than 1,000 users. The cost per user of smaller systems is roughly $152 (variations mostly due to DASD space), and we have 4 servers in each satellite center serving a total of 1,000 users per satellite center, giving $38,245 per server ($152.98 per user × 1,000 users ÷ 4 servers).
Based on the total system cost ratios, the DASD space available on the centralized systems is roughly 912 Terabytes. The total DASD space in the smaller sites is 80 Terabytes.
The total DASD space for all systems is therefore 992 TB.
The technical approach for storage in the target architecture is multi-site virtualization as defined in the Storage Networking Industry Association article.
Additional DASD storage requirements should be implemented as extensions of storage in the cloud-in-a-box enclosures, to preserve the improved internal communication and speed available from the cloud-in-a-box setups.
The cost for additional storage is to be evaluated using the SNIA TCO calculator already mentioned above.
Since there is an expressed requirement of 2,500 Terabytes, there is a roughly 1,500 Terabyte gap (2,500 − 992 TB) to be added to the total system costs.
The following curve is the result of the full TPC-E result analysis correlated with the SPEC virtualization benchmark. The best fit is the log curve, but readers should note that the variation is much higher for lower numbers of users, the influence being mostly due to the DASD space of the systems included in the benchmarks.
When business analysts model business processes they tend to capture the sequence of tasks and events without trying to structure them into patterns. As an example, sequential workflows that usually happen between different actors are mixed with very dynamic interactions such as screen navigations with lots of back & forth or context switch actions. Event-driven reactive processing is mixed with proactive service sequence handling. The risk of such a mix is facing modeling limitations, and leading implementation teams to the wrong technology selection with an induced higher cost. The following picture describes the essential patterns and the corresponding technology.
The top layer captures the interactions between different business domains or, to take the APQC classification terminology for level 3 decomposition, "processes".
The next layer is the workflow between different actors belonging to the same business subdomain.
One layer below captures the interactions of a single human actor with the system for a particular task of the above workflow.
The bottom layer addresses the matchmaking usually implemented in ESBs, to expose the appropriate services using adaptation or combination of existing interfaces.
To differentiate Service Oriented Architecture from Event Driven Architecture, the classical Gang of Four work on Design Patterns can be used, as described in the following picture.
Characteristic Specification/Value pattern, as used by the TeleManagement Forum in its NGOSS SID information model. Fully dynamic extensibility: this pattern provides a flexible 'characteristics'-based extension mechanism, which enables the creation of new attributes and their associated values at run time, in a more 'dynamic' manner. It is essentially a variation of the name/value pair pattern, but in a far more sophisticated form. The characteristics model includes entities that fully describe the new attributes, including their format, default value and constraints, making run-time validation possible. This characteristics model is referenced from a base element (called 'EntityWithSpecification'), from which all business domain elements are derived, making them all implicitly extensible in this manner. These characteristics can then be populated at run time and shared across multiple entities, thereby dynamically enhancing their capabilities. Dynamic extension using this pattern is often the preferred method as it is quicker, less intrusive, does not require the creation of a new schema, and will not impact any of the existing interfaces.
* Pros: agile, flexible, partial validation against schema, run-time validation easily feasible, no schema changes required
* Cons: strong governance required
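A minimal sketch of the pattern in Java, with simplified hypothetical names (the SID model itself is far richer):

```java
import java.util.List;

// Metadata that fully describes a dynamically added attribute
class CharacteristicSpec {
    String name;                 // e.g. "TV_Channels"
    String valueType;            // e.g. "integer"
    String defaultValue;
    List<String> allowedValues;  // constraints enabling run-time validation

    boolean isValid(String value) {
        return allowedValues == null || allowedValues.contains(value);
    }
}

// A value populated at run time, linked to the spec that describes it
class CharacteristicValue {
    CharacteristicSpec spec;
    String value;
}

// Base element from which business domain entities derive, making every
// entity implicitly extensible without schema changes
class EntityWithSpecification {
    List<CharacteristicValue> characteristics;
}
```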
XBRL: eXtensible Business Reporting Language, www.xbrl.org. XML enables the tagging of data with identifying information, according to a classification system (or taxonomy); a taxonomy is essentially a collection of concepts, similar to a dictionary. XBRL tags associate the concepts in the taxonomy with a piece of data, in order to facilitate the interpretation of the data. The Australian government uses this to enable business reporting for tax and other purposes, and many other governmental or non-governmental institutions around the world use it similarly.
SQL with XML extensions (SQL/XML) is a new section of the SQL standard covering a whole raft of XML-related extensions to SQL. It is in iterative development and was the subject of Part 14 of SQL 2003 [ISO International Standard ISO/IEC 9075-14:2003], which has since been withdrawn as work continues on SQL/XML/XQuery integration. SQL/XML was originally developed by the "SQLX Informal Group of Companies," with leadership from IBM® and Oracle, and then in committee at the American National Standards Institute (ANSI), which is the standards organization in which SQL is maintained. The scope of SQL/XML encompasses (quoted from Andrew Eisenberg and Jim Melton):
Specifications for the representation of SQL data (specifically, rows and tables of rows, as well as views and query results) in XML form, and vice versa.
Specifications associated with mapping SQL schemata to and from XML schemata. This may include performing the mapping between existing arbitrary XML and SQL schemata.
Specifications for the representation of SQL Schemas in XML. Specifications for the representation of SQL actions (insert, update, delete).
Specifications for messaging for XML when used with SQL. SQL/XML is developed to be complementary to XQuery.
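As an illustration, here is a minimal sketch calling the SQL/XML publishing functions through JDBC; the connection URL, table and column names are hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SqlXmlDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection(
                     "jdbc:db2://host:50000/sample", "user", "password");
             Statement s = c.createStatement();
             // XMLELEMENT, XMLATTRIBUTES and XMLSERIALIZE are publishing
             // functions defined by the SQL/XML standard
             ResultSet rs = s.executeQuery(
                     "SELECT XMLSERIALIZE(CONTENT "
                   + "XMLELEMENT(NAME \"customer\", "
                   + "XMLATTRIBUTES(c.id AS \"id\"), c.name) "
                   + "AS CLOB) FROM customers c")) {
            while (rs.next()) {
                // e.g. <customer id="42">Jane Doe</customer>
                System.out.println(rs.getString(1));
            }
        }
    }
}
```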
Variable services, or service polymorphism: in the context of object-oriented programming, polymorphism is the ability of one type, A, to appear as and be used like another type, B.
When applied to service invocation, we are talking about the ability to invoke one single service facade F, but actually invoke other services A, B, C, etc., without the caller being aware.
So for an "open account" service we might have:
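In plain object-oriented code this could simply be an overloaded operation per customer type (hypothetical names):

```java
class Account { }
class PersonAccountRequest { }
class BusinessAccountRequest { }

interface AccountService {
    // Same operation name, different payloads: natural in OO,
    // but exactly what WS-I forbids in a wsdl:portType (see below)
    Account openAccount(PersonAccountRequest request);
    Account openAccount(BusinessAccountRequest request);
}
```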
This may look simple to object-oriented practitioners, but the Web Services Interoperability (WS-I) Basic Profile states that "Operation name overloading in a wsdl:portType is disallowed by the Profile". The consequence is that we have to find service variability techniques that use Enterprise Service Bus mediations or other dynamic endpoint resolutions to solve the problem.
This enables the consumer to interact consistently with a generic service, and abstracts the many possible implementations, or specific services, which may ultimately handle the request. It is a key technique in maintaining flexible business processes. It implies that the business data is made flexible enough to hold variations without changing message structures.
In my book I have a full chapter on service variability, and my colleague Scott Glen has written detailed articles on IBM developerWorks addressing ways to implement service polymorphism.
There are many processes that are information or business-object centric. In such cases the entities have their own life cycle and interact with the external world using services. Process analysts must however be careful to ensure that they are not mixing business ownership, and must correctly separate the facets that are owned by different owners in an organization. As an example, the Telco standard information model clearly differentiates the "Customer Order", owned by the customer-facing organization, from the "Service Order" and "Resource Orders" owned by other organizations in the enterprise.
Then the approach analyzes the state charts for each entity.
This approach can also be taken when different organizations have to share a common entity such as a customs manifest declaration. The public entity and its public life cycle can be passed as an SCXML (State Chart XML, a W3C standard) document. A reference implementation of SCXML is available from the Apache Commons SCXML project.
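A minimal sketch of interpreting such a document with Apache Commons SCXML; the state chart file name is hypothetical:

```java
import org.apache.commons.scxml.SCXMLExecutor;
import org.apache.commons.scxml.env.SimpleDispatcher;
import org.apache.commons.scxml.env.SimpleErrorHandler;
import org.apache.commons.scxml.env.SimpleErrorReporter;
import org.apache.commons.scxml.env.jexl.JexlEvaluator;
import org.apache.commons.scxml.io.SCXMLParser;
import org.apache.commons.scxml.model.SCXML;

public class ManifestLifecycle {
    public static void main(String[] args) throws Exception {
        // Parse the shared public life cycle of the customs manifest
        SCXML scxml = SCXMLParser.parse(
                ManifestLifecycle.class.getResource("manifest-lifecycle.scxml"),
                new SimpleErrorHandler());

        // Execute it: each organization interprets the same public state chart
        SCXMLExecutor exec = new SCXMLExecutor(
                new JexlEvaluator(), new SimpleDispatcher(), new SimpleErrorReporter());
        exec.setStateMachine(scxml);
        exec.go(); // enter the initial state
    }
}
```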
The following picture describes such a document being exchanged between different organizations and agencies, each with its own IT infrastructure. The only requirement is that each of those organizations is able to interpret the SCXML standard to work out how the entity should be handled. In addition, the document can carry the history of events, which enables each stakeholder to reconstruct a monitoring view.
Enterprises embarking on the business process management journey must ensure that they keep the gains that business processes provide by controlling the cost of their life cycle. Particularly, a process quality approach is essential to enable low-cost changes. This article should interest business architects, and IT managers and architects as it discusses an approach to controlling the development and maintenance efforts for business processes by limiting their complexity.
Well, here it comes again. Today I was confronted with a project where the teams did not understand the need for variability and replaced the Characteristic Value variability pattern from the SID Telco model with static tags for each attribute.
In the end they made their interface signatures rigid and will have to introduce new services for each different type of order or product. This is typically what happened with client-server, where changes on the server side propagated to the client side. We absolutely must avoid using that client-server-on-Web-Services approach for SOA, as it ends up propagating and amplifying the cost of changes to service consumers, whether they are processes or other applications.
SOA needs to be about business or semantic loose coupling where the interfaces can absorb variations without having to change all of the consumers that use that specific interface.
The characteristic value pattern from the SID is as follows: values point to a specification that defines the type, name and constraints. The information is structured into stable parts and variable parts, with the variability of the information model enabled by this pattern.
An example of such a variable message is provided below. Characteristic values can be added to describe very different aspects and do not change the structure of the messages at all, even when adding new attributes to a product, a service, a resource or a customer.
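Reusing the simplified classes sketched earlier, a hypothetical variable message adds a brand new product attribute without touching the message schema:

```java
public class VariableMessageDemo {
    public static void main(String[] args) {
        // Metadata for a new attribute, created at run time
        CharacteristicSpec tvChannels = new CharacteristicSpec();
        tvChannels.name = "TV_Channels";
        tvChannels.valueType = "integer";

        // The value carried in the message, linked to its specification
        CharacteristicValue channels = new CharacteristicValue();
        channels.spec = tvChannels;
        channels.value = "250";

        // The product entity structure itself is unchanged
        EntityWithSpecification product = new EntityWithSpecification();
        product.characteristics = java.util.Arrays.asList(channels);
    }
}
```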
It has been a long week since my last entry ;-) , publishing a book uses a lot of bandwidth.
Before going into variability, I want to explain the issue leading to variability.
We have often forgotten that services expose information that is carried by processes. Thus if the information evolves in its structure, the change will propagate to services and processes. Let me give you an example that I experienced in a project: a product catalog may contain attributes for its products. Whether these new attributes were represented or not as new columns in a database, they ended up being new tags in a schema, such as <TV_Channels> for a video-on-demand product. Given that the schema changed, the services accessing the product catalog changed signature, and the business processes using these services had to be regenerated and tested. Two product attribute changes per week were occurring, and the system test for the affected processes was 2 weeks on average per process. This ended up being a catch-22.
That being said, I now need to answer the following questions: how can I model the information to prevent structural changes, and how can I evaluate the testing effort of a business process?
On the first question, I wrote a full chapter on various techniques in my book, including the xsd:any approach I already mentioned in this blog, but the one we used in this specific case is the CharacteristicValue and CharacteristicSpec pattern from the TeleManagement Forum SID model for Telco operators. This model defines a characteristic specification that describes the attribute with metadata, including the allowed values, the type and the validity dates. The characteristic value itself has a link to the specification so that a common specification can be used for many values.
On the second question, experience shows that the creation, change and test effort of processes is roughly proportional to the number of arcs (connections) in a given business process. Even if you only change a small aspect you will need to test all internal variations. It is quite common to have two to four person-hours of effort per arc in the process.
The following picture shows that with only 3 tasks and 5 nodes in a process you can have 10 arcs, so you may expect 5 days of test (10 arcs × 4 person-hours = 40 hours).
Another important aspect of variability is rules and policies. A consistent enterprise approach to rules and policies requires the creation of a common business vocabulary, whose content must be aligned with the concepts in the information model.
With OMG's SBVR there is now a standard for the structure of rules and policies, but not for their content, which will always be specific to an industry and/or an enterprise. The vocabulary will describe the core elements of the information model, while the rules content model will define the acceptable value ranges when they are required by rules or policies.
If we now integrate this information variability with SOA and BPM, but also with rules and policies, we can have business processes whose behavior is driven by the content of information and is much less sensitive to changes. Using a business vocabulary for the rules, with human-language-like rule or policy descriptions, enables business users to manipulate the rules and shifts the changes from IT to business. In a further blog entry I plan to give real examples of such policies.
I use the Business Process Modeling Notation (BPMN) categories, as defined in the BPMN 1.1 standard, to categorize and modularize business processes.
There are three basic categories of sub-models within an end-to-end BPMN model:
1. Collaboration processes are the first category: they describe exchanges between two independent business entities. These processes just describe the exchanges between the different processes and ensure correct mutual behavior. To take a railway analogy, this is the network, where the collaboration process ensures synchronization by managing the signaling for trains to avoid collisions.
2. Abstract (public) processes are the second category: they provide the end-to-end view from a participant's point of view, like the view of a train engineer who drives the whole train but leaves each wagon's responsibility to the respective conductors. He only cares about the overall length and weight of the process (train) and the behavior of the links.
These models are used to create the end-to-end monitoring model by capturing events that surface from each of the smaller modules (the wagons). There is often confusion between this monitoring model, which higher management of the enterprise requires, and the process automation provided by the next category of processes, which are smaller modules. The monitoring model can take actions based on indicators it controls.
3. Private (internal) business processes are the last category: they have a single business owner and usually focus on a main core entity (from the information model). These are the only processes that should be automated with BPEL. Each wagon is a separate process, as it usually has a different business owner. The business owners define the contracts between their processes as business services that act like the links between wagons in trains. Any automated process that has multiple owners ends up being unmanageable because of the conflicts that always occur in that case. The technology for each module (represented by a wagon or the locomotive) can be different: the CRM can be Siebel, the billing SAP, and the supply chain WebSphere Process Server. The monitoring model above does not require the technology to be the same, provided that you expose appropriate events for the monitoring model.
Next week I will cover information variability and its impact on services and processes.
Achieve Breakthrough Business Flexibility and Agility by Integrating SOA and BPM
Practical from start to finish, Dynamic SOA and BPM squarely addresses two of the most critical challenges today's IT executives, architects, and analysts face: implementing BPM as effectively as possible and driving more value from their SOA investments. By Marc Fiammante.
Pre-order ISBN 9780137018918, July 2009, at Amazon.com, BarnesandNoble.com or Borders.com.
On the SOA front I had a very constructive discussion with Jérôme Hannebelle from France Telecom/Orange (he wishes to be quoted) on a variability approach that differentiates the provider WSDLs from the consumer WSDLs. His interesting position is that, to avoid the impact of version changes, the providers should expose more generic WSDLs with xsd:any for all the service message parameter branches that are subject to release variations; however, consumers of the same service should be provided with validation WSDLs that have explicit definitions of the parameters for a given release. The provider then needs, at run time, to identify the service request version and apply the appropriate routing and handling behind the service facade. This approach is a variation of the patterns I describe in my book, where I already state that the ultimate provider's granularity and interfaces may be different from the consumer view. The implication is that the registry must manage the dependency tree so that there is an explicit correlation between the various consumer validation WSDLs and the provider WSDL.
On the fun side, Las Vegas is an interesting location: I flew a total of 15 minutes in the indoor skydiving tunnel, a safe way of experiencing the feel of skydiving.
What is the right granularity of services? Well, I like to reformulate this question as: what is the manageable granularity of services? How many service methods or interfaces can we manage in an enterprise? If we take a decomposition like the APQC Process Classification Framework at the task level, which is the 4th level of decomposition, we get around 1,500 tasks for the cross-industry elements. Each task would have several interfaces or methods. Looking at other decompositions like IBM's Component Business Modeling, which is a two-level decomposition, or the TeleManagement Forum eTOM Business Process Framework, we get an average of 7 to 10 elements at each level of decomposition. This would lead to a potential of between 10 thousand and 100 thousand interfaces at level 5 of decomposition, which I think everyone would agree is not manageable.
The implication of that simple math is that a manageable granularity is between levels 3 and 4 of decomposition, and if you end up with finer interfaces, you need to consolidate them into more variable-payload interfaces at a higher level. It also implies that a decomposition exists in the enterprise; otherwise there is no way to find what the decomposition level of a service is.
As I mention in my book, we have too often considered SOA as client/server on Web Services. We really need to think of variability as a way to enable reuse and avoid the propagation of provider changes to consumers and vice versa. In complement to the approach for variability I describe in my book, covering information variability, service variability and process variability, here are two good articles on variability that an architect I work with in a large European bank just sent to me, as we are working on these topics together for that bank.
As an IT architect currently working in the Enterprise Architecture, Business Process Management and Service Oriented Architecture domains, I sometimes feel like both a scout and an archeologist.
I am from the generation that used punch cards, punched tapes, manual entry of machine code with bit-entry switches, but also FORTRAN, APL, PL/1, mainframe databases, 390 assembler, SQL, 4th generation languages, Basic, x86 assembler in device drivers for wireless LANs and network bridges coded in object-oriented C++, real-time operating systems, Java, Perl, Rexx... A long list; it sounds like being an old-timer, but in fact all of this happened in less than 30 years. Imagine trying to get some business logic running in a few kilobytes today.
Everything we wondered about in the early days of programming has become natural and easy. However, it is essential to avoid the pitfalls, and as I say, good architects have scars but need to learn from the difficult cases.
Being in a team doing truly advanced projects in production with customers, we faced the necessity of delivering value with better and faster implementations, more manageable and flexible than the previous ones.
I just completed writing this book, where I capture these experiences and lessons learned. Its publication is due this summer from IBM Press and Pearson publishing. I start with a focused enterprise architecture approach, then look at methods to deliver variability, first addressing variability in information models, then service models and process models. I follow with what I call the enterprise expansion joint, covering enterprise service buses and additional practices for performing integration in a flexible way. I then complete the book with the tooling for the life cycle, and management and monitoring.
This was an interesting effort, and I developed all the models shown with the appropriate tools.
The experience of writing alone was also interesting in terms of keeping up motivation. Communication is essential, and often a good picture is worth a thousand words; I created quite a number of these pictures for the book, including 3D pictures created with Google SketchUp. Here is one example, showing the mnemonic I created to remember the enterprise architecture layers: Business, Application and Services, Information, and Infrastructure.
To create such pictures you just need to create the planes with your usual graphing tool and export them as PNG files that you can then import into Google SketchUp and position, creating views and scenes of the same composition.
In addition to such pictures, I also added model examples and code examples to the text, all inspired by real projects but of course all written anew by me for this book.
I really hope that my readers will get immediate value for their projects from this small brick I am adding to the IT industry.