One issue I see customers get confused about is the purpose of separate but equivalent software environments.
An enterprise should divide its IT servers and software into multiple separate and fairly independent environments. The number and purpose of these can vary somewhat, but a typical separation is these four environments:
Dev -- The development environment used to implement and compile software. Typically used for unit testing as well.
Test -- Used to perform functional testing and otherwise make sure that the software from development meets requirements. Scalability testing can be performed here if the hardware is robust and representative of Prod; if it's a shell, scalability results may well be misleading.
Stage -- A representative mirror of Prod, a place to test installation and migration procedures and perhaps the best place for scalability testing. Can also be used as an alternative/backup for Prod. New applications can be deployed by installing them in (part of) Stage, testing that, then swapping it for (part of) Prod.
Prod -- The production environment used to execute applications so that users (internal and external) may use them.
The users in the enterprise really only care about Prod. Dev, Test, and Stage are only used by IT. "The Ideal WebSphere Development Environment" is an old but good article which explains environments and how to use them in greater detail.
These environments are actually four roles that an environment can play. An environment in the Dev role needs development tools and test data, but probably doesn't need monitoring. The environment in the Prod role is the only one that should store confidential customer data and should have monitoring to verify that it's running properly.
The role of an environment is independent of its quality of service (QoS) requirements, a topic I'll discuss in my next posting.
As you can probably now see, my blog is now on My developerWorks. Accordingly, the URL for my blog has changed from this old URL to this new URL. Please update your bookmarks and RSS/Atom subscriptions.
So what is My developerWorks? As described in IBM Helps Software Developers Build Skills and Accelerate Innovation with Social Networking and Collaboration Technology, MydW combines the developerWorks technical content you've come to know and love with social networking that connects people with similar interests. When you check out a piece of content such as an article, not only can you see ratings and links to related content provided by other readers, you can also connect to other readers who are interested in the same topic. This makes it possible for people interested in a particular topic to easily connect with each other and collaborate as they desire.
An ACID transaction is one that guarantees reliable results, even when it cannot complete successfully. The ACID properties (from the docs for CICS, the mother of all transaction managers) are:
Atomic - Enables changes to be grouped and performed as if they're a single operation--all or nothing.
Consistent - A transaction begins and ends with valid data, even if the data is temporarily invalid during the transaction. This means, for example, that unique identifiers really are unique. For a relational database, it means that referential integrity is maintained.
Isolated - Each transaction executes as if it's the only one; it is independent of any other concurrent transactions. A transaction's intermediate state is not visible to operations outside of the transaction.
Durable - Changes made by the transaction are persisted and cannot be lost, even if the resource subsequently fails.
A transaction can complete in one of two ways:
commit - Changes made during the transaction are made permanent and cannot be undone.
rollback - Changes made during the transaction are undone or discarded and the resource is returned to the state it held when the transaction began.
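The commit/rollback behavior above can be demonstrated with any transactional resource manager. Here's a minimal sketch using Python's built-in sqlite3 module (the account names and amounts are invented for illustration): two updates are grouped atomically, a simulated failure occurs before commit, and rollback restores the starting state.

```python
import sqlite3

# In-memory database stands in for a real resource manager.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('checking', 100), ('savings', 50)")
conn.commit()

# A transfer must be atomic: both updates succeed, or neither does.
try:
    conn.execute("UPDATE account SET balance = balance - 80 WHERE id = 'checking'")
    conn.execute("UPDATE account SET balance = balance + 80 WHERE id = 'savings'")
    raise RuntimeError("crash before commit")  # simulate a failure
    # conn.commit() here would have made both changes permanent
except RuntimeError:
    conn.rollback()  # discard both updates, restoring the prior valid state

balances = dict(conn.execute("SELECT id, balance FROM account"))
print(balances)  # {'checking': 100, 'savings': 50} -- unchanged
```

Note that the database is never left in the half-done state where the money has left checking but not arrived in savings; an outside observer sees either the state before the transaction or the state after commit, never the middle.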
Most programmers initially learned their craft developing programs that run in a single process (and therefore on a single machine). This may have changed somewhat in the past decade, but we still start out learning to write simple programs and even today we still develop (though not deploy) complex systems in simplified environments.
The jump from single-process programs to distributed computing is difficult, as Deutsch observed. It requires a significant change in mindset. We get used to the idea that every line of code runs very fast and is immediately followed by execution of the next line of code. We know each line of code takes a little time to run and programs can fail, but generally we get used to programs being reliable stacks of code that can perform meaningful units of work very quickly. But once the program architecture becomes distributed, with parts running in separate processes invoking each other remotely across network connections, these assumptions about the simplicity of program execution get exposed.
I think fallacy #7, Transport cost is zero, nicely summarizes this dilemma. We're used to computational overhead being close to zero; when distributed computing changes that, it messes up a lot of our assumptions. This fallacy overlaps with #1: Reliability, #2: Latency, and #3: Bandwidth. They are why programs designed to run in a single process often perform poorly when arbitrarily split across tiers.
Patterns have evolved to address this issue of transport cost. A Session Facade is used to make remote EJB clients less chatty, since every invocation across the network adds overhead. It's a major theme underlying the Enterprise Integration Patterns, especially the first few chapters of the book. Network latency, along with marshalling/demarshalling of data, are significant issues for RPC/RMI programming. Those, along with asynchronous service invocation, are significant issues for messaging.
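The round-trip arithmetic behind the Session Facade is easy to simulate. In this sketch, the service stub, its methods, and the 5 ms overhead figure are all invented for illustration; the point is only that each remote invocation pays a fixed cost, so one coarse-grained call beats three fine-grained ones.

```python
REMOTE_CALL_OVERHEAD_MS = 5  # assumed fixed per-invocation network cost

class CustomerServiceStub:
    """Simulated remote service; every method call counts as one round trip."""
    def __init__(self):
        self.round_trips = 0
        self._data = {"name": "Alice", "city": "Armonk", "balance": 42}

    def _remote(self, field):
        self.round_trips += 1
        return self._data[field]

    # Chatty, fine-grained interface: three calls, three round trips.
    def get_name(self):    return self._remote("name")
    def get_city(self):    return self._remote("city")
    def get_balance(self): return self._remote("balance")

    # Session Facade style: one coarse-grained call returns everything.
    def get_customer_summary(self):
        self.round_trips += 1
        return dict(self._data)

chatty = CustomerServiceStub()
chatty.get_name(); chatty.get_city(); chatty.get_balance()

facade = CustomerServiceStub()
facade.get_customer_summary()

print(chatty.round_trips * REMOTE_CALL_OVERHEAD_MS)  # 15 (ms of overhead)
print(facade.round_trips * REMOTE_CALL_OVERHEAD_MS)  # 5 (ms of overhead)
```

In-process, the difference between three calls and one is negligible; across a network, the chatty design pays the transport cost three times over, which is exactly the trap fallacy #7 describes.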
Systems were easier to design and develop when they were assumed to run in a single process. Distributed computing makes that a lot more complicated. But what is a problem is also an opportunity.
IBM Support Assistant comes in two editions, Workbench and Lite. Which should you use?

I've talked about IBM Support Assistant (ISA) (product page), IBM Software's tool for gathering diagnostic data on your installation of our products and sending that data to IBM Support for analysis, usually as details for a problem management report (PMR). (See "Submitting diagnostic information to IBM Technical Support for problem determination.") ISA has a pluggable architecture of collectors. Each collector is for a different IBM product; often a product has a suite of collectors, each designed to diagnose a different part of the product. You can also design your own collectors.

The question remains: Install ISA Workbench or Lite? The download page has a quick comparison. Both have the same plug-in collector architecture. Often that's all you need to gather data for IBM and send it in. Frequently you don't need the extra bells and whistles in Workbench. When in doubt, try Lite first; it's probably all you need. Lite is smaller to download, easier to install, and does the basics Workbench is usually used for anyway.
Tendai, you make some good points here, but I'd like to play devil's advocate for a minute.
How is an SOA Billing Service which is provisioned in a cloud any different from the same SOA Billing Service that's deployed in some sort of non-cloud? Either way, isn't it just an SOA service which performs billing functionality? If the cloud makes it something more, how so?
I believe the solution your post suggests is actually two distinct parts:
SOA -- Each department deployed the same billing application, each requiring substantial middleware and hardware. Streamline this by having the departments share a billing service which can be deployed once on a single (clustered) set of middleware and hardware.
cloud -- An SOA reference architecture which SOA services, such as this billing service, can easily be deployed to. The reference architecture should be a grid which dynamically adjusts capacity for the billing service as needed. (I notice that my four-year-old link to grid computing still works but connects to a page now titled "IBM Cloud Computing.")
So, a doubter might ask: How much is your example about cloud computing and how much is it about SOA? I'd like to see a blog posting which addresses that distinction. Thanks.
There's a new release of the Eclipse platform available.
The Eclipse Foundation has released a new version of the Eclipse platform, Eclipse Galileo. Technically, they call it a "release train" built on the platform, which means it doesn't change the platform but instead adds a whole bunch of stuff they think people are going to want. The stuff doesn't necessarily work together, but at least it's all grouped into one downloadable thing.
These "Re:" posts are a new feature of the My developerWorks blogs. When you're a blogger on MydW and you comment on someone else's MydW blog, you also have an option to cross-post your comment as a posting on your own blog. So when I commented on Doug Tidwell's post, Cloud - SOA = Zero, my comment also appeared on my blog as Re: Cloud - SOA = Zero. Also, where my comment appears on Doug's blog posting, the comment has a Trackback link which connects to the posting of this comment on my blog. And the posting of the comment on my blog has a header with a link to the original post on his blog. It's all interconnected.
So if you're interested in what I have to say about stuff--and presumably you are since you're reading my blog--you can also easily find comments I've made on other people's blogs and easily jump to those original blog postings I commented on. And these links can daisy chain, with a comment on a comment on a posting, showing a conversation between two authors or several.
This is going to be a good way for me to show you postings on other blogs that I think you'll be interested in. For example, I commented on postings about SOA for Dummies and IBM Support Assistant for two reasons:
To let readers of those blogs know that I've posted info about those topics in the past on my blog
To let you, the readers of my blog, know that another blogger has new information on a topic I've discussed in the past
It's easy for me and hopefully helpful to you, the readers. It's a win-win for both blogs, and a win for you the reader as well. (Or as Michael Scott would say, a win/win/win solution!)
There are three different achievement levels--contributing, professional, and master--which show increasing levels of accomplishment.
If you've published on dW and would like to receive this recognition, About the program explains how to register and how to gain points. This also applies to anyone who would like to become an author and start tracking points for recognition. (Sort of like a frequent flyer program for authors.) When you register, you'll get a welcome package with a tracking tool; this may take a couple of weeks, especially if you have published several articles, because the tool will be prepopulated with that list of contributions. Once you have the tool, you can use it to track your progress and submit it when you achieve a new level.
Also, even if you're "just" a reader of articles, check out the list of recognized authors to see who's been contributing a lot, and then search to find their articles and check them out.
The example shows CBM and SOMA being used to model a Rent-a-car company. I especially like the way it shows a broad capability like Rent Vehicle decomposed into atomic services which can be implemented in IT and composed again into a business process. This is a SOMA technique called Domain Decomposition. It also shows how Rentals and Reservations is a CBM component in the Rentals Management business competency. This component has services like Rent Vehicle--a coarse-grained service which, when decomposed, requires finer-grained services like Check Vehicle Availability and Get Customer Information which are offered by other components, Fleet Management in the Fleet Logistics competency and Customer Service in the Marketing competency. This shows how useful composite services often reach across competencies to make a company with many different lines of business operate for its customers as a coherent whole. And notice that all of these competencies are in the Execute accountability level, because they're parts of actually making the reservation (not planning for what one would be like, but actually making one). This all makes more sense if you look at the pictures, which is the point of all the graphics in these techniques, so go look at the PDF.
So the better the business/IT alignment in an SOA, the more IT becomes an enabler of business flexibility instead of an impediment. SOMA helps figure out the services, and CBM as a front-end to SOMA helps it focus all the more on services which model the business. This is a winning combination.
When you install the image, you use one of several pre-configured profiles to make the server run as a stand-alone server, a cluster member, a deployment manager, etc. WAS runs in the image on, I believe, SUSE Linux; in the end, the OS WAS runs on doesn't matter much, as long as the VM engine supports it.
Other editions of WAS are base (aka stand alone), network deployment (ND), express, etc. Versions of WAS are v6.1, v7.0, etc. I believe both WAS 6.1 and WAS 7.0 are available as hypervisor editions.
Sounds like you deploy a Java EE application EAR to a WAS Hypervisor server the same way as to a standard WAS server. This means you'll still need to resolve references to external resources (databases, etc.), which probably requires configuring the image to enable access.
See WebSphere charges into the clouds for a slicker marketing-sounding description. Note that while WAS HV can be installed via WebSphere CloudBurst, WAS HV can also be bought and installed without CloudBurst.
As of WAS 7.0, you can now connect an MDB to a remote SIBus.
Great, so what does that mean? The Service Integration Bus is the feature, introduced in WAS 6.0, that serves as a built-in JMS provider. Message-driven beans are EJBs for receiving JMS messages. For an application's MDB to receive messages from a queue in a particular SIBus bus instance, the application must be running in a WAS application server or cluster that is a member of the bus, basically meaning that the bus has one of its messaging engines running in the server/cluster. Thus when an MDB reads messages from a queue, the bus for that queue is essentially local to the application.
An MDB is configured by a JMS activation specification. The JMS activation specification in WAS 7.0 adds a new property, Provider endpoints, which (as the docs explain somewhat subtly) "allow the applications to consume messages from a remote cell." Technically, it will work with any remote bus, but the main reason to connect to a bus remotely is because it's in a different cell; if it were in the same cell, you could just add the application's server/cluster as a bus member of the bus and therefore make the bus local to the app.
With this property, when the MDB pool is activated, the beans will first try to connect to the bus specified in the Bus name property (if any). When that fails, it will then try to connect to the buses specified in the Provider endpoints. These Provider endpoint buses need not be local ones--ones where the server/cluster is a bus member. Because the buses are specified by the host name and port number of their bootstrap server (essentially, the point for connecting to a bus remotely via TCP/IP), the bus can be remote--the server/cluster does not have to be a bus member. A remote bus is sometimes also referred to as a foreign bus.
Normally to connect to a foreign bus, you need to use a service integration bus link to connect one of your server's local buses to the foreign bus. This is still the only way to send messages onto a destination on a foreign bus. But to receive messages from a destination on a foreign bus, you can now configure an MDB to connect to the bus remotely, bypassing the server's local buses (if any).
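As a sketch only, the wsadmin (Jython) fragment below shows roughly how an activation specification with provider endpoints might be created in WAS 7.0. The node, bus, destination, and endpoint values are all invented for illustration, and the exact parameter names should be checked against the WAS 7.0 documentation before use.

```python
# Hypothetical wsadmin (Jython) sketch -- all names and the
# host:port:chain endpoint value below are illustrative assumptions.
node = AdminConfig.getid('/Node:myNode/')

AdminTask.createSIBJMSActivationSpec(node, [
    '-name', 'RemoteBusActSpec',
    '-jndiName', 'jms/RemoteBusActSpec',
    '-destinationJndiName', 'jms/OrdersQueue',
    '-busName', 'RemoteBus',
    # host:port:transport-chain of the remote bus's bootstrap server,
    # which is what lets the MDB connect without local bus membership
    '-providerEndPoints', 'remotehost.example.com:7276:BootstrapBasicMessaging'
])

AdminConfig.save()
```

The key line is the provider endpoints value: because it identifies the remote bus's bootstrap server by host and port, the MDB can consume messages from that bus even though its own server/cluster is not a bus member.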
A phrase we might all think about more is "It just works!"
I'm reminded of this phrase by the interview "Why Software Sucks" on IT Conversations (a great little gem of a site). The interviewee is David Platt, who apparently worked for Microsoft and wrote a book, Why Software Sucks...and What You Can Do About It. The interview is mostly a long whine that makes whatever point it makes in the first five minutes, and takes 49 minutes to finally get around to the "what you can do about it" part, but it does have some interesting tidbits.
One tidbit is David's description of how the UPS web site works, especially when he was in Sweden. With a lot of major web sites, if you are located outside of the US and enter the URL "amazon.com" or "yahoo.com," you automatically get redirected to the sister site for that country (like www.amazon.co.uk). With UPS's site, the home page makes you choose the country you're in; there's nothing else you can do on the site--not track a package, not log into your account, not view the annual report--without first selecting your country. And apparently if you're in Sweden, this takes 30 mouse clicks and key presses (David counted). This, even though 90% of UPS's packages are shipped in the US. David describes this as a barrier to using UPS's web site (and I agree). The site could at least detect what country your browser and its Internet connection are in, and default to that country--that is, assuming that the site even really needs to know what country you're in.
By comparison, you can enter a UPS tracking number in the Google search field; Google recognizes the string as a UPS tracking number, gets the package status from UPS, and displays it. Google doesn't need to know what country you're in (or figures it out without bothering you). Google just works.
In fact, a general theme for Google is that it has the world's simplest search syntax, which is no syntax. You type in a search string, it tries to figure out what you mean, and generally does a pretty good job. To get an idea of what all Google can do with one simple search box, check out Google Web Search Features. For example, if you misspell a search term, Google will often suggest the correct term. How does it do this? Some amazing AI cognitive learning computer? Not really. It watches searches that don't return much, and then the next search from the same browser for a term spelled slightly differently, which returns a rich set of matches, where the user follows a match and doesn't search again for a while. When Google sees this several times, it figures out that the second term is the correct spelling for the first term and subsequently suggests it as a correction. It's not figuring out what the current user meant, it's just watching what past users did and how they corrected mistakes and assuming that the current user may need the same correction. ("Database Requirements in the Age of Scalable Services" by Adam Bosworth) I use this Google feature as a spelling checker; type in a misspelled word, the result screen suggests the proper spelling. It's easier than firing up Word and opening a blank document, has no pop-up ads like the dictionary web sites, and is certainly much easier than opening the dictionary book on my bookshelf. Google just works.
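The reformulation heuristic described above is simple enough to sketch. This is a hypothetical toy version (the function names and sample queries are mine, not Google's): record each pair of a poorly-matching query and the better-matching query the same user tried next, and suggest the most common successful follow-up.

```python
from collections import Counter, defaultdict

# failed query -> Counter of the successful follow-up queries observed
reformulations = defaultdict(Counter)

def observe(failed_query, successful_followup):
    """Record that a user retyped failed_query as successful_followup
    and then stopped searching (i.e., the follow-up worked)."""
    reformulations[failed_query][successful_followup] += 1

def suggest(query):
    """Return the most frequently observed correction, if any."""
    if reformulations[query]:
        return reformulations[query].most_common(1)[0][0]
    return None

# Several users independently correct the same misspelling...
observe("recieve", "receive")
observe("recieve", "receive")
observe("recieve", "recieved")

print(suggest("recieve"))  # receive
```

Notice there's no dictionary and no understanding of spelling anywhere in this sketch; the "intelligence" is entirely in aggregating what past users did.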
So how does Google know that a search string is a UPS tracking number? I think it doesn't. I think it runs many queries concurrently, judges which are the best matches, and merges the result lists with the most promising matches first. So if a string looks like it might be a tracking number, Google probably runs it on all of the major package delivery sites; most don't match, UPS does, so then Google infers that this is a UPS package tracking number. If a package ever had "paris hilton" or "amanda beard" (popular searches, according to Google Zeitgeist) for a tracking number, Google would probably be stumped.
So the point is that Google just works, and UPS doesn't. Google is a better interface to UPS than UPS's own web site is. The UPS web designers could have designed a better interface, but chose not to. But they should have.
Another example is a dishwasher where you put in soap once a month. Each time you run it, it uses the proper amount of soap. It just works.
In Microsoft Word, two examples are the red squiggles under misspelled words and the auto-correct so that when you type "hte" it inserts "the." These are good features; they just work. They're also several years old and point to the lack of useful innovation in the latest versions of Word.
On the flip side, a lot of things don't just work, no matter how easy it would be to make them. Why does Quicken 2007 display ads telling me I should upgrade to Quicken 2007?!
Lest you assume I think that IBM is somehow faultless in this regard, let me clarify that I (and many inside IBM) believe that our products are too hard to use. They're very sophisticated, but they make simple stuff hard. We want to make our products easier to use; it's an effort we call "consumability." Our products too often don't just work; we're trying to make that the rule rather than the exception.
One galling example for me is IBM's internal employee directory. The amazingly good news is that we have a single list of all 300,000+ employees, who their management is, how to contact them, and some fuzzy description of what they do as employees of IBM. This is an amazing feat of database federation. The bad news is that search is a pain; it needs Google. As a simple example, with the main search field, you can chose to specify that you're looking for a name, Internet address, Notes address, phone number, etc. I frequently put in someone's Internet or Notes e-mail address, but get no matches because the default search type is name. I have to change the search to the proper type, run the search again, and this time get the match. Why not look at the string and infer what the type seems to be? @ means Internet address, /IBM means Notes address, digits and dashes mean phone number (even internationally), etc.? Why not run several searches concurrently, then merge the one that worked with the others that matched nothing? That would mean more work for the computer, but less work for me, and that's a trade-off I'm willing to make!
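The inference I'm describing is cheap to implement. Here's a hypothetical sketch (the field names and patterns are my own guesses, not the directory's actual logic) that looks at the shape of the query string and picks a search type, instead of making the user pick one:

```python
import re

def infer_search_type(query):
    """Guess which directory field a query is for from its shape."""
    if "@" in query:
        return "internet-address"   # e-mail addresses contain @
    if "/IBM" in query:
        return "notes-address"      # Notes addresses end in /IBM
    if re.fullmatch(r"[\d\s()+.-]+", query):
        return "phone-number"       # only digits, spaces, punctuation
    return "name"                   # default: treat it as a person's name

print(infer_search_type("jdoe@us.ibm.com"))  # internet-address
print(infer_search_type("John Doe/IBM"))     # notes-address
print(infer_search_type("+1 555-0100"))      # phone-number
print(infer_search_type("John Doe"))         # name
```

Even simpler than running several searches concurrently and merging the results, a few lines of pattern matching like this would remove the retry-with-the-right-type dance entirely for the common cases.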
Getting back to David's interview: He believes that we programmers focus too much on what programming language we use, whether the code is object-oriented, whether the architecture is service oriented, etc. The users don't care, they just want software that works. I think David's blame is a bit misdirected--programmers ought to know what languages make them productive for various tasks, and architects ought to care about architecture. But someone ought to also care about user experience, and that should be driving the use cases that drive not how the product is implemented but how well it helps the user do their job. OO or not, SOA or not, you want an experience where the user says, "It just works!"
So what kind of software have you written lately? Created any interfaces like the UPS web site? What can you do in your software so that the users will say, "It just works!"
So, what's the difference? Wikipedia says "Interoperability: the capability of different programs to exchange data via a common set of business procedures, and to read and write the same file formats and use the same protocols" and "Integration allows data from one device or software to be read or manipulated by another, resulting in ease of use." Yuck, those aren't much help.
To me, interoperability means that two (or more) systems work together unchanged even though they weren't necessarily designed to work together. Integration means that you've written some custom code to connect two (or more) systems together. So integrating two systems which are already interoperable is trivial; you just configure them to know about each other. Integrating non-interoperable systems takes more work.
The beauty of interoperability is that two systems developed completely independently can still work together. Magic? No, standards (or at least specifications, open or otherwise); see Open Standards in Everyday Life. Consider a Web services consumer that wants to invoke a particular WSDL, and a provider that implements the same WSDL; they'll work together, even if they were implemented independently. Why? Because they agree on the same WSDL (which may have come from a third party) and a protocol (such as SOAP over HTTP) discovered in the binding. How does the consumer discover the provider? Some registry, perhaps one that implements UDDI (which sucks, BTW). So SOAP, HTTP, WSDL, UDDI--all that good WS-I stuff--make Web services interoperable.
Another example I like is the "X/Open Distributed Transaction Processing (DTP) model" (aka the XA spec); see "Configuring and using XA distributed transactions in WebSphere Studio." With it, a transaction manager by one vendor can use resource managers by other vendors. Even though they weren't all written for each other, they still work together because they follow the same spec. They're interoperable.
Now consider two systems that weren't designed to be interoperable, or perhaps interoperable but with different specs. This requires integration. The integration code--could be Java, Message Broker, etc.; I co-authored a whole book on this--takes the interface one system expects and converts it to the one the other system provides. This is why WPS has stuff like Interface Maps and Business Object Maps.
So, you want interoperable systems; integrating them is simple. Otherwise, you have to integrate them yourself.
The latest version of WebSphere Application Server, WAS 6, has a new feature called "Service Integration Bus" (WAS benefits talks about "a new pure-Java JMS engine"). The SIB is implemented as a group of messaging engines running in application servers (usually one-to-one engine-to-server) in a cell. As a service in WAS 6, SIB is a complete JMS v1.1 provider implementation. (Not just the API; a working messaging system.) The JMS provider is a pure Java implementation that runs completely within the application server's JVM process. (For persistent messaging, WAS also requires a JDBC database such as DB2.) Thus JMS messaging is built into WAS and easily available to any J2EE application deployed in WAS.
Why Service Integration Bus? IBM's software customers over the past few years have divided into two overlapping but still distinct markets with different needs:
Connect any kind of app to any other kind of app. This is the traditional WebSphere MQ market, where you've got different apps written in different languages running on different operating systems and you want them all to talk to each other. This market hasn't changed nor has IBM's commitment to supporting this market.
Connect J2EE apps running in WAS servers. What's changed in the last few years is that many of our customers are converting everything to J2EE apps deployed in WAS and so they don't need to be able to support every platform imaginable, just WAS. WAS 5 addressed this market with its Embedded Messaging feature (see below). This market is now better addressed with Service Integration Bus in WAS 6.
For a customer that finds itself in both groups--you have lots of WAS apps communicating, but you also need to communicate with other non-WAS apps--you will still need full WebSphere MQ. Embedded Messaging and Service Integration Bus only support WAS apps, so if any of the apps are not WAS apps, you need full WMQ. WAS 6 has a feature called MQ Link for connecting SIB and WMQ.
So here's the basic breakdown of WebSphere JMS options:
MQ Simulator -- A feature of the test server (aka the single-user WAS server) in WebSphere Studio and Rational Application Developer. Not a real messaging provider (did the term "simulator" tip you off?), it doesn't provide interprocess communication (pretty much a must-have for messaging) or persistence. What it is very useful for, and why it's in the test server, is testing and demoing your WAS apps that use JMS, without needing a separate JMS provider. When you're developing J2EE apps that use JMS, use this simulator.
Embedded Messaging -- A feature with WAS 5 for messaging just between WAS applications. It is a simplified version of the WMQ code base and is a full JMS implementation, but does not provide all of the quality of service advantages of full WMQ. It runs as several processes (written in C) outside of the WAS JVMs, so it involves more moving parts that consume more resources and need to be managed.
Service Integration Bus -- The replacement for Embedded Messaging in WAS 6. Implements the JMS spec; implemented in Java, runs in the app server JVM. (Think of it as "Really Embedded Messaging"!) Provides most (all?) of the same quality of service of full WMQ (such as clustering, which works as part of the WAS ND clustering model), but only supports WAS apps.
WebSphere MQ -- Messaging for just about any computer platform used in business, including WAS and JMS. WMQ is used to connect non-J2EE apps, and to connect a J2EE app to a non-J2EE app. It can also be used to connect J2EE apps, although this is usually because you also have non-J2EE apps as well. Written in C, it runs in its own processes, and does not require WAS or Java in any way (unless your app is a WAS app).
External JMS Provider -- This is the support WAS provides for using any J2EE-compliant JMS provider, so you can use our app server with someone else's JMS product.
WMQ v7.0.1 and WMB v7 have a new feature for standby processes that make the products more highly available.
The feature in WebSphere MQ, introduced in v7.0.1, is called multi-instance queue managers. The corresponding feature in WebSphere Message Broker, introduced in v7, is called multi-instance brokers. In both cases, the queue manager or broker runs in two processes, one active and the other on standby. If the active one fails, the product automatically fails over to the standby, with virtually no service interruption. Note that any resources the processes use, such as a database, must have their own high availability capabilities.
Multi-instance queue manager
In prior versions, to make WMQ or WMB highly available, one had to use hardware clustering (such as PowerHA (formerly known as HACMP) or Veritas). Hardware clustering may still be the gold standard for HA, but for environments that don't quite need the gold standard, software clustering via multi-instances may be good enough.
This new design reminds me of how the service integration bus in WebSphere Application Server works. An SIB bus is a collection of messaging engines managed by the WAS HA manager. By default, it runs two copies of each messaging engine, an active and a standby. If (parts of) the cluster lose communication with the messaging engine, the HA manager switches them to use the standby. Only one copy of the messaging engine can be active and only that one can maintain a lock on the external storage for the persistent messages. Multi-instance queue managers work in much the same way.
WebSphere MQ V7.0.1 introduced the ability to configure multi-instance queue managers. This capability allows you to configure multiple instances of a queue manager on separate machines, providing a highly available active-standby solution.
WebSphere Message Broker V7 builds on this feature to add the concept of multi-instance brokers. This feature works in the same way by hosting the broker's configuration on network storage and making the brokers highly available.
So if you needed a reason to upgrade to WMQ and WMB 7, now you have it.
Thanks to my colleague Guy Hochstetler who made me aware of this new feature.
I find that customers often confuse the role of an environment with its quality of service.
I previously discussed Data Center Environments, specifically the typical environment roles of Dev, Test, Stage, and Prod. These separate environments keep code under development (Dev) away from code the enterprise's users use to do their work (Prod). They also create a reliable, controlled environment for performing testing (Test) and for practicing installation and migration procedures (Stage).
These are environment roles, which describe who should be able to access an environment, what it is used for, and therefore what it should and shouldn't contain. For example, only the Prod environment should be able to access and change real customer data. Stage may contain a separate copy of the production data. Dev and Test shouldn't even have a copy of the production data, which is probably confidential and should be protected, but instead should contain a representative set of fake data. (Use a Data Obfuscator to produce test data from a set of production data.)
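A minimal sketch of the obfuscation idea (the field names and the Customer- prefix are invented for illustration, not any particular product's behavior): derive stable fake values from the real ones, so test data keeps its shape and its referential integrity without exposing anything confidential.

```python
import hashlib

def obfuscate_name(real_name, salt="test-env"):
    """Map a real name to a stable, meaningless fake identifier."""
    digest = hashlib.sha256((salt + real_name).encode()).hexdigest()
    return "Customer-" + digest[:8]

production_row = {"name": "Jane Q. Public", "balance": 1234}

# The obfuscated row keeps non-sensitive values but masks the identity.
test_row = {"name": obfuscate_name(production_row["name"]),
            "balance": production_row["balance"]}

print(test_row["name"].startswith("Customer-"))            # True
# Same input always maps to the same fake value, so foreign keys
# referencing the name still line up across obfuscated tables.
print(obfuscate_name("Jane Q. Public") == test_row["name"])  # True
```

The deterministic mapping is the important property: a name that appears in ten tables in production appears as the same fake name in all ten obfuscated copies, so joins and referential integrity still work in Dev and Test.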
The role of an environment is often confused with the quality of service (QoS) an environment should support to meet its requirements. One common example is availability. The applications running in Prod are typically assumed to need to be available 24x7 (aka always). Test and Stage are understood to be unreliable, in that they may be taken down or crash at any time as testing needs dictate. The Dev environment is typically assumed to be fairly reliable, but with the understanding that outages are acceptable.
These assumptions about the availability of different environments can become a problem for repository products like Rational Asset Manager (RAM) and WebSphere Services Registry and Repository (WSRR). Dev environments are typically not managed for reliability, yet products like RAM and WSRR (used in development to manage SOA governance) need to be reliably available. This is likewise true for the source code management system, but somehow the reliability requirements of RAM and WSRR are seen as being much more complex.
Long story short, customers often decide to install RAM and WSRR in their Prod environment simply because that has people prepared to manage WebSphere Application Server (WAS) servers (which is what RAM and WSRR run in) and make those WAS servers highly available. This, in my mind, is kind of crazy. RAM and WSRR store development artifacts, which are not used by production applications any more than source code is, and so should not be stored in Prod.
Customers often insist on installing RAM and WSRR in Prod because it's set up to make them highly available. I think the far better approach is to set up a couple of WAS servers in Dev for reasonable (maybe high) availability and install RAM and WSRR there, and assign personnel (who perhaps normally work in Prod) to manage those servers in Dev.
I'd be interested to hear from customers using RAM and/or WSRR: What environment do you have them installed in?