TonyGrout — Tags: agile, offshore, outsource, agiletales, architecture, agile_architecture
Just a quick update today that came out of some coaching I've been doing with a team that's been relatively successful with agile at scale. They've had challenges around recording architectural and design decisions: what should they keep, and how should they record it? There are some who would say that all you need is the code, and to forget recording anything outside of the code. There are others who believe that every aspect of an application should be described separately from the code.
The answer I've seen work best for large projects and/or product development is somewhere in between the two.
My experience is that you need the following:
- enough of the system described to be useful, and no more
- a low cost of recording that knowledge
- the ability to have the frequently asked questions answered

Here's what I've found works in each of these areas.

Enough described to be useful and no more
To address the first point, an artefact that records the architecture of the system works well. A good architectural record is like a good architecture: it has enough to fulfil its purpose and no more, and follows the architectural principle of form follows function.
Now I've seen some completely useless architectural documents, and the same can obviously occur with recorded presentations. If the content is poor it doesn't matter how it's recorded; it's equally useless. So what should be in the architectural recording? Well, there are some standards about what should be described. I'm still a fan of Philippe Kruchten's 4+1 view model; it's simple without being simplistic, and some have added a data view. However, the way you describe it is your call. Just keep it simple.
A low cost of recording this knowledge
So given that its form should be low cost and allow us to easily consume the key elements of the system, in the absence of direct access to key team members, I've seen one approach work really well: creating a video recording of the lead architect presenting the key architectural elements to the team.
The session is recorded using a video camera (smartphones are good enough now) in front of a live audience made up of the team, ideally in a theatre-style conference room using whiteboards, modelling tools with a data projector, or smart whiteboards. If you have a remote audience then smart whiteboards and modelling tools work better. I've used Papershow pretty successfully if you can't stretch to a smart whiteboard. Use a tripod even if you're only using a smartphone; it makes the whole thing look more professional. I've used the Glif plus with an iPhone.
The ability to have the frequently asked questions answered
Ideally the audience for the session will include some new team members to ask the "getting started" questions. The key things are recording the session as a video and having the audience there asking questions that the architect answers.
The question and answer part of the presentation should cover the answers to the typical questions a team has about an architecture, such as "How do we handle security at the field level?" or "If the application falls over, what does a recovery look like to the users?" This may mean some scripting if the team has already asked some of these questions before the session.
In my experience of joining projects or having others join projects I've been leading, these videos become one of the most used artefacts for new joiners.
But we outsource
It works equally well with outsource relationships, where either the delivery partner is taking on an application maintenance contract and you're providing the architectural video artefact, or you're receiving a video back after some architectural changes or for a new application delivery project.
Video architecture artefacts are relatively quick and low cost to re-record, and easy to publish and access using the web and collaboration tooling. These recordings become key artefacts that new team members, or those maintaining the application, will use.
Moving your delivery partners and your organisation from a waterfall delivery model to an agile model means building a transition approach that is designed to build trust by addressing the perceived risk to all parties.
The risks to the outsourcing organisation include:
The outsource partners' risks include:
These risks are some of the reasons that we've spent years working with lawyers and procurement trying to nail down contracts and Service Level Agreements with partners.
It's interesting that some in the agile community say we're incorrect to seek certainty. The challenge is actually that we've confused the definition of certainty. Certainty should not mean expecting to have all of the requirements documented as complete and accurate before they are developed into code. Certainty is about removing risk as early in the lifecycle as possible.
If you suspect there'll be performance issues, then gaining certainty involves creating a contract and funding model that motivates the delivery of just enough of the system as executable software to be able to performance test the areas of concern. That has to be better than contracting and funding the production of diagrams showing how the system may deal with the issue.
Moving a delivery partner who is used to managing risk using the Big Requirement Up-front approach to this more agile model is challenging.
So what advice did I give to my client?
If you can't immediately engage their agile practice (which a lot now appear to have), you'll have to move both them and your organisation in stages.
Using the so-called Crawl, Walk, Run adoption approach, you can start off with the Crawl stage described below. In no particular order:
The first thing is to move away from a long list of contextless requirements and start documenting your requirements as Mike Cohn style user stories, or Alistair Cockburn style use cases split into use case scenarios. Make sure that you've thoroughly documented the non-functional requirements and architectural constraints, either in the user stories or use cases themselves or in a separate document linked to them, so that later you can test the implementation of the story or use case scenario to demonstrate that it addresses the non-functionals.
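To make the link between a story and its non-functionals concrete, here's a minimal sketch of the idea as a data structure. The story text, the non-functional requirement, and the threshold are all invented for illustration; real teams would hold this in their requirements tool rather than code.

```python
from dataclasses import dataclass, field

@dataclass
class NonFunctional:
    """An architectural constraint or quality requirement linked to a story."""
    name: str
    acceptance: str  # how we'll demonstrate it against working code

@dataclass
class UserStory:
    """A Mike Cohn style story card with its linked non-functionals."""
    as_a: str
    i_want: str
    so_that: str
    non_functionals: list[NonFunctional] = field(default_factory=list)

# Illustrative story: names and thresholds are made up for the example.
perf = NonFunctional(
    name="search latency",
    acceptance="95% of searches return in under 2 seconds",
)
story = UserStory(
    as_a="customer",
    i_want="to search the product catalogue",
    so_that="I can find an item without browsing categories",
    non_functionals=[perf],
)

# Testing the story's implementation means testing its non-functionals too.
for nfr in story.non_functionals:
    print(f"'{story.i_want}': verify {nfr.name} ({nfr.acceptance})")
```

The point of the explicit link is that when the story's use case scenario is tested, the non-functionals travel with it rather than living in a document nobody opens.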
The second thing is to move the delivery mechanism to iterations: a fixed time-box which at the end results in working code, ideally of production quality.
The final thing that will ease the transition is to initially use phases that focus on building trust by removing risk and use the appropriate contract types by phase.
Love it or hate it, the Open Unified Process (OpenUP) has useful phases that, since they're built around managing risk using iterations, help the migration of waterfall-type organisations to agile. Forget about the activities within the phases; they're interesting as a library of practice but less important. Focus with your delivery partner on the phase outcomes and on how to contract around those. The phases give natural contract and funding break points in the move towards agile. OpenUP also introduces the concept of iterations, which are critical in the move to agile: the time-boxed delivery of user stories/use case scenarios as tested executable code. Iterations again offer a useful contract break point.
So how can the OpenUP phase goals help in the move to agile?
The phases are specifically designed with different risk profiles, so they are suited to having a different contract type for each phase to balance the risk for all parties. The Inception phase has medium risk, as the scope is unclear. The Elaboration phase has high risk, since its purpose is to prove the high-risk requirements can be delivered by building them. The Construction phase has medium to low risk, since all of the high technical risks should have been removed. Finally, the Transition phase's risk profile is based on the complexity of deploying the software; a web application may be low risk to launch, whereas an update to a retail banking application may be higher risk.
Inception

High level requirements are defined, and throw-away executable prototypes are built to demonstrate any critical architectural risks. Identify requirements that are critical from a business perspective and that, if built, will demonstrate the removal of the main technical risks. Refine only these requirements ready for building, should you proceed with the project into Elaboration. This will give all parties some guidance on the amount of risk being taken into Elaboration. Since the scope of what's being built is unclear, this would ideally be a time and materials contract.
Elaboration

Refine the requirements to the point where you and your delivery partners can agree an acceptable price and schedule range for the build of the base requirements, and agree what the extended content is. In parallel, build the requirements that are critical and kill the technical risks identified. You don't leave Elaboration until you've built executables that demonstrate, through testing, the removal of all identified technical risks.
Construction

The remainder of the requirements are further refined if required, then designed and built in iterations. The grouping of requirements into iterations should now be done based on business priority, having addressed the technical risk. Each iteration should be delivered as executable code that can be tested, along with the relevant documentation. You should be aiming for as short an iteration length as possible; monthly iterations would be a good start. They should certainly be no longer than three months, and you should push to make them shorter as soon as possible. Iterations are a huge risk mitigation tool. You get to deploy the software regularly into a test environment close to your operational environment, which has to be a good thing. You're only paying for working code, potentially per iteration, and better still, should you commercially need to deliver something to market early, you can, as you have production-quality code at the end of each iteration. Other industries have worked this way for years.
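Grouping the remaining requirements into iterations by business priority is simple enough to sketch. The story names, point sizes, priorities and velocity below are all invented for the example; the only idea from the text is filling each fixed time-box in priority order until the team's capacity is reached.

```python
# Remaining backlog once technical risk is addressed:
# (story, estimate in points, business priority: 1 = highest).
stories = [
    ("export statements", 5, 1),
    ("password reset",    3, 2),
    ("audit trail",       8, 1),
    ("branded emails",    2, 3),
]
velocity = 10  # points the team can deliver per iteration (assumed)

# Fill each iteration in business-priority order up to the velocity.
iterations, current, used = [], [], 0
for name, points, _priority in sorted(stories, key=lambda s: s[2]):
    if used + points > velocity and current:
        iterations.append(current)  # close the full time-box
        current, used = [], 0
    current.append(name)
    used += points
if current:
    iterations.append(current)

for i, batch in enumerate(iterations, start=1):
    print(f"Iteration {i}: {batch}")
```

Each resulting batch is a candidate contract break point: the customer pays for, and can in principle ship, the working code delivered at the end of each time-box.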
Transition

This phase is about getting the software live and into the hands of the business. It should run much like Construction, though hopefully with fewer issues. The iterations may be used to timebox the deployment of the application into different business units or geographies, or the deployment of the application by functional area, or any mixture of the above.
Let's not forget that the ideal end game is not to "do" agile but to reduce the time to value of producing the right software, and to build it with an appropriate total cost of ownership profile. We should be striving to remove as much as possible of the overhead created by complex contracts and unnecessary work. To do that we need to build trust, both internally and with our delivery partners.
This is the first in a series of entries based on harvested lessons from agile teams I'm working with. Some of the findings are common sense, but so many times common sense is not commonly applied. Hope you enjoy the series.
At a recent retrospective, one of the teams challenged the value of the planning tool being used. The product release programme was using Rational Team Concert for work item management and planning across the global teams. It was being used in a fairly typical way, showing all the epics and user stories by product area, split into a product backlog and the items for the current sprint. However, due to a misunderstanding, some of the technical teams had a separate plan called their "bubble chart". This chart showed all of the stories they had left to do, broken down by month; they had been told that in agile you couldn't do this. Some, but not all, of these items were in the official plan. Then there was considerable technical debt from the previous release to dig out, plus defects to fix from this release. None of this additional work was in the official Rational Team Concert plan, leaving it invisible to the teams and senior management and killing any chance of predictability.
With this setup there was no way of finding out, in one place, where the projects were, and then rolling that up to where the product release was. Without all of the information, the planning tool was becoming a burden that added no perceived value to the teams.
Many teams I work with still have more than one plan, either knowingly or unknowingly. They have a formal plan that shows the project manager's view of the tasks they believe are left to complete. The developers then have a list of additional work they need to do as a group, and individuals have their "to do" lists on top of that. Ditto for the testers, and so on for the other roles. In reality the "to do" lists are the real plan we're working to, like it or not. That's the work the team will actually do.
Ironically, the management group were equally frustrated because they were constantly chasing the team to update RTC and didn't understand why they weren't doing it. They didn't know about the other plans, the real plans: the "bubble chart" and the "to do" lists. The result was that effort was being wasted and any level of predictability was missing.
From this retrospective, the key thing the team did was to get management to accept their "bubble chart" into the main plan, with all of its "extra" activities, so that there was now only one plan.
The other key thing was to explore how to simplify the configuration of Rational Team Concert so that it's quick enough to be used as a "to do" list tool.
A programme or project must have only one plan. The plan can be at many levels of detail, but it should contain all the work that needs to be done, at the level of accuracy and precision appropriate for where the project is, and for as far ahead as it's possible to plan. The plan should be built up from the individuals' "to do" lists. The plan must be visible to the whole team.
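The "one plan" idea above can be sketched as a simple rollup: the formal backlog and every group's "to do" list are merged into a single view, with the source of each item kept visible so nothing silently drops off. The teams and work items below are invented for illustration.

```python
# The official plan, as it might appear in the planning tool.
formal_backlog = ["build search page", "migrate user data"]

# The hidden plans: each group's own "to do" list (illustrative items).
todo_lists = {
    "dev team": ["dig out last release's technical debt"],
    "testers":  ["rebuild the regression environment"],
    "ops":      ["script the failover test"],
}

# The one plan is simply the union of all the work, tagged by source.
plan = [("backlog", item) for item in formal_backlog]
for owner, items in todo_lists.items():
    plan.extend((owner, item) for item in items)

for owner, item in plan:
    print(f"[{owner}] {item}")
print(f"Total items in the one plan: {len(plan)}")
```

Only once every item is in this single view can anyone roll the remaining work up to project or release level and get an honest answer about where things stand.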
So, in summary, what is typically missing?
A plan that doesn't include all the effort required to complete the project has little value to anyone.