Thursday, October 22, 2009

Metrics in an Agile World

James Shore (co-author of The Art of Agile Development) and Rob Myers presented a talk on “Metrics in an Agile World” as part of the Agile 2009 conference. The talk starts out by covering informational vs. motivational metrics. Motivational metrics are extrinsic motivators (such as a bonus or a pay increase), and extrinsic motivators lead to poorer performance than intrinsic motivators (interesting work, appreciation, recognition, and a good environment). Several examples are given as evidence that extrinsic motivators lead to poor performance.

1. Children were given magic markers and told to draw. One group was given a certificate when they finished, while another group was not rewarded. The study showed that the group that got the certificate lost interest in drawing, while the other group kept drawing because it was fun.

2. Two groups were asked to identify matching patterns. One group was paid for it while the other was not. The group that was paid made more mistakes than the other group.

3. Two groups were given a puzzle-like problem to solve in two steps with a break in between. During the break, the group that was being paid simply waited around, while the other group kept discussing the puzzle and trying to solve it.

When performance is based on a reward, other areas that are not being rewarded will suffer, because everybody is trying to maximize the reward and pays less attention to everything else. For example, if the number of features is rewarded, then bugs will increase as developers rush to add new features and pay less attention to testing. Because we cannot measure everything, there will always be an area that suffers, be it fewer features, more bugs, increased technical debt, padded estimates, etc.

They then talked about creating a framework for metrics covering several categories such as Quality, Value, Progress, Team Performance, and Code Design. Within each category, each metric is identified as qualitative or quantitative. A few examples of metrics are customer satisfaction, user adoption, billable hours, function points, SLOC, SPOG, cycle time, number of defects, etc. Each is then categorized and analyzed, the goal being to avoid motivational metrics and collect informational ones. One technique discussed is to make the data anonymous and to measure between groups of peers rather than individuals; this can change a metric from motivational to informational.
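As a rough illustration of that last technique, here is a minimal Java sketch (the class, metric, and method names are hypothetical, not from the talk): it takes the same defect data and rolls it up per team instead of per individual, so the published numbers inform peers without targeting anyone.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: turning a per-person metric into an informational,
// team-level one by aggregating before anyone sees individual numbers.
public class DefectMetrics {

    // Raw measurement: defects found, keyed by individual developer.
    // This form invites motivational (mis)use.
    public static Map<String, Integer> byIndividual(List<Defect> defects) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (Defect d : defects) {
            Integer current = counts.get(d.getDeveloper());
            counts.put(d.getDeveloper(), current == null ? 1 : current + 1);
        }
        return counts;
    }

    // Informational form: the same data rolled up per team, so peers can
    // compare trends between groups without singling out individuals.
    public static Map<String, Integer> byTeam(List<Defect> defects,
                                              Map<String, String> developerToTeam) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (Defect d : defects) {
            String team = developerToTeam.get(d.getDeveloper());
            Integer current = counts.get(team);
            counts.put(team, current == null ? 1 : current + 1);
        }
        return counts;
    }
}

// Minimal data holder assumed by the sketch.
class Defect {
    private final String developer;
    Defect(String developer) { this.developer = developer; }
    String getDeveloper() { return developer; }
}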

This presentation is available on InfoQ at http://www.infoq.com/presentations/agile-metrics

Sunday, October 18, 2009

From Good to Great Developer

At Jfocus, Chris Hedgate gave a talk on great developers. Chris warns that a good developer can be fast at coding and solving problems, but in the long run he will slow the team down with unmaintainable code. Great developers are passionate about code and make sure the code they write does not rot over time.

Chris stresses that there needs to be awareness of the benefits of design patterns, TDD, refactoring, and readable code. The total cost of software development is development plus maintenance, and maintenance starts as soon as we write the code, not after it is delivered. Treated this way, productivity is maintained at the same pace instead of declining. Existing code must be continuously maintained, and all code must be easy to understand and modify.

Next Chris defines a habit as the intersection of skills (how), knowledge (why), and desire (want to). He then describes the four stages of competence:

1. Unconsciously incompetent: we don’t know that we don’t know; we don’t know that we can’t, and we don’t see a need.

2. Consciously incompetent: we see something we do not know and we see a benefit in it; we can learn and get better at it.

3. Consciously competent: we can work by ourselves; we can perform and can teach.

4. Unconsciously competent: habits are achieved; it’s automatic.

We train with books, lectures, and courses. However, if people are unconsciously incompetent, they first need to be inspired and moved to consciously incompetent.

Chris suggests that to move people from good to great, we need to inspire them by leading by example and using simple tools, and involve them so that they move along with us. Two things are needed:

1. Attitude: simplify, improve, modify, and never leave code in worse shape than you found it.

2. Tools: name things (variables, methods), eliminate duplication, pair programming, study circles, reflect and discuss, code reviews.
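The first two tools, naming things and eliminating duplication, can be made concrete with a small hypothetical before-and-after sketch in Java (the pricing domain and all names are invented for illustration):

// Hypothetical before/after sketch of "name things" and "eliminate duplication".
public class PriceCalculator {

    // Before: vague names and duplicated arithmetic.
    // double calc(double a, double b)  { return a + a * b; }
    // double calc2(double a, double b) { return a + a * b + 5.0; }

    private static final double SHIPPING_FEE = 5.0; // magic number given a name

    // After: intention-revealing names, duplication factored out.
    public double priceWithTax(double netPrice, double taxRate) {
        return netPrice + netPrice * taxRate;
    }

    public double priceWithTaxAndShipping(double netPrice, double taxRate) {
        return priceWithTax(netPrice, taxRate) + SHIPPING_FEE;
    }
}

The behavior is unchanged; the only difference is that the second version says what it means and states the shared formula once.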

Chris wraps up by recommending we think about how to guide the team. Lead by example. Be a role model. If the team gets better, productivity will not only remain steady but increase over time.

Saturday, October 10, 2009

Turning on a Sixpence – No Excuses: Concept To Cash Every Week

At a ThoughtWorks conference, Kris Lander and Gus Power gave a talk about their agile process and practices. They share their experience and stress that they are not telling us how to do things, just showing us how they do things. Below is a list of their practices:
1. Sustainable throughput – follow the money and ship every iteration to production. Show financials outside the bullpen: profit vs. return, cost per unit vs. units delivered, capacity cost vs. velocity, rework completed vs. rework queued, and flow. Minimize work in process, avoid inventory, and keep it visible with a $ value.

2. One-week iterations – focus relentlessly on delivering profitable features every week: surfaces waste, flushes out breaks in the value stream, gets the team thinking about “done” straight away, and regulates pressure over longer periods; iterations start on Wednesdays.

3. Automated deployment – maintain a one-click deploy that requires no manual intervention: automated deployment to all sites (test, demo, CI, prod); robust and repeatable; A/B legs with no downtime; built with Hudson and Gant; a separate deployment queue; automated schema/data migration; and a check of the deployed version.

4. Iterate to get something out there – enrich the functional depth of features iteratively: the business learns what it needs; cheapest solution first.

5. Dealing with bugs – fix bugs as soon as they are discovered: no bug backlog and no tracking tool; the onsite customer says what is really a bug; if the bug is within the current slice, interrupt the pair and fix it within the story; if it is outside the story, write a pink card and keep it visible at the top of the board with its priority.

6. Managing technical debt – don’t get into debt, and if you do, repay some of it every iteration: it is a team decision; confront value fetishists; keep a blue card always visible with a $ value.

7. TDD – working from the outside in, drive with tests to keep the code habitable, testable, and maneuverable: low cost of change; maintain options with succession; do the valuable refactoring; acceptance criteria create a common language; executable acceptance tests (a small sketch follows after this list). They use JsUnit, JUnit, Selenium, integration tests, and load tests, along with different Groovy testing patterns.

8. Continuous feedback – develop stories in vertical slices to create opportunities for feedback: the tester does exploratory testing, the onsite customer steers feature evolution, the interaction designer sees the feature evolve, and the team follows the dots to done.
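As an illustration of the executable acceptance tests mentioned under TDD, here is a minimal outside-in JUnit 4 sketch; the Checkout example and the £50 free-shipping rule are invented, not from the talk:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical executable acceptance test, written outside-in: the test states the
// acceptance criterion in the customer's language before any implementation exists.
public class CheckoutAcceptanceTest {

    @Test
    public void ordersOverFiftyPoundsShipForFree() {
        Checkout checkout = new Checkout();
        checkout.addItem("book", 60.00);

        // Acceptance criterion: "orders over £50 ship for free"
        assertEquals(0.00, checkout.shippingCost(), 0.001);
    }

    // Simplest implementation that makes the test pass; it would be grown
    // (and refactored) in later slices of the story.
    static class Checkout {
        private double total;
        void addItem(String name, double price) { total += price; }
        double shippingCost() { return total > 50.00 ? 0.00 : 4.95; }
    }
}

The test expresses the acceptance criterion in the customer's language before the implementation exists; the nested class is only the simplest thing that makes it pass.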

The Agile PMO: Real Time Governance

At a ThoughtWorks technology briefing, Ross Petit and Jane Robarts gave a talk on governance. They start out by noting that governance appears to exist to prevent bad outcomes more than to actually do good: it is there to make sure we don’t fail to meet expectations. Today projects have multiple vendors, stacks, and stakeholders, and every project has an element of systems integration. There is a lot of discovery and hundreds of decisions being taken, and we need to make sure all these decisions are taken in a consistent manner and aligned with our goals. A study shows that well-governed organizations outperform poorly governed ones by 8.5% annually. They face the same kind of market and competitor risks as everybody else, but the chance of an implosion caused by ineffective management is much lower.

Next they describe IT governance and how it is usually framed in terms of technology, assets, budgets, or planning, but it really comes down to behavior. IT is a people business, so governance should be about how people go about executing. We should answer two questions: are we getting value for our money, and are we getting things in accordance with expectations (regulatory compliance, business controls, security, performance, functionality, etc.)?

They then describe the dual role of the PMO. To delivery teams, the PMO represents the buyers and the funders; it is the buyer’s agent for an IT asset. To a buyer, the PMO represents delivery; it underwrites the risk that the asset will be developed and completed. Most PMOs focus on one role, but not both.

Next they describe how business and IT are not known for good governance. Business examples include the A380, the mortgage failures, and the Madoff Ponzi scheme. IT examples include projects that are on time and on budget but do not meet client requirements, or a hidden systems integration project inside an overall delivery project. To be better at governance, we need to be activist investors, not passive ones. We need to be aware that effort is not result, and that the best time to scrutinize is before the wheels fall off. We need the right information, and we need it in the right context: there is an asset context and a people context. Then we create a lot of transparency and can work on problems relentlessly.

They describe a continuous governance cycle for IT projects that answers the two governance questions: are we getting value for our money, and are we getting things according to expectations? First we get the information that we need, and then we feed a continuous cycle of challenging what we are doing and what we are getting.



Friday, October 2, 2009

When it Just Has to Work

At Agile 2009, Brian Shoemaker and Nancy Van Schooenderwoert gave a presentation about safety-critical agile projects. They start out by describing how software can contribute to poor safety in chemical plants, power stations, aviation systems, and medical devices. Many think the solution is to do things in a controlled and sequential manner, with a lot of emphasis on upfront planning. In agile you still have to have a sense of direction: a product backlog has stories, which are requirements that are still negotiable.

Next they discuss some benefits of agile. It has two great strengths: fast time to market and the ability to hit a moving target (tracer bullets). Agile teams bring the certainty of project costs forward in time. In traditional development, code gets more brittle with time, and more effort is needed to create the same size of feature later; in general, velocity goes down over time as the code base gets more and more complicated (non-linear effort vs. results). In agile, if we finish what we commit to in every iteration (delivering small features), we can keep the effort vs. results curve almost linear. If we want to estimate the future based on past performance, a conservative estimate is to use the team’s lowest velocity over the past couple of iterations. Lastly, agile development takes away the hiding of bad news.
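A minimal Java sketch of that conservative forecast, with invented numbers (120 points remaining and velocities of 18, 22, 15, and 20 are just examples):

// Hypothetical sketch of the conservative forecast described above: plan against
// the lowest velocity observed over the last few iterations, not the average.
public class VelocityForecast {

    public static int lowestRecentVelocity(int[] recentVelocities) {
        int lowest = Integer.MAX_VALUE;
        for (int v : recentVelocities) {
            if (v < lowest) {
                lowest = v;
            }
        }
        return lowest;
    }

    public static int iterationsRemaining(int remainingStoryPoints, int velocity) {
        // Round up: a partially filled iteration is still an iteration.
        return (remainingStoryPoints + velocity - 1) / velocity;
    }

    public static void main(String[] args) {
        int[] lastFourIterations = {18, 22, 15, 20};                  // invented numbers
        int conservative = lowestRecentVelocity(lastFourIterations); // 15
        System.out.println("Iterations left for 120 points: "
                + iterationsRemaining(120, conservative));            // 8
    }
}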

They then cover how risk management benefits from iteration. With iterations, we analyze risk early and often. Requirements and hazards converge when positive stories and negative stories are prioritized together in the product backlog, and hazards are often caught in context. Analysis can be done using a classical risk ranking matrix of probability (high, occasional, low, remote) against severity (major, moderate, minor). Acceptability is ranked as Unacceptable (mitigation required), As Low As Reasonably Practical (mitigate as far as is reasonable; the risk decision must be documented and reviewed), or Negligible (acceptable without review). Another technique is fault tree analysis, which tracks the bad thing that can happen: you start with the effect and work out what could be the cause. Another approach is failure mode, effects, and criticality analysis, where you build up from components and decide what can fail in each component.
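The matrix idea can be sketched in a few lines of Java. The probability, severity, and acceptability levels are the ones described above; the numeric weights and thresholds, and which cells land in which band, are assumptions invented for this sketch:

// Illustrative sketch of the risk ranking matrix described above. The probability
// and severity levels come from the talk; which cells map to which acceptability
// band is an assumption made up for this example.
public class RiskMatrix {

    enum Probability { HIGH, OCCASIONAL, LOW, REMOTE }
    enum Severity { MAJOR, MODERATE, MINOR }
    enum Acceptability { UNACCEPTABLE, ALARP, NEGLIGIBLE } // ALARP = as low as reasonably practical

    static Acceptability classify(Probability p, Severity s) {
        int score = probabilityWeight(p) * severityWeight(s);
        if (score >= 9) {
            return Acceptability.UNACCEPTABLE; // mitigation required
        } else if (score >= 3) {
            return Acceptability.ALARP;        // mitigate as reasonable; document and review the decision
        } else {
            return Acceptability.NEGLIGIBLE;   // acceptable without review
        }
    }

    private static int probabilityWeight(Probability p) {
        switch (p) {
            case HIGH:       return 4;
            case OCCASIONAL: return 3;
            case LOW:        return 2;
            default:         return 1;         // REMOTE
        }
    }

    private static int severityWeight(Severity s) {
        switch (s) {
            case MAJOR:    return 3;
            case MODERATE: return 2;
            default:       return 1;           // MINOR
        }
    }
}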

Next they describe 5 types of failure:

1. Direct failure: a software flaw in normal, correct use of the system causes or permits an incorrect dosage or energy to be delivered to the patient.

2. Permitted misuse: the software does not reject or prevent entry of data in a way that a) is incorrect according to the user instructions, and b) can result in incorrect calculation or logic and consequent life-threatening or damaging therapeutic action.