Sunday, December 27, 2009

Pragmatic Personas: Putting the User Back in User Stories

At Agile 2009, Jeff Patton gave a workshop about how to design and build software that has a better user experience. Jeff starts out by reviewing the different ways that software is built.
  • FUBU: For Us By Us. We start with an idea to build something for ourselves. The product is built to solve a problem we face. Then the audience diversifies, the product evolves, and it is no longer for us.
  • FMBY: For Me By You. We take down requirements and hope that everything comes out OK.
  • MSU: Making Stuff Up. Design based on guesswork.
  • FTBU: For Them By Us.

Jeff explains that if a product is good, it is built for the people who will use it, and those people are not us. For them by us is user centered design. To build a good product we need to care about the user. Another way is to keep asking users what they want.

Jeff defines a pragmatic persona as a quick exploration of what we know about our users. We build them to start discussions about what we know and don’t know, and to help drive mapping the user experience using stories.

Next Jeff goes over the process of creating a persona by walking through an example of a restaurant review application:

1. Identify kinds of users (user types or roles)

Determine the criteria that makes each type distinct. Identify high priority or focal types.
Think of different kinds of people that will use your product as user types (college students, business professional).
A user role describes the relationship a person has with your product. We change roles like changing hats. Think thing-doer (late night pizza buyer, daytime lunch seeker, penny pinching pizza buyer, married daytime lunch seeker). Isolate aspects that are relevant.
Next prioritize. The user types that are the most relevant to design depend on the business and product goals. Figure out which user types are most critical to achieving the goals and objectives. Refer to these user types as focal or primary. A typical system will have 2 or 3 focal users and several more that aren’t (make sure these are listed as well).

2. Profile Users (start with assumptions about users and then add additional info by doing research). Identify information about users relevant to the product. Profiling adds general information to user types like sizing, gender, domain knowledge, pains, goals, motivation, general activities, context, frequency, collaborators, and other products they rely on.

Tuesday, December 15, 2009

Kanban Adoption at SEP

Chris Shinkle gave a talk about Kanban adoption at SEP during the Agile 2009 conference. Chris described the Dreyfus Model of Skill Acquisition and showed how the team at SEP progressed from one stage to the other as they learned and applied Kanban.
Stage 1 – Novice: Little or no previous experience. It is about following context free rules. No discretionary judgment. Want to accomplish a goal, not learn. The team gained a clear understanding of the project, a clear priority of work items, and could clearly see progress from the introduction of Kanban boards and standup meetings.

Stage 2 – Advanced Beginner: Still rules based, but start understanding rules within context and based on past experience. Team understood WIP (work in progress) limits, began to collaborate and identify bottlenecks and areas for improvement.

Stage 3 – Competent: People recognize patterns and principles. Rule sets become more rules of thumb. They start to establish guidelines and seek out and solve problems. They see actions in terms of long term plans and goals. The team felt a sense of ownership. They started pulling in alternate practices to optimize the process and solve specific problems. They began to discover and understand lean principles themselves: things like cause and effect of flow, value, quality, and waste.

Stage 4 – Proficient: People seek out and want to understand the big picture. They perceive a complete system instead of a set of individual parts. They can learn from the experience of others and take full advantage of reflection and feedback to correct previous poor task performance. The team starts to focus on throughput and reducing cycle time. They began to focus on optimizing the whole and reducing the cost of delay and WIP limits. Kaizen moments became more commonplace.

Stage 5 – Expert: They can look at each situation immediately and intuitively take appropriate action. They know what to focus on and can distinguish between important details and irrelevant ones. Chris’s teams haven’t reached this stage yet.

Chris believes that it is ok to talk about principles but teaching principles does not equate to acquiring a new skill. Kanban provides a framework where principles can be introduced. Then the process in and of itself is going to help encourage those behaviors and allow people to understand them at their own pace.

Next Chris shares the lessons learned:

1. Start by teaching practices, not principles

2. Don’t set WIP limits too low for a new team

3. Keep rules about moving tokens/cards simple

4. WIP is strongly correlated to product quality.

Finally, Chris concludes by stating that Kanban teams mature in a way consistent with the Dreyfus model. Kanban is an effective tool for teaching lean principles and managing change in an organization. There are multiple levels of maturity, and at each level certain behaviors guide your focus.

This presentation is available on InfoQ at  http://www.infoq.com/presentations/kanban-at-sep

Sunday, December 13, 2009

The Tyranny of the Plan

Mary Poppendieck gave a talk on lean software development at the UK Lean Conference. Mary started out by describing the process of building the Empire State Building. The building was delivered exactly on time and 18% under budget:

· 9/22/29 – demolition started

· 1/22/30 – excavation started

· 3/17/30 – construction started

· 11/13/30 – exterior completed

· 5/1/31 – building opened

Mary next explained how they managed to do that before there were computers, GANTT charts and PERT charts. The builders focused on workflow. They did not need to figure out the details and lay out a plan; there was no design when the contract was signed. They used deep experience on a fixed price contract. They focused on the key constraint, which wasn’t labor, steel or stone. It was material flow: having the right stuff at the right time (500 trucks a day delivering materials, no storage on site). The building was designed to be decoupled. There were 4 pacemakers, which were independently scheduled workflows. They avoided having cascading delays. They understood that it pays to invest money to save time (cash flow thinking). Every day of delay cost 10K (about 120K today). The schedule was not laid out based on the details of the building design. They created the schedule and then created the design to fit the schedule. The building was designed based on the constraints of the situation (2 acres, zoning ordinances, 35M capital, laws of physics, and the May 1, 1931 deadline). Traditionally we start by figuring out what we are going to do, break it down into pieces (WBS), sum up the total, and there is the schedule. Instead, here they started with the constraints and created a schedule that could fit within the constraints.

Mary summarized the lessons learned. Design the system to meet the constraints; do not derive constraints from the design (budget, time). Decouple workflows; break dependencies from an architectural point of view and a scheduling point of view instead of organizing them on a schedule (PERT chart). Workflows are easier to control and more predictable than a schedule. For control and predictability, establish a reliable workflow instead of establishing a schedule that needs to be followed.

Next Mary explains that there are 2 reasons that we schedule. The 1st is to control when things will happen. However, detailed schedules are deterministic and do not allow for normal (common cause) variation. Machines break down, weather interferes, and there is no way to deal with these variations unless we add slack. Attempting to remove common cause variation from a system will cause it to oscillate wildly. Flow systems, on the other hand, build in slack whenever they need it. Managing a level workflow is a lot easier than following a deterministic schedule.

Thursday, December 10, 2009

Scaling Software Agility: Best Practices For Large Enterprises

Dean Leffingwell gave a talk at Agile2009 about best practices for enterprises to scale agile.

He talks about his experience consulting for a company wanting to apply agile at a large scale. There they took the extreme agile approach by throwing away all current practices and starting out fresh with pair programming and slowly adding items as deemed necessary. They started out with minimal tooling and some coaching. They had a commitment to Agile from top to bottom. At the end, the results showed reduced cycle time from 18 months to 4-5 months, productivity increased by 40% and defect density remained the same (meaning new development had lower defect rate than older development, since more was getting done).

Implementing agile involves enhanced project management practices (Scrum) and enhanced software development practices (XP), along with modified social behavior (face to face communication).
Agile focuses on delivering value to the customer. Agile empowers teams to more rapidly deliver quality software, explicitly driven by intimate and immediate feedback.

Dean moves on to compare the agile cycle to the waterfall cycle and explains how agile delivers value faster by giving the iPhone example. When the 1st version of the iPhone was released, it retailed for about $600, ran on a slow network, did not include apps, and there wasn’t even an app store. Later, newer, enhanced versions came along at a lower price.

Dean moves on to describe 7 team practices that scale and 7 enterprise practices.

7 team practices that scale:
  1. The define/build/test team
  2. Mastering the iteration
  3. Two-levels of tracking and planning
  4. Smaller, more frequent releases
  5. Concurrent testing
  6. Continuous Integration
  7. Regular reflection and adaptation
1. Define/build/test team
Organizations have to move from functional silos to cross functional teams of 5-9 members. Instead of having separate teams for product management, architecture, development, and QA, have many smaller teams each composed of developers, testers, a product owner, a tech lead, and a scrum master. In an enterprise of say 400, that makes about 57 teams. Each team is a self organizing team that can define, build and test something of interest/value. Teams will interface with architects as well as QA and Release management.

Friday, November 13, 2009

Coaching Self Organizing Teams

At QCon London 2009, Joseph Pelrine gave a talk about coaching self organizing teams. The main point behind the talk was a metaphor comparing coaching to cooking chicken soup. The team is the ingredients and the coach is the cook. The cook has to constantly control the heat. There are several stages:

1. Burning (stressed, burn out, keep fighting).

2. Cooking (border of chaos or anarchy). This is when the team is most creative.

3. Stagnating. What was good is becoming bad and moldy. People stop coming to meetings, no one writes tests, and everybody is procrastinating.

4. Congealing. It’s like jelly. There is still some flexibility, but it is starting to take hard shape. Team is disinterested, there is no ongoing discussion.

5. Solid/Frozen. This is the way things get done here, 9 to 5, no initiative, do my job and leave.

The high and low stages show similar behavior patterns. The key is to control the heat and to keep the team cooking by managing time and tasks.

This presentation is available on InfoQ at  http://www.infoq.com/presentations/coaching-self-org-teams

Monday, November 2, 2009

Agile Project Metrics

Dave Nicolette presented Agile Project Metrics at Agile Conference 2009. He explained that there are 3 levels of maturity in agile teams:

1. Six week iterations. Stories are divided into tasks. Estimating is done in ideal time. Burn down chart is updated daily.

2. Two week iteration. Stories are divided but estimates are made in story points.

3. One week iteration. Stories are kept small. No daily burn down chart.

Also, teams can be composed in different ways: generalized specialists, where everyone can contribute in different areas; tech lead/chief, where junior members are combined with a tech lead who helps out in different situations; or specialists, with internal handoffs from one member to the other.

Collect metrics for self improvement and discontinue them once the goal is achieved. To the customer, working software is the most important measure of progress. Things to measure are:

1. Running tested features

2. Hard financial values (benefit of using software after every release)

3. Earned Business Value (have customer value features)

4. Velocity

5. Static Code Analysis (statements per method, LOC covered by tests, cyclomatic complexity)

6. Earned Value Management

The talk concludes by giving an example of a sample scorecard divided into 4 quadrants:

1. Value delivered: Earned Business Value, Running tested features, Burn down charts.

2. Delivery effectiveness: Burn down with team focus, story cycle time.

3. Software Quality: Customer satisfaction, non functional requirements, testing metrics, static code analysis, observations.

4. Continuous Improvement: Build frequency, escaped defects, use of TDD, refactoring, overtime, issues from retrospective

This presentation is available on InfoQ at http://www.infoq.com/presentations/agile-project-metrics

Thursday, October 22, 2009

Metrics in an Agile World

James Shore and Rob Myers presented a talk on “Metrics in An Agile World” as part of the Agile 2009 conference. The talk starts out by covering informational vs. motivational metrics. Motivational metrics are extrinsic motivators (such as bonus, pay increase) and extrinsic motivators lead to poor performance when compared to intrinsic motivators (interesting work, appreciation, recognition, and good environment). Several examples are given as proof that extrinsic motivators lead to poor performance.

1. Children were given magic markers and told to draw. One group was given a certificate when they finished while another group was not rewarded. The study showed that the group that got the certificate lost interest in drawing, while the other group kept on doing it because it was fun.

2. Two groups were asked to identify matching patterns. One group was paid for it while the other was not. The group that was paid made more mistakes than the other group.

3. Two groups were given a puzzle-like problem to solve in 2 steps with a break in between. During the break, the group that was getting paid simply waited around, while the other group kept on discussing and trying to solve the puzzle.

When performance is based on reward, other areas which are not being rewarded will suffer because everybody is trying to maximize the reward and will pay less attention to other items. As an example, if the number of features is rewarded then bugs will increase as developers work quickly to add new features and worry less about testing. Because we cannot measure everything, there will always be an area that suffers, be it fewer features, more bugs, increased technical debt, padded estimates, etc.

They then talked about creating a framework for metrics covering several categories such as Quality, Value, Progress, Team Performance, and Code Design. Within each category, each metric is identified as Qualitative or Quantitative. A few examples of metrics are customer satisfaction, user adoption, billable hours, function points, SLOC, spog, cycle time, # of defects, etc. Each is then categorized and analyzed with the goal being to avoid motivational metrics and collect informational ones. One technique discussed involves making the data anonymous and measuring across groups of peers as opposed to on an individual basis. This will change a metric from motivational to informational.

This presentation is available on InfoQ at http://www.infoq.com/presentations/agile-metrics

Sunday, October 18, 2009

From Good to Great Developer

At Jfocus, Chris Hedgate gave a talk about great developers. Chris warns that a good developer can be fast at coding and solving problems, but in the long run he will slow the team down due to unmaintainable code. Great developers are passionate about code and make sure the code they write does not rot over time.

Chris stresses that there needs to be awareness of the benefits of design patterns, TDD, refactoring, readable code. Total cost of software development is development plus maintenance. Maintenance starts as soon as we write the code instead of at the end after it is delivered. This way productivity is maintained at the same pace instead of declining. Existing code must be continuously maintained and all code must be easy to understand and modify.

Next Chris defines habit as the intersection of: Skills (how), knowledge (why), desire (want to). Chris describes the 4 stages of competence:

1. Unconsciously incompetent (we don’t know that we don’t know, we don’t know that we can’t, and we don’t see a need)

2. Consciously incompetent (we see something we do not know and we see a benefit from it. We can learn and get better at it).

3. Consciously competent: We work by ourselves; we can perform and can teach.

4. Unconsciously competent: habits are achieved, it’s automatic.

We train with books, lectures, and courses. However, if people are at the unconsciously incompetent stage, they first need to be inspired and moved to consciously incompetent.

Chris suggests that to move people from good to great, we need to inspire people by leading by example and using simple tools, and involve people so that they move along with you from good to great. There needs to be:

1. Attitude: simplify, improve, modify, never leave code in worse shape than you found it.

2. Tools: name things (variables, methods), eliminate duplication, pair programming, study circles, reflect and discuss, code reviews.

Chris wraps up by recommending we think about how to guide the team. Lead by example. Be a role model. If the team gets better, productivity will not only remain steady but increase over time.

Saturday, October 10, 2009

Turning on a Sixpence – No Excuses: Concept To Cash Every Week

At ThoughtWorks conference, Kris Lander and Gus Power gave a talk about their agile process and practices. They share with us their experience and stress that they are not telling us how to do things but just showing us how they do things. Below is a list of their practices:
1. Sustainable throughput – follow the money and ship every iteration to production. Show financials outside bullpen. Profit vs. return, cost per unit vs. units delivered, capacity cost vs. velocity, rework completed vs. rework queues, flow. Minimize work in process, Avoid inventory, Keep it visible with $ value.

2. 1 week iteration – focus relentlessly on delivering profitable features every week: Surfaces waste, flushes out breaks in value stream, start thinking about done straight away, regulates pressure over longer periods, start on Wednesdays.

3. Automated deployment – maintain a 1-click deploy that requires no manual intervention: Automated deployment to all sites (Test, demo, CI, prod), Robust and repeatable, A/B legs; no down time, used Hudson and Gant, Used separate queue, Automated schema/data migration, Checked deployed version.

4. Iterate to get something out there – enrich the functional depth of features iteratively: Business learns what it needs, cheapest solution first.

5. Dealing with bugs – fix bugs as soon as they are discovered: No bug backlog and no tracking tool, the onsite customer says what’s really a bug; if within the slice, interrupt the pair and fix it in the story; if outside the story, write a pink card with visible priority at the top of the board.

6. Managing technical debt – don’t get into debt. If you do, repay some of it every iteration: Team decision, confront value fetishists, blue card always visible with $ value.

7. TDD – working from outside in, drive with tests to keep code habitable, testable and maneuverable: Low cost of change, maintain options with succession, do the valuable refactoring, acceptance criteria create common language, executable acceptance tests. Use jsunit, junit, selenium, integration, load. Use different groovy testing patterns.

8. Continuous feedback – Develop stories in vertical slices to create opportunities for feedback: tester does exploratory testing, onsite customer steers feature evolution, interaction designer sees feature evolve, follow the dots to done.

The Agile PMO: Real Time Governance

At ThoughtWorks technology briefings, Ross Petit and Jane Robarts gave a talk on governance. They start out by mentioning that governance appears to be there to prevent bad more than for actually doing good. It’s to make sure we don’t fail to meet expectations. Today projects have multiple vendors, stacks, and stakeholders. Every project has an element of system integration. There is a lot of discovery and hundreds of decisions being taken. We need to make sure all these decisions are taken in a consistent manner and aligned with our goal. A study shows that well governed organizations outperformed poorly governed ones by 8.5% annually. They face the same kind of market and competitor risks as everybody else, but the chance of an implosion caused by ineffective management is far less.

Next they describe IT governance and how it is usually put in terms of technology, assets, budgets, or planning. But it really comes down to behavior. IT is a people business, so governance should be about how people go about executing. We should answer 2 questions. Are we getting value for money and are we getting things in accordance with expectation (regulatory compliance, business control, security, performance, functionality, etc)

They then describe the dual role of the PMO. To delivery teams, the PMO represents the buyers and the funders. They are the buyer’s agent of an IT asset. To a buyer, the PMO represents delivery. They are underwriting the risk that the asset will be developed and complete. Most PMOs focus on one role, but not both.

Next they describe how businesses and IT are not known for good governance. Some examples in business are the A380, mortgage failures, and the Madoff Ponzi scheme. IT examples include projects that are on time and on budget but do not meet client requirements, or a hidden system integration project inside an overall delivery project. To be better at governance, we need to be activist investors and not passive ones. We need to be aware that effort is not result, and the best time to scrutinize is before the wheels fall off. We need the right information and we need it in the right context. There is an asset context and a people context. Then we create a lot of transparency and we can work on problems relentlessly.

They describe a continuous governance cycle of IT projects to answer the 2 governance questions: Are we getting value for our money and are we getting things according to expectation. 1st we get the information that we need, and then we can feed a continuous cycle of challenging what it is that we are doing and what it is that we are getting.



Friday, October 2, 2009

When it Just Has to Work

At Agile 2009, Brian Shoemaker and Nancy Van Schooenderwoert gave a presentation about safety critical agile projects. They start out by describing how software can contribute to poor safety, as in chemical plants, power stations, aviation systems, and medical devices. Many think the solution is to do things in a controlled and sequential manner. In traditional planning, there is a lot of emphasis on upfront planning. In agile you have to have a sense of direction. A product backlog has stories, which are requirements that are still negotiable.

Next they discuss some benefits of Agile. It has 2 great strengths: fast time to market and the ability to hit a moving target (tracer bullets). Agile teams bring the certainty of project costs forward in time. In traditional development, code gets more brittle with time. More effort is needed to create the same feature later. In general, velocity goes down with time as the code base gets more and more complicated (non-linear effort vs. results). In agile, if we finish off what we commit to in every iteration (delivering small features), we can keep the effort vs. results curve almost linear. If we want to estimate the future based on past performance, a conservative estimate is to use the lowest velocity of the team over the past couple of iterations. Lastly, agile development takes away the hiding of bad news.

They then cover how risk management benefits from iteration. With iterations, we analyze risk early and often. Requirements and hazards converge when we have positive stories and negative stories prioritized in the product backlog. Hazards are often caught in context. Analysis can be done using a classical risk ranking matrix of probability (high, occasional, low, remote) and severity (major, moderate, minor). Acceptability is ranked as Unacceptable (mitigation required), As low as reasonably practical (mitigate as reasonable; the risk decision must be documented and reviewed), and Negligible (acceptable without review). Another technique is fault tree analysis, which tracks what bad thing can happen: you start with the effect and decide what could be the cause. Another approach is failure mode, effects, and criticality analysis, where you build up from components and decide what can fail in each component.
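To make the ranking idea concrete, here is a small illustrative sketch (mine, not from the talk) of how such a probability-by-severity matrix could be encoded; the enum names come from the levels above, but the scoring and cut-offs are assumptions chosen only for the example.

    // Illustrative risk ranking: probability x severity -> acceptability.
    // The scoring and thresholds below are made up for the example; a real
    // device project would define its own.
    enum Probability { HIGH, OCCASIONAL, LOW, REMOTE }
    enum Severity { MAJOR, MODERATE, MINOR }
    enum Acceptability { UNACCEPTABLE, ALARP, NEGLIGIBLE } // ALARP = as low as reasonably practical

    public class RiskMatrix {
        static Acceptability rank(Probability p, Severity s) {
            // Higher score means higher risk: 1 (remote/minor) .. 12 (high/major).
            int score = (4 - p.ordinal()) * (3 - s.ordinal());
            if (score >= 8) return Acceptability.UNACCEPTABLE; // mitigation required
            if (score >= 3) return Acceptability.ALARP;        // mitigate as reasonable, document and review
            return Acceptability.NEGLIGIBLE;                   // acceptable without review
        }

        public static void main(String[] args) {
            System.out.println(rank(Probability.HIGH, Severity.MAJOR));   // UNACCEPTABLE
            System.out.println(rank(Probability.LOW, Severity.MODERATE)); // ALARP
            System.out.println(rank(Probability.REMOTE, Severity.MINOR)); // NEGLIGIBLE
        }
    }

The point is only that the ranking can be made explicit and repeatable; the actual levels and mitigation rules come from the project's own hazard analysis.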

Next they describe 5 types of failure:

1. Direct failure: a software flaw in normal, correct use of the system causes or permits an incorrect dosage or energy to be delivered to the patient.

2. Permitted misuse: software does not reject or prevent entry of data in a way that a) is incorrect according to user instructions, and b) can result in incorrect calculation or logic, and consequent life-threatening or damaging therapeutic action.

Tuesday, September 15, 2009

Deliberate Practice in Software Development

At Agile 2009, Mary Poppendieck gave a talk on the theory behind craftsmanship. She starts out by giving some stats on hockey players from the book Outliers. 40% of elite Canadian hockey players were born between January and March, 30% April – June, 20% July – September, 10% October – December. The cut-off date is the end of the calendar year, so the ones born in December are at a disadvantage since they will be the youngest. The same applies to European football players.

Mary then defines talent (nature) as abilities, powers, and gifts bestowed upon a person; natural endowments; general intelligence or mental power. Also, talent (nurture) is a special innate or developed aptitude for an activity. Talent is overrated. The most accomplished people need around ten years of deliberate practice before becoming world class (ten-year rule).

Next Mary defines deliberate practice: Identify a skill that needs improvement, devise a focused exercise designed to improve the skill, practice repeatedly, obtain immediate feedback and adjust accordingly, focus on pushing the limits expecting repeated failures, practice regularly and intensely up to 3 hours a day.

Then Mary elaborates on the four elements of deliberate practice:

1. Mentor: masters, teachers, deeply knowledgeable and involved. They hire and grow people, review and guide work, set technical standards, and ensure technical excellence.

2. Challenge: do it frequently and it will become easy, test/integrate/release early and often, change the code frequently, assign challenging work assignment, focus on constant improvement of the product, process, and people. Don’t get comfortable, get better.

3. Feedback: Immediate feedback: Design reviews with mentors providing guidance, common code ownership with peers providing visibility, TDD, CI, customer feedback at every iteration, escaped defect feedback and analysis. Focus on what’s not perfect so it can be improved. You have to ask constantly “why are we doing this?” Bad news first!

4. Dedication: skill development takes time in place, time to learn, and time to invent. Also need motivation, and career paths for technical experts.

Mary wraps up by stressing the importance of clean code and going over the Software Craftsmanship manifesto. We value not only working software but also well crafted software, not only responding to change, but also steadily adding value, not only individuals and interactions, but also a community of professionals, not only customer collaboration but also productive partnerships.

This presentation is available on InfoQ at  http://www.infoq.com/presentations/poppendieck-deliberate-practice-in-software-development

Integration Tests are a Scam

At Agile 2009, Joe Rainsberger gave a talk about integration tests. Joe states that integration tests (end to end) are a scam. He defines an integration test as any test where the result depends on the correctness of more than one interesting behavior (multiple objects or multiple unrelated methods, each complex enough to be tested on their own).

Next Joe covers some basic system object classifications from Domain Driven Design:

1. Values: Simple, like a string or number; transient; equality depends on value rather than identity; lightweight enough to be created when needed and thrown away when not.

2. Entities: values with entity semantics; they have an essential identifier and end up getting stored somewhere; equal if their ids are equal.

3. Services: Stateless action of some kind.

Joe explains that any test that verifies one entity or service behavior, possibly with multiple values involved, is as focused a test as we can expect. Values are our inputs and values are our outputs. We do not want to test multiple entity or service behaviors at the same time, as this will lead to a long running test suite. That results in slow feedback, which eventually leads to no feedback since we stop running the tests. The tests lose their power as a pool of change detectors. The later a mistake is found, the more costly it becomes. This also leads to a false sense of security. Over a long running time, the value of having the tests is roughly the same as not having tests at all.

Tests consist of arrange (put together the things you want to test), act (invoke what you want to test), and assert (check the result). Next Joe discusses the problems with the excessive test setup that is usually needed for integration tests. Due to all the setup, we end up having multiple asserts per test or, even worse, multiple actions per test. Alternatively, we copy/paste setup into multiple tests. The hard coding of dependencies makes it impossible to check something without executing something else that we do not want to check right now.
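As a rough illustration of that arrange/act/assert shape (my sketch, not an example from the talk), here is a focused JUnit test where a hand-rolled stub stands in for a collaborator so that only one interesting behavior is exercised; all the names are hypothetical.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical service under test: prices an order using a tax-rate collaborator.
    interface TaxRateProvider {
        double rateFor(String region);
    }

    class OrderPricer {
        private final TaxRateProvider taxRates;
        OrderPricer(TaxRateProvider taxRates) { this.taxRates = taxRates; }
        double total(double subtotal, String region) {
            return subtotal * (1 + taxRates.rateFor(region));
        }
    }

    public class OrderPricerTest {
        @Test
        public void addsTheRegionalTaxToTheSubtotal() {
            // Arrange: stub the collaborator so only OrderPricer's behavior is checked
            TaxRateProvider tenPercent = region -> 0.10;
            OrderPricer pricer = new OrderPricer(tenPercent);

            // Act
            double total = pricer.total(100.0, "CA");

            // Assert
            assertEquals(110.0, total, 0.001);
        }
    }

A test like this exercises OrderPricer alone; checking it together with a real tax service and a real database would be the kind of end to end integration test the talk warns about.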

Next Joe explains that when an integration test fails it is not clear where the failure is. Integration tests are also slower than focused tests. The number of tests needed is the product of the number of paths through each layer (for example, 4 * 3 * 5 = 60 tests to be thorough). The number of tests is not exponential but combinatorial. In reality, if we do a really good job we end up with less than 1% coverage. It is like the lottery: when the odds of winning are 1 in 40 million, buying 10000 tickets is not that much better than buying 10. When we need 4 million tests, writing 20000 tests is not better than writing 2000.

Monday, September 14, 2009

I Come to Bury Agile, Not to Praise it

Alistair Cockburn gave a talk at Agile 2009 about software development. He explains that developing software is about people making ideas concrete in an economic context. It is about people inventing, deciding, communicating, solving problems and creating solutions that they don’t yet understand and that keep changing, expressing ideas in primitive languages to an interpreter unforgiving of errors, and making decisions with limited resources where every choice has economic consequences. All of engineering and team design patterns kind of fall into this definition.

Next Alistair describes one pillar of software development as a cooperative game. Games have positions, moves, and strategies. Some are competitive, like chess, which is finite and goal-directed, or poker, which is open-ended. Others are cooperative, like rock climbing, which is finite and goal-directed, or jazz and music, which are open ended games. Some, like organizational survival and career management, fall in the middle; these are infinite games.

He explains that software development is a cooperative, finite, goal-oriented game (the funding model): one round is a deployment of the system. An IT system is a cooperative, open ended game. Systems live until we get tired of them, retire them, rebuild them or buy a commercial system. Product line management maps to infinite games. After our release 3.0, a competitor comes out with something new, and we follow our release with a .x release just to stay in the game, and so on.

The game has two conflicting goals: to deliver the software and to set up for the next game by refactoring the architecture, adding tests, making sure juniors are ready to lead, adding documentation, etc. Both are competing for finite resources. Only three moves are allowed: invent, decide, and communicate. Also, the situations (almost) never repeat. The strategy that worked on the last project, odds are it might not work on the next project. We have to always be alert.

Next Alistair states that we have to adapt to our situation based on the number of people to coordinate and the criticality of the application. Communication and control structures change, and strategies, setup, and conventions need to change accordingly. Face to face is the most effective form of communication. When questions and answers are possible, the best form of communication is people at a whiteboard, then people on the phone, then people on chat. If there is no question and answer, then the best is videotape, then paper.

Alistair describes distance as expensive. If we assume that 2 programmers pairing and communicating once every 20 minutes cost nothing, then 12 people on the same floor but in different rooms carry a 100K/yr penalty in communication speed, and 12 people on different floors carry a 300K/yr penalty. People issues determine a project’s speed. Set up a project so people can detect when something is not right. They need to care enough to do something about it and effectively pass along the information. The speed of the project is the speed at which ideas move between minds (attitude and mechanics for communication).

Thursday, August 20, 2009

Performance Tuning for Apache Tomcat

At SpringOne 2009, Mark Thomas gave a talk about Tomcat configuration. Mark mentions that 80% of performance problems will be in the application rather than in Tomcat. Having said that, Mark shares some Tomcat optimizations:

1. Logging: Turn off logging to standard out and only log to a file. Add rotation by limiting file size and # of files.

2. Connector Tuning: depends on the application, the client (web or mobile), TCP, HTTP keep-alive, and SSL. Keep-alive uses one TCP connection for multiple HTTP requests. Modern pages use over 100 requests per page. Blocking IO connectors use a thread per connection. For sticky sessions, a layer 4 load balancer uses client ip and port (not always available when ISPs proxy user requests); a layer 7 load balancer understands HTTP headers, and Tomcat puts markers for the sticky session in the header. (A small sketch showing these connector settings appears after this list.)

a. BIO – Java blocking connector. Stable (use as default)

b. Native (APR) – non blocking, OpenSSL (use when you need SSL or high concurrency with keep-alive). Not stable on Solaris, so use NIO there instead.

c. Java non blocking IO – JSSE based SSL

Tuning:

a. MaxThreads: maximum number of concurrent requests (in BIO, it is the max number of open connections). Set to 200 - 800. Start with 400 and increase if CPU usage is low, or decrease if it is high.

b. MaxKeepAliveRequests: Use 1 (turn off) or 100. Used to clean up and close unused requests. Disable for BIO with high concurrency, layer 4 load balancer, no ssl. Enable for ssl, APR/NIO, layer 7 load balancer. Disable if using httpd and have it on same box as tomcat

c. ConnectionTimeOut. Typical value is 3000 (3 seconds). Default of 20000 (20s) is too high. Increase for slow clients or layer 7 load balancer with connection pool and keep alive.

3. Content Cache: caches static content. Configured with CacheMaxSize in KB (10240), CacheTTL in ms (5000), CacheMaxFileSize in KB (512).


4. JVM:

a. -Xms/-Xmx used to define the size of the Java heap. Aim to set it as low as possible to leave enough space for all other processes. Setting it too high can also lead to very long GC cycles. If the app needs 200M, set it to 250 or 300M.

b. -XX:NewSize/-XX:NewRatio set to 25%-33% of total Java heap size. No need to change, but if you need to go into these details, prefer NewRatio over NewSize.

c. -XX:MaxGCPauseMillis / -XX:MaxGCMinorPauseMillis. The goal is to end up with more frequent, shorter GC pauses.

Reference: http://blog.sun.com/watt/resource/jvm-options-list.htm

5. Scaling: State management and failover. Set up the cluster in httpd.conf, enable the manager in httpd.conf, add sticky sessions in Tomcat by adding a unique name to jvmRoute in server.xml, then add that route name to the httpd balancer member and add the sticky session to the proxy pass. As for session replication, most of the time the cost of setting it up, coding for it and maintaining it is not worth it.
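For illustration only, here is a rough sketch of the connector settings from point 2 above expressed with the embedded Tomcat API; the talk itself is about the equivalent attributes on the Connector element in conf/server.xml, and the values below are just the suggested starting points discussed above, not recommendations.

    import org.apache.catalina.LifecycleException;
    import org.apache.catalina.connector.Connector;
    import org.apache.catalina.startup.Tomcat;

    public class TunedTomcat {
        public static void main(String[] args) throws LifecycleException {
            Tomcat tomcat = new Tomcat();

            Connector connector = new Connector("HTTP/1.1");
            connector.setPort(8080);
            connector.setProperty("maxThreads", "400");           // start at 400, adjust by CPU usage
            connector.setProperty("maxKeepAliveRequests", "100"); // or "1" to disable keep-alive
            connector.setProperty("connectionTimeout", "3000");   // 3 seconds instead of the 20s default

            tomcat.getService().addConnector(connector);
            tomcat.start();
            tomcat.getServer().await();
        }
    }

In a standalone install the same names (maxThreads, maxKeepAliveRequests, connectionTimeout) go directly on the Connector element; the sketch only shows where the tuning knobs live.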

Mark wraps up with some hints. He recommends setting up clustering in development. He also recommends using 3 nodes instead of 2 to test load balancing and node failure, to make sure the load of a failed node is distributed equally across the remaining nodes. Mark also notes that redeployment can lead to memory leaks.


This presentation is available on InfoQ at http://www.infoq.com/presentations/Tuning-Tomcat-Mark-Thomas

Monday, August 17, 2009

Navigating the Rapids - Real World Lessons in Adopting Agile

Sam Newman talks about lessons learned while transitioning clients to agile practices. He 1st recalls a story where he failed to educate stakeholders upfront that he would share with them the good news and the bad news on a regular basis, in the hope that they could step in and help with the bad news when needed. Next he discusses how, when adopting agile, productivity 1st goes down as the team adapts to the new practices. Patience is needed before there is a clear sign of productivity increase. Finally he describes how different clients prefer to see data in different ways.

He summarizes the lessons learned as:

1. Educate stakeholders about early signs of pain

2. Bite off something small – Earn trust early: Need to show that things are better (hard numbers, metrics).

3. Track your data and know how to present it

This presentation is available on InfoQ at  http://www.infoq.com/presentations/navigating-agile-rapids

Saturday, August 15, 2009

The Dancing Agile Elephant: IBM Software Group’s Transition to Agile and Lean Development

At QCON San Francisco 2008, Sue McKinney gave a presentation on IBM’s agile adoption strategy. She started out by giving an overview of some of the business and operational challenges at IBM. These include innovating the business to differentiate and capture new value, heightened responsiveness and closer linkage to customers, and improved time to value. On the operations side, issues included better workload management, improved quality, improved project development cycle times, improved predictability on schedule, and making better use of resources to be more productive.

Next Sue described the IBM environment. IBM has a global team, with different companies brought together by acquisition. There are 500 development teams in 5 divisions. Teams are as large as 800 or as small as 20. Very few teams are collocated. They are highly security conscious. They use many tools (due to acquisitions), and many platforms.

Sue describes how IBM's software development transformation went from waterfall in the 1980s (rigid, late feedback, slow reaction to market changes) to iterative (customized RUP, community source and component reuse, emphasis on consumability), to agile (global reach, SOA, agile practices, outside-in development, tools and not rules, continuous learning and adaptive planning).

Sue mentions how she sold agile as a way to deal with uncertainty, and to respond to changing markets. Agile gives them transparency to check where things are going and enables them to take a course correction if needed by meeting with stakeholders and deciding what change to make. She kept processes to deal with complexity (team size, geo distribution, compliance or regulatory issues) and move from lightweight to things that are more thorough and allow for traceability.

Next Sue lists some things to consider before getting started. These include getting management support, picking strong experienced leaders, picking the right project as a proof point (project with high visibility and some risk), providing the right education, tooling and governance, ability to allow change to occur, and keeping it simple.

Sue defined Agile as short time boxed iterations with stakeholder feedback. This creates automatic constraints (iteration length) that force the team to find defects earlier and address them. The team becomes more responsive and there is always a sense of urgency. There is also a notion of transparency with demos at the end of each iteration. The constraints force teams to optimize and eliminate waste (automate). Stakeholder feedback causes us to focus on the essentials.

The initial project was a team of 200 people over 4 sites with 1 week iterations. For the 1st 2 months, they put together infrastructure to enable the team (test case harnesses, continuous integration, static code analysis), then the 1 week iterations began. The software was published for regular consumption and received regular feedback. For sustainability (220 out of 500 teams are using agile), they created a set of practices for distributed development:

Thursday, July 30, 2009

Realistic About Risk: Software Development with Real Options

At QCON London 2008, Olav Maassen and Chris Matts gave a talk about Real Options. They started out with a quick exercise where the audience was divided up into teams and each team had to write down who they thought would finish task A 1st and who would finish task B 1st. Then they gave detailed instructions for each task and asked the teams to complete them. At the end, none of the teams picked the correct winner, whereas both Olav and Chris picked correctly. The point of the exercise was that Olav and Chris made their decisions later, whereas the teams made their decisions early, even before knowing what the tasks were.

They then compared the risk profile of an agile project (many short increments) vs. the risk profile of waterfall project (one long increment). People who are risk averse prefer the waterfall model even though the agile risk profile appears safer.

Next they explain real options, which is an approach that allows optimal decisions within the current context. It has two aspects: math and psychology. Using math to price real options is hard, but the results of the math tell us that:

1. Options have value

2. Options expire

3. Never commit early unless you know why

When making a decision, you can be right, wrong, or uncertain. People hate uncertainty so much that they would rather be wrong than uncertain. From a psychological aspect, rather than saying don’t make the decision now, say let’s make the decision under these conditions and circumstances (postponed till a future date): “let’s do it when…”

They wrap up by showing how real options apply in IT. No big upfront design, and defer decisions to the last responsible moment. Pair programming gives you options by sharing knowledge and making you less dependent on one person. Also, by assigning juniors to projects 1st, this leaves seniors (valuable skills) free to coach and mentor, with an option to work on emergency or high priority projects that come up later. An MS Project plan gives us the zero probability date. Before that date there is no chance of delivering early. On that date you have zero percent probability of delivering, and then the distribution builds from there. A study shows that to get 90% probability you need to multiply the given estimates by 4.

This presentation is available on InfoQ at  http://www.infoq.com/presentations/software-with-real-options

Sunday, July 26, 2009

Coaching and Scaling Agility

At QCON San Francisco 2008, David Hussman gave a presentation on coaching. He defines coaching as helping plan products, helping with iterative delivery, helping tune and improve, and helping to build community. Coaching is about guiding people from how to why: Shu, Ha, Ri, or practice, improvise, and evolve. Coaching gigs and styles vary greatly. They can be prescriptive (this is what you should do) or descriptive (this is what I have seen work).

When coaching large communities, it is important to understand that there is no recipe. Each community is unique. David recommends coaching respectful change, where change must happen with people and not to them. Try to understand who people are, how they work right now, what’s working for them, and what motivates them to change. Try to find some set of practices, a way to bond as a community, some way to talk about products, some way to deliver it, and some way to tune it.

Then David talks about providing real education by building a library, providing pragmatic (need to try and experience things) training, coaching classes, facilitating training, and teaching TDD-refactoring.

David talks about chartering which covers who is involved, what we are trying to get done, what are the risks, timeline, etc… David uses chartering to find common goals and build a collective groove. Here is what we are trying to do, here is how we know when we are done, this is who is building it, etc...

Next David discusses scaling core practices. He recommends creating pragmatic product roadmaps (3, 6, 12, 18 months), pairing beyond programming (business person, testing person, and development person), radiating information, making issues visible, and promoting improvisation.

Then David describes two situations: many teams with many products, or many teams on one product. Teams need to be working on cross cutting concerns. Also we need to build customer communities composed of people that have interest, passion, knowledge, and some decision making capability.

Lastly, David discusses issues with large distributed communities. Conference calls are NOT just like being there. One option is building whole sub-teams around common goals and common understandings.

This presentation is available on InfoQ at http://www.infoq.com/presentations/coaching-scaling-agility

Monday, July 20, 2009

Behavior Driven Development

At QCON 2008, Dan North gave a talk about BDD. Dan starts by stating that projects fail because the products come in too late, cost too much, do the wrong thing, are unstable in production and crash every day, break the rules, or the code is impossible to work with.

Next Dan compares the bottom up and top down approaches to delivering software. Frederick Taylor’s scientific management theory kind of says that people are lazy and stupid, and as a result we need to make their work simple and we need to constantly monitor them. We need to separate work from management. The top down process of delivering software is based on the premise that people downstream are increasingly pluggable and prone to make mistakes, so we need to front load our process with all the smart stuff so there is less risk of something going wrong as we go further along. Since the price of a defect increases the later it is discovered, we need big upfront design (BUFD). Plans, requirements specs, functional designs, detailed designs, and test plans are hedging against the exponential cost of change, but this in turn creates it, and thus creates a reinforcing loop. There is no single cause and effect; they both cause one another.

Meanwhile Deming’s premise is that people generally want to do a good job and take pride in their work and they respond well to an encouraging environment. Bottom up process builds business objects to represent business domain and then produces an architecture to wire them together.

Next Dan shares statistics that say that 60% of features are never or rarely used, 30% are sometimes used, and 10% to 15% are often or always used.

Then Dan explains that Behavior Driven Development is about implementing an application by describing it from the point of view of its stakeholders. Only focus on high-value features, flatten the cost of change of anything at any stage, prioritize often, change often, adapt to feedback and continuously learn. Pull requirements as needed, evolve the design, write code that can change, integrate code frequently, and run regression tests frequently.

BDD is derived from XP (TDD, CI), Domain Driven Design, Acceptance Test Driven Planning (estimation based on building one scenario on top of another), Neurolinguistic Programming (NLP), and systems thinking.



BDD is getting the words right, enough is enough (more is waste while less is irresponsible), agreeing on DONE, outside-in (a more systemic approach), and interactions (a series of interactions between people with different skills, with software as an output of these interactions). People over process.
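As a hedged illustration of the “getting the words right” idea (my sketch, not Dan’s example), a behaviour can be written as an executable example in stakeholder language; here is a minimal JUnit version with a given/when/then shape and hypothetical domain names.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical scenario: "Given an account with a balance of 100,
    // when the holder withdraws 30, then the remaining balance should be 70."
    public class AccountWithdrawalBehaviour {

        static class Account {
            private int balance;
            Account(int openingBalance) { this.balance = openingBalance; }
            void withdraw(int amount) { this.balance -= amount; }
            int balance() { return balance; }
        }

        @Test
        public void shouldReduceTheBalanceWhenTheHolderWithdrawsCash() {
            // Given
            Account account = new Account(100);
            // When
            account.withdraw(30);
            // Then
            assertEquals(70, account.balance());
        }
    }

In practice the same wording often lives in plain-text scenario files handled by a BDD tool; the point here is only the sentence structure shared with stakeholders, not the tool.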

Dan next elaborates:

Monday, June 29, 2009

Agilists and Architects: Allies not Adversaries

Martin Fowler and Rebecca Parsons gave a talk at QCON about architects in an agile environment. Architects worry about their jobs and where they fit in agile. Architects try to achieve reuse and documentation and to see what is happening, and agile can help architects by providing:

1. Transparency and visibility into progress and therefore can react in a reasonable way.

2. Up to date specification of functionality through strong emphasis on testing.

3. Results in adaptable code. Reuse after the fact rather than planned. You at least know that it is useful once and then adopt it in other places.

To make it work in agile, we have to look at architects as customers. As a customer, architects need to prioritize decisions and need to specify what they mean and remove ambiguity. Architects also need to participate as team members. Architects need to be able to code. This will give them more information to be able to make adequate decisions. It will increase trust between them and the developers.

This presentation is available on InfoQ at  http://www.infoq.com/presentations/agilists-and-architects

Friday, June 19, 2009

Lean Concepts for IT Professionals

At ThoughtWorks conference, Richard Durnall and Kraig Parkinson gave a talk about Lean Software Development. They start out by giving a brief history of Lean and how it led to the Toyota Production System (TPS). They then describe Lean in detail and cover the 4 levels of concern:

1. Philosophy - long term thinking or challenge. How are we going to accomplish these goals given the constraints: long term vision (looking at 5, 10, 20 years), adaptive planning as we learn more and are faced with new challenges, a process based company with a human focus.

2. Process - steps to take to get there: Pull systems (wait for order before starting), eliminate waste, value streams, Jidoka automation (providing humans with tools to support what they do), Heijunka (even out the work to make a balanced system), visual controls (see where the work is), stop the line (at Toyota the line stops about 27,000 times a day), build in quality.

Eliminate waste (overproduction, waiting, unnecessary transportation, over processing, excess inventory, unnecessary movement of our people, defects, unused employee creativity)

3. People and partners - Respect and work together towards the ultimate goal. Supplier and partner relations (share resources to make suppliers and partners better since they share the same ecosystem), encourage the right behaviors (point everyone in the same org in the same direction), training and personal growth.

4. Problem solving - an in depth look at the real issues. Genchi Genbutsu (go, see and do for yourself, get involved), 5 whys (a root cause analysis tool – ask why 5 times and by the 5th time you know the root cause), 5S (stabilize and standardize what we do as a platform for continuous improvements going forward), Ishikawa (fishbone diagram, a root cause analysis tool), A3 reports (get all the information you can on one side of A3 paper – like an annual financial statement – the intent being that less is more), PDCA (plan, do, check, act cycle).

Wrapping all this together is a sense of continuous improvement (Kaizen).

Next, they look at Lean in IT:

1. IT has a problem - mediocrity: over budget, over schedule, not delivering useful features, projects failing.

2. IT is a business process and we can use Lean techniques to improve it, like a value stream map to measure cycle time efficiency. This involves walking through the process and tracking 3 metrics: value added time (the amount of time spent doing something of value that the customer will appreciate), elapsed time (from start to finish including waiting time), and cumulative elapsed time (overall time from request until delivery). This shows IT waste like extra features, waiting, unnecessary transportation, gold plating, partially completed work, unnecessary movement, defects, and unused employee creativity. One can apply Lean practices like pull systems, eliminating waste, and adaptive planning to improve the cycle. We do not need to fix everything; we need to work on our immediate constraint, and once that is fixed we can move on to the next one. (A small worked example of the efficiency calculation follows this list.)

3. Further engage the business and deliver better results: IT works across business units and can see the process from end to end. Focus on the customer. Realign key management metrics to be throughput focused rather than status focused - what is the profitability, how smooth is the process.

4. Look at other lean organizations and learn from them. Start with the customer, focus on quality not cost (and cost will drop as quality improves), find the change agents and empower them, don’t get trapped in the tool age (concentrate on philosophy, beliefs and values), and remember that anyone can introduce change.
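As a small illustration of the value stream numbers mentioned in point 2 above (my example, not from the talk), cycle time efficiency is simply value added time divided by elapsed time:

    // Illustrative value stream calculation; the figures are made up for the example.
    public class CycleTimeEfficiency {
        public static void main(String[] args) {
            double valueAddedDays = 3.0; // time spent doing work the customer values
            double elapsedDays = 30.0;   // calendar time from request to delivery, waiting included

            double efficiency = valueAddedDays / elapsedDays;
            System.out.printf("Cycle time efficiency: %.0f%%%n", efficiency * 100); // prints 10%
        }
    }

A low ratio like this suggests most of the elapsed time is spent waiting, which points at the immediate constraint to work on first.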

This presentation is available on InfoQ at http://www.infoq.com/presentations/durnall-parkinson-thoughtworks-lean-it

Tuesday, June 16, 2009

Just You Wait

Kent Beck gave a talk at QCON about current trends and where they are going. He covers these trends in the following themes:

1. Communication: Information sharing (twitter), information collecting (logs, recordings), more frequent releases

2. Simplification: flat data (Amazon SimpleDB, Google BigTable), data parallel (Hadoop, MapReduce), screen-less computing

3. Unintended consequences: Energy usage (small devices, sustainability), privacy (privacy is going away)

4. Disappearing: “Free” or deferred revenue models (ads are out, we need to pay for things we find valuable), reuse, status

5. New Approaches: design (good design valuable to enable frequent releases), tests (automated, need to catch mistakes early)

Kent wraps up by asking what have I accomplished in the past years and what will I accomplish in the years to come?

This presentation is available on InfoQ at http://www.infoq.com/presentations/just-you-wait

Sunday, June 14, 2009

Crafting an Agile Adoption Strategy

Amr Elssamadisy gave a talk at QCON 2008 about agile adoption strategies. He started out by asking what is valuable. Is it being agile, design, requirements, time to market, user satisfaction, meetings? Value is different based on the context. It is what you want to get out of it. It should be your goal for adopting an agile practice. This is how you will measure your success.

Next Amr talks about a learning cycle of 1) goal, 2) process, 3) test, stop, and learn; apply the lessons learned, and repeat until success. He then simplifies agile software development to recognizing and responding to change (learning, requirements, team structure, new practices), feedback and communication practices for the team (iterations, stand up meetings, retrospectives), and technical practices for developers (TDD, refactoring, pair programming).

Next Amr gives examples of some business values: reduce time to market, increase quality, increase flexibility, increase product utility, increase visibility, reduce cost, increase product lifetime. This is followed by examples of some smells: us vs. them, the customer asks for everything, direct input from the customer is unrealistic, management is surprised, bottlenecked resources, churning projects, hundreds of bugs, a hardening phase is needed.

Each of these problems is addressed by a different set of practices or patterns. A pattern is a problem-solution pair within a context.

Amr then goes through some patterns:

1. Decrease time to market: apply iterations. For that we need a product backlog and a well-defined definition of done. The backlog improves product utility and increases visibility by enabling high quality feedback as the goals are met and reviewed regularly. It decreases time to market because the prioritized list enables negotiation of scope and an early release with the most valuable functionality. (A minimal backlog sketch follows after these patterns.)

  • a. Backlog context: You are on a team that decides to adopt agile, including iterations. You decide to move away from big upfront design. The team has the needed expertise to expand and evolve requirements.
  • b. Backlog adoption: The customer fleshes out coarse-grained requirements ahead of the iteration planning meeting. At the beginning of each iteration, the team should have enough info to roughly estimate and begin development. Items chosen for development make up the iteration backlog.
  • c. Backlog smells: Estimation paralysis, techies take over, multiple non-cross-functional teams have trouble working from one backlog

2. Increase quality: implement test-driven development, which needs refactoring, which in turn needs collective code ownership and automated developer tests (test-first or test-last development). Refactoring increases flexibility and the product lifetime by enabling and encouraging developers to change the design of the system as needed. Time to market and costs are reduced because continuous refactoring keeps the design from degrading over time, ensuring that the code is easy to understand, maintain, and change.

  • a. Refactoring context: The development team is practicing automated developer tests. A requirement is not well supported by the current design, or you want to make the design cleaner.
  • b. Refactoring adoption: Just do it, keeping in mind that it is a practice, not a tool. Start with automated developer tests until you are comfortable writing tests for all of the code. Adopt team code ownership. Agree on how to handle broken tests that result from refactoring. Read Martin Fowler's book on refactoring and a TDD book that is exercise driven.
  • c. Refactoring smells: Don't get carried away and refactor just for the sake of refactoring; it does not deliver direct business value. Many missed small refactorings build up over time, causing the need for a large refactoring. Beware of code ownership issues and pride that might lead to “commit wars”.
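
To make the backlog pattern (items a-c under pattern 1) concrete, here is a minimal sketch of my own (not from the talk); the item names, priorities, and capacity are invented for illustration:

    # Each backlog item: (priority, rough estimate in points, title). Lower priority number = more valuable.
    product_backlog = [
        (1, 5, "search pizza places near me"),
        (2, 3, "filter by price"),
        (3, 8, "write a review"),
        (4, 2, "share a review"),
    ]

    def plan_iteration(backlog, capacity_points):
        """Pull the highest-priority items that still fit the team's capacity for this iteration."""
        iteration, remaining = [], []
        for item in sorted(backlog):
            _, estimate, _ = item
            if estimate <= capacity_points:
                iteration.append(item)
                capacity_points -= estimate
            else:
                remaining.append(item)
        return iteration, remaining

    iteration_backlog, product_backlog = plan_iteration(product_backlog, capacity_points=9)
    print([title for _, _, title in iteration_backlog])
    # -> ['search pizza places near me', 'filter by price']

The point of the prioritized list is exactly this negotiation: when capacity runs out, the lowest-value items are the ones that wait for a later iteration.
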
This presentation is available on InfoQ at  http://www.infoq.com/presentations/adopting-agile-practices

Sunday, June 7, 2009

Responsive Design

Kent Beck gave a talk at QCON 2008 about Responsive Design. Kent starts out by defining design as beneficially relating elements. Design has elements. The elements have relationships, and the relationships can be beneficial or not. He states that there should be an ongoing investment in design. Design should happen at the right moment to enable a steady flow of features.

Kent moves on to briefly cover some design values, patterns, principles, strategies, refactorings, successions, and data that help achieve the goal of a steady flow of features.

1. Values: simplicity, feedback, community

2. Patterns: most decisions are not based on the domain, but on dealing with a computer. Having access to a wide library of patterns makes you more effective (vocabulary, efficiency). Do not waste time on originality for problems that do not require it.

3. Principles: There are universal principles like don’t repeat yourself, and then there are more specific ones.

4. Strategies: Move your design in safe steps.

  • a. Leap: when the change is small and safe enough, take the leap in a single step.
  • b. Parallel: Operate 2 designs in parallel for a while.
  • c. Stepping Stone: If you cannot do something in a safe step, build a little component or service to help make progress towards a safe step.
  • d. Simplification: Solve a simplified version first without any constraints, then slowly add constraints and continue to solve the problem.

5. Refactoring:

  • a. Bidirectional: extract method and inline method, extract component and inline component. All refactorings are bi-directional (see the sketch after this list).
  • b. Fluid: It is not a single jump from here to there, but rather a sequence of steps.
  • c. First class: Refactorings are first class.

6. Succession: important sequences of design changes that happen over and over again. For example, if you know you will need to deal with n elements, deal with one element now and then transform the design to deal with n elements when you need to (the sketch after this list shows such a step).

7. Data: understand metrics patterns to justify advice.
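
To make 5a and 6 concrete, here is a small illustration of my own (not from the talk): an extract-method refactoring, a note on its inverse (inline method), and a succession step from handling one element to handling n:

    # Before: a report that handles exactly one order (the "deal with one element now" step).
    def format_order(order):
        return f"{order['item']}: ${order['price']:.2f}"

    # Extract method: pull the price formatting into its own function.
    # The inverse refactoring (inline method) would simply put this body back at the call site.
    def format_price(price):
        return f"${price:.2f}"

    def format_order_v2(order):
        return f"{order['item']}: {format_price(order['price'])}"

    # Succession: when n elements are actually needed, transform the design to handle a list.
    def format_orders(orders):
        return "\n".join(format_order_v2(o) for o in orders)

    print(format_orders([{"item": "pizza", "price": 12.5}, {"item": "salad", "price": 6.0}]))

Each small step keeps the code working, and any of them can be reversed if the new design turns out to be worse.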

Kent concludes by reminding us that the goal is to find a way to continually invest in design to more closely approximate this steady flow of features to create value.

This presentation is available on InfoQ at  http://www.infoq.com/presentations/responsive-design

Saturday, May 30, 2009

Beyond Agile: Cultural Patterns of Software Organizations

At QCON London 2008, Marc Evers and Willem van den Ende gave a talk about cultural patterns. They start out by mentioning that cultural patterns can help make sense of what’s happening: they help us understand the various subcultures that exist in an organization and can predict conflicts. They also help put agile in perspective and help us adapt our change strategy to particular situations. They then cover 6 cultural patterns:

1. Routine – We follow our standard procedures (except when we panic). Bring order to disorder. Management by controlling. Process oriented. Working in a well known context. It is very predictable, but it is difficult to improve productivity.

2. Variable – We do whatever we feel like at the moment. Value craftsmanship, foster innovation. It’s characterized by close cooperation between customers and developers, craftsmanship, and hands-off management. Performance and quality are totally dependent on individuals. It works when starting with a few customers developing custom-built software, until the number of clients increases to 8 or so. Beyond that, organizations switch to routine, but when something goes wrong they fall back to variable. Variable produces fast delivery and a good relationship between customer and developer.

3. Steering – We choose among our routines by the results they produce. Make extraordinary things ordinary. It’s characterized by feedback control, results orientation, trust, and acting early and small; XP and Scrum fit here. Testing and feedback play an important role. You can improve as you go, and it can be lightweight.

Moving towards steering:

a. Mental models: when there is more work to do, the team increases hours per week to try to finish. However, more hours per week lead to fatigue, which results in more defects and thus increases the work to do.

b. Visibility: charts and boards visible to the whole team.

c. Stability: Need to have a stable velocity.



4. Oblivious – We’re not aware that we’re developing software. There is no separation between user and developer; the customer and the developer are the same person. Highly adaptive, highly customer oriented. You always get what you want, as long as you can build it yourself.

5. Anticipating - We establish routines based on our past experience with them. The art of the long view. Pay more attention to long-term planning and to changing your processes. Consciously managing change, process oriented, always improving your processes (“if it ain’t broke, fix it”). Practices include retrospectives, scenario planning, and risk management; Lean software development fits here. It makes the steering culture more predictable through a conscious process of managing change.

6. Congruent – everyone is involved in improving everything all the time. It is a culture of ongoing reflection and improvement.


They see some similarity with CMM where level 0 is Oblivious, 1 Variable, 2 Routine, 3 Steering, 4 Anticipating, and level 5 Congruent. They conclude by stressing that you should find the patterns that fit your context.

This presentation is available on InfoQ at http://www.infoq.com/presentations/beyond-agile

Friday, May 22, 2009

Born to Cycle

Linda Rising gave a talk at QCON London 2008 entitled “Born To Cycle”. This talk is very similar to “Perfection is an unrealistic goal”. Linda does add a couple of interesting experiments:

1. Rats in a maze. During REM sleep rats were transferring information about what they learned about the maze. The next morning the rats navigated the maze and found the cheese.

2. Two groups (humans) were taught a task. The group that took a nap improved more than the group that stayed awake. After a night’s sleep, both groups were at the same level.

3. One group was taught a task and then 6 to 8 hours later, a second task. The next day, the group did not improve on either task.

4. Two groups were taught a task. Both were taught a second task, but one group took a nap in the interim. No improvements were noticed later in the day, but the next morning, the nap group had improved at both tasks.

This presentation is available on InfoQ at http://www.infoq.com/presentations/born-to-cycle

Saturday, May 9, 2009

Agile Mashups

Rachel Davies gave a talk about Agile Mashups at QCON London 2008. Rachel mentions that organizations trying to adopt agile can be confused by the different agile methods (Scrum, XP, Crystal, DSDM, Lean). These methods are simplified sets of ideas and practices, designed to be easy to transmit and understand. They help teams get started, but teams have to fill in the gaps. A lot of teams create their own agile combinations, and a good way to do that is through retrospectives.

Next Rachel goes over some typical agile practices like standup meetings, sprints, user stories, release plans, TDD, velocity, burndown charts, team boards, retrospectives, and continuous integration. Most teams adopt these practices but struggle with pair programming, product increments, and collocation. Often the teams are self-organizing and cross-functional and include 5 to 10 members with at least one tester. The customer role is split between someone who makes priority calls and someone who explains the domain. The agile project manager or Scrum master is responsible for facilitating meetings, shielding the team, and working with the team to remove obstacles. Sometimes they also do project management activities like reporting progress and preparing the road ahead.

Rachel stresses the importance of setting up a visual space to see what the team is working on at the moment. The team gathers around the project board for their daily stand up meeting.

Projects start with an iteration zero that creates the release plan, sets up infrastructure and architecture, and does some initial estimation. The typical cycle is 2 weeks: half of the first day is spent planning, then development, and on the last day there is a demo and then a retrospective. A lot of teams start the sprint on a Thursday. Lots of teams have a pre-planning meeting: the meeting at the beginning of the iteration is for estimation and task creation, while the meeting to decide what goes into the sprint happens a couple of days earlier. Demos need to involve the product owner and a wider set of stakeholders. It’s ok to have breathing spaces between iterations, and it is also ok to have polish iterations before a major release.
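
As a small illustration of the velocity and burndown ideas Rachel mentions (my own sketch, not from the talk; the numbers are made up):

    # Story points completed in the last few iterations; velocity is a rolling average.
    completed_per_iteration = [18, 22, 20]
    velocity = round(sum(completed_per_iteration) / len(completed_per_iteration))  # 20

    # Simple release burndown: points remaining at the start of each 2-week iteration.
    remaining = 120
    iteration = 1
    while remaining > 0:
        print(f"iteration {iteration}: {remaining} points remaining")
        remaining -= velocity
        iteration += 1
    print(f"at a velocity of {velocity}, the release needs about {iteration - 1} iterations")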

Rachel wraps up by recommending that we read multiple agile books and use them as a source of ideas and not as “religious” texts. Projects are varied and each might need a different approach. We need to inspect, adapt and evolve.

This presentation is available on InfoQ at  http://www.infoq.com/presentations/agile-mashups

Friday, May 1, 2009

Agility: Possibilities at a Personal Level

Linda Rising gave a talk about personal agility at QCON San Francisco 2008. The talk was about the effects and history of caffeine. In the industrial age, people started having caffeine for breakfast: they boiled the water to make it safe to drink and then had coffee or tea. Coffee, tea, the clock, and factories appeared at the same time. Before that, people used to have beer for breakfast (Europeans were consuming 3 liters of beer per person per day), and they woke up and slept based on the sun and the seasons. We now have to adapt and cope with a work schedule set by a clock rather than by daylight or the natural sleep cycle. The invention of the clock and the availability of caffeine changed lives.

Next Linda explained that caffeine blocks the effect of adenosine (one of the body’s natural sleeping pills) and keeps us awake. The average person takes about 3.5 hours to metabolize caffeine; it takes longer for thinner or pregnant women, and newborns cannot metabolize caffeine at all. Nicotine moderates the mood, extends attention, and doubles the rate of caffeine metabolism.

Then Linda mentioned that without adequate sleep, we are not at our best physically, mentally, or emotionally. We have come to believe that sleep is a waste of time and makes us overall less productive. As a result, we are sleep deprived and our brains show visible signs of premature aging.

Linda shared research that shows that caffeine is not better than breaks. It improves “vigilance tasks” – prolonged attention, little physical activity. But a good night’s sleep improves performance, mood, and alertness better than caffeine and the benefits last longer. For simple tasks, caffeine improves performance. But on complex tasks, extroverts’ performance tends to improve, while introverts’ tends to get worse.

Finally, Linda showed how spiders on different drugs performed (marijuana, chloral hydrate, Benzedrine, and caffeine). The web of the spider on caffeine had no architecture, no regularity, no structure whatsoever and was rather erratic.

Linda wraps up by questioning whether agile is the new caffeine. It is energizing, stimulating, fun, and addictive. It also has side effects: irritability, restlessness, anxiety, and sleeplessness, and teams get themselves into a lot of hot water! Is what is good for teams good for us? Linda wonders if we can do a better job instead of living our lives as we did in the industrial age.

This presentation is available on InfoQ at http://www.infoq.com/presentations/agility-personal-level-possibilities

Tuesday, April 28, 2009

Teamwork is an Individual Skill

At QCON 2008, Christopher Avery talked about teamwork. He started out by stating that we all live in a world of shared responsibility. A team is a result: it emerges from an opportunity for shared responsibility. He explained that the biggest problems are between departments and between team functions, which gives rise to the attitude of “you are my problem and I am your problem.” Agile is moving to a different culture of

1. Collaboration, partnering, trust

2. Openness, transparency, visibility

3. Adaptive, iterative, evolving

4. Awareness, learning, facing reality

5. Humaneness & performance

Chris then demonstrated playing 4 by 4 tic-tac-toe with the goal of maximizing your score while taking turns. The game showed that the only way to maximize your score is to maximize the other player’s score. It does not have to be win/lose; it can be win/win.

Next, Chris covered the 3 phases of power, drawn from the economics of power and organizing:

1. Power over: authority power

2. Power to/by: exchange power – power of the vote, power of the budget, power to barter

3. Power with: Integrative power – ability to use only ideas and actions to attract other people to you to accomplish something far greater than you could do by yourself.

We tend to want more “power over” and “power to” rather than “power with”, even though the first two are scarce and limited, while “power with” is available in virtually unlimited abundance.

Chris mentioned that we have far more power and ability to get things done, produce value, and make changes than we give ourselves credit for.

Wednesday, April 15, 2009

What is Scrum?

Confused about Scrum? Here is a brief video explaining it.

Wednesday, April 8, 2009

Managers in Scrum

Roman Pichler gave a talk at QCON 2008 about the role of managers in Scrum. He starts out by describing a typical organization as a hierarchical structure with a command and control management paradigm. Managers receive reports, make decisions, and give orders. Subordinates comply with the orders, execute, and then report back. Managers are removed from the day to day work and lose touch with what is going on, which makes it difficult to make decisions. Managers are accountable for decisions and results, while subordinates are doing things they might not always agree with. Authority and responsibility are separated across 2 roles. This makes it difficult to have buy-in, accountability, and ownership, and to learn and improve.

Next, Roman mentions that under Scrum the outlook on management is different. It is about helping the people doing the real work in adding value to the product, and creating the right environment for them to succeed. He then covers some management practices in Scrum:

1. Servant-leadership: lead others by serving them. Serve 1st, lead 2nd. Help the team and its members grow and develop. Respect individuals, honor effort and goodwill. Help create the right work environment.

2. Empirical Management: As a manager, make decisions on the basis of facts and empirical evidence. Reports and numbers are not sufficient; you need to see what is happening for yourself. Create transparency and be able to inspect and adapt. Managers engage with employees to understand what’s happening where the actual work is done. Ask questions, share observations, and make helpful suggestions to assist and guide, but do not micromanage.

3. Empowerment: Delegate decision making authority to the lowest possible level. Collaboration instead of command and control. Authority and responsibility are united.

4. Quality first: Quality is built into the product from the start. Encourage and empower the team to identify and rectify problems together with their root causes.

5. Continuous improvement: Challenge the status quo on an ongoing basis. Identify and remove wasteful activities. It is about continuous innovation and change, with a learning, non-judgmental, non-blaming approach.

6. Standardization: Standards are developed by the team and then communicated to the rest of the organization.

Next Roman goes over some transition techniques:

1. Focus everyone on the customer’s needs. Consider the entire value stream and avoid sub-optimization.

2. Systematically remove overburden: Overburden decreases morale and robs people of creativity. Give people slack so they have time to reflect and continuously improve. Avoid overburden by limiting demand on capacity and capability.

3. Promote team work: help create effective teams and foster creativity and collaboration

4. Clear the way: remove impediments promptly and anticipate new impediments.

5. Be a Scrum champion: teach, encourage and guide. Be a role model and walk the talk.

Finally, Roman summarizes by mentioning that the good news is that there is plenty left for managers to do in Scrum. The management culture must change profoundly, from telling people what to do to supporting and guiding them.

This presentation is available on InfoQ at http://www.infoq.com/presentations/managers-in-scrum

Saturday, March 28, 2009

A Kanban System for Software Engineering

At QCON London 2008, David Anderson gave a talk about Kanban. David starts by mentioning that he had some failures institutionalizing agile changes and scaling them to a significant size. He then had success with Kanban. A Kanban system allows for focus on quality, reduces or limits work in progress, balances demand against throughput, and prioritizes work.

David then describes a case study at Microsoft where a team was at CMMI level 5 and producing high quality software; however, they had a huge backlog and items were constantly delayed. David shows how implementing Kanban helped this team improve productivity by 200%.

Instead of having an agile transition plan and forcing a team to use a particular agile process, David now advocates starting from where you are now, creating a culture that encourages people to improve, teaching them lean principles and about waste and bottlenecks, and then having the team figure out how to get better.

David then covers another case study at Corbis and describes Kanban in more detail. Kanban can work with specialist teams of analysts, developers, testers, QA, etc.; however, it puts a limit on the number of items that each team can process at once. There are no iterations in Kanban. It is more of a continuous flow with a release every 2 weeks, but the release content is bound and published only 5 days prior. Items that take longer than 2 weeks are simply not included in that release. Prioritization meetings are held every week (inputs and outputs are not in sync in a single cycle). Whenever there is an empty slot, an item from the backlog is pulled in. Prioritization and competition for an empty slot eventually evolve into collaboration. No estimation is done on individual items, with the effort saved on estimation turned back into increased productivity in analysis, coding, testing, etc. The Kanban white board gives visibility into process issues (transaction cost of releases, transfer through stages, bottlenecks, ragged flow). The daily stand-up helps eliminate impediments affecting productivity and lead time. Kanban also allows for scaling stand-up meetings: large teams can go through the tickets on the board and ask if anyone knows of something impeding the progress of each ticket.
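
A very small sketch of my own (not from the talk) of the core mechanics described above: columns with work-in-progress limits, and pulling from the backlog only when a slot frees up. The stage names and limits are invented:

    # Simplified Kanban board: each column has a work-in-progress (WIP) limit.
    wip_limits = {"analysis": 2, "development": 3, "test": 2}
    board = {stage: [] for stage in wip_limits}
    backlog = ["ticket-1", "ticket-2", "ticket-3", "ticket-4", "ticket-5"]

    def pull_into(stage, source):
        """Pull one item from the upstream source into a stage if its WIP limit allows."""
        if source and len(board[stage]) < wip_limits[stage]:
            board[stage].append(source.pop(0))
            return True
        return False  # the limit makes bottlenecks visible instead of hiding them

    # Fill empty slots in analysis from the backlog; downstream stages pull the same way.
    while pull_into("analysis", backlog):
        pass
    print(board)    # {'analysis': ['ticket-1', 'ticket-2'], 'development': [], 'test': []}
    print(backlog)  # ['ticket-3', 'ticket-4', 'ticket-5']

Because nothing can be pushed past a full column, a growing queue in front of a stage points directly at the bottleneck, which is exactly what the board makes visible at the daily stand-up.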

David summarizes that the Kanban method enabled:

1. Culture Change: trust, empowerment, collaborative team working and focused on quality.

2. Policy Changes: No estimating, late binding release scope, late binding prioritization

3. Regular delivery cadence: Releases become routine.

4. Cross functional collaboration

5. Self regulating process robust to gaming and abuse

6. Continuous improvement: Increase throughput, high quality, process continually evolving, kanban limits empirically adjusted.

7. Little Management Overhead: Little to no involvement in day to day

This presentation is available on InfoQ at  http://www.infoq.com/presentations/kanban-for-software

Wednesday, March 18, 2009

The Ethics of Error Prevention

Michael Feathers gave a talk at JAOO about the ethics of error prevention. He mentions that preventing errors in applications has many different solutions, but most of the time we only need to pick one or two. He then moves on and covers 5 techniques:

1. Abstraction: Use of object-oriented languages and higher-level languages.

2. Design by contract: document preconditions and postconditions. The clients of a routine are obligated to fulfill its preconditions, and the routine in turn guarantees its postconditions (see the sketch after this list).

3. Clean room engineering: the discipline of annotating code to prove to yourself that the code you are writing is correct.

4. Test driven development: write a test for a new capability, compile, fix compile errors, run the test and see it fail, write code, run the test and see it pass, refactor as needed, repeat (see the sketch after this list).

5. Pair programming/software inspections: Fagan inspection: planning, overview, preparation, meeting, rework (looping back to planning), follow-up
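
To illustrate points 2 and 4 (a sketch of my own, not from the talk), here is a test written first, TDD style, for a small hypothetical slugify function, which is then implemented with plain assertions standing in for its pre- and postconditions, since Python has no built-in contract syntax:

    import unittest

    # TDD step: the test is written first and fails until slugify below exists and behaves.
    class SlugifyTest(unittest.TestCase):
        def test_replaces_spaces_and_lowercases(self):
            self.assertEqual(slugify("Hello Agile World"), "hello-agile-world")

    def slugify(title):
        # Precondition (the caller's obligation in design by contract): a non-empty title.
        assert isinstance(title, str) and title.strip(), "precondition violated: non-empty title required"

        slug = "-".join(title.lower().split())

        # Postcondition (this routine's promise): no spaces or uppercase letters remain.
        assert " " not in slug and slug == slug.lower(), "postcondition violated"
        return slug

    # Run the test, watch it pass, then refactor as needed and repeat.
    if __name__ == "__main__":
        unittest.main()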

Michael wraps up by mentioning that each of these techniques forces us to focus on what we are doing and steers us away from complication. They each trigger contemplation. The craft of software development is not about languages or tools. It is about practices. Quality is in the intangibles.

This presentation is available on InfoQ at http://www.infoq.com/presentations/error-prevention-ethics