Blog

Should a Tester Be Afraid to Hear the Word Refactor?

By: Agile Velocity | Oct 10, 2012 |  Agile Technical Practices,  Article,  Technical Practices

I used to cringe when I heard developers say they were going to refactor some code.  To me, as a tester, that meant they were going to change working code and possibly introduce bugs into the software.  If it ain’t broke, don’t fix it, right?  Why are we refactoring it?  Truly, the answer lies in basic Agile practices.  Agile teams implement lean code that meets the user’s needs, then we put the code in front of the user, get feedback, and continually evolve the code based on that feedback.  We also implement code knowing we need to learn about our design decisions.  Sometimes what we learn is that there is a better way to design the solution.  All of these practices require code to be refactored.  It is not a bad thing.

However, what can be bad is when a developer cannot make changes to some code because the code is too complex, or it is unclear what the code does.  If there are no automated unit or integration tests, it becomes risky to make changes and ensure there are no regressions in functionality.  The best Agile teams understand this and ensure there is automated test coverage as features are being written.  They rely on these automated tests for rapid feedback.  They can refactor their code (or code others have written) with confidence.  The code remains testable, simple, and maintainable, all of which save time and heartache later on.
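The safety net described above can be sketched in a few lines (the function and test names here are invented purely for illustration): a unit test pins down the current behavior, so after any refactor the same test can be re-run in seconds to reveal regressions.

```python
# A tiny function whose behavior is pinned down by a unit test
# (hypothetical names, for illustration only).
def total_price(prices, discount=0.0):
    """Sum a list of prices, then apply a fractional discount."""
    return sum(prices) * (1 - discount)

def test_total_price():
    # The test captures expected behavior before any refactor happens.
    assert total_price([10.0, 20.0]) == 30.0
    assert total_price([100.0], discount=0.5) == 50.0

# After refactoring total_price (say, rewriting the sum as a loop for
# clarity), re-running this test tells us immediately whether anything
# regressed, with no manual retesting.
test_total_price()
```

The point is not the example itself but the feedback loop: with tests like this in place, "refactor" stops being a scary word.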

So, testers should not fear code refactoring.  We should work with developers to put unit and integration test frameworks in place on our projects.  We should help our teams increase unit test coverage.  We should encourage a team culture that embraces refactoring.  We should only fear brittle, untestable code that must be retested manually after major code changes to existing functionality.  I’ve tested code like this and I don’t want to do it anymore.  I’m smarter than that.  And so are you.


You! Might Be an Agile Tester

By: Agile Velocity | Aug 13, 2012 |  Agile Technical Practices,  Article,  Technical Practices

Here’s my list of 12 reasons you might be an Agile Tester:

  • If you work in close proximity to Developers, you might be an Agile Tester.
  • If you are test-infected.
  • If you think automated testing is key in software development.
  • If you invite yourself to meetings that you weren’t invited to by someone else.
  • If you are learning new skills and practicing continuous improvement along with your team.
  • If you keep your team focused on delivering core functionality that has the highest business value.
  • If you stay in touch with the big picture.
  • If you and the Product Owner develop Acceptance Criteria for User Stories together.
  • If you push automated tests to the lowest possible level to encourage fast feedback.
  • If you are part of the “Power of Three” in functionality discussions.
  • If you are a skilled exploratory tester discovering issues that could not be detected by automated tests.
  • If you define non-functional requirements as User Stories, you might be an Agile Tester.

Inspired by Jeff Foxworthy and “Agile Testing” by Lisa Crispin and Janet Gregory

More on Agile Testing


Maintaining a Healthy Backlog

By: Agile Velocity | Jul 30, 2012 |  Article,  Product Owner,  Scrum,  Team

Our prior post, Evolution of a User Story, provides a better understanding of how a new feature or product transforms from high-level epics to sprint-sized or incremental stories.  Let’s zoom out now and focus on how your backlog will be affected while user stories are evolving, grooming sessions are happening, and sprints are in progress.  Refer back to the Evolution of a User Story if you would like details on terminology used.

Here is a generic breakdown of how two features might be decomposed through the grooming process over time.  In our example, we will back up a bit and pretend that not all of the decomposition has happened yet, to reflect how the timing of these things works in reality.

[Image: epics and stories in a backlog]

Now, let’s walk through a series of illustrations depicting snapshots in time of how our backlog might look.  For space purposes, we’ll only show the top portion of the backlog.

[Image: prioritized backlog]

In the above graphic there are a number of concepts to point out:
  • First, you can see we are working from a prioritized backlog.  The idea here is that the Product Owner (PO) has been working with the team and stakeholders through a series of backlog grooming and stakeholder review sessions that resulted in the backlog being prioritized in its current state.
  • The top three stories were selected in the first sprint and worked on to completion.
  • Meanwhile, B3, which is currently a feature story, is being further decomposed into sprint-sized stories in a backlog grooming session.

Let’s fast-forward the clock to the end of the next sprint.

[Image: prioritized backlog with completed stories]

You can see:

  • An additional 2 stories were selected and completed in the second sprint.
  • New green stories have entered the mix and some of them have been prioritized to the top of the backlog.  For clarification, these could be new stories that have been identified as high priority, or they may have existed all along but new information has caused them to be elevated in priority.
  • And you can see two of the sprint-sized stories (B3a and B3b) have replaced their parent feature story.

Now we’ll jump ahead two sprints.

[Image: prioritized backlog with completed and released stories]

Here you can see:
  • The first 3 sprints worth of work has been released.
  • Four of the stories were completed in the fourth sprint.

You can see why it is important for the PO to take control of managing the backlog since we are never working in a vacuum.  The backlog in most cases is constantly evolving as priorities change, stories are decomposed, and we learn from what we have completed.

And here is a different graphic to show your backlog in the context of a funnel.

[Image: backlog funnel: epics and uncertainty at the top, higher priority and more detail toward the bottom]

Key takeaways from this graphic:

  • At the top of the funnel it is okay that stories and epics are large and have a lot of uncertainty.  As they flow down and get closer to the end where they will flow into sprints we need to tackle the details and work with the development team to make sure we are in agreement on scope.
  • I’ve exaggerated the funnel walls here to illustrate that the funnel narrows from top to bottom.  The point being that large epics cannot flow directly into a sprint.  They must go through a process of decomposition where the result will be incremental stories which the team can handle that will also deliver value.
  • As epics are decomposed, you will find that you may defer certain stories until later or you may decide to never implement them.
  • Prioritization will rely heavily on team estimation.  If the team says you can get 3 of the smaller stories or 1 of the larger stories, the PO is now armed with information that can be used at a stakeholder backlog review meeting for prioritization.

In my opinion, there are two keys to maintaining a healthy backlog:

  • A regular cadence of grooming which includes backlog grooming and stakeholder backlog review sessions.
  • Good communication between the stakeholders, PO, and development team.

We have put together a list of general questions that we typically ask while assessing the health of a backlog.  How many of these can you answer and feel good about with your backlog?

  • Do you have a prioritized backlog of user stories?
  • Do your stories have estimates from the development team?
  • Do you have 2-3 sprints worth of incremental or sprint-sized stories?
  • Do your sprint-sized stories have acceptance criteria (the what, not the how)?
  • Do you have high-level notes on your epics and feature stories?
  • Do you have a regular backlog grooming meeting scheduled?  How often and who is attending?
  • Do you have a regular stakeholder backlog review meeting scheduled?  How often and who is attending?
  • Do you have an idea of your current sprint velocity?  This one is indirectly related, since we want to make sure we are keeping enough sprint-sized stories in our pipeline.  It is also good information to know.

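The last two questions above can be combined into a quick back-of-the-envelope check (the numbers here are made up for illustration): given recent sprint velocities, estimate whether your groomed, sprint-sized stories cover the recommended 2-3 sprints of work.

```python
# Hypothetical numbers: points completed in recent sprints, and the point
# estimates of the groomed, sprint-sized stories at the top of the backlog.
recent_velocities = [21, 18, 24]            # points completed per sprint
groomed_story_points = [5, 8, 3, 5, 13, 8]  # sprint-sized, estimated stories

avg_velocity = sum(recent_velocities) / len(recent_velocities)   # 21.0
sprints_of_runway = sum(groomed_story_points) / avg_velocity     # 42 / 21

# We want roughly 2-3 sprints of ready stories in the pipeline.
healthy = 2.0 <= sprints_of_runway <= 3.0
print(f"{sprints_of_runway:.1f} sprints of groomed work; healthy: {healthy}")
```

If the runway dips below two sprints, that is a signal to spend more of the next grooming session decomposing feature stories.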

Evolution of a User Story

By: Agile Velocity | Jun 15, 2012 |  Article,  Scrum

Many Product Owners we work with find themselves struggling with when and how they should break their epics down into actionable user stories.  In this post, we’ll talk more about the “when” portion and how you can get into a regular cadence of evolving your backlog.  We will focus mainly on Scrum teams in this narrative, but the principles can easily be translated to other agile teams.

We have all been there…whether in a brainstorming session or while working on documenting a new product feature or initiative.  Ideas are flying and details are being documented.  Lots of time is spent detailing features and discussing what-if scenarios to make sure we do not forget anything.  We are an agile team so Product Owners are feverishly documenting the details into user stories.  Months later when the development team begins to work on the feature many of the assumptions we made have changed and now we have to re-work designs, acceptance criteria, and user stories to fit with our current situation.  Product Owners are defensive about the work and effort they put into the old user stories and development teams are saying they do not have enough information to proceed.

How can this be?  We followed our agile training and created the user stories with detailed acceptance criteria, but our development team still says it’s not good enough.  In many cases, the key problem in this scenario is that the Product Owner and development team have not been collaborating enough on the stories leading up to this point, and perhaps too much detail has been put into the story too early.

How can we fix it?

We recommend using regular Backlog Grooming and Stakeholder Backlog Review sessions as the drivers for collaboration and prioritization.  Let’s put definition and context around some of the terms we are using in this post.

An epic is the inception of a new feature or product.  There is more uncertainty at this level and we usually want to keep details non-prescriptive.  At this level we will start adding some notes and information from our discussions with the team.

Feature stories are the breakdown of epics into smaller sized chunks of work.  These typically are the smallest definition of releasable value from a customer perspective.  Here we want to see acceptance criteria and make sure the team understands the story.

Sprint-sized stories are the breakdown of feature stories (or epics) that are small enough to fit into a sprint yet still deliver incremental value.  Typically the Product Owner (PO) has to partner with the team to determine how to break down a feature story into incremental stories.

Backlog Grooming sessions should be time-boxed and are typically held once per sprint with the following participants: PO, Scrum Master (SM), team, and User Experience (UX).  The major goals of the meeting are (your first session would probably be a modified version of this list, to get bootstrapped):

  1. Make sure stories prioritized for the upcoming sprint are understood and dependencies and environments are handled (no surprises in sprint planning).
  2. Begin breaking down upcoming feature stories into sprint-sized stories (with estimates).
  3. Review near-term feature stories that are not quite close enough to start breaking down further.  We do this as new details may need to be captured.  Add or modify estimates where necessary.
  4. Estimate epics.  This item and the previous item are crucial to the success of the Stakeholder Backlog Review sessions so that items can be prioritized accordingly.

Stakeholder Backlog Review sessions will typically be held once per sprint, but this may vary depending on the volatility of priorities.  As the name implies, the stakeholders should be involved in this meeting along with the PO.  Team members are optional.  Major goals of this meeting include:

  • prioritizing new epics and feature stories against existing backlog
  • reviewing portfolio prioritization
  • identifying release scope

Example

Let’s follow an example to help illustrate.  Let’s start with an Epic being born in a brainstorming session for an imaginary online shopping website.  Most of you are probably familiar with a similar site called Amazon.com.  In this case, let’s pretend we are adding a “wish list” type of feature to our site.

[Image: story brainstorming session]

After the meeting, the product owner adds some notes to the card that were brought up in the brainstorming session.  We want to be careful here not to be too prescriptive to allow for later conversations:

[Image: wish list capability user story]

In the next Backlog Grooming session, the team reviews the new epic and assigns it a point value of 40, since it is relatively large with uncertainties and seems to be roughly twice the size of some 20-point stories already on the backlog.  During the next Stakeholder Backlog Review meeting the epic is prioritized into the existing portfolio backlog.  For the sake of the narrative let’s pretend this is a very high-value feature and it is targeted for an upcoming release.
[Image: backlog grooming, user story progression]

At this point the PO makes an attempt at breaking our epic into feature stories:

[Image: feature stories from the backlog grooming session]

During the next Backlog Grooming session the PO reviews the feature stories with the team and gets story point estimates.  During this process, the team discusses each story for clarification and to identify questions.  Now, the PO will document new stories and acceptance criteria:

[Image: the PO documents acceptance criteria for the user story]

The PO can take the new, estimated stories to the Stakeholder Backlog Review meeting to be prioritized into the backlog replacing the epic story.  And again for the sake of the narrative, let’s pretend at least one of the features is near the top of the backlog.  The PO can begin engaging with UX and the team to start getting mock-ups of the feature.

[Image: the PO engages with the team using the user story]

At the next Backlog Grooming session, the PO can work with the team to break the feature down into sprint-sized stories with new story point estimates, and the PO will prioritize the sprint-sized stories against the backlog.  In subsequent Backlog Grooming sessions, as the feature gets closer to implementation, the team can begin making sure blockers and dependencies are handled, and remaining questions and acceptance criteria can be dealt with before the story is picked up in a sprint.  Finally, the story can be picked up in a sprint for implementation.

Conclusion

In summary, you can see that we have followed one epic being broken down over the course of multiple Backlog Grooming and Stakeholder Backlog Review sessions, to the point at which some piece of it will be picked up for implementation.  In reality, the example becomes more complex, as there are typically many more stories and priorities involved in your backlog.  However, if you can get into a regular cadence with your planning sessions, everyone involved should be thankful: you will be focusing detailed efforts on the right things while keeping everyone in the loop and allowing for re-prioritization of the entire backlog.

Become a Certified Scrum Product Owner. 


Infrastructure Agility

By: Agile Velocity | May 17, 2012 |  Agile,  Agile Technical Practices,  Article

Whether your team is agile, lean, or anything else, you have likely run into frustrations with your infrastructure.

See if any of the following strike a chord with you in relation to Infrastructure Agility:

  • You aren’t sure how your servers are configured
  • Your servers, workstations, etc. aren’t configured the same way
  • Nobody is sure who changed a configuration file, why it changed, or what the last good version of the file was
  • Who installed that rogue server process? Why was our standard version of a dependency upgraded in a way that is now breaking our applications?
  • Why are our development servers configured differently than our QA servers? What will it take to make them the same?
  • How long will it take to upgrade or install application x on our cluster of servers?
  • Your developers/QA/UAT Testers are blocked 3 days waiting on ops to install/upgrade a server with something new needed for a story
  • It takes 3 days for new developers to get set up with all the standard dependencies (or the machine image used has old/missing versions and needs a lot of upgrading)

No matter what your particular frustration, your infrastructure and systems take time and effort. We see it all the time, but it is especially frustrating when teams following lean/agile principles, who have put effort into eliminating waste and providing quick feedback, find themselves against another wall that must be continually climbed and that regularly slows them down.

Getting Control of Infrastructure

We don’t advocate solving a problem you don’t have, so if frustrations like those mentioned above are not among the bigger problems affecting you, just file this away for later.  But for many of you, the above probably resonates.

So, where do you start?

Outsource It!

First, why not just fix the glitch? Why spend time trying to address a problem you can get rid of? Look at your application, infrastructure, whatever that is causing you pain. Is it standardized enough that you can just offload the problem?

We don’t want to just move the problem to a team doing the same thing in another location. What I’m referring to is taking advantage of platforms that have taken common deployment or infrastructure scenarios and packaged the operations around them as a service. You do this when you choose to host a server on a cloud service like Amazon Web Services or Rackspace. This kind of cloud computing model, which abstracts and automates the details of physical hosting of storage and computing resources, is often referred to as Infrastructure as a Service, or IaaS.

You can take this to another level by using a service like Heroku or AppFog that removes the need to manage servers and instead lets you deploy to more highly managed environments that accommodate certain solution stacks. If your application fits their managed platform or isn’t too customized, you can avoid having to deploy both servers and much of your solution stack, focusing on your core application code and configuration. This level of cloud computing service is often referred to as Platform as a Service, or PaaS.

Offloading what you can offers reduced operational complexity for some or all of your environments. But for many teams, we have found constraints that prevent taking advantage of these types of services.

Control It!

Whether you have resources in the public cloud, private clouds, or on good old bare metal hardware, you have work to do to manage provisioning, configuration, deployment, and tracking of assets and infrastructure.

Agility in infrastructure is achieved through:

  • Providing good visibility into the infrastructure you have
  • Eliminating bottlenecks to adding / changing your environments
  • Minimizing complexity
  • Being able to adapt quickly to changing business needs
  • Having a high level of communication and visibility across all those involved in delivering software to end users

Many operations teams already track assets in various places. Some keep standard configuration files and checklists they use for consistency. Others have scripted common tasks in their daily operations work. But not all do these sorts of things. And even if they do, manual work ends up being the biggest bottleneck of all. The Path to Agility® requires finding straightforward, consistent ways to communicate, control, and automate your infrastructure management.

Infrastructure as Code

While not necessarily new, there has been some disruptive change in recent years, led by the growing popularity of tools like Chef and Puppet. Similar tools, such as CFEngine, have been around much longer in different incarnations, both inspiring the newer generation of tools and greatly evolving in their own ways. The combination of these types of tools with the rapidly expanding selection of virtualization and cloud tools has put powerful automation within reach of most teams.

The philosophy is simple: with the support of the right tools, a team can create configuration files and scripts (code) that describe what their infrastructure should look like and how to go about creating it. This code can be executed to provision systems, configure and install dependencies, deploy applications, take inventory of what is deployed, and keep things consistent.

As Jesse Robbins once described, the goal is to:

Enable the reconstruction of the business from nothing but a source code repository, an application data backup, and bare metal resources.

Such an approach to managing infrastructure isn’t limited to servers either. Some groups use it to help keep developer, tester, or other types of desktop machines up to date with the latest tools/configuration a team needs. When you have only a few machines to manage such an approach is neat. When you have hundreds or thousands it becomes essential.
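The declarative, idempotent style these tools share can be sketched in a few lines of plain Python (the package names and file path here are invented for illustration; real Chef or Puppet code is far richer): describe the desired state, then converge the actual state toward it, making no change when none is needed.

```python
# Desired state: what the machine *should* look like (hypothetical names).
desired_packages = {"nginx", "git"}
desired_files = {"/etc/myapp.conf": "port=8080\n"}

# Actual state: what the machine currently looks like (simulated here).
installed = {"git"}
files = {}

def converge(installed, files):
    """Bring the actual state in line with the desired state.

    Returns the list of actions taken; an empty list means the
    machine was already converged (the run is idempotent).
    """
    actions = []
    for pkg in sorted(desired_packages - installed):
        installed.add(pkg)                  # stand-in for a package install
        actions.append(f"install {pkg}")
    for path, content in desired_files.items():
        if files.get(path) != content:
            files[path] = content           # stand-in for writing the file
            actions.append(f"write {path}")
    return actions

print(converge(installed, files))  # first run makes the missing changes
print(converge(installed, files))  # second run: already converged, no actions
```

Because the run is a no-op when the machine already matches the description, the same code can be applied to one machine or a thousand, over and over, which is what makes this model scale.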

Hopefully, this has piqued your interest or brought awareness to how we include infrastructure in our assessment of agility and waste. As always, don’t solve a problem you don’t have. But if infrastructure issues are affecting your team and you have identified the bottlenecks, stay aware of your options.

We will follow up with additional posts in this series on Infrastructure Agility with looks at DevOps and a closer look at tools like Chef and Puppet.