Blog

Decomposing to Attack Complexity

By: Agile Velocity | Nov 21, 2013 |  Agile Technical Practices,  Article

Our first post in a series on decomposition introduced the importance of breaking things down throughout the process of delivering software. Let’s continue this series by looking at more concrete examples of using decomposition to attack software architecture and software complexity.

What is Software Complexity?

Herbert Simon once said, “Complexity comes from a large number of parts that interact in a non-simple way.” Architectural complexity often seems subjective because who defines what is simple? Many people working on software projects get a sense that there are too many different components interacting in overly complex or differing ways. The number of parts and interactions is certainly a factor, but some problems have inherent complexity that is necessary. The question should be, “Can it be simpler?”

When looking at code, there are often more automated ways to gauge complexity. Metrics such as Cyclomatic Complexity score individual methods by factoring in the number of branches, paths, and exit points through the code. A high complexity score often correlates with more defects, unexpected behavior, and risk.
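
To make this concrete, here is a small, purely hypothetical Python sketch (the function names and pricing rules are invented for illustration). The first version packs several decision points into one method, so its cyclomatic complexity climbs quickly; the decomposed version spreads the same behavior across small functions whose individual scores stay low and which are far easier to test.

```python
# Hypothetical example: one method that validates, prices, and discounts
# an order. Every if/elif/else and loop adds a decision point, so its
# cyclomatic complexity climbs quickly and each path needs its own test.
def order_total(order):
    if not order or not order.get("items"):
        raise ValueError("empty order")
    total = 0.0
    for item in order["items"]:
        if item.get("taxable"):
            total += item["price"] * 1.08
        else:
            total += item["price"]
    if order.get("coupon") == "SAVE10":
        total *= 0.90
    elif order.get("coupon") == "SAVE20":
        total *= 0.80
    return round(total, 2)


# The same behavior decomposed into small, single-purpose functions.
# Each piece has only one or two branches, so it is easier to read,
# test in isolation, and reuse.
def item_price(item):
    return item["price"] * (1.08 if item.get("taxable") else 1.0)


def apply_coupon(total, coupon):
    return total * {"SAVE10": 0.90, "SAVE20": 0.80}.get(coupon, 1.0)


def order_total_decomposed(order):
    if not order or not order.get("items"):
        raise ValueError("empty order")
    total = sum(item_price(item) for item in order["items"])
    return round(apply_coupon(total, order.get("coupon")), 2)
```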

An Ounce of Prevention

The best way to avoid complexity is to keep it simple in the first place. Some common techniques are:

  • Break down seemingly complex user stories along a complexity boundary. The part of the story that would introduce the most complex implementation problems may be the part that can be delayed and the team can learn from the simpler story implementation.
  • When implementing, lay out your implementation options and pick the simplest thing that could possibly work (DTSTTCPW). This often misunderstood technique encourages choosing the simplest option available. If an array works instead of a more complex custom data structure, use it now and focus on learning about the core problem; an on-the-fly assumption that you really need the more complex option is unlikely to be validated right now anyway (see the sketch after this list).
  • Apply the YAGNI (You Aren’t Going to Need It) principle. Similar to DTSTTCPW above, the goal is to eliminate unnecessary work that won’t be validated and might never be needed. When designing or tasking out a User Story, break out each task as a small slice of validatable feature work and throw out those that you can’t measure or match back to the acceptance criteria.
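
As a rough illustration of DTSTTCPW (the names and scenario below are hypothetical), a plain dict can stand in for the custom cache we suspect we might need someday:

```python
# Hypothetical example: we suspect we will "eventually" need a bounded,
# thread-safe LRU cache with eviction callbacks. Until that need is
# validated, a plain dict is the simplest thing that could possibly work.
_price_cache = {}


def cached_price(sku, lookup):
    """Return the price for a SKU, calling `lookup` only on a cache miss."""
    if sku not in _price_cache:
        _price_cache[sku] = lookup(sku)
    return _price_cache[sku]


# Example usage with a stand-in lookup function:
if __name__ == "__main__":
    print(cached_price("ABC-123", lambda sku: 19.99))  # computed once
    print(cached_price("ABC-123", lambda sku: 19.99))  # served from the dict
```

If profiling later shows that memory or concurrency is a real problem, something like functools.lru_cache or a custom structure can be swapped in then, once the requirement has actually been validated.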

And A Little Cure

Of course, we often find complexity is what is already implemented, such as the Class, module, component, or application that appears to do too much, is too difficult to understand, and is difficult to test or reuse. Decomposition helps here too:

  • Decompose large modules, methods, etc. into smaller versions that are easier to follow, understand, reuse, and test. Use refactorings like Extract Method and Extract Class.
  • Find modules, classes, methods, etc. that seem to do more than one clear thing and break them apart to produce modules that follow the Single Responsibility Principle (a minimal sketch follows this list).
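
Here is a minimal, hypothetical Python sketch of what that decomposition might look like (the report example is invented, not taken from any particular codebase):

```python
# Hypothetical "before": one function mixes formatting and totaling in
# one place -- more than one clear responsibility.
def print_report(orders):
    total = 0.0
    lines = []
    for order in orders:
        total += order["amount"]
        lines.append(f"{order['id']}: {order['amount']:.2f}")
    lines.append(f"TOTAL: {total:.2f}")
    print("\n".join(lines))


# "After" applying Extract Method: each small function does one clear
# thing, so it can be understood, reused, and tested on its own.
def format_order_lines(orders):
    return [f"{order['id']}: {order['amount']:.2f}" for order in orders]


def report_total(orders):
    return sum(order["amount"] for order in orders)


def print_report_decomposed(orders):
    lines = format_order_lines(orders) + [f"TOTAL: {report_total(orders):.2f}"]
    print("\n".join(lines))
```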

Conclusion

With architecture and code complexity, decomposition is a helpful tool to defer or remove work that would add unnecessary complexity. It is also a useful tool to simplify what is already implemented. This is just another application of a useful skill that is valuable in Agile and software development in general.

Blog

Top 10 Reasons I Love Doing Software Quality Assurance

By: Agile Velocity | Nov 13, 2013 |  Agile Technical Practices,  Article

Software Quality Assurance, Why I love it:

#10

I love quality assurance because every bug I find is one that was not found by the customer.

#9

There is always more to learn about testing techniques and tools.

#8

Being part of a team that delivers a quality product gives me great pride.

#7

Running an automated UI test suite kind of feels like the computer is doing your work while you sit and watch.

#6

I still get to program (automated tests).

#5

I like working closely with product visionaries.  By this, I mean both Product Management as well as software architects and developers.

#4

The software does what it should and doesn’t do what it shouldn’t.  That’s just how I think.

#3

Preventing defects is more fun than finding them and writing them up in a bug tracking system.

#2

I can make my team members laugh simply by creating goofy test data.

And the #1 Reason

Because quality is something that everyone wants!

See also 5 Good Habits of High-Quality Scrum Teams

What are your reasons to love quality assurance?  Comments are welcome!

Blog

It’s Not Just About Process

By: Agile Velocity | Aug 12, 2013 |  Article,  Scrum,  Technical Practices

Agile software practitioners focus a lot of attention on people, communication, collaboration, and strong values. In many environments, these are unquestionably the best opportunities for improvement. Inevitably, though, all software development teams reach a point where their greatest opportunity for improving the way they implement and deliver a product is of a technical nature. There is no single standard set of universal technical practices, and Best Practices really need to be thought out in the context in which they would be used. Each team should look at what is most appropriate for their context.

Practices

Here are the core practices we see used by high-performing Agile teams:

Agile Testing

Agile testing is a core part of applying the lean principle of Build Quality In as we develop products. Testing in Agile is applied in a way that increases collaboration and understanding, improves feedback, and helps deliver quality products more quickly. While automation is an important practice, ensuring we can still do the types of manual testing that are not easily automated but provide valuable feedback is just as important.

The key practices and principles here are:

Other Resources:

Test-Driven Development

For most teams, automated testing is not new. Some form of unit testing or functional testing is common, but to get a better return on our testing investment, there is a need to level up. We can increase the level and timeliness of feedback by adopting the practice of Test-Driven Development (TDD). In addition to earlier feedback, TDD leverages tests to give the team design feedback, leading to better and less wasteful feature implementations than testing after implementation usually produces.
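
A minimal sketch of the TDD cycle, assuming Python and the standard unittest module (the slugify helper is a hypothetical example, not a reference implementation):

```python
import unittest


# Red: write the test first, describing the behavior we want from a
# hypothetical `slugify` helper. It fails until the code exists.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins_words_with_dashes(self):
        self.assertEqual(slugify("Agile Technical Practices"),
                         "agile-technical-practices")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Hello World  "), "hello-world")


# Green: write the simplest implementation that makes the tests pass.
def slugify(title):
    return "-".join(title.strip().lower().split())


# Refactor: with the tests green, the implementation can be restructured
# safely; the tests immediately flag any change in external behavior.
if __name__ == "__main__":
    unittest.main()
```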

The same TDD workflow that is most often applied to Unit Testing can be applied to different levels of tests. The most common TDD-related testing activities are:

Other Resources:

Continuous Integration

Software implementation involves the combined efforts of one or more people brought together to form something that is ultimately deliverable. Feedback should be given at the earliest point possible to tell us when new features do not cleanly integrate with existing ones, or when the behavior of the system has changed in an unexpected way. While merging and integrating code lines is often considered painful, doing it more often and in smaller increments makes it easier and provides earlier feedback.

The most common practices within Continuous Integration are:

  • Use a single source repository or main code line
  • Automate the build
  • Test each build
  • Commit to the mainline frequently (daily)
  • Drive integration, builds, etc. from every commit (a minimal sketch follows this list)
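
As one possible illustration of “automate the build” and “test each build,” here is a minimal sketch of a script a commit hook or CI job might run. It assumes a Python project built with pip and tested with pytest; your actual build and test commands will differ.

```python
#!/usr/bin/env python3
"""Minimal sketch of a per-commit CI step, assuming a Python project
built with `pip install -e .` and tested with `pytest`. A real setup
would normally be driven by a CI server on every push to the mainline."""
import subprocess
import sys

STEPS = [
    ["pip", "install", "-e", "."],  # automate the build
    ["pytest", "-q"],               # test each build
]


def main():
    for step in STEPS:
        print("running:", " ".join(step))
        if subprocess.run(step).returncode != 0:
            sys.exit(1)  # fail fast so the broken commit is visible immediately
    print("build and tests passed")


if __name__ == "__main__":
    main()
```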

Other Resources:

Continuous Delivery

Continuous Delivery is in many ways an evolution of Continuous Integration that extends the benefits of frequent feedback and automation to packaging and deployment. This means continually delivering code to relevant environments (even production) as soon as it is ready by leveraging deployment and infrastructure automation. This is often where much of DevOps is focused to support collaboration, sharing, and feedback between development and operations.

With Continuous Delivery we mean:

Other Resources:

Refactoring

The goal of Code Refactoring is to restructure existing code in order to improve its readability, reduce complexity, improve maintainability, and make it more extensible without changing the external behavior of the code. This is done by relying on automated tests to ensure the behavior of the code stays the same while making a series of small, incremental changes to the internal structure of the code.

Many modern Integrated Development Environments (IDEs) and code editors provide functionality for assisting with applying common patterns of refactoring such as extracting a new class from an existing one or renaming a method across a code base for clarity/readability.

Refactoring is a key part of Test-Driven Development but can also be performed outside of the cycle when necessary. Normally, the observation of a Code Smell is the driver for performing refactoring.
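
A small, hypothetical Python sketch of that safety net: an existing test pins the external behavior while a conditional chain (a common code smell) is restructured into a simpler lookup.

```python
import unittest


# An existing test pins down the external behavior before we refactor.
class ShippingCostTest(unittest.TestCase):
    def test_known_and_unknown_regions(self):
        self.assertEqual(shipping_cost("US"), 5.0)
        self.assertEqual(shipping_cost("EU"), 9.0)
        self.assertEqual(shipping_cost("APAC"), 15.0)


# Before (a growing chain of conditionals -- a common code smell):
#     if region == "US":
#         return 5.0
#     elif region == "EU":
#         return 9.0
#     else:
#         return 15.0
#
# After: same external behavior, simpler internal structure. The test
# above staying green is what tells us the refactoring preserved behavior.
_RATES = {"US": 5.0, "EU": 9.0}
_DEFAULT_RATE = 15.0


def shipping_cost(region):
    return _RATES.get(region, _DEFAULT_RATE)


if __name__ == "__main__":
    unittest.main()
```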

Other Resources:

Peer Review

Agile Teams are always looking for frequent feedback and knowledge sharing, and achieving both at the same time is a big win. By increasing the number of eyes that see and understand any given part of a code base, design, architecture, infrastructure, etc., we apply more knowledge, perspective, and experience to a solution and increase the number of people effectively working in that area.

Core Practices:

Emergent Design

Developers, architects, and teams often struggle with how and when to approach design. In contrast to the more waterfall style of Big Design Up Front in a designated phase, Agile teams tend to design all the time and let the design emerge as features are added. This certainly happens continually through refactoring as part of Test-Driven Development, but other elements of design also take place in planning sessions and designated design sessions. By employing emergent design throughout, we reduce the waste or rework associated with unvalidated or unused designs.

Some key practices and concepts are:

Other Resources:

DevOps

DevOps is often discussed as an outside yet complementary practice to Agile. But its focus on collaboration, shared understanding, and shared responsibilities across Development and Operations closely parallels the struggles Agile teams face between Developers and Testers or other combinations of roles. For Agile teams, DevOps is a key part of achieving Continuous Deployment and Continuous Delivery. These practices are also important for improving consistency, reducing wait time and other wastes, and sharing knowledge between roles that have traditionally been very siloed.

While collaboration and communication are critical, the key technical practices involved are:

Practices for Improvement

Whether you are Agile or Lean, use Scrum or Kanban, or none of the above, these practices and principles, and others like them, should be candidates for your toolbox and starting points for team improvement.

Blog

Why Decomposition Matters

By: Agile Velocity | Jun 05, 2013 |  Article,  Technical Practices

As children, we were told by adults to take smaller bites when eating. We didn’t always understand the risk of choking that they saw, but ultimately it was easier and safer to chew those smaller bites. In some cases, taking smaller bites made the difference in finishing more of the meal. But have we translated that lesson to other things in our lives?

Often we try to tackle things that are too big to sufficiently understand, estimate, or complete in a reasonable amount of time and we suffer for it. We don’t always see the risks that hide in more complex items and thus don’t feel compelled to break them down. Other times we fail to break things down due to unknowns, ignorance, uncertainty, pride, optimism, or even laziness.

Variation in Software

Delivering software products is a constant struggle with variation in size and complexity. It’s everywhere, from the size of user stories to tasks to code changes to releases. It is present from the moment someone has an idea to the moment software is deployed. We strive to understand, simplify, prioritize, and execute on different types of things that are difficult to digest. The more variation we see, the more we struggle with staying consistent and predictable.

Because of this variation, we need to be good at breaking things down to more manageable sizes. This need is so pervasive that I propose it be viewed as a fundamental skill in software. I don’t think most teams recognize this, and they certainly don’t develop the skill. Many people on Agile teams exposed to concepts like Story decomposition don’t realize how often they need to apply similar practices in so many other ways.

Why Decomposition Matters

My example of eating small bites was simplistic. In developing software, there is a lot more to gain but it is still ultimately about reducing risk and making tasks easier.

Progress

According to The Progress Principle, a strong key to people remaining happy, engaged, motivated, and creative is making regular progress on meaningful work. Not surprisingly, we want to get things done, but we also want to feel a sense of pride and accomplishment.

Of course, stakeholders and other parties are interested in seeing progress from those they depend on. When we are able to make more regular progress toward goals, we provide better measurements and visibility for ourselves and others to know how we are doing.

It should be no surprise that smaller items enable quicker progress toward goals if done right. We certainly need to take care to avoid dependencies and wait time. Smaller, more independent goals allow more frequent progress and all the benefits that come with it.

Collaboration

When we have more people working on something, is it easier to put them all on a larger, more monolithic task or to divide and conquer? Usually, we prefer to divide and conquer. Yet how we divide is important as well because dependencies and other kinds of blocking issues create wait time and frustration.

Decomposition can be one of the easier ways to get additional people involved with helping accomplish a larger goal. By breaking up work into more isolated items to be done in parallel, we are increasing the ability to swarm on a problem.

Complexity

Complexity is one of the greatest challenges in software development. With more interactions, operations, and behaviors, we are more likely to have edge cases and exposure to risk when anything changes. Large goals are easier places for complexity to hide. The larger the task is when we try to accomplish it, the more details we have to discover, understand, validate, and implement.

Focusing on smaller units of work can be a helpful constraint we place on ourselves. Why add our own constraints? When we constrain ourselves to work on a small portion of a larger task, we limit the complexity that prevents us from accomplishing something. We want to avoid a downward spiral of “what if?” and “we are going to need” scenarios that, while important, can slow the task at hand and lead to overthinking, overdesigning, and accumulating work we may never need. We are trying to avoid Analysis Paralysis.

Control

Looking at Kanban systems, we can see how wide variations in size impact lead times. If we can be more consistent with the size of items flowing through a system, then we will have more consistent throughput and cycle times. By breaking work items down into items of similar size, cycle times become more stable, and the average (despite whatever size variation remains) becomes more useful for forecasting thanks to the law of large numbers.
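
A toy simulation (hypothetical numbers, Python standard library only) illustrates the effect: roughly the same total amount of work shows far less spread in per-item time when it is broken into small, similarly sized pieces.

```python
import random
import statistics

random.seed(7)

# Hypothetical simulation: roughly the same total work delivered either as
# a few large, highly variable items or as many small, similar ones.
large_varied = [random.uniform(1, 20) for _ in range(20)]    # days per item
small_similar = [random.uniform(1, 3) for _ in range(105)]   # days per item

for name, items in (("large, varied", large_varied),
                    ("small, similar", small_similar)):
    print(f"{name:15} mean={statistics.mean(items):5.2f} days  "
          f"stdev={statistics.stdev(items):4.2f} days")

# The small, similarly sized items show far less spread around their mean,
# so that mean is a steadier basis for forecasting -- the law of large
# numbers working in the team's favor.
```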

Clarity

It isn’t a coincidence that Break It Down is also a slang phrase in music and pop culture that relates to our goals. Urban Dictionary defines “Break It Down” to mean: to explain at length, clearly, and indisputably. By looking at the pieces of a larger whole individually and in more detail, we can often gain more clarity and understanding of the bigger picture than if we had never spent the extra effort.

Summary

Software Development is a continual exercise in dealing with variation in size and complexity. From early feature ideas to low-level code changes, we have work that can be difficult to understand, manage, and predict, especially when it is large. Decomposition helps us make this work more manageable.

So, we need to remember to Break It Down. It is all about decomposition. And in software, decomposition is everywhere, yet so many struggle with recognizing the need and applying it well. I believe decomposition should be considered one of the most fundamental and critical skills in software development. Getting better at it takes a combination of discipline, practice, and learning but can pay off immensely.

To be effective, even this post required decomposition. We are going to continue with a series of posts exploring many of the individual types of variation in software and how lean/agile teams cope with these different situations.

Blog

Qualities of a Good Team Player

By: Agile Velocity | May 29, 2013 |  Agile,  Article,  Leadership,  Team

The word “team” in Agile Team is hugely important and something we rarely give much thought to. I recently browsed the web to discover and define what really makes a good team player. Part of my personal journey is to improve as a member of my team. I look to these words for inspiration.

Coming together is a beginning.
Keeping together is progress.
Working together is success.

– Henry Ford

None of us is as smart as all of us.

Dependable, reliable, and consistent

You can count on a reliable team member who gets work done and does their fair share, working hard and meeting commitments.

Communicates Constructively

Teams need people who speak up and express their thoughts and ideas clearly, directly, honestly, and with respect for others and for the work of the team.

Shares openly

Good team players share. They’re willing to share information, knowledge, and experience. They take the initiative to keep other team members informed.

Asks “What can I do to help the team succeed?”

Team members who function as active participants take the initiative to help make things happen, and they volunteer for assignments.

Listens

Teams need team players who can absorb, understand, and consider ideas and points of view from other people without debating and arguing every point.

Cooperates

Good team players, despite differences they may have with other team members concerning style and perspective, figure out ways to work together to solve problems and get work done. They respond to requests for assistance and take the initiative to offer help.

Flexible

A flexible team member can consider different points of view and compromise when needed.

Problem-solver

They’re problem-solvers, not problem-dwellers, problem-blamers, or problem-avoiders. They don’t simply rehash a problem the way problem-dwellers do. They don’t look for others to fault, as the blamers do. And they don’t put off dealing with issues, the way avoiders do.

Considerate

Team players treat fellow team members with courtesy and consideration — not just some of the time but consistently. They care about the team winning.

When observing the best teams, it is difficult to identify leaders; instead, you tend to see a blend of complementary roles:

  • The creative type who generates ideas called the Plant
  • The extrovert who has good networks (the Resource Investigator)
  • The dynamic individual who thrives on the pressure (the Shaper)
  • The person who soberly evaluates the usefulness of ideas (the Monitor-Evaluator)
  • The cooperative team player (the Team-worker)
  • The ones with specialist skills (the Specialist)
  • Those who turn ideas into solutions (the Implementer)
  • The person who gets issues completed (the Completer-Finisher)
  • The person who keeps the team together effectively (the Co-ordinator)

Blog

Austin DevOps Events Summary – Culture is Important

By: Agile Velocity | May 07, 2013 |  Agile Technical Practices,  Article,  Technical Practices

Last week some of our team attended several DevOps-related events in Austin. We had a great time learning and interacting with other technologists attending both PuppetCamp Austin and DevOps Days Austin.

This edition of the annual DevOps Days event in Austin (which also takes place in other cities each year) was declared the biggest yet. There were great discussions, Ignite talks, and Open Space sessions as well. And while there are always many conversations around technology, there was a noticeable focus on culture.

Many of the talks on both days had a strong cultural component. On the second day, the organizers even mentioned feedback from some attendees asking to move past the culture stuff and on to tech talks. But it was obvious from the sessions that a large number of people felt the cultural conversation was important. Those of you in the Agile community who haven’t looked closely at Development Operations will recognize that these are conversations similar to those occurring about Agile in general.

Some notable takeaways:

There were too many great talks to highlight them all here. You can see all the recorded talks on Vimeo. You should also look back through tweets to see what people were saying during the conference.

If you haven’t attended one of these events, you should definitely try to make it to at least one of the events they put on each year.

Blog

Is Hadoop the right solution?

By: Agile Velocity | Mar 06, 2013 |  Agile Technical Practices,  Agile Tools,  Article,  Technical Practices

I noticed some interesting announcements recently concerning the open-source Apache project Hadoop. Firstly, in the last week, both Intel and EMC have announced their own distributions (link). It seems that the big iron hardware vendors are finally coming around to seeing Hadoop as the standard for big data processing. It makes sense for these vendors to optimize and integrate it into their platforms, but in reading these articles, I have to wonder if these vendors are focused too much on Hadoop as the single solution for all data processing needs.

I stumbled across an article some time ago by Mike Miller at Cloudant that made the case that Hadoop’s days are numbered. Mike makes the point that while Google MapReduce and its open-source cousin Hadoop were great innovations when first introduced, even Google has moved on to other technologies that have fewer limitations and are better performing. I have personally struggled with determining the best approach to handling streaming data sets; in those cases, it seems that something like Storm might have been more appropriate.

Part of the job of a good software architect is knowing which of the available tools to recommend to the team and which best fit the job at hand. This means you need to know about a range of tools with different strengths and weaknesses. So, the next time someone mentions Hadoop as the solution for a large-scale processing need, take a step back and make sure the problem maps well to it. If there are ad-hoc analytics, dynamic data sets, or other features that it has trouble supporting, look for alternatives that might perform much better.

Blog

Podcast Recommendation: The Ruby Rogues

By: Agile Velocity | Mar 05, 2013 |  Agile Technical Practices,  Article,  Technical Practices

WAIT! Stop Right There!!! I know some of you saw Ruby in the title and are about to move on, but I encourage you to read on. This could be beneficial to you even if you don’t know or don’t care about the Ruby programming language.

Podcast Description

For those of you who listen to podcasts while driving, mowing the lawn, running, cleaning the garage, or lounging at home, here is a recommendation of something I like and listen to. Like many of you, I don’t always have enough hours in the day to keep up with various topics, so I listen to podcasts when I can.

Our readers are technology folks, usually involved with delivering software using an Agile or Lean approach. This podcast recommendation isn’t explicitly Agile or Lean focused, but those elements can be found here and there, along with healthy doses of pragmatism. While there is a focus on a specific language, I have found a wealth of good knowledge and discussion that is often applicable to general software practitioners of any technology set.

Agile Velocity has a lot of experience working with Ruby on many projects over the years, which led me to the Ruby Rogues Podcast. This is a group of Ruby practitioners who lead a weekly panel discussion of Ruby and Software Development topics with frequent guests from the community. Many of the regular hosts are well known in the Ruby community and are also authors and conference speakers. Each episode also ends with fun technology (and sometimes non-technology) picks by the panelists.

This is an easy recommendation for anyone who works with (or is considering) Ruby. But there are also many episodes that focus heavily on general software issues around development, delivery, agile, technical practices, craftsmanship, etc.

Recommended Episodes

Here are some episodes that general agile software practitioners may find interesting:

It was difficult for me to pick just this list because there are so many more great episodes. While this is probably nothing new to people in the Ruby community, I hope I have pointed the rest of you to one of the best software development related podcasts around that most software practitioners can benefit from.

Blog

Write Software That Behaves!

By: Agile Velocity | Feb 11, 2013 |  Agile Technical Practices,  Agile Tools,  Article

Behavior Driven Development

Behavior Driven Development is the process of writing high-level scenarios that verify a User Story meets the Acceptance Criteria and the expectations of the stakeholders. The automation behind the scenarios is written to initially fail and then pass once the User Story development is complete.

BDD is largely a collaboration tool which forces people working on a project to come together.  It bridges the gap between stakeholders and the development team.  Practicing BDD is fairly painless, but does require a meeting to discuss the intended behavior the scenarios will verify.  The meeting to write the BDD tests is usually an informal one which is led by the Product Owner or stakeholder and involves members of the development team.  The goal is to collaborate so everyone is on the same page as to what this User Story is trying to achieve: Start with the story. Talk about the “so that”. Discuss how the software currently works and how this story will change or impact current functionality.

Scenarios are written from a user’s perspective. Because they are run on a fully functioning system with real data, they can be expensive (meaning the time it takes to write, execute, and maintain the tests).  However, the scenarios will serve as executable documentation for how the software behaves. This is useful for developers to understand each other’s code and gives them tools to refactor with confidence.  Over time, the tests will evolve as they are maintained and also serve as easy-to-read descriptions of features. Documenting behavior in this manner is useful when onboarding new team members by communicating the software’s functionality.

Things to ask yourselves when writing scenarios:

  • What functionality is not possible or feasible to cover with lower-level automated tests?  How do these tests differ from unit tests, integration tests, or service tests?
  • What is the “happy path” of the User Story functionality? This is the typical usage of the software.
  • What is the atypical usage?  Are there variations of inputs that are possible, but used less frequently?
  • How should the system handle bad input?
  • How does the system prevent bad output?  How does it display or log errors?
  • What is the impact on other parts of the system?
  • What are the integration points – other components, other applications?  Should the tests include some verification of this integration, or is it covered elsewhere?

The process that has worked well for me is to, first, write the scenarios together with the Product Owner.  The PO should lead this discussion and write the steps in a way that makes sense to stakeholders.  At least one Developer and one Tester should also be present.  Some like to call this the “Power of Three”.  Read through the Acceptance Criteria asking the questions listed above. Try to use consistent language in the steps and use terms that make sense to people outside the development team.  It may be tempting to write steps involving user interaction with the software, like this:

Given I am on the Login screen

When I enter “user1” in the username field

And I enter a “mypassword” in the password field

And I click the Login button

Then I should see the error message “Username/password is invalid”

I have found it is better to describe the behavior in broader terms that are not closely tied to the application layout itself.  For example:

Given I am on the Login screen

When I enter invalid credentials

Then I am not logged in and a meaningful error message is displayed

This way, it is the code behind each step that is closely tied to the application or the user interface, not the test scenarios themselves.  As the application grows, the test steps themselves will be less likely to require changes, isolating the maintenance to the code behind the test scenarios.

Now, once the scenarios are defined using existing steps as well as some new ones, the Testers can partially implement the new steps to fail.  For example, adding assertions that a file exists on the file system.  Or, writing code returning a negative result for now.  Maybe the code will eventually query the database to return a positive result, or maybe it will ensure some value is displayed on the UI.  Sometimes this part is minimal; sometimes it can include almost all of the step implementation.  Lastly, during feature implementation, the Developer writes the final test code to make the scenario pass.  I encourage Developers and Testers to pair at this stage. This type of teamwork keeps the Tester engaged in how the code is being implemented and ensures they understand how the software works.  An informed Tester is a good Tester.
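
For illustration only, here is what step definitions for the broader scenario above might look like using the Python BDD tool behave; the tools linked at the end of this post are the Ruby, .NET, and Java equivalents, and the pattern is the same. The context.app driver and its page-object methods are invented placeholders.

```python
# Hypothetical step definitions for the broader login scenario above,
# sketched with the Python BDD tool `behave` purely for illustration.
# `context.app` and its page object methods are invented placeholders.
from behave import given, when, then


@given("I am on the Login screen")
def step_open_login(context):
    context.page = context.app.open("/login")


@when("I enter invalid credentials")
def step_enter_invalid_credentials(context):
    # A Tester might first stub this to fail on purpose (for example,
    # `assert False, "not implemented"`), then pair with a Developer to
    # fill in the real UI interaction during feature implementation.
    context.page.login("user1", "not-the-password")


@then("I am not logged in and a meaningful error message is displayed")
def step_assert_not_logged_in(context):
    assert not context.page.is_logged_in()
    assert "invalid" in context.page.error_message().lower()
```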

As you probably know, automated tests provide more value the more frequently they are executed.  This is why you want to be smart about the tests covered at the user level.  Automated testing is an investment.  The team should view the tests as code as well.  Automated tests require maintenance to keep them passing, and that maintenance is best shared by all members of the team.  When practicing BDD, make sure all scenarios provide value. They should not be too difficult to automate.  Be careful not to include too many variations of input data.  If possible, cover the various inputs using tests at lower levels: unit, integration, service-level. Use the BDD scenarios to cover what is not, or better yet, cannot be covered by other types of automated tests. Don’t be afraid to get rid of a scenario altogether if it doesn’t provide value.  It is okay to run some tests manually as long as the team understands manual tests are executed much less frequently, so feedback is delayed.

Tips for BDD:

  • Write them to be executed locally on a Developer’s machine
  • Monitor execution time and keep it to a minimum
  • Scenarios should not be dependent on each other
  • Scenarios can be executed multiple times in a row and still pass (some cleanup may be necessary)
  • To keep the number of steps from getting out of hand, pass in variables to the steps to maximize reuse
  • Keep your steps organized in separate files by major functional area
  • Scenarios are grouped to allow for a quick regression test of each major functional area, or a full regression of the entire system
  • Use source control for your test code

When BDD is done properly (before implementation), the real value is gained by simply collaborating and discussing the expected behavior of the software!  Once implementation is done, the scenarios ensure the software meets the needs of the stakeholders.  From then on, the automated tests act as a safety net for developers to refactor code and implement new features with confidence.  Teams should strive to make execution of at least a portion of the BDD tests part of their Continuous Integration build/deployment process and make the test results visible.  Failing test scenarios around existing functionality should be a top priority to fix.

Have fun!  And write software that behaves!

Tool references: http://cukes.info/ (Ruby)

http://www.specflow.org/specflownew/ (.NET)

http://jbehave.org/ (Java)

Blog

The Lean Startup Book Review

By: Agile Velocity | Jan 11, 2013 |  Article,  Leadership

The Lean Startup by Eric Ries is without a doubt one of the better entrepreneurial books I have read.  The book claims to explain “how today’s entrepreneurs use continuous innovation to create radically successful businesses.”  The author’s writing is easy to follow, and he uses a lot of real-life examples to help drive his points.  On one hand, I look back and say much of this is common sense…of course that is what you should do. On the other hand, I feel this book is somewhat game-changing, as Ries has simplified some powerful techniques that explain how to cut through the uncertainty plaguing many start-ups and hindering innovation at existing companies.

First, Ries explains that entrepreneurs are everywhere and that the model is not limited to actual start-up companies but can be applied to any business.

A startup is a human institution designed to create a new product or service under conditions of extreme uncertainty.

The foundation of the book is to use the “Build-Measure-Learn feedback loop” to help steer your startup.  While the concept is not exactly new, he does provide interesting insight into what he calls innovation accounting to measure progress and how many metrics can be categorized as “vanity” rather than “actionable.”  And finally, Ries explores techniques for accelerating the Build-Measure-Learn feedback loop even as it scales.

Here are additional quotes I enjoyed:

The MVP is that version of the product that enables a full turn of the Build-Measure-Learn loop with a minimum amount of effort and the least amount of development time.

As you consider building your own minimum viable product, let this simple rule suffice: remove any feature, process, or effort that does not contribute directly to the learning you seek.

A startup’s job is to (1) rigorously measure where it is right now, confronting the hard truths that assessment reveals, and then (2) devise experiments to learn how to move the real numbers closer to the ideal reflected in the business plan.

As they say in systems theory, that which optimizes one part of the system necessarily undermines the system as a whole.

Validated was defined as “knowing whether the story was a good idea to have been done in the first place.”

A pivot requires that we keep one foot rooted in what we’ve learned so far, while making a fundamental change in strategy in order to seek even greater validated learning.

Every entrepreneur eventually faces an overriding challenge in developing a successful product: deciding when to pivot and when to persevere.

The more money, time, and creative energy that has been sunk into an idea, the harder it is to pivot.

Failure is a prerequisite to learning.

Small batches pose a challenge to managers steeped in traditional notions of productivity and progress, because they believe that functional specialization is more efficient for expert workers.

When I work with product managers and designers in companies that use large batches, I often discover that they have to redo their work five or six times for every release.

As an agile practitioner, I found much of the book to be validation of the learning and course correction ingrained through years of practice.  While you will find very insightful information on how to measure and learn through reading this book, I believe it’s worth a caution that applying these techniques is much harder when you are the one executing the tasks.  In my experience, it is much tougher to be as objective as necessary without some help from an outside party to validate the results and lessons of your experiments.  I will end with one final quote on this note:

“Organizations have muscle memory,” and it is hard for people to unlearn old habits.