Archive for category Startup essentials
Like most engineers, I do a lot of optimizing, often just for fun. When walking to work I seek the shortest route. When folding my laundry I minimize the number of moves. And at work, of course, I optimize all day long alongside all my engineering colleagues.
Optimization, by definition, requires an objective function as the basis for measuring improvement or regression. How short is the route? How many moves does it take to fold a shirt? But what is the objective function at work around which my team and I should optimize?
I’ve worked in many software engineering organizations where the objective function is an unstated confusion that evolved on its own over time. It’s often a bit of “don’t break things” mixed with a dose of “conform to the standards”. Sometimes more destructive objectives find their way into the culture: “get yourself seen,” or worse “don’t get noticed.” And my least favorite, “hoard your knowledge.”
Recently, while working with a client, I had to state my views on a good objective function for a software engineering team. It’s to predictably deliver a continuous flow of quality results while minimizing dark time — the time between when a feature is locked down for development and when a customer starts using it.
Predictable: Your process has to be stable and sustainable. It’s not about sprinting to collapse; nor is it about quick wins followed by a slog through technical debt. It’s about a steady pace over a long time. Hence the volatility measure in Pivotal Tracker; a good team has low volatility, and therefore their rate of delivery is highly predictable.
Delivery: Code is not worth anything until it is in use by customers. Calling delivery anything else often leads to spinning wheels and wasted effort.
Continuous flow: Activities that potentially disrupt the flow are better dealt with in the moment, in the normal run of things. For example, I find mandatory code reviews disruptive and demoralizing. Gatekeeping steps like these, by definition, stop the normal flow and send things back for rework. In contrast, pair programming often achieves the same quality and consistency objectives in real time and without disrupting the flow.
Quality: This is a relative measure. The work needs to be done sufficiently to avoid rework (i.e. bugs) and to prevent the accumulation of technical debt. Spending more time trying to achieve “quality” beyond these measures is just waste.
Results: What it’s all about.
Minimizing dark time: Many software engineering organizations miss this one because it’s driven by the business rather than the needs and craftsmanship of the engineers themselves. And yet, minimizing dark time is perhaps the most critical contribution that an engineering team can make to a business.
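The volatility measure mentioned under “Predictable” is, roughly, how much delivery varies from iteration to iteration. As a sketch (the exact Tracker formula isn’t reproduced here; this coefficient-of-variation version is my assumption):

```python
# Sketch: a volatility measure in the spirit of Pivotal Tracker's —
# how much delivered points vary relative to average velocity.
from statistics import mean, pstdev

def volatility(points_per_iteration):
    """Standard deviation of delivered points as a fraction of mean velocity."""
    avg = mean(points_per_iteration)
    return pstdev(points_per_iteration) / avg

steady = [8, 9, 8, 10, 9]    # low volatility: delivery is predictable
erratic = [2, 15, 4, 12, 3]  # high volatility: forecasting is guesswork

print(round(volatility(steady), 2))
print(round(volatility(erratic), 2))
```

A team with low volatility can commit to dates; a team with high volatility can only gesture at them.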
Dark time is what the business experiences between when the engineers remove the business’s ability to re-spec a bit of work and when they hand back a working result. In this dark time the business can no longer refine their decision nor observe and learn from the results. They’ve ordered (and paid for) the new car, but are waiting for the keys. It’s dark because during this stage there is nothing for the business to do, with respect to that feature, but wait.
While coding I experience the same dark time when working on a slow code base or, worse yet, working with a slow test suite. My TDD cycle grinds to a crawl as I wait a minute or more (the horror!) between when I hand my code to the compiler/interpreter and when it spits back the red-green results.
If you hate twiddling your thumbs waiting for slow tests to run, think how frustrating it is for the business folks when their engineering team throws them into the dark for days, perhaps even weeks. Of course they pipeline their work and find ways to be useful, but the dark time still sucks.
When a software engineering team chops dark time down from a month to a week, business folks cheer. When the engineers chop dark time down to a day or less, the business folks do what we coders do when working with a fast test suite… we fly.
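Dark time is also easy to put a number on, if your tracker records when a spec was locked and when a customer first used the result. A minimal sketch, assuming those two timestamps exist (the field names here are illustrative, not from any real tool):

```python
# Sketch: measuring dark time per feature from two assumed timestamps —
# when the spec was locked for development and when a customer first used it.
from datetime import datetime

def dark_time_days(locked_at, first_used_at):
    """Elapsed days between spec lock-down and first customer use."""
    return (first_used_at - locked_at).total_seconds() / 86400

features = [
    {"name": "invite flow",
     "locked_at": datetime(2011, 5, 2),
     "first_used_at": datetime(2011, 5, 30)},
    {"name": "read receipts",
     "locked_at": datetime(2011, 6, 1),
     "first_used_at": datetime(2011, 6, 2)},
]

for f in features:
    print(f["name"], round(dark_time_days(f["locked_at"], f["first_used_at"]), 1))
```

Tracking this number over time makes the trend visible to the whole team, not just the business folks who feel it.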
This post also appears on my blog at Pivotallabs.com.
I’ve recently been consulting as a lean startup expert at the retail side of a large bank in the UK which is exploring how to increase their rate of innovation. The project has been challenging, inspiring and filled with lessons learned.
The bank hired me to help them create a new venture, and in doing so it became clear that the bank is struggling with a larger underlying challenge: how to drive innovation in and around their organization.
I went into the project expecting to find bank managers who had no interest in innovation – why would a bank that is “too big to fail” have much interest in shaking things up? To my surprise the situation was quite the opposite. Almost everyone I met, which spanned a range of management levels, shared three views. First, they wanted to innovate. Second, they were frustrated at the inability to innovate within their organization. And third, they were proud of the ideas that the bank had managed to nurture and launch.
The bank, it turns out, has the potential to be what Steve Blank would call an earlyvangelist customer.
Earlyvangelists are a special breed of customers willing to take a risk on your startup’s product or service. They can actually envision its potential to solve a critical and immediate problem—and they have the budget to purchase it. Unfortunately, most customers don’t fit this profile. (source)
In this case:
- The bank management have a problem – lack of innovation
- They are aware of the problem.
- They are actively looking for a solution.
- The problem is so painful that they cobbled together interim solutions.
- And they have even allocated budget to continue to tackle the problem.
With this in mind I used the project as an opportunity to iterate towards a business model for delivering an effective intervention for driving innovation in large banks. As you’ll see in the remainder of this post, my team and I have learned many lessons. We’ve also made considerable progress towards finding a model that is likely to work.
The rest of this post details the business models we tested and the resulting lessons we learned. It ends with a proposal for a new model, the Inside-Out Incubator, that’s centered around seeding an ecosystem of innovation in and around the bank. It uses an indirect approach that is more likely to succeed than trying to directly change the bank’s deeply ingrained culture.
So, without further ado…
At Power of Two everyone on our team has a coach. The idea started because our core business is offering online coaching for couples in challenging relationships. We’ve applied the idea to ourselves and found it to be hugely valuable.
An effective coach answers questions that you may not have even thought to ask.
- Review your work product. The purpose of a coach is to advance your understanding beyond what you can do on your own. This only works if they have information beyond what you tell them. A coach should review a developer’s code, a designer’s designs, a writer’s words, a customer developer’s iteration plans and results, etc. A person who gives advice without reviewing your work product is simply a mentor. Mentors are helpful, and good for one’s morale, but they are not a coach.
- Have deep respect. The amount you learn from your coach depends on how much expertise they bring to the table and whether or not you value their suggestions enough to act on them. If you don’t act on your coach’s advice then it’s all just a waste of time.
- Pay for the time. When you give your coach work to review you are asking to spend their time for your benefit. This relationship is much simpler and more likely to succeed if there is a balanced exchange of value.
- Ask stupid questions. Your coach works for you. They are there to help you with both complex questions and ones that you might think are stupid. Often it is the questions that initially seem stupid that point to a gap in your knowledge base or skill set.
- Be a bit scared. Your coach’s job is to tear into your work, expose the weaknesses and then help you address them. This is ego-busting stuff. If you aren’t a bit scared about sending work to your coach then it’s time to find a new coach. At the same time, you should feel empowered after addressing the shortcomings that your coach has identified.
Engaging with a good coach will accelerate your learning curve and get you to the top of your game. As an essential tool for success, it is worth trying to ensure that everyone on your team has their own coach.
If your startup is searching for product-market fit then the life of your company depends on how quickly you can decide, release and learn. To speed the release part of this cycle it is critical that you encourage your dev team to release features before they are “ready”.
The challenge is that this goes against what we learn at university and experience in corporate jobs. In those environments releasing unpolished work can be a career-limiting flaw.
I just got a complaint from a customer about the feature that Jon just released. We need to warn him to be more careful and maybe put him on a less important project. [FAIL]
Adopting this corporate-world attitude will retard your iterations and undermine your agility.
As a lean startup your primary objective is to learn about customers. You win when you release code and learn something about your customers. It does not matter if the code is buggy or incomplete so long as you gain validated learning about customers. The faster you can get something, anything, in front of customers the better. Therefore you should encourage your team not just to commit early and often, but to release early and often.
What does this mean? For starters you should talk about the minimum viable form of each feature, but this point is obvious. What’s non-obvious is how you should deal with the inevitable problems that arise from a hastily (in a good sense) released feature. Here are some suggested responses:
80% of our users jumped in to use Jon’s new feature. It’s a hit! Jon, could you now put in some time to debug this complaint I just received.
Ugh, the feature Jon just released isn’t getting traction. Let’s scrap it and move on. No need to worry about this bug report.
Jon, I think the feature you just released needs a bit of debugging to be a viable test. Here’s the complaint I just got.
Flapping in the Breeze
Everyone on your startup’s team should understand that when devs release early they put themselves into a very exposed position. Their work is incomplete. The feature is probably going to break. Some customers may get annoyed. If your devs think that they’ll be blamed when the feature goes pear-shaped then your team has a problem.
A dev who releases a minimum viable feature, complete with bugs and unhandled cases, is taking a personal risk to benefit the company. The entire team needs to appreciate this fact. A newly released feature that still has bugs is a good thing. It means that the dev trusts the team and is letting them help decide how to allocate their time. The alternative, of a developer silently debugging a feature, often wastes cycles on work that does not contribute to the larger goal of customer development.
The key point is that in a lean startup everyone must be doing the minimum effort to generate the most learning. For a developer, whose work product is on public display, there is inherent pressure to fall back from this goal to keep their ass covered. Counteracting this requires trust and understanding across the entire team.
Getting Your Wobbly Bits Out
Here are a few tips for getting to a minimum viable feature faster:
- Put off for tomorrow what you don’t need to do today. Use your ticket system to log all non-minimum aspects of the feature so that the team as a whole can decide whether or not the effort is justified.
- Establish a blame-free culture. Everyone on the team needs to know that they are asking the devs to take a personal risk with every release. And, that the team as a whole benefits from their risk taking, so NEVER rub in the blame when there are problems. The fact that bugs exist is frustrating enough for a dev; they don’t need to feel judged in the eyes of the team as well. The right approach is to realize that the bugs exist because the dev is doing it right.
- Consider and praise the minimumness of a release. This is not to say that devs should not learn from their mistakes and take steps to reduce future bugs. Rather, the entire team should understand that over-polishing a feature is as much a problem as releasing it with bugs. As the team assesses their processes it is important to ask whether feature releases could both be more viable (less buggy) and more minimal.
My four year old son is into withholding information.
“How was school?”
“What did you learn today?”
“Were the other kids mean to you?”
Like most kids, he finds it fun to keep me in the dark. Of course, what he doesn’t realize is that I can help him with problems, but only if he tells me about them.
Software without an automated test suite behaves like my four year old. It keeps secrets and makes it really hard to proactively fix problems. In contrast, software with good test coverage calls out “Yo, I’m not doing what you expected!” It grabs your attention so that you can fix things before they become big problems. Automated tests let your code step up and communicate like an adult.
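Concretely, a test with a descriptive title is exactly that call-out. A minimal sketch in Python’s unittest, where `Message` is an illustrative stand-in for a real model class:

```python
# Sketch: tests whose titles read as plain-English expectations, so the
# code can "speak" when something breaks. Message is a hypothetical model.
import unittest
from datetime import datetime

class Message:
    def __init__(self, content):
        self.content = content
        self.read_at = None

    def mark_as_read(self):
        self.read_at = datetime.now()

class MessageExpectations(unittest.TestCase):
    def test_a_new_message_is_not_marked_as_read(self):
        self.assertIsNone(Message("hi").read_at)

    def test_marking_a_message_as_read_records_the_date(self):
        message = Message("hi")
        message.mark_as_read()
        self.assertIsNotNone(message.read_at)

# Run the expectations; with verbosity=2 the test titles themselves are
# printed, doubling as documentation of how the code is meant to behave.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MessageExpectations)
unittest.TextTestRunner(verbosity=2).run(suite)
```

When one of those titles flips to a failure, the code is telling you, in plain English, exactly which expectation it stopped meeting.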
Unfortunately, it’s very tempting for developers to skip writing tests. The mantra “if it’s not tested, it’s broken” echoes around within the development community because writing tests can be a drag – it’s a time consuming process that reveals flaws which the dev might prefer to gloss over. If you want well tested code, which you should, then you need to give the tests an audience outside of just the dev team. The push for good tests needs to be from the whole company because most developers need prodding to maintain good test coverage. Developers, including me, feel safe and often subconsciously prefer to keep non-tech stakeholders in the dark, just like my four year old.
I led our company, Power of Two, into this trap last year as we developed the foundations of our service. At the time we were rapidly iterating through customer development, evolving the business model on a weekly basis. When I presented the challenge of keeping the code aligned with our ever-changing business model the dev team leader suggested that we avoid the overhead of writing tests. He rightly argued that writing tests would take time away from feature creation and that test suites easily become brittle and break on changes that aren’t actually problems. What I didn’t yet understand was that the business value of good test coverage far outweighs these costs.
Our dev team did great work in every way except test coverage. We quickly adapted our code base with every pivot. We built great features fast. And we created software that held its tongue like my four year old. Now, a year on, almost every day I find myself wanting to slap the software upside the head and tell it to start talking. Without full test coverage (a situation we are working hard to fix) every change is frightening. We are frequently hit with unexpected breakages because, even though the code is well-structured, there are inevitably dependencies lurking, forgotten, in the code. We also often find ourselves pondering “why does the code work this way?” Our developers are not capricious; all the code in our system has purpose. Still, without tests to clarify the expectation that necessitated a given special case, it is often a struggle to remember why we wrote what we wrote.
Poor test coverage is a problem for devs. It is also a huge lost opportunity for the non-devs because the code is silent to them. Well-written tests, or even just well-written test titles, declare how the code is expected to work. These are the statements that bridge the business needs into the coded reality. Non-devs should understand and follow the expectations that are declared in the tests. The tests are their direct, non-technical window into the code base. Unless non-devs engage with the test suite they are entirely reliant on the devs to explain the code base. These explanations often get garbled. When your code base doesn’t talk, the company has to play a game of telephone (Chinese whispers here in the UK):
- non-dev to dev: How do we expect our messaging feature to work?
- dev to code base: Let me see. How do you work?
- code base to dev: I send a message and record the content and when it is deleted.
- dev to non-dev: It sends a message and records the content.
- non-dev to dev: Let’s let people delete messages.
- dev to non-dev: Oh yeah, it already does that.
- non-dev to dev: Let’s add a way to tell when it is read.
- dev to code base: I expect you to record when a message is opened (having assumed read == opened)
- code base to dev: I record when a message is opened
- dev to non-dev: The code does what you want.
- non-dev to dev: Great.
- … Several days later.
- non-dev to dev: How come messages are marked as read when they are only opened?
- … and so forth.
The point is that when the code can’t speak for itself the non-devs are stuck outside of the conversation. In contrast, here is a typical conversation at Power of Two:
- code base to all:
- When a message is sent I record the content.
- When a message is deleted I record the date.
- non-dev to dev: Let’s add a way to tell when a message is read.
- dev to code base: I expect you to record when a message is opened (having assumed read == opened).
- code base to all: When a message is opened I record the date.
- non-dev to dev: Wait, I don’t want to know when it is opened, but rather when it was marked as read.
- dev to non-dev: Ah ha, I misunderstood.
- dev to code base: When a message is marked as read you should record the date.
- code base to all: When a user marks a message as read I record the time.
- non-dev to self: Rockin!
As you can see there is still potential for confusion, but by letting the code base speak for itself the misunderstanding is resolved more quickly, and the non-developer does not have to rely on what the dev says the code does.
More importantly the non-devs now directly get a host of other information about things like which features are completed, how complex they were, which other expectations were impacted, and so forth. All in all, the conversation becomes more rich, more direct and more valuable. When the code talks to everyone, the whole team will make better decisions, spend less time trying to stay coordinated, and stay out of the destructive us-versus-them mindset that often erupts between devs and non-devs.
Also, when the devs know that everyone reads their tests they have strong incentive to keep the test suite up to date and well written. This prevents the developers from quietly falling away from good test coverage and bringing on all the future grief mentioned above.
- Make your code base into an adult that can speak for itself with an automated test suite.
- Create a conversation about expectations between your code base and your entire team, non-devs included.
- Iterate faster, reduce confusion and don’t dig yourself into the hole caused by code without automated tests.
Here’s how we run this at Power of Two.
Each time we do a release, which is several times a day, we email a notice like the following to the full team. This notice summarizes the release, lists the expectations for the changed code and provides the commit messages for greater detail.
Here is the gist of our release notes generation script. It is written in Python using Fabric.
===
RELEASE MESSAGES
---
7.1g0 Feature: Initial coaching stocks
7.1g1 Tweak
7.1g2 Tweak: Fix behavior test

CHANGED EXPECTATIONS (not all tests have changed)
---
winwin_project/apps/processes/tests/profile.py:
- New user should have a consistency score
- User with none consistency score should get a score when they login
- User with none consistency score should not get a score at a new period
- New period should add a non login period to the consistency score
- New login for period should make the most recent consistency period into yes
- Actual login should make most recent period a login for consist history
- Needs coach flag should become true when a user logs in
- Email users action should redirect to content creation page
- Set coach message should do nothing
- Apply assignment should do nothing
winwin_project/apps/profiles/tests.py:
- Needs coach flag for a new user should be true
- Staff user link should return a url
- Logged in function should alter member profiles
- Logged in function should not alter staff profiles

COMMIT MESSAGES
---
* 27fde36 - Better admin for coaching using stocks adjusted to handle blank database during testing (23 hours ago by Jesse)
* 7fb8b23 - Showed requesting help and reordered stock admin. (23 hours ago by Jesse)
* 7f6cadf - Associated a coach with a signupkey to auto-assign. (23 hours ago by Jesse)
* da61b51 - Removed script that set up south. (23 hours ago by Jesse)
| * de200fc - release fix and db copy to dev server (16 hours ago by Jesse)
| * c47a08a - Merge branch 'master' of po2-winwin (16 hours ago by Jesse)
| * 68805b8 - Load copy of live db into a dev server. (13 hours ago by Jesse)
|\|
| |\
|/ /
* | c84b7e3 - Merge branch 'master' of po2-winwin (13 hours ago by Jesse)
| * 1cd398c - Merge remote branch 'origin\master' into coaching (22 hours ago by Jesse)
| * 5ee41dd - Members stock and don't mark staff as needing coach. (13 hours ago by CRM)
|/
* 9677d74 - Merge branch 'coaching' (13 hours ago by Jesse) (tag: 7.1g0)
* 4e4ebe9 - Print nothing to do (13 hours ago by CRM) (tag: 7.1g1)
* 1c5adf9 - Aligned behavior test with new user accordion. (13 hours ago by DB) (HEAD, tag: 7.1g2, staging\release, release)
===
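The actual script isn’t reproduced here, but the moving parts can be sketched: pull the commit messages between the previous and current release tags, collect the changed test titles, and assemble the email body. The function names, layout helpers, and the raw `git log` invocation below are my assumptions (the real script used Fabric):

```python
# Sketch of a release-notes generator in the spirit of the one described
# above. git_log requires a git checkout; release_notes is pure formatting.
import subprocess

def git_log(prev_tag, new_tag):
    """Commit subjects between two release tags (run inside a git checkout)."""
    out = subprocess.run(
        ["git", "log", "--oneline", f"{prev_tag}..{new_tag}"],
        capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def release_notes(messages, expectations, commits):
    """Assemble the release email: tag messages, changed test titles, commits."""
    lines = ["=== RELEASE MESSAGES", "---", *messages]
    lines += ["", "CHANGED EXPECTATIONS (not all tests have changed)", "---"]
    for path, titles in expectations.items():
        lines.append(path + ":")
        lines += [f"- {title}" for title in titles]
    lines += ["", "COMMIT MESSAGES", "---"]
    lines += [f"* {commit}" for commit in commits]
    lines.append("===")
    return "\n".join(lines)

print(release_notes(
    ["7.1g2 Tweak: Fix behavior test"],
    {"winwin_project/apps/profiles/tests.py":
        ["Needs coach flag for a new user should be true"]},
    ["1c5adf9 - Aligned behavior test with new user accordion."]))
```

Keeping the formatting separate from the git plumbing makes the notes easy to test and easy to re-route (email, chat, wherever the non-devs actually read).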
I hope that you find these ideas helpful.