Thursday, November 19, 2009

Eric Ries on lean start-ups (MIT talk on 11/19)

Eric Ries is here at MIT courtesy of the 100K competition and is addressing a sizeable crowd in 32-123. Much of the audience consists of current and aspiring entrepreneurs. I am blogging live.

First slide reads: Most startups fail. No big surprises here.

Next slide: The pivot - what do successful startups have in common?
A pivot is the ability to change direction quickly. The difference between a successful and an unsuccessful start-up is the number of pivots it makes before it dies.

Next is a story of 2 start-ups. Start-up 1 invited Eric to interview. When he arrived at an unmarked location in the middle of nowhere in Silicon Valley, he found a banner that essentially said "we can't tell you what we build, but we can tell you who works here. In start-ups, it's all about the team!".

The company's strategy was to build a world-class technology platform with a compelling long-term vision. They raised plenty of capital, hired the best and the brightest, brought in an experienced management team, and created buzz in the press and the blogosphere.

The outcome: the company failed, $40M and 5 years later.

Why did this company fail? Two words: shadow beliefs.

Shadow belief #1: We know what customers want.

Shadow belief #2: We can accurately predict the future. The company had gained a lot of momentum in a particular direction, which made it very difficult to pivot.

Shadow belief #3: Advancing the plan is progress.

Next is the story of start-up #2, IMVU. Here is what IMVU did differently:

- IMVU shipped its first product in six months, albeit a horribly buggy beta product. Almost nobody used the software.
- Charged from day one. This allowed them to enter into, and maintain, a regular dialog with their customers.
- Shipped multiple times a day (by 2008, on average 50 times a day)
- No PR, no launch

They asked themselves: what is the riskiest assumption we've made, and how can we test it quickly? In their case: will people pay real money for a virtual avatar?

Results in 2009: profitable, revenue > $20M

The take-away lesson: lean start-ups go faster.

Plug for Steve Blank and The Four Steps to the Epiphany - see my first and second posts on Steve Blank from 2 years ago!

The talk's now focusing on customer development for start-ups...

Eric recommends two teams for all start-ups: a problem team, and a ? (sorry, lost this part while multi-tasking - if you attended the talk, please post a comment with the answer?)

Minimize total time through the loop: ideas -> (build) -> code -> (measure) -> data -> (learn) -> back to ideas

How to build a lean start-up:

- Continuous deployment.
- Tell a good change from a bad change quickly
- Revert a bad change quickly
- Work in small batches (at IMVU, large batch = 3 days worth of work)
- Break large projects down into small batches
- Have a cluster immune system (see the first sketch after this list)
- Run tests locally. Everyone gets a complete sandbox
- Continuous integration server - tests to ensure all features that worked before still work
- Incremental deploy - reject changes that move metrics out of bounds
- Alerting and predictive monitoring - wake somebody up if a metric goes out of bounds. Use historical trends to predict acceptable bounds.
- Conduct rapid split tests: A/B testing is key to validating hypotheses (see the second sketch after this list)
- Follow the AAAs of metrics: actionable, accessible and auditable
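To make the cluster immune system, incremental deploy, and predictive monitoring items above concrete, here's a minimal Python sketch of what a metrics-guarded deploy check might look like. To be clear, this is my own illustration: the metric names, the mean +/- 3-sigma bounds rule, and the numbers are all invented, not IMVU's actual system.

```python
# A sketch of a "cluster immune system" (my illustration, not IMVU's
# actual code): after an incremental deploy step, compare key metrics
# against bounds predicted from historical data, and revert/alert if
# anything moves out of bounds.

import statistics


def acceptable_bounds(history, k=3.0):
    """Predict acceptable bounds from historical samples: mean +/- k*stdev."""
    mean = statistics.mean(history)
    sigma = statistics.stdev(history)
    return mean - k * sigma, mean + k * sigma


def out_of_bounds(current, history):
    """Return the names of metrics whose current value falls outside bounds."""
    bad = []
    for name, value in current.items():
        low, high = acceptable_bounds(history[name])
        if not low <= value <= high:
            bad.append(name)
    return bad


# Invented historical samples for two business metrics.
history = {
    "signups_per_min": [48, 52, 50, 49, 51, 50, 47, 53],
    "revenue_per_min": [118, 122, 120, 119, 121, 117, 123, 120],
}

# Metrics sampled just after deploying a change to part of the cluster.
after_deploy = {"signups_per_min": 31, "revenue_per_min": 119}

failed = out_of_bounds(after_deploy, history)
if failed:
    print("REVERT the change and wake somebody up:", failed)
else:
    print("Change looks healthy; continue the rollout.")
```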
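And here's an equally rough sketch of a rapid split test: deterministically bucket each user into a variant, then compare conversion rates. Again, the experiment name and the simulated conversion signal are my own inventions for illustration, not anything Eric showed.

```python
# A minimal rapid split-test sketch (again my own illustration):
# deterministically bucket each user into variant A or B by hashing
# their id, then tally conversion rates per variant.

import hashlib


def assign_variant(user_id, experiment="signup_button_color"):
    """Hash (experiment, user) so a user always sees the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"


# results[variant] = [conversions, visitors]
results = {"A": [0, 0], "B": [0, 0]}

for user_id in range(1000):
    variant = assign_variant(user_id)
    results[variant][1] += 1
    # Simulated conversion signal for the demo; in a real system this
    # would come from actual user events.
    if user_id % (9 if variant == "A" else 7) == 0:
        results[variant][0] += 1

for variant, (converted, total) in results.items():
    print(f"{variant}: {converted}/{total} converted = {converted / total:.1%}")
```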

Ok, at this point, I have to admit that some of the above points have me scratching my head at their obviousness. All these steps, from what he calls a "sandbox" to the "cluster immune system", "incremental deploy", "alerting and predictive monitoring" etc., are STANDARD practices we follow in semiconductor chip design!

We call this "regression testing", and we use a standard test bench made up of carefully generated test vectors to make sure existing functionality isn't broken by new code that's checked in - and of course everyone gets a local sandbox to play in without affecting the source code! That's the only way to do it when designing complex systems! I was therefore a bit bemused to see the same thing described as though it's a new practice for software design. Software folks, tell me: is this new for you guys, or was I missing the point and Eric was merely stating what all techies knew anyway for the benefit of the non-technical folks?

Perhaps Eric is going to touch upon this - in chip design, it's standard practice not only to run a subset of tests covering key functionality before checking in new code, but also to run a bigger subset of test vectors overnight to make sure all of the code checked in that day doesn't break anything. On top of that, we run a mega set of vectors over the weekend, when 48-hour sims can run uninterrupted, to make sure we didn't break the tiniest part of the chip in the process of making changes... is software design methodology very different?
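To make the comparison concrete for the software folks, here's a rough Python sketch of the kind of tiered regression schedule I'm describing - the tier names, vector counts, and time budgets are illustrative, not our actual numbers.

```python
# An illustrative sketch of the tiered regression schedule described
# above; tier names, vector counts, and time budgets are made up.

REGRESSION_TIERS = {
    "pre-checkin": {"vectors": 200, "budget_hours": 1},    # key functionality
    "nightly": {"vectors": 5_000, "budget_hours": 10},     # that day's checkins
    "weekend": {"vectors": 100_000, "budget_hours": 48},   # full-chip sims
}

TRIGGER_TO_TIER = {
    "checkin": "pre-checkin",
    "nightly-cron": "nightly",
    "friday-cron": "weekend",
}

for trigger, tier in TRIGGER_TO_TIER.items():
    cfg = REGRESSION_TIERS[tier]
    print(f"{trigger}: run {cfg['vectors']} vectors "
          f"within {cfg['budget_hours']}h ({tier} tier)")
```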

Unfortunately, my laptop is running out of charge at this point, with nary a power outlet in sight. If you have notes from the talk that you're willing to let me share here, shoot me an email!

3 comments:

Kyle Mathews said...

Speaking as a software guy -- yes, these practices Eric describes are very unusual (still) in software development. Sadly enough.

Shuba Swaminathan said...

Wow, no kidding!

Unknown said...

I believe that the significant difference between what you're describing and what Ries advocates is bypassing the staging environment when releasing new features. Thorough unit and functional testing is done at the commit level, with the system rejecting broken code (apparently, their system is sophisticated enough to reject visual/UI changes that would reduce conversion rates, say, by making text unreadable over the background).

This also means that tests have to run quickly, so no over-the-weekend runs.

The premise is that speed of iterations and quantity of A/B experiments is what matters. Automated testing is just one of the drivers.

See:

http://timothyfitz.wordpress.com/2009/02/10/continuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day/

http://www.startuplessonslearned.com/2009/12/continuous-deployment-for-mission.html

http://www.startuplessonslearned.com/2010/01/case-study-continuous-deployment-makes.html