Learning Lessons from Failure

I used to believe that learning from failure was futile. Since a failure can stem from many causes, I thought it was wasted effort to try to pin down any single reason for why it happened.

I was influenced by this Peter Thiel quotation:

“I think failure is massively overrated. Most businesses fail for more than one reason. So when a business fails, you often don’t learn anything at all because the failure was overdetermined. [TF: Overdetermined: “To determine, account for, or cause (something) in more than one way or with more conditions than are necessary.”] You will think it failed for Reason 1, but it failed for Reasons 1 through 5. And so the next business you start will fail for Reason 2, and then for 3 and so on.”

Excerpt From: Timothy Ferriss. “Tools of Titans.” 

Now I believe that Peter and I were both wrong to hold this view. (I still agree that most failure is overdetermined; it's unlikely that any one thing caused a business to fail.)

First, I'm going to argue that having even partial knowledge is better than having no knowledge when starting a startup.

1. The probability of succeeding in another startup, conditioned on having learned something from the failure (even if not the full reason), is higher than the probability of succeeding in another startup with no takeaways from the failure at all.

It just doesn't make sense that knowing about a particular situation to avoid (i.e. a failure mode) would reduce the likelihood of success relative to your state of knowledge before the failure. (Though I'd be very open to hearing arguments against this.)
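A toy way to see this claim, under assumptions I'm making up for illustration: suppose there are five independent failure modes (Thiel's "Reasons 1 through 5"), each with its own probability of killing the startup. Learning to avoid even one mode can only raise the chance that none of them gets you. The numbers below are entirely hypothetical.

```python
import random

random.seed(0)

# Hypothetical setup: five independent failure modes, each with its own
# probability of killing the startup. These numbers are illustrative only.
failure_probs = [0.4, 0.3, 0.25, 0.2, 0.15]

def success_rate(avoided_modes, trials=100_000):
    """Estimate P(success) for a founder who has learned to avoid the
    failure modes whose indices are in `avoided_modes`."""
    successes = 0
    for _ in range(trials):
        killed = any(
            random.random() < p
            for i, p in enumerate(failure_probs)
            if i not in avoided_modes
        )
        successes += not killed
    return successes / trials

p_no_learning = success_rate(avoided_modes=set())   # no takeaways
p_one_lesson = success_rate(avoided_modes={0})      # learned only Reason 1

print(f"no takeaways: {p_no_learning:.3f}")
print(f"one takeaway: {p_one_lesson:.3f}")
```

Even though the failure was overdetermined (any of the five modes could still kill the next attempt), removing a single mode strictly improves the odds, which is the conditional-probability point above.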

2. I would posit that it is still much better to learn what the right way is than to learn what a wrong way is. Historically, there have been far more wrong ways to run a startup than right ways.

Think I'm wrong? Leave your thoughts in the comments :)

3 responses
Point 1 is mathematically correct by the definition of conditional probability. I agree with the basic premise, but I think the computational cost of figuring out what went wrong is often underestimated, which you seem to account for in your point about most failure being overdetermined. A lot of things fail for extremal reasons that you need to account for in order to know whether doing something again would work. But those extremal reasons can be impossible to calculate, and attempting the calculation just gives you an incorrect picture, because you end up doing bad sampling.