Rewriting code is good in my experience. It's always better the second time around (or third, or fourth). It's not that my first attempt was rubbish, it's that I didn't understand the problem as well as I did the second time (and so on).
Lean is also about avoiding premature optimisation. Which is hard because it cuts against the grain of our engineer sensibilities. Doing something "good enough for now" is tough, when you know that with just a few more days' effort you could make it bulletproof. But I've had to delete "bulletproof" code so many times, because it turns out the product didn't need that feature, or it needed to work differently.
In the long term, Lean avoids more wasted effort, in my experience.
"Good enough for now" is a great excuse to keep hacks around and letting them accumulate to the point where code is unmaintainable. Meaning too much code, meaning waste. Even better is "it works now, do not touch" especially when current code base is untested.
Programmers are typically lazy and never bulletproof anything. Hence the rampant security issues.
The alleged wasted effort is from the point of view of some manager who doesn't get to tick boxes quickly enough. (And it disregards the later massive drop in development velocity, while presumably demanding the same results.)
This means spotted issues are deferred indefinitely unless a customer reports them. Which they won't, or even can't, so your software brand gets recognized as buggy trash, with workarounds commonly peddled among users and devops.
Dysfunctional teams and organisations produce this kind of cynical rage, not Lean.
Lean is not a software development methodology. It is made for factories and production lines, a terrible fit for most kinds of software. The only salvageable parts are iteration and listening to frontline workers to drive process improvements.
"Autonomation"/Poke Yoke as in automated tests.
Which is not enough of a methodology.
The "Lean for software" page gives contradictory definition of waste - you're supposed to minimize defects while at the same time minimizing rework. I'd like a crystal ball that enables it. Plus you cannot apply it without absolute control over the whole development process. Any place that is a black box (say, both set of features and deadlines are given) the process. Thus it fails in corporate environment.
Likewise, general agile methodologies are easily perverted into what I just described, by skipping the refactoring and redesign parts in service of deadlines.
That model works only if you throw things away like startups do or the project is small and self-contained.
Usually small projects are either low value or grow big. Cf. Twitter or YouTube when they started vs. now. It's even worse if you have to interact with quickly changing parts controlled by another team, even in a medium-sized project.
It also doesn't attempt to minimise rework (it values iterative approaches), and is strong on exploratory prototypes.
Again, I think what you're criticising is "Agile as implemented in dysfunctional organisations" rather than actual Agile.
I'm building a startup, though. My definition of "good" software is probably different to yours (and that's as it should be).
This might still be less wasteful than building an entire product and finding no customers. But it is taxing on the technical founder! Lean-washing shouldn't set wrong expectations for the technical founder in startups that follow the rapid-prototyping approach.
> It's not that my first attempt was rubbish, it's that I didn't understand the problem as well as I did the second time
I think that's too soft: I know my first attempt will be rubbish, so I intend it to be so. To me the point of a prototype is to help you learn the problem more than to solve it.
If you plan to keep your prototype if it works out, then I think you have missed a trick; a prototype should aim to fail quickly.
If your prototype is useful then I think it fails its point as a prototype.
I would go so far as to say there isn't actually a dichotomy here -- you should be swapping between launching something with a hypothesis (in lean mode), then gathering feedback and considering alternatives as you are proven correct/incorrect (hammock mode). I think Gall's Law is also relevant here:
"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."
If all you do is think and think, then you open yourself to mis-timing a solution, feature and scope creep, and risking "unknown unknowns". If all you do is launch and incrementally iterate you'll be stuck solving very narrow problems.
Linus built the working, self-hosting prototype in 3 days, mixing a lot of his learnings from BitKeeper with his knowledge of disk management.
To me, that's rapid prototyping. It's enough domain knowledge to make it work well for himself. He didn't spend a bunch of time thinking or coming up with a solution, since he was actively building Linux at the time. The key is he enlisted others to build Git and eventually take it over, since he wanted to focus on Linux.
This all comes with a huge caveat in that Linus's 3 days == 1000 of mine. His 'just enough' knowledge is near expert level.
As others have asked, what are you trying to build? A technical solution or an end-user solution?
Technical solutions do require a lot more domain knowledge than a Twitter/Airbnb (at the early stages).
In the end, I believe in rapid prototyping and failing fast. Learn just enough, whether technical or end-consumer to launch fast.
The thing I agree with 100%, though, is: don't break user-space. I believe this applies to end users of products, whether developers or customers. Once people start consuming something, don't break it. It doesn't matter whether you believe it to be 'correct' or a 'bug'. It's expectation management: slow and easy deprecation.
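One common way a library honours "slow and easy deprecation" is to keep the old entry point working while warning callers toward the new one. A sketch under assumed names (`get_items`/`fetch_items` are hypothetical, not from any real library):

```python
import warnings

def fetch_items(limit: int = 10) -> list:
    """The new, preferred API."""
    return list(range(limit))

def get_items(limit: int = 10) -> list:
    """Old API: still works, but nudges callers toward fetch_items().

    Existing consumers keep running unchanged; the warning gives them
    a managed deprecation window instead of a sudden break.
    """
    warnings.warn(
        "get_items() is deprecated; use fetch_items() instead",
        DeprecationWarning,
        stacklevel=2,  # point the warning at the caller's line
    )
    return fetch_items(limit)
```

Callers of `get_items()` get identical results plus a warning they can act on at their own pace; only after a generous window does the old name go away.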
So, for me the distinction between the two approaches (prototyping -vs- hammock driven) is lately about whether you are solving a largely known/understood problem (equivalent to having domain expertise, in an absolute sense) -vs- solving problems to which you don’t know the answers. In the latter case, there is no shortcut getting around thinking time.
Or, as they say: “A month in the laboratory could save an hour in the library”
One is using a top-down approach versus an iterative approach. The other is about the nature of your problem: do you have product risk or market risk?
The lean approach is about eliminating waste, which, in the context of startups, often means building something small and talking to users. But that's only because most startups have market risk. If you have product risk, you should still iterate on your solution instead of building it in one go.
I feel like you are asking for examples where the market-risk was addressed. The most interesting companies would be those where the first test was a total miss and they solved a totally different problem in the end.
Git definitely doesn't fit the first approach. Not sure why you would state that.
Maybe the core of Clojure, with the persistent data structures, fits the first approach, but I doubt the rest of it does (speaking as an outsider to the project).
"Implement the best one" belies a lot of sweat and places where it could have gone wrong. In other words, the initial thinking is not even addressing half of the problem or doing half the work.
The philosophy of Clojure itself is very much based on iteration and interactive programming. You need a lot of action, feedback, and iteration in addition to the "think very hard" part.
* Twitch, started as one guy streaming his life then they realised lots of gamers were watching, and that they'd like to be able to stream https://www.youtube.com/watch?v=FBOLk9s9Ci4
* Segment, started as a thumbs up/down tool for professors in lectures to work out when students are getting confused. They realised everyone just went to Facebook instead, then they wondered why they couldn't tell this when they were remote! https://www.youtube.com/watch?v=l-vfn97QTr0
If you don't know your problem there is nothing to prototype, no minimal viable product. Nothing.
If you spend years analysing and planning you get nowhere.
You need an idea, a problem that has to be solved, but you should not get lost in the forest.
At a tactical level, feature level, you mix both. You state your problem (or get it stated to you), you think it through, hopefully considering at least some related work and doing some hard thinking, come up with multiple possible solutions and evaluate their trade-offs... by implementing their prototypes as fast as possible, because that's the only real way to discover the trade-offs. Depending on how much in a hurry you are, you might pick the first prototype that isn't a total disaster and build your feature from it, then test it, and repeat.
See how "top-down" and "rapid prototyping" is interwoven here. This approach can be expressed as: think before you do, but remember that you only learn the true scope of a problem by attempting to solve it.
I think it's on its third rewrite or something right now, and it runs circles around the only other service in this space in bang for the buck (guess my budget; it's smaller than that).
A lot of the time it's useful to build products by rapidly iterating, because you see flaws and holes in your thinking, and you get feedback immediately from the people who are going to use it.
Immediate (or shorter term) feedback can be very helpful.
But to answer your question, YC talks a lot about Twitch being an example of the second approach.