I was re-reading the book Antifragile, which got me thinking about the relationship between Semantic Compression and Antifragility. A large up-front design is clearly fragile, since a small unpredictable error in the design could lead to huge costs and potentially project failure. OTOH, when you work in trial-and-error mode as Casey does, you benefit from uncertainty: the design evolves constantly, and no single mistake is ever fatal.

The same is probably true of large code-bases compared to small ones: a small code-base is antifragile and can always be evolved or even rewritten should the times/technology change or a crucial new feature materialize, whereas a large code-base is much harder to evolve or rewrite.

This is also a big advantage of writing everything from scratch. There are no hidden traps that could turn out to make an important feature impossible to implement without huge cost, which can happen when you rely on a pre-fab engine/library.

*For those not familiar with the concept of Fragility/Antifragility: I am referring to the definitions coined by Nassim Nicholas Taleb.