Computational Irreducibility
Originally published on Medium on May 9, 2018
Path Discovery
I have recently become fascinated by the study of complex systems. The Santa Fe Institute and the New England Complex Systems Institute in particular produce some great writing.
One key idea that has emerged from the study of complexity is that of “computational irreducibility”. Brilliant physicist Stephen Wolfram describes computational irreducibility as follows:
“While many computations admit shortcuts that allow them to be performed more rapidly, others cannot be sped up. Computations that cannot be sped up by means of any shortcut are called computationally irreducible. The principle of computational irreducibility says that the only way to determine the answer to a computationally irreducible question is to perform, or simulate, the computation. Some irreducible computations can be sped up by performing them on faster hardware, as the principle refers only to computation time.”
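Wolfram's canonical illustration of this idea is the Rule 30 cellular automaton: as far as anyone knows, the only way to learn what the pattern looks like after n steps is to actually run all n steps. A minimal sketch in Python (the function names and the circular-grid setup are my own choices for illustration):

```python
def rule30_step(cells):
    """Apply one step of the Rule 30 cellular automaton.

    Each new cell depends on its left, center, and right neighbors:
    new = left XOR (center OR right).
    """
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(steps, width=64):
    # Start from a single "on" cell in the middle of a circular row.
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

# No known formula jumps straight to the answer; to know the state
# after 100 steps, you perform the 100 steps.
final = run(100)
```

Despite the rule being trivially simple, the resulting pattern is complex enough that no shortcut formula has been found — you have to do the computation, which is exactly what computational irreducibility predicts.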
To discover new things, humans have to attempt feats that are low-probability and seem ridiculous on their face. Capitalism is a giant machine that rewards people for solving problems that other people know are problems but can’t solve themselves. To solve problems, or to discover problems humans didn’t realize they had, we have to try and fail at a lot of new things. No formula can predict what we need unless the formula is more complex than our (very complex) world.
People need to be applauded for taking big, audacious risks. Even when someone fails, a new potential path has been opened up, or ruled out. Computational irreducibility guarantees that if we humans don’t try, we won’t ever solve big problems. And big, public failures are guaranteed for anyone trying to do anything outside of established norms.
But those audacious flops shouldn’t be ridiculed. They should be celebrated. They are the computations necessary to solve problems, carried out in real time.