Software Eats Design Next
Originally Published on May 13, 2019
We’ve all heard it. “Software Is Eating the World,” or so sayeth Marc Andreessen. Andreessen postulated in 2011 that as microprocessors allow us to automate more and more tasks, every business will eventually become a software business. Coding will obviate the need for repetitive, non-replicable human decision-making. So far, he’s largely been proven right. Manufacturing, driving, trading, marketing, even writing have all since been partially consumed by the increasingly voracious beast known as software.
From conversations I’ve had at several dinner meet-ups in the Bay with People who Know Things about Computers, advances in Artificial Intelligence (AI), Machine Learning (ML), and Natural Language Processing (NLP) are real and just getting started. Exponentially more information can be processed than ever before, with machines making many of the resource-allocation decisions previously left to humans. We shall term computers replacing humans in a decision-making context “HRBC”: Humans Replaced By Computers.
We see HRBC in higher and higher levels of decision making.
Yet there is still one area of society, arguably the most important for long-term economic growth and human flourishing, that remains largely untouched by the rise of HRBC: the design of physical objects. We have software that augments human ability, sure. AutoCAD helps designers of houses and physical objects create 3D mockups. CNC machines and 3D printers take that CAD file and create objects out of metal or plastic. But none of these replace the human who initially designs the object and engineers the final process. Why can’t the design and engineering steps be automated? Previously, the complexity of the designs ran algorithms into a hard ceiling, namely available compute; of late, that ceiling has risen dramatically.
As Christopher Alexander points out in his gem of a book A Pattern Language, pre-1930s architecture was created by laymen based on a simple set of interdependent “pattern languages.” Those languages need a human to decide whether they live and breathe, but what keeps us from using AI/ML to create and tweak house designs based on those pattern languages (which are really just non-codified algorithms)? Furthermore, why do we need mechanical engineers to lay out ductwork, or electrical engineers to calculate loads and layouts? Those layouts in fresh builds follow a complex but decipherable set of rules: a circuit’s amps must not exceed its wire’s ampacity; airflow requirements are based on a building’s square footage, volume, and dimensions. Why do we need humans to design and engineer fresh builds with all the computing power available right now? Furthermore, over time, algorithms can and will build more efficiently than anyone else; building codes already constrain creativity, which reduces the complexity of the design space.
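The wiring rule above — a circuit’s load must not exceed its wire’s ampacity — is exactly the kind of rule that codifies trivially. A toy sketch in Python; the ampacity numbers here are illustrative approximations of common copper-wire ratings, not actual electrical-code values:

```python
# Toy sketch: one "decipherable rule" of electrical layout as code.
# Ampacity values are illustrative only; a real tool would pull from
# the actual electrical code tables for the jurisdiction.
AMPACITY = {14: 15, 12: 20, 10: 30, 8: 40, 6: 55}  # AWG gauge -> max amps

def wire_ok(gauge: int, circuit_amps: float) -> bool:
    """Return True if the given wire gauge can carry the circuit load."""
    return AMPACITY.get(gauge, 0) >= circuit_amps

def smallest_wire(circuit_amps: float) -> int:
    """Pick the thinnest wire (highest AWG number) whose ampacity
    covers the load -- the kind of choice an electrician makes today."""
    candidates = [g for g, amps in AMPACITY.items() if amps >= circuit_amps]
    if not candidates:
        raise ValueError(f"No listed gauge supports {circuit_amps} A")
    return max(candidates)  # higher AWG number = thinner wire
```

A program that applies thousands of such rules to a floor plan is tedious to write, but there is nothing in it that requires human judgment.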
The same can be said for design. Let’s say I want to design a new type of camera with known dimensions, a known battery rating, and an already-completed external design. Why do we need a designer to complete the entire interior layout? For that matter, why can’t we use AI/ML to design chipsets and remove the need for engineers to design our processors as we continue to wage our battle against Moore’s Law?
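The camera question can at least be bounded automatically before any human looks at it. A toy sketch of the simplest such check — whether the required parts can fit in the enclosure at all — where every component name and dimension is hypothetical:

```python
# Toy sketch: a necessary (not sufficient) feasibility check for an
# interior layout. All part names and sizes are made up for illustration.
ENCLOSURE_MM3 = 120 * 70 * 40  # the camera's known external dimensions

COMPONENTS_MM3 = {
    "battery": 50 * 35 * 10,
    "sensor": 30 * 30 * 8,
    "mainboard": 90 * 55 * 5,
    "lens_mount": 40 * 40 * 25,
}

def feasible(enclosure_mm3: float, parts: dict, packing_efficiency: float = 0.6) -> bool:
    """Total part volume must fit within the usable fraction of the
    enclosure; real layout tools would then solve the full 3D packing."""
    return sum(parts.values()) <= enclosure_mm3 * packing_efficiency
```

The actual placement is a constraint-satisfaction and packing problem — hard, but squarely in the class of problems computers already solve, not one that inherently needs a designer.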
Within 10 years, computers will first design, then soon thereafter design and engineer many of the physical devices we use. Eventually, they’ll invent new ones too.