Gallery: A.I.-generated exterior designs in the style of Zaha Hadid
 

A.I. Generated Exterior Designs

year: 2019-2020
team: sage elliott, michael hasey
type: research  
GitHub: click for source code

Gan-itecture is a research-based project developed by Michael Hasey and Sage Elliott between 2019 and 2020 that leverages state-of-the-art deep-learning neural networks to autonomously generate new architectural design imagery in the style of Zaha Hadid, one of the most renowned architects of recent times.

To do this, they trained Wasserstein Generative Adversarial Networks (W-GANs) to learn and then recreate the unique patterns, styles, and geometries inherent in 6,355 images of her original work.

Though the exterior designs above may appear genuine at a glance, they are in fact “fake” Zaha Hadid designs that have been autonomously and near-instantaneously generated by our W-GAN process.
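For readers who want a concrete picture of how an image set like this is typically prepared for GAN training, below is a minimal, hypothetical PyTorch/torchvision loading sketch. The folder path, image size, and batch size are placeholders for illustration, not our actual pipeline.

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical preprocessing pipeline; paths and sizes are placeholders.
transform = transforms.Compose([
    transforms.Resize(64),
    transforms.CenterCrop(64),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # scale pixels to [-1, 1]
])

# ImageFolder expects images grouped in subdirectories, e.g. data/zaha/*.jpg
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=2)

for real_images, _ in loader:
    # real_images: a (batch, 3, 64, 64) tensor fed to the GAN at each step
    ...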

 
ganitecture_banner.jpg
 

Original Zaha Hadid exterior images used to train our W-GAN algorithms

 

Infinite Variation

GANs can create infinite variations of the styles learned from Zaha Hadid’s work.  Once trained, they can hypothetically generate millions of these variations within an extremely short span of time.  In addition to recreating existing styles, W-GANs can create new and emergent styles of their own by uncovering previously hidden patterns and phenomena within her original body of work.  Below is a sample of the many exterior styles generated by our algorithmic process; a brief code sketch showing how such variations are sampled from a trained generator follows the style list.

 
multi-layer tower style

flowing bands style

tower style

wave style

new emergent style
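To illustrate how a trained generator yields endless variations, here is a minimal PyTorch sketch, not our project code: the latent dimension, output range, and function name are illustrative assumptions. Each random latent vector decodes into a different generated design.

import torch
from torchvision.utils import save_image

# Assumes `generator` is an already-trained torch.nn.Module and that it was
# trained with a 128-dimensional latent vector (both hypothetical values here).
LATENT_DIM = 128

def sample_variations(generator, n_samples=16, out_path="variations.png"):
    generator.eval()
    with torch.no_grad():
        # Every random latent vector z decodes to a different "fake" design.
        z = torch.randn(n_samples, LATENT_DIM)
        images = generator(z)           # e.g. (n_samples, 3, H, W) in [-1, 1]
        images = (images + 1) / 2       # rescale to [0, 1] for saving
        save_image(images, out_path, nrow=4)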

 

What are GANs?

Generative adversarial networks (GANs) are deep-learning neural networks composed of two algorithms that train each other to “learn” and then “re-create” patterns of statistically significant phenomena found within pre-existing training sets of original data.  Original data may include images, music, text, and so on.  What makes GANs special is their ability to create new data that is remarkably similar to the originals, yet unequivocally unique.

 

How do GANs work?

The innovative nature of GANs is simple at its core.  Their basic architecture hinges on two algorithmic networks: the generator and the discriminator.  In our study, the generator’s task is to create images of “fake” Zaha Hadid buildings that look convincingly similar to her real designs.  To this end, we compiled more than 10,000 images of Zaha Hadid buildings into a training set, which serves as the pool of real examples the discriminator compares generated images against.  The discriminator’s task is then to decide whether the images of fake Zaha Hadid buildings appear convincingly real or not.

If a generated image does not look real, for example, if it looks more like a cat than a building, the discriminator assigns it a low score.  This low score signals to the generator that it must improve and produce more convincing images of “fake” Zaha Hadid-style buildings in hopes of fooling the discriminator next time.  As a result, the generator corrects its mistakes and generates more and more convincing building design images over time.  As the generator gets better at creating convincing images, the discriminator must in turn improve its ability to identify fake images in order to outperform the generator.

Through this back-and-forth, cat-and-mouse competition to outdo one another, both the discriminator and the generator improve at their craft over thousands of iterations, or “epochs”.  In this way, GANs can, over time, “learn” to generate unique and extremely convincing images of Zaha Hadid-style buildings that are nearly indistinguishable from the originals.
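To make this feedback loop concrete, here is a deliberately simplified PyTorch sketch of a generator/discriminator pair and one adversarial training step.  It is not our project code: it uses small fully connected networks and a standard GAN loss for readability, whereas the actual project uses a convolutional W-GAN.  Network sizes and learning rates are illustrative assumptions.

import torch
import torch.nn as nn

LATENT_DIM = 128          # size of the random noise vector (assumed)
IMG_PIXELS = 64 * 64 * 3  # a small 64x64 RGB image, flattened for simplicity

# Generator: maps random noise to a "fake" building image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 1024), nn.ReLU(),
    nn.Linear(1024, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores an image as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_images):
    """One adversarial update; real_images has shape (batch, IMG_PIXELS)."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator: reward correct real/fake classification.
    fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = (bce(discriminator(real_images), real_labels)
              + bce(discriminator(fake_images), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator: try to make the discriminator call fakes "real".
    fake_images = generator(torch.randn(batch, LATENT_DIM))
    g_loss = bce(discriminator(fake_images), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

Calling training_step on every batch, over thousands of passes through the training set, is the “cat and mouse” loop described above.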

 
GAN_diagram.jpg

Image Optimization Over Time

“After many epochs, the discriminator and generator will have found an equilibrium that allows the generator to learn meaningful information from the discriminator and the quality of the images will start to improve.  By observing images produced by the generator at specific epochs during training, it is clear that the generator is becoming increasingly adept at producing images that could have been drawn from the training set.” - David Foster, “Generative Deep Learning: Teaching Machines to Paint, Write, Compose and Play”, 2019.

Below is a graph illustrating how our GAN’s performance improved over time.  The values represent the decreasing loss function as the GAN improves over thousands of epochs (individual feedback-loop cycles of the GAN algorithm).  The loss function is used by a neural network to compare its predicted output (generated Zaha designs) against the ground truth (real Zaha designs); the lower the value, the better the network is performing.  As you can see, the decreasing loss function corresponds with increasing accuracy and quality of the generated Zaha Hadid building designs.
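For reference, in a Wasserstein GAN the tracked loss comes from a “critic” (a discriminator without a final sigmoid) whose score gap between real and generated images approximates a distance between the two distributions.  Here is a minimal sketch of the two loss terms in the standard W-GAN formulation; this is generic code, not our exact implementation.

import torch

def critic_loss(critic, real_images, fake_images):
    # The critic tries to maximize the gap between its scores for real
    # and generated images; this gap approximates the Wasserstein distance.
    return critic(fake_images).mean() - critic(real_images).mean()

def generator_loss(critic, fake_images):
    # The generator tries to raise the critic's score for its fake images.
    return -critic(fake_images).mean()

# In the original W-GAN, the critic's weights are clipped after each update
# to keep it approximately 1-Lipschitz, e.g.:
# for p in critic.parameters():
#     p.data.clamp_(-0.01, 0.01)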

 
GAN Diagram.jpg
 

Next Steps

The work described in this report, i.e. convincingly replicating the designs of a well-known architect, is intended primarily as a proof of concept.  In the future, similar algorithmic tools or approaches may identify, quantify, and combine the defining elements of many architects, architectural styles, and even elements from the natural world such as landscapes and flora to create highly original architectural designs.

In this study, we took an “outside-in” approach, exploring AI’s ability to independently create external expressions of architectural form.  Other researchers such as Stanislas Chaillou [1], Nathan Peters [2], and Zheng & Huang [3] took an alternate “inside-out” approach, starting at the level of the floor plan.  In their studies, they took rigorous steps to lay out a process for AI to accurately recreate functional and rational spaces.  In contrast, our “outside-in” approach allowed us to jump directly into an environment of augmented creativity in which emerging AI-driven architectural languages massively amplify the creative potential of the human architect, with human and machine complementing each other in a largely unexplored manner.  These AI-driven methods allowed us to identify and draw inspiration from the deepest layers of pattern and phenomena that underlie successful pre-existing architectural designs.

Moving forward, a deeper and more thorough understanding of AI’s application in both the internal and external realms may lead to greater efficiency, accelerated creative development, and perhaps even the emergence of an entirely new design process and style for the 21st century.

 

Our Source Code

Our source code can be accessed on our GitHub page here.

 