[Gallery of generated interior design images]
 

A.I.-Generated Interior Designs

year: 2019-2020
team: sage elliott, michael hasey
type: research  
GitHub: click for source code

Gan_itecture is a research-based project developed by Michael Hasey and Sage Elliot between 2019 and 2020 that leverages deep-learning neural networks to autonomously generate new architectural imagery in the style of the late Zaha Hadid, one of the most renowned architects of recent times.

To do this, they trained powerful Wasserstein Generative Adversarial Networks (WGANs) to learn, and then recreate, the unique patterns, styles, and geometries inherent in 6,355 images of her original work.

Though the interior designs above may seem genuine at a glance, they are in fact “fake” Zaha Hadid designs that have been autonomously and almost instantaneously generated by our WGAN process.

 
ganitecture_banner_interior.jpg
 

Original Zaha Hadid interior images used to train our WGAN algorithms

 

Infinite Variation

GANs can create infinite variations of the styles learned from Zaha Hadid’s work.  Once trained, they can hypothetically generate millions of these variations within an extremely short span of time.  In addition to recreating existing styles, WGANs can produce new, emergent styles of their own by uncovering previously hidden patterns and phenomena within her original body of work.  See below for a sample of the many interior styles generated by our algorithmic process.

 
bright white style

flowing light style

warm gold style

blue steel style

dark style
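The sampling mechanics behind these variations can be sketched in a few lines: a trained generator maps random latent vectors to images, so every fresh draw of noise is a new design, and interpolating between two latent vectors blends two styles. The generator below is only a stand-in (a fixed random linear map), not our trained WGAN.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a trained generator: a fixed random linear map followed by
# tanh.  The real generator is a deep WGAN; this only shows the mechanics.
latent_dim, image_dim = 8, 64
weights = rng.normal(size=(latent_dim, image_dim))

def generate(z):
    return np.tanh(z @ weights)   # one "image" per latent vector

# Every new latent vector yields a new variation...
batch = generate(rng.normal(size=(1000, latent_dim)))   # 1000 variations

# ...and interpolating between two latents blends their "styles" smoothly.
z_a, z_b = rng.normal(size=latent_dim), rng.normal(size=latent_dim)
blend = [generate((1 - t) * z_a + t * z_b) for t in np.linspace(0, 1, 5)]
```

Because sampling a latent vector costs almost nothing, the “millions of variations” claim above follows directly from this mechanism.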

 

What are GANs?

Generative adversarial networks (GANs) are deep-learning neural networks composed of two algorithms that train each other to “learn” and then “recreate” the patterns of statistically significant phenomena found within a pre-existing training set of original data.  The original data may include images, music, text, and so on.  What makes GANs special is their ability to generate new data that is remarkably similar to the originals, yet unequivocally unique.

 

How do GANs work?

The innovative nature of GANs is simple at its core.  Their architecture hinges on two algorithmic networks: the generator and the discriminator.  In our study, the generator’s task is to create images of “fake” Zaha Hadid buildings that look convincingly similar to her real designs.  To support this, we compiled more than 10,000 images of Zaha Hadid buildings into a training set for the GAN.  The discriminator’s task is then to decide whether the generated images appear convincingly real or not.

If an image does not look real (for example, if it looks more like a cat than a building), the discriminator assigns it a low score.  This low score tells the generator that it must improve and produce more convincing images of “fake” Zaha Hadid-styled buildings in the hope of fooling the discriminator next time.  As a result, the generator corrects its mistakes and generates more and more convincing building images over time.  As the generator improves, the discriminator must in turn sharpen its ability to identify fake images in order to outperform the generator.

Through this back-and-forth, cat-and-mouse competition, both the discriminator and the generator improve at their craft over thousands of iterations, or “epochs”.  In this way, GANs can, over time, “learn” to generate unique and extremely convincing images of Zaha Hadid-styled buildings that are nearly indistinguishable from the originals.
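The generator–discriminator loop described above can be sketched in miniature. The toy below is purely illustrative and is not our actual model: the “designs” are single numbers drawn from a normal distribution, both networks are linear, the critic is kept Lipschitz by weight clipping (the WGAN trick), and a small decay term is added only to keep this toy system stable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generator: g(z) = a*z + b maps noise to "fake" samples.
def generator(z, a, b):
    return a * z + b

a, b = 1.0, 0.0               # generator parameters
w = 0.0                       # critic weight: f(x) = w * x
lr, clip, decay = 0.05, 1.0, 0.01

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=64)          # "real" training samples
    fake = generator(rng.normal(size=64), a, b)   # generator's attempts

    # Critic step: minimise mean f(fake) - mean f(real); for this linear
    # critic the gradient w.r.t. w is mean(fake) - mean(real).  Clipping w
    # enforces the Lipschitz constraint; the decay term only damps the toy.
    w = np.clip(w - lr * (fake.mean() - real.mean()) - decay * w, -clip, clip)

    # Generator step: minimise -mean f(fake); the gradient w.r.t. b is -w.
    # (a is left fixed for brevity.)
    b -= lr * (-w)

samples = generator(rng.normal(size=1000), a, b)
# samples.mean() ends up close to 4.0, the mean of the real data:
# the generator has "fooled" the critic into equilibrium.
```

In the real system the generator and discriminator are deep convolutional networks and the samples are images, but the alternation of critic step and generator step is the same loop.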

 
GAN_diagram.jpg

Image Optimization Over Time

“After many epochs, the discriminator and generator will have found an equilibrium, that allows the generator to learn meaningful information from the discriminator and the quality of the images will start to improve.  By observing images produced by the generator at specific epochs during training, it is clear that the generator is becoming increasingly adept at producing images that could have been drawn from the training set.” - David Foster, “Generative Deep Learning - Teaching Machines to Paint, Write, Compose and Play”, 2019.

Below is a graph illustrating how our GAN’s performance improved over time.  The values represent the decreasing loss function as the GAN improves over thousands of epochs (individual feedback-loop cycles of the GAN algorithm).  A loss function is used by a neural network to compare its predicted output (generated Zaha designs) against the ground truth (real Zaha designs); the lower the number, the better the network is performing.  As you can see, the decreasing loss corresponds with the increasing accuracy and quality of the generated Zaha Hadid building designs.
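As a concrete illustration of this comparison, the Wasserstein critic loss for one batch is simply the mean critic score of the fake images minus the mean score of the real ones. The scores below are made-up numbers, not values from our training run.

```python
import numpy as np

# Hypothetical critic scores for one training batch
# (a higher score means the critic finds the image more "real").
scores_real = np.array([1.0, 0.75, 0.5, 0.75])   # real Zaha images
scores_fake = np.array([0.25, 0.5, 0.25, 0.0])   # generated images

# Wasserstein critic loss: mean f(fake) - mean f(real).
# The generator's own loss is -mean f(fake).
critic_loss = scores_fake.mean() - scores_real.mean()   # -0.5
generator_loss = -scores_fake.mean()                    # -0.25
```

As the generator improves, its fake images earn higher critic scores, so the generator loss falls, which is the downward trend the graph tracks.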

 
GAN Diagram.jpg
 

Next Steps

The work described in this report, i.e. convincingly replicating the designs of a well-known architect, is intended primarily as a proof of concept.  In the future, similar algorithmic tools may identify, quantify, and combine the defining elements of many architects, architectural styles, and even elements of the natural world, such as landscape and flora, to create highly original architectural designs.

In this study, we took an “outside-in” approach, exploring AI’s ability to independently create external expressions of architectural form.  Other researchers, such as Stanislas Chaillou [1], Nathan Peters [2], and Zheng & Huang [3], took the alternate “inside-out” approach, starting at the level of the floor plan.  Within their studies, they took rigorous steps to lay out a process for AI to accurately recreate functional and rational spaces.  In contrast, our “outside-in” approach allowed us to jump directly into an environment of augmented creativity, in which emerging AI-driven architectural languages massively amplify the creative potential of the human architect and human and machine complement each other in a largely unexplored manner.  These AI-driven methods allowed us to identify and draw inspiration from the deepest layers of pattern and phenomena that underlie successful pre-existing architectural designs.

Moving forward, a deeper and more thorough understanding of AI’s application in both the internal and external realms may lead to greater efficiency, accelerated creative development, and perhaps even the emergence of an entirely new design process and style for the 21st century.

 

Our Source Code

Our source code can be accessed on our GitHub page here.

 