logo 2.png

Using the latest AI object detection algorithms to reveal new insights into our built environment.

Splash.png

ARCHI VISION is a proposed automated AI tool that can rapidly detect, analyze and quantify physical urban housing data over entire neighborhoods, cities, or regions.

Traditionally, architectural data over large areas has been collected manually and contains relatively few descriptors beyond house price, square footage, number of bathrooms, and so on. As a result, housing styles and features have remained largely under-described and under-quantified.

Archi Vision attempts to solve this by using deep learning methods (object detection and classification) to extract far more detailed housing data over larger areas.
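As a rough illustration of the underlying approach (not Archi Vision's actual pipeline), the sketch below runs a pretrained torchvision detector over a single facade photo and tallies detections per class. The file name and confidence cutoff are placeholders, and the stock COCO weights would need to be replaced by a model trained on architectural classes.

```python
# Minimal sketch of the object detection step, assuming a torchvision
# Faster R-CNN detector. The stock COCO weights used here do not know
# about doors, windows, or roofs; a facade-trained model would be swapped in.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = convert_image_dtype(read_image("facade.jpg"), torch.float)  # hypothetical input image
with torch.no_grad():
    prediction = model([image])[0]

# Keep only confident detections and count them per class label.
keep = prediction["scores"] > 0.7
labels = prediction["labels"][keep].tolist()
counts = {label: labels.count(label) for label in set(labels)}
print(counts)
```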

Final_Houses.gif

Real-Time and Static Image Object Detection

Archi Vision is capable of detecting architectural features in both static images and real-time video streams.
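A minimal sketch of the real-time path, assuming frames are pulled from a video stream with OpenCV and pushed through the same kind of detector as above; the file name is a placeholder, and per-frame drawing and aggregation are left out.

```python
# Sketch of real-time detection: read frames from a stream and run the
# detector frame by frame. "street_footage.mp4" is a hypothetical source;
# passing 0 instead would open a live camera.
import cv2
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

capture = cv2.VideoCapture("street_footage.mp4")
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    # OpenCV yields BGR uint8 HxWxC; the detector expects RGB float CxHxW in [0, 1].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]
    # ...draw boxes or aggregate per-frame counts here...
capture.release()
```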

houses_detected.jpg

You Choose What To Detect

Archi Vision's feature detection option allows users to pre-select which architectural components they want to identify, aggregate, and quantify; a brief sketch of this class filtering follows the list below.

Components may include:

  • Storey count
  • Door and window count
  • Roof type
  • Architectural style
  • Cladding material
  • Stair count
  • Building shape & dimensions
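A hypothetical sketch of how that pre-selection could work downstream: the user names the components they care about, and detections outside that set are dropped before aggregation. The class names and field layout are assumptions, not Archi Vision's actual labels.

```python
# Hypothetical component filter: keep only confident detections whose class
# the user asked for. `detections` is assumed to be a torchvision-style dict
# with "boxes", "labels", and "scores" tensors.
SELECTED_COMPONENTS = {"door", "window", "roof", "stair"}

def filter_detections(detections, class_names, selected=SELECTED_COMPONENTS, min_score=0.5):
    """Return the user-selected components as plain dictionaries."""
    kept = []
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        name = class_names[int(label)]
        if name in selected and float(score) >= min_score:
            kept.append({"component": name, "box": box.tolist(), "score": float(score)})
    return kept
```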

features.png

Collect Data Where You Want

Users can specify the region, city, or neighborhood from which they wish to extract architectural data. A user-friendly, map-based GUI allows regions to be searched for, highlighted via a dynamic selection tool, and selected for detection and analysis.
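One plausible way the map selection could be handed to the backend is as a simple lat/lon bounding box that gets expanded into a grid of imagery sample points; the class, step size, and coordinates below are purely illustrative.

```python
# Illustrative region selection: the GUI returns a lat/lon bounding box and the
# backend expands it into a grid of points to query street-level imagery for.
from dataclasses import dataclass

@dataclass
class RegionSelection:
    min_lat: float
    min_lon: float
    max_lat: float
    max_lon: float

def sample_points(region: RegionSelection, step_deg: float = 0.001):
    """Yield (lat, lon) points covering the selected region on a regular grid."""
    lat = region.min_lat
    while lat <= region.max_lat:
        lon = region.min_lon
        while lon <= region.max_lon:
            yield (round(lat, 6), round(lon, 6))
            lon += step_deg
        lat += step_deg

# Example: a few blocks in Pittsburgh (coordinates are approximate placeholders).
block = RegionSelection(min_lat=40.455, min_lon=-79.935, max_lat=40.460, max_lon=-79.930)
points = list(sample_points(block))
```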


A Simple & Intuitive Interface

The proposed Archi Vision product will be hosted online and presented through a simple, straightforward interface.


Model Customization Made Easy

Archi Vision integrates an intuitive, straightforward graphical user interface that allows users to quickly and easily modify the AI model being used for object detection. Basic controls include prediction threshold (sensitivity), number of detections per image, detection speed, and model type. Expert users can control more refined aspects of the model architecture, such as activation functions, convolutional layer types and counts, and feature masks. In all cases, however, Archi Vision will suggest the model and control combination best suited to the task at hand.
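The sketch below shows one way the basic and expert controls described above might map onto a single configuration object; the field names, defaults, and the suggestion helper are assumptions for illustration only.

```python
# Illustrative configuration object covering the basic and expert controls.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectionConfig:
    # Basic controls
    prediction_threshold: float = 0.5       # sensitivity: minimum confidence to report
    max_detections_per_image: int = 100
    detection_speed: str = "balanced"       # e.g. "fast", "balanced", "accurate"
    model_type: str = "faster_rcnn"
    # Expert controls
    activation_function: str = "relu"
    conv_layer_count: Optional[int] = None  # None = keep the model's default depth
    feature_masks_enabled: bool = False

def suggest_config(task: str) -> DetectionConfig:
    """Stand-in for the 'suggested best combination' behaviour described above."""
    if task == "facade_survey":
        return DetectionConfig(prediction_threshold=0.6, detection_speed="accurate")
    return DetectionConfig()
```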

controlled_model.png

A “White Box” Model: Improving Transparency and User Confidence Through Human-AI Interaction Techniques

The proposed Archi Vision object detection model incorporates a number of features that provide transparency into model operation and output. As mentioned previously, users have direct control over various model parameters, giving them both coarse and fine control over model performance, architecture, and operation. More importantly, Archi Vision incorporates a unique human-AI feedback loop that allows users to review and improve model performance over time based on their own input and corrections.

Archi Vision presents the user with potential misclassifications and uncertain detections; mistakes the user identifies are fed back into the training loop to correct and improve the model over time. Users can also set a threshold that determines which detections are presented for review, based on prediction confidence. For example, a skeptical user may specify that any detection below a 90% confidence level must be presented for review, while a less skeptical user may set the threshold at 60%, so that only detections and classifications below 60% need review and anything above does not.
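A small sketch of that review threshold, under the assumption that detections arrive as plain dictionaries with a confidence score; corrections are collected so they can be folded into the next training round. Names are illustrative, not Archi Vision's actual API.

```python
# Sketch of the human-AI review loop: low-confidence detections go to a review
# queue, and any corrections the user makes are stored for later retraining.
def split_for_review(detections, review_threshold=0.9):
    """Separate detections into auto-accepted and needs-human-review buckets."""
    accepted, review_queue = [], []
    for det in detections:  # each det: {"component": str, "score": float, "box": [...]}
        (accepted if det["score"] >= review_threshold else review_queue).append(det)
    return accepted, review_queue

training_corrections = []

def record_correction(detection, corrected_label):
    """Store the user's fix so it can be fed back into the next training round."""
    training_corrections.append({**detection, "component": corrected_label})
```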

Review.png
Review_Bar.png

Our Training Dataset

The Archi Vision model was trained on thousands of post-processed images of working-class house facades taken from various neighborhoods within Pittsburgh, Pennsylvania. Images were sourced from publicly available Google Street View imagery and date from approximately 2020.