
Making a foray into Artificial Intelligence for applications in the AEC industry – Part 2

Artificial Intelligence has become a buzzword: most people are either fascinated or anxious about its current achievements. In parallel, Machine Learning (ML) is mentioned with equal parts excitement and misinformation, because it is not easy to understand how a machine can “learn” and “improve” at performing tasks. So what, essentially, does it mean for a machine to learn?

If we want to answer that question without ambiguity, we need to stick to formalism. With a formal language we can talk about relations between variables, deduce implications using logic, and reason in an explicit, non-subjective manner, thus building a sophisticated yet consistent system.

Mathematics is the language of all sciences, and the ability to speak it fluently opens many avenues. To design algorithms that show learning behaviour, we will draw heavily on linear algebra, multivariable calculus and statistics. Throughout the next articles we will try to simplify the maths behind the scenes and make it intuitive enough to understand what each algorithm is doing.

Traditional algorithms vs Machine Learning algorithms

An algorithm is a well-defined set of instructions that the user feeds the computer to complete the desired task. Traditionally, we design an algorithm based on the best knowledge acquired from the experience we have gathered at the time.

However, if we were to gain more experience, perhaps we could design a better one. Now imagine a scenario where we could transfer this human quality of learning from experience to computers, constantly making inferences from the data that we provide; in other words, we could teach the algorithm to learn to do the right thing based on the data provided.

When designing an ML system, we feed the algorithm large amounts of data so that “it” gains experience and improves at the task with each new experience – that is precisely the process of learning! This helps us overcome our limits as humans in finding the patterns hidden in the vast amounts of data we have access to today. Broadly, ML algorithms can make predictions for new input values, classify samples into pre-defined categories, group similar samples into clusters without our supervision, and so on.

Since we want the algorithm to learn the right things fast, a big part of our job revolves around constructing and cleaning the raw data before handing it to the algorithm (we will come back to this in a bit).

First, let us go through some of the jargon of Machine Learning:

  1. Labelled and unlabelled data: a label is anything you would like to have a prediction for. For example, given recorded data of “area of a house” and the corresponding “price”, and asked to predict the price for a new value of the area, “price” is the label, and every value of “area” whose price is known is said to be labelled. So a dataset with prices known for all areas is labelled, while a dataset that is just a list of areas is unlabelled (see the small sketch after this list).
  2. Task, experience and performance: in the same scenario, the algorithm’s task is to “assign a price to a given input value of area”, the experience is the provided matched area-price pairs, and the performance is a measure of how correct the price prediction was.
  3. A model: ML is all about pattern searching. We are always looking for a hidden model (a mathematical function) that represents the pattern well enough to make predictions for us where required.
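
To make the jargon concrete, here is a minimal Python sketch of the house-price example; all the numbers are invented purely for illustration.

```python
# Labelled data: the label ("price") is known for each recorded area.
labelled_data = [
    {"area_m2": 80,  "price": 210_000},
    {"area_m2": 120, "price": 310_000},
    {"area_m2": 150, "price": 395_000},
]

# Unlabelled data: only the area is known, no price attached.
unlabelled_data = [
    {"area_m2": 95},
    {"area_m2": 140},
]

# Task (T): predict a price for each unlabelled area.
# Experience (E): the labelled area-price pairs above.
# Performance (P): how close the predicted prices are to the real ones.
```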

So formally, what is Learning?

Tom Mitchell defines a well-posed learning problem as follows: a computer program is said to learn from experience (‘E’) with respect to some task (‘T’) and some performance measure (‘P’) if its performance on ‘T’, as measured by ‘P’, improves with experience ‘E’. Thus, a machine learning problem defines the variables T, E and P, and ensures that P improves as E grows.

Machine Learning is a branch of Artificial Intelligence that deals with algorithms and systems performing specific tasks using patterns and inference. Rather than following explicitly programmed instructions (in the sense that we do not specify the path to take to achieve the goal; the algorithm tweaks its parameters to land on the best possible path), the system learns from data and improves itself. According to MIT Technology Review, “Machine-learning algorithms use statistics to find patterns in massive amounts of data. And data, here, encompasses a lot of things—numbers, words, images, clicks. If it can be digitally stored, it can be fed into a machine-learning algorithm”.

Types of Machine Learning

Machine learning is generally classified under the following subheadings:

  1. Supervised Machine Learning
  2. Unsupervised Machine Learning
  3. Reinforcement Learning

Supervised Learning – refers to the use of labelled data to train a model. In this type of learning, both the input and the desired output data are provided. The data (inputs and outputs) are labelled to provide a learning base for predicting future data. It simply means feeding the model with data that contains the correct answers and expecting it to answer new questions correctly. Examples include spam detection, price prediction, image recognition, and many more.

The two types of problems solved using supervised learning are regression and classification. Regression is when a prediction can take any value in a continuous range, like price prediction, whereas classification is about predicting discrete values or classes, like spam filtering (spam or not spam).

Figure: Workflow for supervised learning
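
To make the regression case concrete, here is a minimal supervised-learning sketch using scikit-learn (assumed available): a linear regression fitted to made-up area-price pairs, then asked for a prediction on a new area.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Experience: labelled examples (area in m², made-up prices).
X = np.array([[50], [80], [120], [150], [200]])               # input feature: area
y = np.array([140_000, 210_000, 310_000, 395_000, 520_000])   # label: price

model = LinearRegression()
model.fit(X, y)                  # the model "learns" a line through the data

print(model.predict([[100]]))    # price prediction for a 100 m² house
```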

Unsupervised Learning – involves training a system on data that is not labelled while still gaining insights about the inter-relations. It reveals unanticipated patterns in a data set that has no pre-existing labels, giving us new insights into the data; the algorithms are thereby allowed to characterise, label and group the data according to similarities and differences. Unsupervised learning lets you perform more complex processing tasks than supervised learning, though its results tend to be less predictable. Some examples of unsupervised learning include social network analysis, market segmentation, etc.
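
As an illustration, here is a minimal unsupervised sketch, again assuming scikit-learn is available: k-means groups unlabelled samples into two clusters purely from their similarity. The sample values are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up unlabelled samples: [floor area (m²), number of rooms] for a few flats.
X = np.array([
    [45, 1], [50, 2], [55, 2],      # small flats
    [120, 4], [135, 5], [140, 5],   # large flats
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)      # cluster index assigned to each sample

print(labels)                        # e.g. [0 0 0 1 1 1] – two discovered groups
```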

Reinforcement Learning (RL) – here we have an environment to interact with: if the algorithm acts well, we reward it; otherwise, we punish it. The model makes a sequence of decisions, learning from its own experience as it interacts with the environment. RL learns what to do on its own based on how its previous decisions were treated. Rewards increase the probability of repeating the behaviour that produced them, while punishments decrease it: positive values are assigned to desirable actions to encourage the agent, and negative values to undesired ones.

Figure: Reinforcement learning process
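
As a rough illustration of the reward/punishment idea, here is a minimal sketch of tabular Q-learning, one common RL algorithm. The state and action counts, rewards and hyperparameters are placeholders, not tied to any specific application.

```python
import random

n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.2            # learning rate, discount, exploration rate

Q = [[0.0] * n_actions for _ in range(n_states)]  # estimated value of each (state, action)

def choose_action(state):
    # Explore occasionally, otherwise exploit the best known action.
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[state][a])

def update(state, action, reward, next_state):
    # Positive rewards raise the value of the action taken; negative rewards
    # (punishments) lower it, making that behaviour less likely next time.
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

update(state=0, action=1, reward=1.0, next_state=2)   # one illustrative experience step
```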

Steps of the Machine Learning process

The steps listed below represent the workflow to follow in solving problems using machine learning.

  1. Data collection
  2. Data processing
  3. Feature engineering
  4. Model selection
  5. Model training
  6. Model validation
  7. Model persistence

Data scientists are commonly said to spend around 80% of their time on data preparation. It is impossible to create a working machine learning model without feeding it correct data, so collecting, pre-processing and preparing the data is essential to the whole process and is also the first step to take. Through several iterations, you can obtain automated solutions built on all the past data/projects you have been dealing with. The more data you feed your database, the more experience (insights from data) the model will gain, and the more accurate it will become at finding patterns, similarities and differences.

Data sourcing/collection – this is the first step in making your data ready for ML. Simply put, it is the process of finding or creating data for training a machine learning model. The source of the data must be identified, whether it is extracted from a web platform or from an application database. An example is obtaining schedule data from Revit’s database in the form of exported schedules.
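
As a small sketch of this step, assuming a schedule has already been exported from Revit to a CSV file (the file name and its columns are hypothetical), the collected data could be loaded for a first inspection with pandas:

```python
import pandas as pd

schedule = pd.read_csv("wall_schedule_export.csv")   # hypothetical exported schedule
print(schedule.head())                               # quick look at the raw records
print(schedule.shape)                                # how many rows and columns we collected
```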

Data pre-processing – it is of paramount importance that we identify whether the data is structured or unstructured. This step involves cleaning the data and preparing it for training, which includes arranging and organising it, normalising it, and managing missing values. Data can come in several formats, and often mixed ones.

It is necessary to identify the different types (e.g., text files, photos, etc.); if that is not possible, it does not mean the data is not usable, though using incomplete or unprocessed data can cause a variety of errors and reduce the accuracy of results/predictions. At this stage we can also check whether there are outliers in the data set that need to be removed, or missing values that need to be handled.
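
Here is a minimal pre-processing sketch with pandas on an invented table with hypothetical column names: fill a missing value, drop an implausible outlier and normalise the columns.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "area_m2": [80, 120, np.nan, 150, 5000],     # one missing value, one implausible outlier
    "price":   [210_000, 310_000, 260_000, 395_000, 400_000],
})

# Fill the missing area with the median of the known values.
df["area_m2"] = df["area_m2"].fillna(df["area_m2"].median())

# Drop rows with implausible areas (no dwelling in this toy set should exceed 1000 m²).
df = df[df["area_m2"] < 1000]

# Min-max normalisation so both columns share a comparable 0–1 scale.
df = (df - df.min()) / (df.max() - df.min())
print(df)
```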

Feature Engineering (FE) – according to KDnuggets, this is the art and science of transforming raw data into features that better represent a pattern to the learning algorithms. FE helps in identifying the input variables most relevant to the task and in deriving new variables from the available data. While data pre-processing is a method of refining data, feature engineering is the process of creating features that enhance it. FE allows you to characterise the essential information in your dataset; this may mean breaking data into various parts to expose specific relationships.
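
A minimal feature-engineering sketch on the same kind of invented table: deriving new variables that may represent the underlying pattern better than the raw columns do.

```python
import pandas as pd

df = pd.DataFrame({
    "area_m2":    [80, 120, 150],
    "rooms":      [3, 4, 5],
    "year_built": [1995, 2010, 2021],
})

df["area_per_room"] = df["area_m2"] / df["rooms"]   # derived density-style feature
df["age_years"] = 2024 - df["year_built"]           # age is often more informative than the raw year
print(df)
```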

Data can be either structured or unstructured. Structured data is organised in ways that make it easily searchable and extractable, most often stored in a row-and-column format, especially for numeric values, and it fits into relational databases. Unstructured data is simply the opposite: it has no organisational structure, which makes it challenging to work with (collect, manage, process, analyse, etc.), and a much larger share of the world’s data is unstructured. Getting data from sources in AEC is quite tricky, as there is no unanimous standard yet for a data science workflow. Some companies have large data sets, but since there is no defined way to collaborate yet, most data experiments are in-house developments.

Since we have point cloud automation goals at DiRoots, let us expand a bit on point cloud data processing. Point clouds are one of the recently flourishing kinds of data because they now have applications in most sectors, such as robot navigation and perception, autonomous driving systems, medical imaging, geographic information systems, etc. In the AEC industry they are prominent because they can record the shape of buildings and structures so it can be reassembled in its original form, they can contribute to a repository of data on important infrastructure, they can confirm on site the decisions made during the design phase, they can track changes to structures over the years, they can capture and expose GIS data, and so on.

Point clouds are sparse, order-invariant sets of interacting points defined in a coordinate space and sampled from the surface of objects to capture their spatial-semantic information. Due to their sparse nature, they are computationally efficient and less sensitive to noise compared with volumetric and multi-view representations. Moreover, multisampling and data compression are achievable on point clouds, giving them an advantage over other forms of representation. To make our machine automatically make sense of all the structure in a point cloud, we can use various machine learning algorithms.
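
As a small sketch of what a point cloud looks like to an algorithm, here random points stand in for a real scan, stored as an (N, 3) array of XYZ coordinates, with the common pre-processing of centring and rescaling applied.

```python
import numpy as np

points = np.random.rand(1024, 3)                 # stand-in for a scanned cloud of 1024 points

points -= points.mean(axis=0)                    # centre the cloud at the origin
points /= np.linalg.norm(points, axis=1).max()   # scale so it fits inside a unit sphere

# Because point clouds are order-invariant sets, shuffling the rows changes
# nothing about the shape they describe.
np.random.shuffle(points)
```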

In 3D computer vision, automating the comprehension of point cloud environments entails tasks such as classification, object detection, 6-DoF pose estimation, (scene, object or part) segmentation, reconstruction, etc. But again, for all of that to be achieved, we need the pre-processing described above and the crucial initial step of choosing features (in this case: position, normal, curvature or higher-order derivatives, colour, motion vector, etc.) with which the algorithm can discriminate structures. If we wish to apply supervised deep learning, many recent networks based on the PointNet architecture make it possible to handle the unstructured nature of point clouds directly. We also need to prepare training, cross-validation and test datasets of point clouds that fulfil the specific requirements for training the neural network to perform segmentation or classification.
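
A minimal sketch of preparing those splits, assuming the clouds have already been labelled; the array shapes, class count and 70/15/15 ratio are illustrative choices, not requirements of any particular network.

```python
import numpy as np
from sklearn.model_selection import train_test_split

clouds = np.random.rand(100, 1024, 3)            # 100 clouds of 1024 XYZ points each (stand-ins)
labels = np.random.randint(0, 4, size=100)       # e.g. wall / floor / column / pipe classes

# 70% training, 15% cross-validation, 15% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(clouds, labels, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

print(X_train.shape, X_val.shape, X_test.shape)
```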

Model Selection – this involves picking, among the series of machine learning models available, one that can statistically solve the problem we are trying to tackle. A perfect model is not feasible, but the chosen model must be “good enough” to solve the problem.
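
A minimal model-selection sketch, assuming scikit-learn: two candidate regressors are compared by cross-validation on the same made-up data, and the better-scoring one would be kept.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

X = np.array([[50], [80], [100], [120], [150], [200]])
y = np.array([140_000, 210_000, 265_000, 310_000, 395_000, 520_000])

candidates = {"linear": LinearRegression(), "tree": DecisionTreeRegressor(random_state=0)}
for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=3).mean()   # mean R² over 3 folds
    print(name, round(score, 3))
```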

Model Training – this involves providing the machine learning model with training data to learn from. The training data is used to build up the model’s capacity to predict the solution to the problem.

Model Validation – it is essential to validate every ML model built. This refers to evaluating the trained model on a separate set of test data: a way of comparing the model’s results against the real system to see how valid its predictions are. Model validation helps check the capabilities of the trained model.
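
A minimal validation sketch: part of the (made-up) data is held back, the model is trained on the rest, and its predictions on the held-out samples are compared against the known answers with a simple error metric.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

X = np.array([[50], [80], [100], [120], [150], [200]])
y = np.array([140_000, 210_000, 265_000, 310_000, 395_000, 520_000])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print(mean_absolute_error(y_test, model.predict(X_test)))   # average error on unseen data
```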

Model Persistence – once the model has been validated and is not just good enough but performing at its best, it is persisted: the trained model is saved so that it can be reloaded and reused later (for example in production) without being retrained each time.
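
A minimal persistence sketch using joblib (which ships alongside scikit-learn): the trained model is saved to a file (the file name is arbitrary) and loaded back as it would be in production.

```python
import numpy as np
import joblib
from sklearn.linear_model import LinearRegression

model = LinearRegression().fit(np.array([[50], [120], [200]]),
                               np.array([140_000, 310_000, 520_000]))

joblib.dump(model, "house_price_model.joblib")      # persist the trained model to disk
restored = joblib.load("house_price_model.joblib")  # load it back later without retraining
print(restored.predict([[100]]))
```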