Architectural design via machine learning

Introduction

The built environment organizes flows of people, resources, and ideas. Social infrastructure has long involved architecture, and more recently it has come to include network computing as well. The latter tends to augment rather than replace the former; architecture has acquired a digital layer. As with past layers of technology, such as electrification, mechanical equipment, and transportation, digital technologies now extend architecture's reach. (McCullough 2005, 47) The era of cloud computing, algorithms, machine learning, and artificial intelligence has replaced, or recontextualized, the novel forms of making associated with computer-based modeling and manufacturing in the 1990s. The convergence of design and thinking at the high speed of technology narrows the gap between the design process, the testing phase, and the resulting constructability. Simulations, real-time modeling, evaluation, and analysis all become part of the same process; thinking is completely embedded with the capacity to compute data in real time and at high speed. The governance of the algorithm in design celebrates data as its primary matter. Algorithms are sequenced sets of rules; data are their inputs. (Melendez, Diniz and Del Signore 2021, 11) It can be said that digital and physical architectures are increasingly inseparable from one another. (Steenson 2017, 224)

Architects have been reluctant to use the computer as a schematic, organizing, and generative medium for design, for fear of stigma and of relinquishing control of the design process to software. For instance, architects such as Frank Gehry and Zaha Hadid have used computers only during the latter stages of their conceptual process. A further distinction can be drawn between the "analog" person and the "digital" person: when the analog person boots up his computer in the morning, he knows exactly what he wants the computer to do that day; the digital person, in contrast, turns on her computer with the expectation that the computer will generate ideas. It has been observed that while the computer is not a brain, machine intelligence demands that one develop a systematic human intuition about the connective medium and engage with the computer as an intelligent tool in its own right. (D'souza 2020, 37) "Can you tell if a design has been generated by a machine or a person?" Judging from a single instance, drawing, or model would be very hard indeed. How such a machine would respond to context and unique site conditions would be an immediate issue where algorithmic approaches might fail, for there is little consensus on any rules, or even general principles, that might distinguish a human designer from a programmed response. (Cantrell and Mekies 2018, 109)

This paper attempts to understand how the machine can design. It first investigates the history of the field in order to frame the right question; each subsequent section then presents the underlying theory together with a real case.

Literature review

Machine, Design, Human Relation Context

Negroponte and the Architecture Machine Group (AMG) attempted to build the architecture machine, the first machine that could appreciate gestures. (Negroponte 1973, Preface) Instead of being a slave, an architecture machine would serve as a bridge between the human and the computational, a personal computer that would learn through dialogue with its user. (Negroponte 1973, 13) Eventually, Negroponte said of the future of architecture machines: "They won't help us design; instead, we will live in them." (Negroponte 1976, 5) Cedric Price's proposal for the Gilman Corporation in 1979 was a series of relocatable structures on a permanent grid of foundation pads on a site in Florida. John and Julia Frazer produced a computer program to organize the layout of the site in response to changing requirements, and in addition suggested that a single-chip microprocessor should be embedded in every component of the building, making the building itself the controlling processor. This would have created an 'intelligent' building which controlled its own organization in response to use. If left unchanged, the building would have become 'bored' and proposed alternative arrangements for evaluation, learning how to improve its organization based on this experience. (Furtado C. L. 2008, 63-66) The Universal Constructor was a working model of a self-organizing interactive environment. In a typical experimental application, the system requests an interactor to configure an environment consisting of cells in different states. Using lights, the model then indicates its proposed response by asking for the interactor's assistance in adding or removing units; the participator can in turn modify the environment. (Frazer 1995, 49) In the first decade of the twenty-first century, research turned more toward human-centered thought, with machines used in the design rather than for the design (Martens and Brown 2005) (Leach 2009); that strand is not the focus of this paper.

Figure 1 The Universal Constructor 1990 (Frazer 1995, 49)

Margaret Boden, as a cognitive scientist, aims to study the idea of AI-based simulation of creativity from a philosophical viewpoint. She begins by setting out two taxonomies of creative behavior, along two orthogonal axes. First, she makes the distinction between H- and P-creativity: creativity which is 'historical' or 'psychological', respectively, the latter being interchangeable with 'personal', should that be more natural for the reader. The distinction is between creating a concept which has never been created before, ever, anywhere, and a concept that has never been created before by a specific creator. Second, in Boden's work there is the distinction between exploratory and transformational creativity, which is directly relevant here and so needs a little more explanation. Boden conceives the process of creativity as the identification and/or location of new conceptual objects in a conceptual space. Subsequent authors have sometimes imagined the conceptual space to be the state space of 'Good Old-Fashioned AI', though it is not clear that Boden intends her proposal to be taken so specifically or literally. (Veale and Cardoso 2019, 22-23) In her later work, Boden also discusses deep neural networks (explained below) and their relation to the creativity of the brain, concluding that they remain far from the human brain. (Boden 2016, 78-92)

Figure 2 Attempts to build an artificial cognitive system can be positioned in a two-dimensional space, with one axis defining a spectrum running from purely computational techniques to techniques strongly inspired by biological models, and with another axis defining the level of abstraction of the biological model (Vernon 2014, 4)

A cyborg is a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction. (Haraway 1991, 149) Code/space offers another view of the relation between machine and design, and especially space: code/space occurs when software and the spatiality of everyday life become mutually constituted, that is, produced through one another. Here, spatiality is the product of code, and the code exists primarily to produce a particular spatiality. In other words, a dyadic relationship exists between code and spatiality. For example, a check-in area at an airport can be described as a code/space. The spatiality of the check-in area is dependent on software. If the software crashes, the area reverts from a space in which to check in to a fairly chaotic waiting room. There is no other way of checking a person onto a flight because manual procedures have been phased out due to security concerns, so the production of space is dependent on code. (Kitchin and Dodge 2011, 16-17) Moreover, the big data people generate through the course of their day often carries latent information, captured by sensors, apps, and ML, about how they perceive their physical environment, which can be a valuable tool in the long-term analysis and evaluation of buildings. (Davis 2016) For instance, Bluetooth Low Energy (BLE) has been used to collect indoor user data and, via ML, to produce a map of connectivity intensity for Post-Occupancy Evaluation (POE), which helps optimize building operation and improve the accuracy of simulations. (Hu and Park 2017) Continuing the previous two themes (code/space and the cyborg), the smart city is defined by algorithms, artificial intelligence, robotization, and cyborg-type assemblies between biological organisms and machines. (Picon 2015, 12) Finally, today many more situations come with expectations of interactivity.
Dematerialization seems less of a concern than it was imagined twenty years ago, in the days of "cyberspace," and supposed automation often actually requires participation. (McCullough 2020, 157-159) In the field of humans and machines in teaching architecture to students, El-Zanfaly at the MIT School of Architecture uses the machine as an embodied feedback loop, producing novel and advanced results in a short time. (El-Zanfaly 2017)

On the other hand, the application of fabrication machines in architecture allows a direct coupling between information and construction: in digital fabrication, the production of building parts is directly controlled by the design information. (Kolarevic and Klinger 2008, 104) Articulated-arm robots are used as cost-efficient fabrication machines that are at once reliable and flexible. (Gramazio, Kohler and Willmann 2014, 15) Robotic swarm printing, robotically controlled additive fabrication by methods such as Print-in-Place, Cable-Suspended 3D Printing, and Templated Swarm Printing (Oxman, et al. 2014), is one facet of using the machine for making.

Figure 3 Machine, Design, Human Relation timeline

AI, Machine Learning and Generative Design

Russell and Norvig explain definitions of artificial intelligence in four categories: acting humanly (the Turing Test: a computer passes if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or a computer), thinking humanly (getting inside the actual workings of the human mind), thinking rationally (the "laws of thought" approach), and acting rationally (operating autonomously, perceiving the environment, persisting over a prolonged time, adapting to change, and creating and pursuing goals). Machine learning is one of the six capabilities a computer needs under the acting-humanly definition. (Russell and Norvig 2016, 2-5)

Machine learning is defined as a set of methods that can automatically detect patterns in data, and then use the uncovered patterns to predict future data or to perform other kinds of decision-making under uncertainty. (Murphy 2012, 1) Machine learning admits a very broad set of practical applications, such as text or document classification, natural language processing, speech processing, computer vision, and computational biology. (Mohri, Rostamizadeh and Talwalkar 2018, 2)

The term generative design describes a design method in which the generation of form is based on rules or algorithms. (Agkathidis 2015, 14) Generative design fundamentally changes the design process: the designer shifts from being a performer of tasks to being a conductor, effectively orchestrating the decision-making process of the computer. (Gross, et al. 2018, preface) The most-used generative techniques are cellular automata (a system of cells located on a grid that evolve according to rules associated with the cells' neighborhoods), genetic algorithms (mimicking the natural selection process to automatically solve problems beginning with a high-level statement of requirements), shape grammars (used as an analysis tool to study existing designs and unveil their underlying patterns), Lindenmayer systems (a mathematical model to generate plant-like shapes), swarm intelligence (the emergence of patterns at the global level of a system due to the collective behavior of its interacting agents), and multi-agent societies (a computer simulation approach to study the behavior of autonomous agents, individually and collectively, in a given environment). (BuHamdan, Alwisy and Bouferguene 2018, 4-5)
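As a minimal illustration of the first of these techniques, a one-dimensional cellular automaton can be sketched in a few lines of Python; the rule number, grid size, and starting state here are arbitrary choices for illustration, not taken from the cited sources:

```python
def step(cells, rule=30):
    """Advance a 1D cellular automaton one generation under a Wolfram rule."""
    n = len(cells)
    out = []
    for i in range(n):
        # Each cell's next state depends only on its 3-cell neighborhood (wrapping).
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighborhood) & 1)
    return out

# Start from a single live cell and evolve a few generations.
cells = [0] * 7
cells[3] = 1
history = [cells]
for _ in range(3):
    cells = step(cells)
    history.append(cells)
for row in history:
    print("".join("#" if c else "." for c in row))
```

Each generation is computed purely from local neighborhood rules, yet global patterns emerge; this is the property generative designers exploit when mapping cell states to architectural elements.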

Figure 4 Generative design process diagram (Agkathidis 2015, 16)

Several studies present the major approaches to ML in generative design: enabling alternative modes of differentiation and search that the designer has not explicitly specified; optimization and simulation for high-performance solutions, e.g. structural and energy efficiency (supervised learning); assessing large datasets to find patterns or anomalies and ultimately make better-informed decisions, with applied concepts such as image generation and shape grammars; adapting ongoing behavior, where modular curiosity-based machine-learning behavior is supported by a highly distributed system of microprocessor hardware integrated within sensors and actuators (reinforcement learning); and making, where adaptive fabrication occurs throughout the design and fabrication process to short-circuit the design process, enable different structures for assessment and evaluation of design artifacts, and extend the scope of the design process. (Tamke, Nicholas and Zwierzycki 2018, 1-5) (Bidgoli and Veloso 2018, 177) (Belém, Santos and Leitão 2019, 4) In what follows, the types of ML are used to organize the paper, because a given design approach may use any type and method of ML, whereas the opposite is not the case.

Supervised and Unsupervised Learning

Machine learning is divided into two main types. In the predictive or supervised learning approach, the goal is to learn a mapping from inputs x to outputs y, given a labeled set of input-output pairs D = {(x_i, y_i)}, i = 1…N. Here D is called the training set, and N is the number of training examples. The x_i are called features, attributes, or covariates. In general, however, x_i could be a complex structured object, such as an image, a sentence, an email message, a time series, a molecular shape, or a graph. Similarly, the form of the output or response variable can in principle be anything, but most methods assume that y_i is a categorical or nominal variable from some finite set, y_i ∈ {1,…,C}, or that y_i is a real-valued scalar (such as income level). When y_i is categorical, the problem is known as classification or pattern recognition; when y_i is real-valued, the problem is known as regression. Another variant, known as ordinal regression, occurs where the label space Y has some natural ordering, such as grades A-F. The second main type of machine learning is the descriptive or unsupervised learning approach. Here we are only given inputs, D = {x_i}, i = 1…N, and the goal is to find "interesting patterns" in the data. This is sometimes called knowledge discovery. It is a much less well-defined problem, since we are not told what kinds of patterns to look for, and there is no obvious error metric to use. There is a third type of machine learning, known as reinforcement learning (Murphy 2012, 2) (Shalev-Shwartz and Ben-David 2014, 4), which will be explained later.
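The two settings can be made concrete with a toy sketch in Python; the one-dimensional "area" feature, the labels, and all data points are invented for illustration:

```python
def nearest_neighbor(train, x):
    """Supervised learning: predict the label of x from labeled pairs (x_i, y_i)."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def two_means(xs, iters=10):
    """Unsupervised learning: split unlabeled inputs into two clusters."""
    c0, c1 = min(xs), max(xs)
    for _ in range(iters):
        # Assign each point to the nearer center, then recompute the centers.
        a = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        b = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(a) / len(a), sum(b) / len(b)
    return c0, c1

# Supervised: labeled (room area, room type) pairs form the training set D.
train = [(40.0, "office"), (400.0, "hall")]
print(nearest_neighbor(train, 55.0))   # -> office

# Unsupervised: only inputs, no labels; the grouping is discovered from the data.
print(two_means([1.0, 2.0, 3.0, 10.0, 11.0, 12.0]))  # -> (2.0, 11.0)
```

The supervised call uses the labels to answer a specific question, while the unsupervised call has no labels and no error metric; it merely reveals structure, mirroring the distinction drawn above.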
An ML framework within Grasshopper, intended to make the inner workings of ML more transparent and thus comprehensible to architects and designers, is the topic of one piece of research using a neural network as supervised learning. A 'framework' here refers to the abstraction of a computational process, which can be altered to suit a multitude of applications. (Khean, et al. 2018, 240-242)
Another study uses graphs (with a kernel¹ built on them) to represent spatial hierarchies in the form of nodes (rooms) and edges (connections), and follows a supervised learning, or classification, approach on a dataset that includes information about each building, such as area, connectivity, and isovists, to distinguish the building type (monasteries versus mosques). Buildings' spatial structures codified as graphs have great analytical potential, which can be used to automatically distinguish different types of buildings as well as to explore other architectural characteristics; remarkably, this method of classifying building types based on their spatial structures performed at 93% accuracy. (Ferrando, et al. 2019)
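The graph representation behind this approach can be sketched directly; the tiny plan and room names below are invented for illustration, not drawn from the study's dataset:

```python
# A building plan as a graph: rooms are nodes, doorways are edges.
edges = [("courtyard", "prayer_hall"), ("courtyard", "minaret"),
         ("courtyard", "entrance"), ("prayer_hall", "mihrab")]

def degrees(edges):
    """Count connections per room: one of the features (alongside area
    and isovists) that such a classifier could consume."""
    d = {}
    for a, b in edges:
        d[a] = d.get(a, 0) + 1
        d[b] = d.get(b, 0) + 1
    return d

print(degrees(edges))
```

Here the courtyard has the highest number of connections, which is exactly the kind of signal the cited study found indicative of mosques (see Figure 5).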

Figure 5 3D scatterplot of the monasteries and mosques, in red and blue respectively. On the three axes, the number of connections, area, and centered isovist area. This analysis shows that the rooms with the highest number of connections to other rooms are more likely to belong to mosques. (Ferrando, et al. 2019, 5)

A genetic algorithm (GA) is a class of search algorithm inspired by the principles of natural selection that acts on a population of possible solutions. The algorithm mimics the processes of genetic mutation, crossover, and selection to iteratively improve the performance of solutions to an optimization problem. Along with evolution strategies and genetic programming, GAs are instances of evolutionary algorithms. Genetic algorithms are stochastic search algorithms often used in machine learning, in reasoning as well as in learning. They are important in machine learning for three reasons. First, they act on discrete spaces, where gradient-based methods cannot be used. They can be used to search rule sets, neural network architectures, cellular automata computers, and so forth; in this respect they can be used where stochastic hill-climbing and simulated annealing might also be considered. Second, they are essentially reinforcement learning algorithms: the performance of a learning system is determined by a single number, the fitness. This is unlike something like back-propagation, which gives different signals to different parts of a neural network based on their contribution to the error. Thus they are widely used in situations where the only information is a measurement of performance; in that regard, their competitors are Q-learning, TD-learning, and so forth. Finally, genetic algorithms involve a population, and sometimes what one desires is not a single entity but a group. Learning in multi-agent systems is a prime example. (Adriaenssens, et al. 2014, 178) (Shapiro 1999, 146-147)

Figure 6 Genetic algorithm applied to gridshells (Adriaenssens, et al. 2014, 172)

At a conceptual level, the recipe for a GA is: 1. Define the fitness function(s): what performative metric(s) are being optimized? 2. Define a genome logic (the number of characters in the genome string and the relationship between the individual characters and the expression of the design geometry). 3. Randomly generate an initial population of genomes. 4. Test the fitness of the designs generated from each genome. 5. Identify the top performers; these become the selection pool for the next generation. 6. From the selection pool, build the population of the next generation using cloning, cross-breeding, mutation, and migration. 7. Test all the new genomes in the new population. 8. If performance appears to be converging on an optimal condition, stop; otherwise repeat from step 5. (Besserud and Cotten 2008, 239-240) For instance, one system trains a neural network based on user selection and performs a search of the remaining parameter space of the user-created script. To better integrate this functionality into the workflow of architectural computational designers, the system is designed to be added to scripts created by Autodesk Dynamo users. First, genetic algorithms create random masses; then the system scores and learns from three labels (blue and purple adjacent or near, yellow on the ground level, minimum overlap: supervised learning) to create masses that more closely meet the design criteria. (Sjoberg, Beorkrem and Ellinger 2017, 555-559)
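The recipe above can be sketched in a few lines of Python; the bit-string genome and the toy fitness function (counting ones, a stand-in for a real performative metric such as daylight or structural efficiency) are invented for illustration:

```python
import random
random.seed(0)

# 1-2. Fitness function and genome logic: a bit-string genome whose
# fitness is simply the number of 1s it contains.
def fitness(genome):
    return sum(genome)

def evolve(pop_size=20, length=12, generations=30):
    # 3. Randomly generate an initial population of genomes.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # 4-5. Test fitness; the top performers become the selection pool.
        pool = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        # 6-7. Build the next generation by cross-breeding and mutation.
        pop = []
        while len(pop) < pop_size:
            a, b = random.sample(pool, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]            # crossover
            if random.random() < 0.1:            # occasional mutation
                i = random.randrange(length)
                child[i] = 1 - child[i]
            pop.append(child)
    # 8. Stop after a fixed budget and return the best genome found.
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

A real design application replaces `fitness` with an evaluation of the generated geometry, as in the gridshell and massing examples cited here.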

Figure 7 Massing designs returned by the system after user training and search. The result suggests that the system can begin to emulate a generalized set of desired relationships between parameters in the design; however, it did not perform as well at recreating exact values for parameters that might be recognized by the user. (Sjoberg, Beorkrem and Ellinger 2017, 559)

For material computation and embedding physical constraints into digital form-generation, the combination of parametric geometry and material constraint definitions permits the creation of an integrated design system that allows the designer to broadly explore design options without losing the critical properties of constructibility and the fabrication constraints embedded within the model. For example, in the CATIA system these are referred to as 'checks', 'rules', and 'reactions', and they constitute a form of user-defined AI and ML that can be used to govern the detailing of a design within a finite set of possible outcomes. (Ceccato 2012, 100)
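In the same spirit, a "check" can be sketched as a predicate that filters generated options against fabrication constraints; the panel properties and limit values below are invented examples, not CATIA's actual rule syntax:

```python
# Fabrication limits: assumed values for illustration only.
MAX_PANEL_WIDTH = 1.5   # metres
MAX_WARP_ANGLE = 5.0    # degrees of permissible panel warp

def check_panel(panel):
    """A user-defined 'check': return True if a candidate panel is constructible."""
    return panel["width"] <= MAX_PANEL_WIDTH and panel["warp"] <= MAX_WARP_ANGLE

candidates = [
    {"id": "P1", "width": 1.2, "warp": 2.0},
    {"id": "P2", "width": 1.8, "warp": 1.0},   # too wide to fabricate
    {"id": "P3", "width": 1.4, "warp": 7.5},   # too warped to fabricate
]
buildable = [p["id"] for p in candidates if check_panel(p)]
print(buildable)
```

Embedded in a parametric model, such predicates confine exploration to the finite set of constructible outcomes the passage describes.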

Figure 8 Zaha Hadid Architects, Galaxy SOHO, Beijing, 2012. Top: vertical fascia production model; naming, unfolding and the setting out of panels was fully automated. Bottom: automatically generated A0 tender sheet including the setting out and naming of panels. The workflow was established and automated; 144 drawings were generated in 35 minutes. (Ceccato 2012, 100)

A generative solution for a brick ruin, aimed at achieving a more stable static state and creating a porous yet stable brick tower, is an example of using a genotype for optimization. (Harrison 2016)

Figure 9 Detail of sinusoidal plan variation. The relative overlap of each brick is controlled by the solver's genotype; greater overlaps sometimes allow for high degrees of corbelling but often introduce instability into the system. (Harrison 2016, 75)

The Hybrid Sentient Canopy uses prescripted and curiosity-based² learning algorithms to automatically generate interaction behaviors. The prescripted system is deterministic, involving predictable behaviors with actuation tightly coupled to sensor stimulus: linear translations of sensed data into actuation responses, propagating into neighbor and global responses, modified by timings and filtered damping. The curiosity-based system involves exploration, employing an expert system composed of archives of information from preceding behaviors, calculating potential behaviors together with locations and applications, executing a behavior, and comparing the result to its prediction. For the mechanism, an Arduino-compatible node-based electronics system was designed and implemented that exposes dense sensing and actuation capabilities to a flexible array of sub-node (device) level modules, allowing for the inclusion of a wide range of low-voltage peripheral devices including shape-memory alloy and DC motor-based mechanisms, amplified sound, and high-power LED lighting. (Beesley, et al. 2016, 369-370)

Figure 10 Modeling and form-finding through laser-cut acrylic thermoforming, integrated with computational controls and proprioceptive functions. The latter support cycles of sensor and actuator data, enabling the canopy to serve as a physical environment for machine learning. (Beesley, et al. 2016, 362)

In the Bridge Too Far project, machine learning is interfaced with multiscale simulation strategies. At the scale of the element, a genetic algorithm is used to optimize the topology of the cable network for both performance and fabrication (0.5 mm aluminium) requirements; at the scale of the structure, an artificial neural network trained with backpropagation is used to bypass structural simulation. (Thomsen, et al. 2021, 255)
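The surrogate idea, replacing a slow simulation with a cheap learned approximation, can be sketched with a toy example; where the project trains an artificial neural network, this sketch substitutes a one-parameter least-squares fit for brevity, and the "simulation" is an invented stand-in, not the project's solver:

```python
def simulate(span):
    """Stand-in for a slow structural simulation: deflection grows as span^3."""
    return 0.001 * span ** 3

# Sample the expensive simulation once, offline.
samples = [(s, simulate(s)) for s in range(1, 11)]

# Fit deflection = k * span^3 to the samples by least squares.
num = sum(s ** 3 * d for s, d in samples)
den = sum(s ** 6 for s, _ in samples)
k = num / den

def surrogate(span):
    """Cheap learned approximation, queried in place of the simulation."""
    return k * span ** 3

print(abs(surrogate(7.5) - simulate(7.5)))
```

Once trained, the surrogate answers in microseconds, which is what lets an optimization loop evaluate thousands of candidate structures without rerunning the full simulation each time.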

Deep Learning and Generative Adversarial Networks

Looking at the brain, there seem to be many levels of processing. It is believed that each level learns features or representations at increasing levels of abstraction (for example, in the standard model of the visual cortex, the brain first extracts edges, then patches, then surfaces, then objects, etc.). This observation has inspired a recent trend in machine learning known as deep learning (DL). Deep models often have millions of parameters, and acquiring enough labeled data to train such models is difficult. To overcome the need for labeled training data, the focus here is on unsupervised learning. The most natural way to perform this is to use generative models; the three kinds of deep generative models are directed, undirected, and mixed. (Murphy 2012, 995)

Figure 13 Some deep multi-layer graphical models. Observed variables are at the bottom. (a) A directed model. (b) An undirected model (deep Boltzmann machine). (c) A mixed directed-undirected model (deep belief net). (Murphy 2012, 996)
Figure 14 Illustration of a deep learning model. It is difficult for a computer to understand the meaning of raw sensory input data, such as this image represented as a collection of pixel values. The function mapping from a set of pixels to an object identity is very complicated. Learning or evaluating this mapping seems insurmountable if tackled directly. Deep learning resolves this difficulty by breaking the desired complicated mapping into a series of nested simple mappings, each described by a different layer of the model. The input is presented at the visible layer, so named because it contains the variables that we can observe. Then a series of hidden layers extract increasingly abstract features from the image. These layers are called “hidden” because their values are not given in the data; instead, the model must determine which concepts are useful for explaining the relationships in the observed data. The images here are visualizations of the kind of feature represented by each hidden unit. Given the pixels, the first layer can easily identify edges, by comparing the brightness of neighboring pixels. Given the first hidden layer’s description of the edges, the second hidden layer can easily search for corners and extended contours, which are recognizable as collections of edges. Given the second hidden layer’s description of the image in terms of corners and contours, the third hidden layer can detect entire parts of specific objects, by finding specific collections of contours and corners. Finally, this description of the image in terms of the object parts it contains can be used to recognize the objects present in the image. (Goodfellow, Bengio and Courville 2016, 6)

To complete the discussion of DL and look at actual machine performance, sample code using the TensorFlow library is presented for understanding image features. TensorFlow is a powerful library for numerical computation, particularly well suited and fine-tuned for large-scale machine learning. (Géron 2019, 368) In TensorFlow, each input image is typically represented as a 3D tensor of shape [height, width, channels], and a mini-batch as a 4D tensor of shape [mini-batch size, height, width, channels]. The weights of a convolutional layer are represented as a 4D tensor of shape [f_h, f_w, f_n′, f_n], and the bias terms as a 1D tensor of shape [f_n]. (Moroney 2020) The following code loads two sample images. The pixel intensities (for each color channel) are represented as a byte from 0 to 255, so these features are scaled by dividing by 255 to get floats ranging from 0 to 1. Two 7 × 7 filters are then created (one with a vertical white line in the middle, the other with a horizontal white line in the middle) and applied to both images using the tf.nn.conv2d() function. Here strides is equal to 1, but it could also be a 1D array with 4 elements, where the two central elements are the vertical and horizontal strides (s_h and s_w); the first and last elements must currently be equal to 1 (they may one day be used to specify a batch stride, to skip some instances, and a channel stride, to skip some of the previous layer's feature maps or channels). If padding is set to "VALID", the convolutional layer does not use zero padding and may ignore some rows and columns at the bottom and right of the input image, depending on the stride; if set to "SAME", the layer uses zero padding if necessary, and the number of output neurons equals the number of input neurons divided by the stride, rounded up. Finally, one of the resulting feature maps is plotted. (Géron 2019, 439-440)

from sklearn.datasets import load_sample_image
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Load sample images and scale pixel intensities to the [0, 1] range
china = load_sample_image("china.jpg") / 255
flower = load_sample_image("flower.jpg") / 255
images = np.array([china, flower])
batch_size, height, width, channels = images.shape

# Create 2 hand-crafted 7 x 7 filters
filters = np.zeros(shape=(7, 7, channels, 2), dtype=np.float32)
filters[:, 3, :, 0] = 1  # vertical line
filters[3, :, :, 1] = 1  # horizontal line

# Convolve both images with both filters
outputs = tf.nn.conv2d(images, filters, strides=1, padding="SAME")

plt.imshow(outputs[0, :, :, 1], cmap="gray")  # plot 1st image's 2nd feature map
plt.show()
Instead of manually creating the variables, one can use the keras.layers.Conv2D layer, whose filters are trainable variables. The following creates a Conv2D layer with 32 filters, each 3 × 3, using a stride of 1 (both horizontally and vertically), SAME padding, and the relu activation function applied to its outputs. (Géron 2019, 441)


conv = keras.layers.Conv2D(filters=32, 
kernel_size=3, strides=1,
padding="SAME", activation="relu")

Reinforcement Learning

Discussion and Conclusion

Notes

  1. Kernel machines are a class of algorithms for pattern analysis, whose best-known member is the support-vector machine (SVM). ↩︎
  2. Curiosity-based learning algorithms were originally proposed in the field of developmental robotics, to enable robots to learn about their capabilities and the environment, emulating childhood development ↩︎
