Sustainability starts in the design process, and AI can help
Artificial intelligence helps build physical infrastructure like modular housing, skyscrapers, and factory floors. “…many problems that we wrestle with in all forms of engineering and design are very, very complex problems…those problems are beginning to reach the limits of human capacity,” says Mike Haley, the vice president of research at Autodesk. But there’s hope with AI capabilities, Haley continues: “This is a place where AI and humans come together very nicely because AI can actually take certain very complex problems in the world and recast them.”
Where “AI and humans come together” is at the start of the process, with generative design, which incorporates AI into the design process to explore solutions and ideas that a human alone might not even have considered. “You really want to be able to look at the entire lifecycle of producing something and ask yourself, ‘How can I produce this by using the least amount of energy throughout?’” Haley says. This kind of thinking will reduce the impact on the planet of not just construction but any sort of product creation.
The symbiotic human-computer relationship behind generative design is necessary to solve those “very complex problems”—including sustainability. “We are not going to have a sustainable society until we learn to build products—from mobile phones to buildings to large pieces of infrastructure—that survive the long term,” Haley notes.
The key, he says, is to start in the earliest stages of the design process. “Decisions that affect sustainability happen in the conceptual phase, when you’re imagining what you’re going to create.” He continues, “If you can begin to put features into software, into decision-making systems, early on, they can guide designers toward more sustainable solutions by affecting them at this early stage.”
Generative design can yield malleable solutions that anticipate future needs or requirements, avoiding the need to build new solutions, products, or infrastructure from scratch. “What if a building that was built for one purpose, when it needed to be turned into a different kind of building, wasn’t destroyed, but it was just tweaked slightly?”
That’s the real opportunity here—creating a relationship between humans and computers will be foundational to the future of design. “The consequence of bringing the digital and physical together,” Haley says, “is that it creates a feedback loop between what gets created in the world and what is about to be created next time.”
Show notes and references
“What is Generative Design, and How Can It Be Used in Manufacturing?” by Dan Miles, Redshift by Autodesk, November 19, 2021
“4 Ways AI in Architecture and Construction Can Empower Building Projects” by Zach Mortice, Redshift by Autodesk, April 22, 2021
Full transcript
Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is how to design better with artificial intelligence. Everything from modular housing to skyscrapers to manufactured products and factory floors can be designed with, and benefit from, AI and machine learning technologies. As artificial intelligence helps humans with design options, how can it help us build smarter? Two words for you: sustainable design.
My guest is Mike Haley, the vice president of research at Autodesk. Mike leads a team of researchers, engineers, and other specialists who are exploring the future of design and making.
This episode of Business Lab is produced in association with Autodesk.
Welcome, Mike.
Mike Haley: Hi Laurel. Thanks for having me.
Laurel: So for those who don’t know, Autodesk technology supports architecture, engineering, construction, product design, manufacturing, as well as media and entertainment industries. And we’ll be talking about that kind of design and artificial intelligence today. But one specific aspect of it is generative design. What is generative design? And how does it lend itself to an AI-human collaboration?
Mike: So Laurel, to answer that, first you have to ask yourself: What is design? When designers are approaching a problem, they’re generally looking at the problem through a number of constraints, so if you’re building a building, there’s a certain amount of land you have, for example. And you’re also trying to improve or optimize something. So perhaps you’re trying to build the building with a very low cost, or have low environmental impact, or support as many people as possible. So you’ve got this simultaneous problem of dealing with your constraints, and then trying to maximize these various design factors.
That is really the essence of any design problem. The history of design is that it is entirely a human problem. Humans may use tools. Those tools may be pens and pencils, they may be calculators, and they may be computers to solve that. But really, the essence of solving that problem lies purely within the human mind. Generative design is the first time we’re producing technology that is using the computational capacity of the computer to assist us in that process, to help us go beyond perhaps where our usual considerations go.
As you and I’m sure most of the audience know, people talk a lot about bias in AI algorithms, but bias generally comes from the data those algorithms see, and the bias in that data generally comes from humans, so we are actually very, very biased. This shows up in design as well. The advantage of using computational assistance is you can introduce very advanced forms of AI that are not actually based on data. They’re based on algorithmic or physical understandings of the world, so that when you’re trying to understand that building, or design an airplane, or design a bicycle, or what it might be, it can actually use things like the laws of physics, for example, to understand the full spread of possible solutions to address that design problem I just talked about.
So in some ways, you can think of generative design as a computer technology that allows designers to expand their minds and to explore spaces and possibilities of solutions that they perhaps wouldn’t reach otherwise. It might even be outside of their traditional comfort zone, so biases might prevent them from going there. One thing you find with generative design is when we watch people use this technology, they tend to use it in an iterative fashion. They will supply the problem to the computer, let the computer propose some solutions, and then they will look at those solutions and begin to adjust their criteria and run it again. This is almost a symbiotic kind of relationship that forms between the human and the computer. And I really enjoy that because the human mind is not very good at computing. The popular idea is you can hold seven facts in your head at once, which is far less than a computer can, right?
But human minds are excellent at responding and evaluating situations and bringing in a very broad set of considerations. That in fact is the essence of creativity. So if you bring that all together and look at that entire process, that is really what generative design is all about.
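To make that constraints-and-objectives framing concrete, here is a minimal, hypothetical sketch of one generative-design round in Python. The design variables, the land constraint, and the objective weights are illustrative assumptions for the sake of the example, not Autodesk’s implementation; a real system would propose candidates with physics-based solvers rather than random sampling.

```python
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    floor_area: float    # m^2, hypothetical design variable
    window_ratio: float  # fraction of facade that is glazing

def propose(n: int) -> list[Candidate]:
    """Generate candidate designs; a real tool would use physics-aware generators."""
    return [Candidate(random.uniform(500, 2000), random.uniform(0.1, 0.9)) for _ in range(n)]

def satisfies_constraints(c: Candidate, max_area: float) -> bool:
    """Hard constraints, e.g. the available land limits the floor area."""
    return c.floor_area <= max_area

def score(c: Candidate, weights: dict[str, float]) -> float:
    """Weighted objectives: usable area is good, excess glazing costs energy."""
    return weights["area"] * c.floor_area - weights["energy"] * c.window_ratio * c.floor_area

def generative_round(max_area: float, weights: dict[str, float], n: int = 200) -> list[Candidate]:
    """Propose, filter by constraints, and rank; the designer reviews the top few,
    adjusts the weights or constraints, and runs another round."""
    feasible = [c for c in propose(n) if satisfies_constraints(c, max_area)]
    return sorted(feasible, key=lambda c: score(c, weights), reverse=True)[:5]

print(generative_round(max_area=1500, weights={"area": 1.0, "energy": 2.0}))
```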
Laurel: So really what you’re talking about is the relationship between a human and a computer. And the output of this relationship is something that’s better than either one could do by themselves.
Mike: Yes, that’s right. Exactly. I mean, humans have a set of limitations, and we have a set of skills that we bring together when we’re being creative. The same is true of a computer. The computer has certain strengths, like computation and understanding the laws of physics, where it’s far better than we are. But it’s also highly limited in being able to evaluate the efficacy of a solution. So generative design is really about bringing those two things together.
Laurel: So there’s been a lot of discussion about the fear that AI and automation will replace workers. What is the AI-human collaboration that you’re envisioning for the future of work? How can this partnership continue?
Mike: There’s an incredibly interesting relationship between AI and humans: not just solving problems in the world together, but also improving the human condition. So when we talk about the tension between AI and human work, I really like to look at it through that lens. When we think of AI learning the world, learning how to do things, that can lead to something like automation. Those learnings—those digital learnings—can drive things like a robot, or a machine in a factory, or a machine on a construction site, or even just a computer algorithm that can decide on something for you.
That can be powerful if managed appropriately. Of course, you’ve always got the risks of bias and unfairness and those kinds of things that you have to be aware of. But there’s another effect of AI learning: it is now able to also better understand what a human being is doing. So imagine an AI that watches you type in a word processor, for example. And it watches you type for many, many years. It learns things about your writing style. Now one of the obvious automation things it can do is begin to make suggestions for your writing, which is fine. We’re beginning to see that today already. But something it could also do is actually begin to evaluate your writing and actually understand, maybe in a very nuanced way, how you compare to other writers. So perhaps you’re writing a kind of fiction, and it’s saying, “Well, generally in this realm of fiction, people that write like you are targeting these sorts of audiences. And maybe you want to consider this kind of tone, or nature of your writing.”
In doing that, the AI is actually providing more tuned ways of teaching you as a human being through interpretation of your actions, working again in a really iterative way with a person to guide them to improve their own capability. So this is not about automating the problem. It’s, in some ironic way, automating the process of training a person and improving their skills. So we really like to put that lens on AI and look at it that way: yes, we are automating a lot of tasks, but we can also use that same technology to help humans develop skills and improve their own capacity.
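As a toy illustration of the kind of writing analysis Haley describes, here is a hypothetical sketch that computes two coarse style features from a draft and compares them against an assumed reference profile. The metrics, thresholds, and feedback message are invented for the example, not any real product’s behavior.

```python
from statistics import mean

def style_metrics(text: str) -> dict[str, float]:
    """Very coarse style features: average sentence length and vocabulary richness."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.split()
    return {
        "avg_sentence_len": mean(len(s.split()) for s in sentences),
        "vocab_richness": len({w.lower() for w in words}) / len(words),
    }

# Compare a draft against an assumed reference profile and surface a gentle suggestion.
draft = style_metrics("Short sentences. Punchy prose. The reader keeps moving.")
reference = {"avg_sentence_len": 18.0, "vocab_richness": 0.55}
if draft["avg_sentence_len"] < 0.5 * reference["avg_sentence_len"]:
    print("Your sentences run much shorter than is typical for this genre.")
```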
The other thing I will mention in this space is that many problems that we wrestle with in all forms of engineering and design are very, very complex, and we’re talking about some of them right now. Those problems are beginning to reach the limits of human capacity. We have to start simplifying them in some ways. This is a place where AI and humans come together very nicely because AI can actually take certain very complex problems in the world and recast them. They can be recast or reinterpreted into language or sub problems that human beings can actually understand, that we can wrestle with and provide answers. And then the AI can take those answers back and provide a better solution to whatever problem we happen to be wrestling with at that time.
Laurel: So speaking of some of those really difficult problems, climate change and sustainability are certainly among them. And you actually wrote, and here’s a quote from your piece: “Products need to improve in quality because an outmoded throw-away society is not acceptable in the long-term.” So you’re saying here that AI can help with those types of big societal problems too.
Mike: Yeah, exactly. This is exactly the kind of difficult problem that I was just talking about. For example, how many people get a new smartphone and, within a year or two, toss it to get a new one? And this is becoming part of just the way we live. And we are not going to have a sustainable society until we actually learn to build products, and products can be anything from a mobile phone to a building or large pieces of infrastructure, that survive long-term.
Now what happens in the long term? Generally, requirements change. The power of things changes. People’s reaction to that, again, like I just said, is to throw them away and create something new. But what if those things were amenable to change in some ways? What if they could be partially recreated halfway through their lifespan? What if a building that was built for one purpose, when it needed to be turned into a different kind of building, wasn’t destroyed, but was just tweaked slightly? Because when the designer first designed that building, there was a way to contemplate what all future users of that building could be. What are the patterns of those? And how could that building be designed in such a way to support those future uses?
So, solving that kind of design problem, solving a problem where you’re not just solving your current problem, but you’re trying to solve all the future problems in some ways is a very, very difficult problem. And it was the kind of problem I was talking about earlier on. We really need a computer to help you think through that. In design terms, this is what we call a systems problem because there’s multiple systems you need to think of, a system of time, a system of society, of economy, of all sorts of things around it you need to think through. And the only way to think through that is with an AI system or a computational system being your assistant through that process.
Laurel: I have to say that’s a bit mind-bending, to think about all the possible iterations of a building, or an aircraft carrier, or even a cell phone. But that sort of focus on sustainability certainly changes how products and skyscrapers and factory floors are designed. What else is possible with AI and machine learning when it comes to sustainability?
Mike: We normally tend to think along three axes. One of the key issues right now we’re all aware of is climate change, which is rooted in carbon. And many, many practices in the world involve the production of enormous amounts of carbon, or what we call retained carbon. So if you’re producing concrete, you’re producing extra carbon in the atmosphere. So we could begin to design buildings, or products, or whatever it might be, that either use less carbon in the production of the materials, or in the creation of the structures themselves, or in the best case, even use things that have negative carbon.
For example, using a large amount of timber in a building can actually reduce overall carbon usage because over the lifetime that tree was growing, it consumed carbon. It embodied carbon from the atmosphere into itself. And now you’ve used it. You’ve trapped it essentially inside the wood, and you’ve placed that into the building. You didn’t create new carbon as a result of producing the wood. Embodied energy is something else we think of too. In creating anything in the world, there is energy that is going to go into that. That energy might be driving a factory, but it could also be shipping products or raw materials across the world. You really want to be able to look at the entire lifecycle of producing something and ask yourself, “How can I produce this by using the least amount of energy throughout?” And you will have a lower impact on the planet.
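The lifecycle accounting Haley describes can be sketched as a simple sum of per-stage emissions. The function and the emission factors below are made-up placeholders for illustration only, not real data for concrete or timber; the negative production factor is just a rough way to model carbon stored in the wood as the tree grew.

```python
def embodied_carbon(mass_kg: float, factors_kg_co2_per_kg: dict[str, float]) -> float:
    """Total kg CO2e: structural mass times the sum of per-stage emission factors
    (material production, transport to site, construction). Illustrative only."""
    return mass_kg * sum(factors_kg_co2_per_kg.values())

concrete_frame = embodied_carbon(
    200_000, {"production": 0.15, "transport": 0.02, "construction": 0.01}
)
timber_frame = embodied_carbon(
    # Negative production factor: carbon sequestered in the wood while the tree grew.
    80_000, {"production": -0.90, "transport": 0.03, "construction": 0.01}
)
print(f"concrete frame: {concrete_frame:,.0f} kg CO2e")
print(f"timber frame:   {timber_frame:,.0f} kg CO2e")
```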
The final example is waste. This is a very significant area for AI to have an effect because waste in some ways is about a design that is not optimal. When you’re producing waste from something, it means there are pieces you don’t need. There’s material you don’t need. There’s something coming out of this which is obviously being discarded. It is often possible to use AI to evaluate those designs in such a way as to minimize that waste, and then also produce automations, like, for example, a robot saw that can cut wood for a building, or timber framing in a building, that knows the amount of wood you have. It knows where each piece is going to go. And it cuts the wood so that it produces as few offcuts to be thrown away as possible. Something like that can actually have a significant effect at the end of the day.
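At its core, the offcut-minimizing saw is a cutting-stock problem. Here is a hypothetical sketch using a first-fit-decreasing heuristic; it is a stand-in for the far more sophisticated solvers a real fabrication system would use, which would also account for blade kerf, grain, and defects.

```python
def plan_cuts(required_lengths: list[float], stock_length: float) -> list[list[float]]:
    """Pack required cuts onto stock boards, longest first, to reduce offcuts."""
    boards: list[list[float]] = []
    for length in sorted(required_lengths, reverse=True):
        for board in boards:
            if sum(board) + length <= stock_length:
                board.append(length)
                break
        else:
            boards.append([length])  # no existing board fits this cut; start a new one
    return boards

cuts = plan_cuts([2.4, 1.2, 0.8, 2.0, 1.6, 0.6], stock_length=3.6)
offcut = sum(3.6 - sum(board) for board in cuts)
print(cuts, f"total offcut: {offcut:.1f} m")
```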
Laurel: You mentioned earlier that AI could help with, for example, writing, and how folks write and their styles, etc. But also, understanding systems and how systems work is really important. So how could AI and ML be applied to education? And how does that affect students and teaching in general?
Mike: One of the areas that I’m very passionate about, where generative design and learning come together, is around a term that we’ve been playing around with for a while in all of this research, which is this idea of generative learning, which is learning for you, a little bit along the lines of some of the stuff we talked about before, where you’re almost looking at the human as part of a loop together with the computer. The computer understands what you’re trying to do. It’s learning more about how you compare to others, perhaps where you could improve in your own proficiencies. And then it’s guiding you in those directions. Perhaps it’s giving you challenges that specifically push you on those. Perhaps it’s giving you directions. Perhaps it’s connecting you with others who can actually help improve you.
Like I said, we think of that as sort of a generative learning. What you’re trying to optimize here is not a design, like what we talked about before, but your learning. We’re trying to optimize your skillset. Also, I think underlying a lot of this is a shift in paradigm. Up until fairly recently, computers were really just seen as big calculators. Right? Certainly in design, even in our software here at Autodesk. I mean, the software was typically used to explore a design or to document a design. The software wasn’t used to actually calculate every aspect of the design. It was used really in some ways as a very complex kind of drafting board, in some sense.
This is changing now with technologies like generative design, where you really are, like I talked about earlier, working in the loop with the computer. So the computer is suggesting things to you. It’s pushing you as a designer. And you as a designer are also somewhat of a curator now. You’re reacting to things that the computer is suggesting or providing to you. So embracing this paradigm early on in education, with the students coming into design and engineering today, is really, really important. I think that they have an opportunity to take the fields of design and engineering to entirely different levels as the result of being able to use these new capabilities effectively.
Laurel: Another place that this has to be also applied is the workplace. So employees and companies have to understand that the technology will also change the way that they work. So what tools are available to navigate our evolving workplace?
Mike: Automation can have a lot of unintended side effects in a workplace. So one of the first things any company has to do is really wrestle with that. You have to be very, very real about what the effect on your workforce is. If automation is going to be making decisions, what’s the risk that those decisions might be unfair or biased in some ways? One of the things that you have to understand is that this is not just a matter of plugging it in, switching it on, and everything is going to work. You have to involve your workforce right from the beginning in those decisions around automation. We see this in our own industry: the companies that are the most successful in adopting automation are the ones that are listening the most closely to their workforce at the same time.
It’s not that they’re not doing automation, but they’re actually rolling it out in a way that’s commensurate with the workforce, and there’s a certain amount of openness in that process. I think the other aspect that I like to look at from a changing work environment is the ability to focus our time as human beings on what really matters, and not have to deal with so much tedium in our lives. So much of our time using a computer is tedious. You’re trying to find the right application. You’re trying to get help on something. You’re trying to work around some little thing that you don’t understand in the software.
Those kinds of things are beginning to fall away with AI and automation. We’ve still got a fair way to go on that, but as we go further down the line, what it means is that creative people can spend more time being creative. They can focus on the essence of a problem. So if you’re an architect who is laying out desks in an office space, you’re probably not being paid to actually lay out every desk. You’re being paid to design a space. So what if you design the space and the computer actually helps with the physical desk layout? Because that’s a pretty simple thing to automate. I think there’s a really fundamental change in where people will be spending their time and how they’ll actually be spending it.
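As a toy version of the desk-layout automation Haley imagines, here is a hypothetical sketch that fills a rectangular room with desks on a regular grid while leaving a clearance aisle between them. The dimensions and clearance rule are invented assumptions; real space-planning tools handle irregular floor plates, circulation, daylight, and code requirements.

```python
def layout_desks(room_w: float, room_d: float,
                 desk_w: float = 1.6, desk_d: float = 0.8,
                 clearance: float = 1.2) -> list[tuple[float, float]]:
    """Place desk origins on a regular grid with a clearance aisle between rows and columns."""
    positions = []
    pitch_x, pitch_y = desk_w + clearance, desk_d + clearance
    x = 0.0
    while x + desk_w <= room_w:
        y = 0.0
        while y + desk_d <= room_d:
            positions.append((x, y))
            y += pitch_y
        x += pitch_x
    return positions

desks = layout_desks(20.0, 12.0)
print(f"{len(desks)} desks fit under these assumptions")
```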
Laurel: And that kind of comes back to a topic we just talked about, which is AI and ethics. How do companies embrace ethics with innovation in mind when they are thinking about these artificial intelligence opportunities?
Mike: This is something that’s incredibly important in all of our industries. We’re seeing awareness of this rise; obviously it’s there in popular society right now. But we’ve been looking at this for a while, and a couple of learnings I can give you straight off the bat. The first is that any company that’s dealing with automation and AI needs to ensure that it has support for an ethical approach right from the very top of the company, because the ethical decisions don’t just sit at the technical level, they sit at all levels of decision making. They’re going to be business decisions. They’re going to be market decisions. They’re going to be production decisions, investment decisions, and technology decisions. So you have to make sure that it’s understood within any corporate or industrial environment.
Next is that everybody within those organizations has to be aligned internally on: What does ethics actually mean? Ethics is a term that’s used pretty broadly. But when it actually gets down to doing something about it, and understanding whether you’re being successful at it, it’s very important to be quite precise about it. This brings me to the third point: once you’ve done that, and you now have an understanding of what ethics means to you, you need to make sure that you’re solving a concrete problem, because ethics can be a very, very fuzzy topic. You can do ethics washing very, very easily in an organization.
And if you don’t quickly address that and actually define a very specific problem, it will continue to be fuzzy, and it will never have the effect that you would like to see within a company. And the last thing I will say is you have to make it cultural. If you are not ensuring that ethical behavior is actually part of the cultural values of your organization, you’re never going to truly practice it. You can put in governance structures, you can put in software systems, you can put in all sorts of things that ensure a fairly high level of ethics. But you’ll never be certain that you’re really doing it unless it’s embedded deeply within the culture of actually how people behave within your organization.
Laurel: So when you take all of this together, what sorts of products or applications are you seeing in early development that we can expect or even look forward to in the next, say, three to five years?
Mike: There’s a number of things. The first category I like to think of is the raise-all-the-boats category, which means that we are beginning to see tools that just generally make everybody more efficient at what they do, so it’s similar to what I was talking about earlier on about the architect laying out desks. It could be a car designer who is designing a new car. In most of today’s cars, there’s a lot of electrical wiring. Today, the designer has to route every cable through that car and tell the software exactly where that cable goes. That’s not actually very germane to the core design of the car, but it’s a necessary evil to specify the car. That can be automated.
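In spirit, routing a cable automatically is a pathfinding problem. Here is a hypothetical sketch that finds a route through a 2D grid with breadth-first search, treating '#' cells as space occupied by other components; real harness-routing tools work in 3D and respect bend radii, bundle diameters, and thermal and serviceability constraints.

```python
from collections import deque

def route_cable(grid: list[str], start: tuple[int, int], goal: tuple[int, int]):
    """Shortest path on a grid via breadth-first search; '#' cells are blocked."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:  # walk the predecessor chain back to the start
            path, node = [], (r, c)
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # no feasible route

print(route_cable(["....", ".##.", "...."], (0, 0), (2, 3)))
```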
We’re beginning to see these fairly simple automations become available to all designers, all engineers, that just allow them to be a little bit more efficient, allow them to be a little bit more precise without any extra effort, so I like to think of that as the raise-all-the-boats kind of feature. The next thing, which we touched on earlier in the session, was the sustainability of solutions. It turns out that most of the key decisions that affect the sustainability of a product, or a building, or really anything, happen in the earliest stages of the design. They really happen in this very conceptual phase when you’re imagining what you’re going to create. So if you can begin to put features into software, into decision-making systems early on, they can guide designers toward more sustainable solutions by affecting them at this early stage. That’s the next thing I think we’re going to see.
The other thing I’m seeing, and it appears quite a lot already (this is not just true in AI, but generally true in the digital space), is the emergence of platforms and very flexible tools that shape themselves to the needs of the users. When I was first using a lot of software, as I’m sure many of us remember, you had one product. It always did a very specific thing, and it was the same for whoever used it. That era is ending, and we’re now seeing tools that are highly customizable, perhaps even automatically reconfiguring themselves as they understand more about what you need from them. If they understand more about what your job truly is, they will adjust to that. So I think that’s the other thing we’re seeing.
The final thing I’ll mention is that over the next three to five years, we’re going to see more about the breaking down of the barrier between digital and physical. Artificial intelligence has the ability to interpret the world around us. It can use sensors. Perhaps it’s microphones, perhaps it’s cameras, or perhaps it’s more complicated sensors like strain sensors inside concrete, or stress sensors on a bridge, or even understanding the ways humans are behaving in a space. AI can actually use all of those sensors to start interpreting them and create an understanding, a more nuanced understanding of what’s going on in that environment. This was very difficult, even 10 years ago. It was very, very difficult to create computer algorithms that could do those sorts of things.
So if you take for example something like human behavior, we can actually start creating buildings where the buildings actually understand how humans behave in that building. They can understand how they change the air conditioning during the day, and the temperature of the building. How do people feel inside the building? Where do people congregate? How does it flow? What is the timing of usage of that building? If you can begin to understand all of that and actually pull it together, it means the next building you create, or even improvements to the current building can be better because the system now understands more about: How is that building actually being used? There’s a digital understanding of this.
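As a small illustration of turning raw building-sensor data into the kind of usage understanding Haley describes, here is a hypothetical sketch that averages occupancy readings by hour of day. The readings, their format, and the hourly granularity are assumptions; a real system would fuse many sensor types and far richer context.

```python
from collections import defaultdict
from statistics import mean

def occupancy_profile(readings: list[tuple[int, int]]) -> dict[int, float]:
    """Average occupancy per hour of day from (hour, people_count) sensor readings."""
    by_hour: dict[int, list[int]] = defaultdict(list)
    for hour, count in readings:
        by_hour[hour].append(count)
    return {hour: mean(counts) for hour, counts in sorted(by_hour.items())}

# A profile like this could inform HVAC schedules today and the next building's design.
sample = [(9, 40), (9, 35), (12, 80), (12, 75), (18, 10)]
print(occupancy_profile(sample))  # {9: 37.5, 12: 77.5, 18: 10}
```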
This is not just limited to buildings, of course. This could be literally any product out there. And this is the consequence of bringing the digital and physical together: it creates this feedback loop between what gets created in the world and what is about to be created next time. And the digital understanding of that can constantly improve those outcomes.
Laurel: That’s an amazing outlook. Mike, thank you so much for joining us today on what’s been a fantastic conversation on The Business Lab.
Mike: You’re very welcome, Laurel. It was super fun. Thank you.
Laurel: That was Mike Haley, vice president of research at Autodesk, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.
This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.
This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.