AI strategist Tatjana Dzambazova talks about machine learning, data and the future of construction
To support the drive towards gender balance in the industry, Middle East Consultant and meconstructionnews.com are highlighting female construction professionals in a series of profiles. By telling their stories and sharing their experiences and knowledge on our print and digital platforms, we hope to inspire more women to join this vibrant industry.
Here, Jason Saundalkar talks to Tatjana Dzambazova, AI strategist, Office of the CTO at Autodesk, about machine learning, data and the future of construction…
Autodesk University Middle East 2018 was held in Dubai earlier this year under the theme of ‘The Future of Making Things’. Over two days, the event served as a business networking platform where attendees shared technical knowledge, discussed cross-industry opportunities and sought solutions to unique business challenges.
Several global industry veterans were in attendance, including Tatjana Dzambazova, AI strategist, Office of the CTO at Autodesk. Dzambazova is a 14-year Autodesk veteran with over a decade of experience in architecture and design in both Vienna and London. Today, she focuses on sharing stories about technology that empower professionals, while simultaneously leading product development with a view to converting technology into tools that a wider range of people can use. She also leads the development of Autodesk Memento and looks to empower professionals through the digitisation of captured reality. Here, Jason Saundalkar talks to her about the tools that could transform the way projects are designed and executed.
Data is at the heart of machine learning systems. How do you source the data you need? Are companies open to sharing data, or is that a challenge to get signed off?
Innovations and new technology will always have an uncomfortable adjustment period and make people feel concerned. The BIM 360 Project IQ example I talked about during AU Middle East uses the data of customers who volunteered to participate in the tool's beta programme, because they realised that to get a risk prediction tool working well, you need a lot of examples and data – far more than they would have from their own projects.
Project IQ uses machine learning to automatically identify construction quality and safety issues that pose the biggest risk to a project at any given time. That enables teams to act quickly, prevent catastrophes, and avoid downstream problems that create cost issues and schedule delays.
As a company, you only have so many examples; so even if you have smart IT people who know how to develop deep neural networks, you can only really learn from the examples you have access to. In the case of my colleagues on BIM 360 Project IQ, they realised that customers have logged 27 million issues that could be learned from, and that could teach a computer system to recognise patterns and classifications.
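As a rough illustration of what "learning patterns and classifications" from logged issues means, here is a minimal word-count classifier in Python – a toy sketch only, nothing like the deep neural networks Project IQ actually uses; all issue texts and labels here are invented:

```python
from collections import Counter, defaultdict

# Toy sketch: learn issue-category vocabulary from labelled examples,
# then classify new issue text by which category's vocabulary it
# matches best. Illustrative only - real risk models are far richer.

def train(examples):
    """examples: list of (issue_text, label) pairs."""
    word_counts = defaultdict(Counter)
    for text, label in examples:
        word_counts[label].update(text.lower().split())
    return word_counts

def classify(model, text):
    """Pick the label whose learned vocabulary best matches the text."""
    words = text.lower().split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in model.items()}
    return max(scores, key=scores.get)

model = train([
    ("water leak in basement wall", "quality"),
    ("cracked concrete slab rework needed", "quality"),
    ("worker without hard hat on scaffold", "safety"),
    ("missing guard rail near floor opening", "safety"),
])
print(classify(model, "leak found in concrete wall"))    # -> quality
print(classify(model, "no hard hat near the scaffold"))  # -> safety
```

The point is only that, with enough labelled examples, even a crude model starts recognising which issues belong to which category – at 27 million examples, far subtler patterns become learnable.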
We explained to customers what's possible with this technology and, one by one, they signed up and shared 3D models, data and so on. Contracts are involved, obviously, but it's basically customers volunteering their data – not just the models, but the issues and all the data of the ecosystem. They understood that if they shared their data and everyone else did the same, the system's models would become smarter, meaning they could better predict quality, safety, schedule and cost risks; the money they save and the overall value is so much more important than keeping the data private. The world is moving towards openness, and there's a choice: stay within the limits of your organisation and knowledge, or participate and enjoy the benefits of world knowledge.
The other thing to remember with systems like this is that nobody else can see your models or the actual data itself. Algorithms learn from it, finding patterns and relationships, and understanding the construction context and what can possibly go wrong. Everyone can benefit from the shared learnings, but only you can see your data. Autodesk doesn't see the original data and no customer sees another customer's data. What everyone sees and benefits from is what the system learned from that data.
It’s true that machine learning, unlike other technologies, will require regulations and different types of contracts. In that regard, we want absolute transparency and to be clear about what we do and don’t do. I think data openness is what will bring us to a completely new and smarter way of working, one that is data driven, goal driven and informative. You’ll eventually have this extremely smart assistant that will advise you before something happens, rather than after.
So, getting real customer data shared explicitly by the customer is one data acquisition avenue. Another avenue for getting data is simulation. For certain types of problems, we can also use the powerful simulation tool that we at Autodesk have to generate data that serves as an input into machine learning systems.
Some companies may not want to share data because it may involve sharing mistakes they have made. How do you make your case to these customers?
For customers who have had issues with their projects, I think it's even easier to make the case to start participating and leveraging the intelligence. With this technology, they will be able to catch things before they happen; predict safety, quality and scheduling risks and, most importantly, predict cost issues. Cost is a magic word for construction companies; working on projects is so unpredictable (in the construction industry, the ratio of predictable to unpredictable issues is 1:4, the absolute opposite of the ratio in manufacturing) and things do go wrong. So, at the end of the day, if there's a tool that advises you about what might happen, and that's tied in with historic costs, they will know exactly how much money they can save if they address issues before they actually happen.
But again, remember: we pseudonymise the data, so there is no relationship between a customer and their data and, more importantly, no one sees the actual data. The concern that a customer might be judged for their mistakes simply doesn't arise.
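The keyed-hash idea behind this kind of pseudonymisation can be sketched in a few lines of Python. This is an assumption about one common approach, not necessarily the scheme Autodesk uses; `SECRET_KEY` and the customer names are hypothetical:

```python
import hmac
import hashlib

# Minimal pseudonymisation sketch: customer identifiers are replaced
# by keyed hashes, so records from the same customer can still be
# grouped and learned from, but cannot be traced back to a customer
# without the secret key, which only the data custodian holds.

SECRET_KEY = b"held-by-the-data-custodian-only"  # hypothetical key

def pseudonymise(customer_id: str) -> str:
    digest = hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# The same customer always maps to the same stable token, so patterns
# survive, but the token itself reveals nothing about who they are:
record = {"customer": pseudonymise("Acme Contracting"),
          "issue": "rebar spacing non-conformance",
          "severity": "high"}
print(record["customer"])
```

Using a keyed HMAC rather than a plain hash matters: without the key, an attacker cannot simply hash a list of known company names and match the tokens.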
We are trying to create a smart system to address the chronic inefficiency of an industry that is high risk and low margin. According to some studies, large projects across asset classes typically take 20% longer to finish than scheduled and run up to 80% over budget, which makes this a really appealing challenge to solve. 60% of construction projects fail to meet cost and schedule targets, 30% of all construction cost is rework, and construction accounts for 40% of all waste. A lot of challenges!
On a more general level, with AI and machine learning integrated into our tools, what we are trying to do for our customers is create a knowledge graph of all things from design to make. There will be specialised graphs; BIM 360 IQ, for example, will be the construction knowledge graph. It will pull in all types of data (text, image, voice, sensor data) from whatever other sources the customer has, since many elements that are crucial to the success of projects are not currently part of the BIM model but are still important. If our system can access all that data and start finding patterns and understanding context, you can then connect the risks with the cost, schedule, safety and quality changes. Customers can gain knowledge about what makes buildings a success or not, and what makes an issue tragic or not.
Tell us about Autodesk’s Forge Platform and some of the things happening there.
Autodesk is moving in a direction where we understand that we cannot build all the solutions in the world. By creating a platform with an open API, the many companies – small or big, customers or third parties – with the expertise, curiosity or knowledge to build solutions are given a chance to build new, custom or specialised solutions. Now, data is shared and can be mixed, matched and learned from. If you remember one thing about Forge, it's that it's about data. Data is at the centre – data in various file formats – and in the future the incompatibility of various file formats will be solved, all through this new data-powered, data-centric platform.
SmartVid.io, for example, works with Autodesk BIM 360 and the Forge Platform to take real-life footage from a construction site, find an issue and then come back with risk predictions. All construction projects have a large number of photos and videos taken every step of the way. SmartVid.io has developed a smart photo and video management platform that uses computer vision and deep learning to tell you important things about your project, like people on the job site not wearing hard hats or safety glasses.
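The post-processing step of such a system – turning per-person detections into safety flags – might look like this sketch. The data and function names are hypothetical; real products like SmartVid.io run deep-learning detectors on actual site imagery rather than hard-coded dictionaries:

```python
# Illustrative sketch: given per-person detections from a vision model
# (hard-coded here for the example), flag anyone missing required
# protective equipment in a frame.

REQUIRED_GEAR = {"hard_hat", "safety_glasses"}

def flag_violations(detections):
    """detections: list of {'person_id': ..., 'gear': set of detected items}.
    Returns the ids of people missing any required gear."""
    return [d["person_id"] for d in detections
            if not REQUIRED_GEAR <= d["gear"]]

frame = [
    {"person_id": "p1", "gear": {"hard_hat", "safety_glasses", "hi_vis"}},
    {"person_id": "p2", "gear": {"safety_glasses"}},  # no hard hat
    {"person_id": "p3", "gear": set()},               # nothing detected
]
print(flag_violations(frame))  # -> ['p2', 'p3']
```

The hard part, of course, is the detector itself; the business logic on top of it, as here, is comparatively simple.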
Others are using VR and AR empowered with ML; they're bringing the BIM model to the construction site and overlaying it in a mixed reality experience. When we talk about AR and VR, it's important to say that yes, of course it helps you visualise things in context, but what's really interesting is that while we're augmenting the construction worker on site and helping him out with additional contextual data, the system is also capturing what workers do in certain situations. That data can train the neural network to say: ‘When the worker saw a wet corner, he logged an issue’. Here, two types of information become connected with geolocation information, so you have much richer data.
AR and VR are not just about looking through the goggles and getting information; they're also about capturing the activity of the user so it can then be used to train machine learning systems. I think that's interesting because no one has ever really captured that – we certainly never captured what somebody did and why, so we repeat the same things all over again.
To recap: construction sites must get digitised and, to do that, we must build tools that improve project outcomes by improving construction processes and the flow of project information.
What are your thoughts on modules and pods? How do you see this progressing, and how fast do you think the industry will be able to scale up to larger sizes?
The AEC industry is starting to learn from and apply manufacturing methodology to the process of building buildings.
Automating the production of building components, or of the construction site itself, is inevitable if we want to achieve the volume of buildings we need for the ever-growing urban population.

One aspect of this is building in factories, producing modular, multidiscipline modules/pods that are then assembled on site. The other is full-scale robotic automation on site.
The industry began with modular construction a while ago, producing things like toilet blocks or kitchen units in a controlled factory environment. It is now moving towards building fully equipped modules, with furniture, MEP and other elements, in the factory. This method is much more efficient, but it will require a new approach to designing buildings: DfMA (design for manufacture and assembly – designing with manufacturing constraints and methods in mind).
As far as scaling up, I don’t think the industry is there just yet. More and more, I’m seeing pods being used around the world, especially in places like Europe. In Sweden they’re building hospitals very quickly using pods because sometimes the conditions are harsh and sometimes it’s just because it’s more precise and predictable to build this way.
At the moment, the limitations – at least in Sweden – come from the size of the truck that can carry the pod. In some countries, they use ships to carry the pods. Generally speaking, the limiting factors for pod-based construction are the size of the transportation vehicle, how far away the site is and how much weight the cranes can handle safely.
Building with prefab components or pods is not applicable to all projects, however. The alternative and next step is full robotic automation on the construction site itself. Robots will be doing any type of repetitive, dangerous or laborious work on site, while humans simply monitor and guide.
Given what you’ve said, is additive building/3D printing or on-site construction more likely to take the lead in the construction industry?
This is very interesting, because they are two different and competing trends that may yet come together. On one side you have 3D printing buildings on-site, and on the other you have building parts of buildings in a factory, off-site. The former is still heavily dependent on weather conditions and on large-scale robotics that have not yet been invented; however, you can make the system mobile – take the robot, put it in a clean space and let it work. Now you've made the factory mobile, and that's a combination of both things. Robotics can be used for large-scale additive construction, so I think all these things will come together eventually.
What are your thoughts on using 3D printing/additive building for MEP?
I can see certain things being printed, but mostly when it comes to complex geometries and unusual, bespoke situations. Personally, for pipes, channels, sewage and things like that, I don’t think it will actually be faster or better than what we have now. I think the question should always be: what can I do with additive or 3D printing that I cannot do otherwise? The second question should then be: is additive in any way better/faster/cheaper than the previous method? Those are the only reasons you would do it; otherwise you’re doing technology for technology’s sake, and that’s not why we are here.
Going back to your presentation, I noticed a lot of skeletonisation in the examples you used. Materials look like a very important element of the future of making things, be they structures or parts…
Material optimisation is great for the planet – we can use less raw material, less embedded energy to produce the materials and less waste, and it can make structures lighter and stronger. Weight is a challenge in every industry, not just aerospace and automotive. But in the construction industry, I hope to see the emergence of new materials that might change the game altogether. There is experimentation with concrete mixed with graphene, which makes it so much stronger – then you can make walls that are much thinner yet better performing.
There are also experiments with self-healing materials. This is very interesting, especially when combined with additive methods, where you can not only print with smarter materials but also print sensors, optical fibres and other equipment as part of the building components – making buildings speak, breathe, be responsive, be alive.
We discussed the learning process for modern tools. In your experience, how quickly do people adapt to these new tools?
This relationship between the growing sophistication of the tools we make and use and the growing complexity of actually learning those tools is important, which is why I included it in my presentation. When we started Revit, almost everybody was on AutoCAD and maybe 10% were on Architectural Desktop, which is what I was teaching before. From teaching people Revit, I can tell you that it was very quick to teach to somebody who understands how buildings are built. If, however, you were a CAD drafter who was there just to make drawings and had no idea how buildings work, that's a different story. But in comparison with the tools of today, Revit was much, much easier, because it functioned as if you were building in real life.
The next stage of computational design has a level of abstraction that separates you from the tool – you need to learn much more, and you even need a different type of learning. As tools get smarter, the gap between us and these sophisticated tools will grow. You'll have fewer and fewer people who can access and use tools that can really only be handled by experts. We need a way to change this; sophisticated tools should not be accessible only to educated, smart experts. We can do better than that!
We all started in the Stone Age with tools that everyone could use after minutes of observation, but now, because the tools are so sophisticated, there's a complexity of learning that is an impediment to adoption and use. This observation, however, is what makes us most excited about machine learning and AI (generative design being part of the AI approach), because the tools that used to do the doing, while we did the thinking, are now taking on part of the thinking and reducing the gap between them and us.
In the future, you'll be able to design things just by speaking to the computer – there will be no need to learn complex geometry or maths paradigms, or to program. We will interact with the tools the same way we interact with an assistant, only these assistants will know more, understand context, know us and keep in the background all the technology we won't need to learn anymore.
I think that's one thing we really want to emphasise: we believe we will bring smart tools to many more people, which will also let us tackle the fear people have of losing jobs to machines. Today, you cannot even get some of those design or factory jobs, because they require a certain level of expertise.
So, in fact, things like this can create more jobs. I think it was Apple's Tim Cook who said something like: "The problem is not that we will lose jobs to the robots; the problem is that people are doing jobs that robots can do." If you think about factory jobs and the people working on products there, you've probably never considered it from their point of view, because you're not in that situation. On a wider scale too, very few people question that. When you do think about it, you really just have to ask yourself: do I want to spend my life in a factory doing this?
That is something that we must change, but of course we must plan for people who are doing those jobs. I believe that those same people can be trained and educated to look at a robot and make sure that what it’s doing is correct, rather than allowing it to make a mistake. Additionally, those robots will need inspections and maintenance, etc – those are all potential job opportunities in the future.
With generative design for buildings and using a computer system to create models, is there a danger that structures will ultimately all look the same, because the computer is creating models using certain parameters (cost, efficiency, etc)? I see this to some degree in the automotive industry – cars increasingly look similar to each other as brands pursue fuel efficiency and make their products safer.
You're right, a lot of cars look the same these days. I think I bought a VW Beetle because it's the only car that doesn't look like anything else. You might be correct that this may happen, but I think it will only happen if you leave the computer to its own devices and treat the tools not as tools but as the end product.
I personally think that generative design is just the beginning of design, not the final design. It's a tool that helps me focus on what I want something to be and do, rather than how it should look. It helps me explore in a much more informed way, understand trade-offs, and it gives me all the options outside the limitations of my knowledge and bias. But at the end of the day, in my humble opinion, that's just the beginning – an informed, data-driven, rich exploration.
And we will push that concept further and enable personal creative control. We want you to be able to say: No, I’d like to go in this direction now and be smart about it. From there, the system gives you another 10 options and you can then say: From those 10 options, I’d like to combine this and that, and now I want to start going in that direction. Generative is the beginning of things, and I think it’s an exciting methodology for working. It should not be viewed as the final solution for a design.
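The generate-evaluate-steer loop described above can be sketched as a generic generative-design pattern. This is a toy beam-design example with an invented objective, not Autodesk's actual algorithm; all parameter names and ranges are made up:

```python
import random

# Toy generative-design loop: generate candidate designs, score them
# against a stated goal, then let the designer narrow the search
# direction for the next round - rather than treating any single
# result as the final answer.

random.seed(0)  # reproducible exploration

def generate_options(n, span_range, depth_range):
    """Generate n candidate beam designs within the given ranges."""
    return [{"span": random.uniform(*span_range),
             "depth": random.uniform(*depth_range)} for _ in range(n)]

def score(design):
    """Invented objective: prefer long spans with shallow beams."""
    return design["span"] - 2.0 * design["depth"]

# Round 1: broad exploration of the design space.
options = generate_options(10, (5.0, 20.0), (0.2, 1.0))
best = max(options, key=score)

# Round 2: the designer steers - search near the preferred option,
# with beam depth capped at what that option already achieved.
refined = generate_options(10, (best["span"] - 1.0, best["span"] + 1.0),
                               (0.2, best["depth"]))
print(max(refined, key=score))
```

The essential point is the human in the loop after each round: the scores inform the choice, but the designer decides which direction to explore next.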
At Autodesk, how do you decide which projects go ahead?
At Autodesk Research, we look at various technological developments and try to predict what they might mean for the future. We research, test, prototype, validate and try to find out if something is worth pursuing. In some cases, things that we thought were good ideas end up not going anywhere, so we shelve them. In other cases, like generative design, we researched, probed and withstood the non-believers, and it turned out to be a valid new paradigm, so we pursued it.
But it’s important to experiment, test, explore, probe visions and think about the future. As a company, we can’t afford not to experiment with the unknown, and I think it’s the same with our customers. You cannot bet on the safe horse all the time; you have to bet on some newcomers, or else there won’t be any innovation.