A Blueprint For The Future: Navigating AI’s Role In Revolutionizing Construction With Daniel Hewson | Ep. 271

Construction Genius | Daniel Hewson | AI In Construction

 

As we enter an age of rapid digital transformation, we can no longer keep AI out of the way we do business, and construction is no exception. How do we navigate this AI revolution? In this episode, Eric Anderton is with Daniel Hewson, the data capability manager at Elecosoft, where he oversees the development of overall data and AI strategy. Daniel shares his expertise to help us understand how AI is impacting construction, shedding light on the misconceptions around this technology. He also talks about how we can use AI better in business and what its limitations are in terms of the quality and accuracy of its output. Plus, Daniel offers a guide for dealing with AI vendors, partnering with technology companies, and more. Join in on this timely conversation and find yourself equipped with a blueprint for the future as you embrace the inevitability of AI in this industry.


A Blueprint For The Future: Navigating AI’s Role In Revolutionizing Construction With Daniel Hewson

Welcome to the show. As you know, from time to time, I like to have folks on the show talking about the various technologies that are impacting construction. A technology that is impacting construction and many other industries is AI, or Artificial Intelligence. What is AI? How is it impacting construction specifically? What are its limits? What are the misconceptions associated with AI? How should you go about figuring out how to use AI in your business?

These are all questions that I ask my guest, Daniel Hewson. Daniel is the Data Capability Manager at Elecosoft. He oversees the development of overall data and AI strategy and focuses on discovering how AI can be leveraged to improve project planning and to identify inherent project risk. The thing I like about Daniel is that he is a realist. We talk very much about the limitations of AI. One of them being the fact that your AI is only going to be as good as the quality of information that you feed it.

Another limitation, which is very interesting and something we have to embrace when using AI models, whether ChatGPT or other types of models, is that the output you get from an AI is not going to be 100% accurate all the time, and you have to accept that. A hundred percent accuracy is something you're not going to get with these models because they're not designed for that. We talk about that in some great detail.

We also talk about some key questions that you need to ask AI vendors, whatever they're vending to you, when they're knocking on your door telling you, "Here's the AI easy button that's going to change your construction company." We also discuss the importance of construction companies forming partnerships with technology companies in order to, over time, exploit some of these technologies to help build projects more effectively and productively. It's a very practical and straightforward conversation that gives us a good overview of AI as it pertains to construction. Feel free to share this with other people in your organization and other people you know. Please give us a rating or a review. As always, thank you for reading the show.

Daniel, welcome to the show.

Thanks for having me.

I've had a number of guests on and we've been diving into some of the new technology that's coming out, particularly around AI. I want to continue to dive into that and tap your expertise for my audience. What do you mean by AI?

I'll start by giving the technical definition. Artificial Intelligence is a very broad area. Formally, it's defined as any system that acts in an intelligent way. This could mean basically anything. Technically, using the formal definition, you could argue that an Excel spreadsheet is acting in an intelligent way and is therefore AI, which isn't what people mean.

When I'm talking about AI, usually what I mean is machine learning. That's where a piece of software or a process has learned how to do what it's doing from real-world data examples. It's something that hasn't been programmed with explicit rules; instead, data has been gathered. Say you want to try to identify failure on a construction site.

You have examples of pylons that have been failing. You go and take photos of them and classify them into things that are not failing and things that are. An algorithm looks at these photos and learns what it needs to look for to detect that failure. No one has explicitly told it the rules, and that's quite liberating. It means we can move from problems where we understand how to code something up or tell a computer how to do it, to going, "I'm going to show you lots of examples of how to do this. You're going to figure out, hopefully, how to solve this problem for me."

That's a neat way of describing the difference. What are the limitations or the challenges that your typical AI comes across in this machine learning environment when it comes to being useful?

A few challenges there. The first is not just the AI, but the actual process around it. When you're moving towards an AI-driven approach, you're moving into a world where things are not guaranteed. You're going to have what we call true positives, where you think the AI should tell you something and it does, but you're also going to have false negatives and false positives. That is to say, it's going to tell you an answer, but that answer is going to be wrong.

When you're moving towards an AI-driven approach, you're moving into a world where things are not guaranteed.

This means you have to approach it in a way where you go, "I'm going to use this technology. It's going to enable me to automate some process, some bit of work that I'm doing." It's not going to be 100% right, which isn't necessarily the end of the world. It's about asking, "What processes do I need to put around this so that it can tell me a thing?" Maybe it'll be wrong 5% of the time or 10% of the time, but it'll have quirks.
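Daniel's breakdown of true positives, false positives, and false negatives can be made concrete with a small sketch. The pylon labels and predictions below are made-up example data, not from any real tool:

```python
# Tally an AI tool's predictions against known ground truth.
# Labels: 1 = failing pylon, 0 = sound pylon (made-up example data).
def tally_outcomes(actual, predicted):
    counts = {"tp": 0, "fp": 0, "fn": 0, "tn": 0}
    for a, p in zip(actual, predicted):
        if a == 1 and p == 1:
            counts["tp"] += 1  # true positive: flagged and really failing
        elif a == 0 and p == 1:
            counts["fp"] += 1  # false positive: flagged but actually fine
        elif a == 1 and p == 0:
            counts["fn"] += 1  # false negative: missed a real failure
        else:
            counts["tn"] += 1  # true negative: correctly left alone
    return counts

actual = [1, 0, 1, 1, 0, 0, 0, 1]
predicted = [1, 0, 0, 1, 1, 0, 0, 1]
counts = tally_outcomes(actual, predicted)
error_rate = (counts["fp"] + counts["fn"]) / len(actual)  # 0.25 here
```

The point of the tally is exactly what Daniel describes: the process around the AI has to budget for that non-zero error rate rather than assume it away.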

If anyone's used ChatGPT, you'll see that it gives a good answer, then it goes off the rails and tells you something where you're like, "No, that doesn't make sense." It's a case of going, "This technology automates something for me, but it has issues and it will give me erroneous results." I need to shift to a process where I go, "This is going to automate something, but it's slightly unreliable. I need a way of validating and checking it, and to only use it where it makes sense that maybe some things will slip through the cracks, or where I have an ironclad way of making sure they don't."

Not for the first time, but I can clearly articulate this now. I have in mind like an Excel spreadsheet where I’m calculating numbers and I do the little drag down thing. I hit the button and I know my number is correct. What you articulated there is if we’re going to use AI effectively, we have to have a fundamental mindset shift because we’re so used to interacting with software and everything being bulletproof that comes out of it. With AI, it’s a different mindset that I have to take.

When it comes to identifying and doing your root cause analysis of, "This went wrong. Why did it go wrong?" In your example, you look at the spreadsheet and you go, "There's a case where I needed an if statement. There was this one case where I was calculating freight on something, and it was international versus local, and we didn't capture that." You can come away with a very clearly articulated reason of, "There was this error in the code or the system I was using. Here's what we need to do about it."

With AI, you still have things you can do about it, but tracking it back isn't as easy, because they're what's known, not always but often, as black boxes. Imagine something goes wrong and you look back. What you might find in an AI example is, "It never saw examples of this. Therefore, it never had a chance to learn how to do it." What you can do is try to give it those examples, but sometimes it's going to pick that up and sometimes it's not. You're not going to get as much of an ironclad "this is why, and we fixed it for sure." It's going to be, "We're going to take these steps to attempt to remediate it, but it may not work 100% of the time."

It's so interesting because whenever you don't get something out of a, so to speak, traditional software program, you immediately assume operator error. With AI, that is not the case. This is interesting because if you can have that mindset shift and embrace or accept the limitations of AI, then you can be more effective in using AI for what it's good for.

It's shifting away from what some people say: "This is going to learn to do all these amazing things." Sometimes that's true, but usually it's, "We're moving to a case where it's allowing us to solve problems where I don't know exactly how to tell it to solve the problem, but I need to be able to validate that it did solve the problem."

What is the role of the expertise of the user? I suck at math, but if I know I put the right input into Excel, then I'm going to get the right answer. I'm not an expert, but I can get expert answers and I can know that they're correct. What is the role of expertise in effectively using AI?

There are two key roles. One is around developing it. There are two mindsets when it comes to developing these newfangled AI applications. One is, "We'll give it lots of data and it will discover how to do these things." That's one approach, and it's admittedly quite successful in a lot of areas. Language models are born out of this. The other is saying, "We've got these people that are experts in this subject matter, and there are certain things they look for."

We're going to build into the model the idea of looking at these important data features. Imagine that you gather some information. An example I had was talking to a company that was looking at identifying failure of equipment. One thing about that organization is that it had inland operations and marine operations.

Now, the marine ones corroded a lot more, so they failed more. They had location data, saying, "This is located in this position," maybe from GPS tags, but it didn't have that categorical bit of information of inland versus marine. What the experts did was go, "You need to know whether it's in a marine environment." They transformed that data from GPS location data into where it's located, and that can feed a model. That's one role, in terms of development.
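The transformation the experts did can be sketched in a few lines. This is a minimal illustration: in practice the distance to the coast would come from a GIS lookup, and the 5 km cut-off, field names, and asset records here are all assumptions for the demo:

```python
# Turn raw location data into the categorical "environment" feature the
# domain experts identified as important. Each record carries a
# precomputed distance-to-coast value so the sketch stays self-contained.
MARINE_THRESHOLD_KM = 5.0  # assumed cut-off for "marine environment"

def add_environment_feature(records):
    for record in records:
        near_coast = record["distance_to_coast_km"] <= MARINE_THRESHOLD_KM
        record["environment"] = "marine" if near_coast else "inland"
    return records

assets = [
    {"id": "pylon-01", "distance_to_coast_km": 1.2},    # coastal site
    {"id": "pylon-02", "distance_to_coast_km": 240.0},  # inland site
]
add_environment_feature(assets)
```

The derived `environment` column is what actually feeds the failure model; the raw GPS coordinates on their own would not carry the corrosion signal the experts knew mattered.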

The second is in terms of utilizing the AI. It allows you to broaden the base of whoever's trying to do this application. It supercharges your experts and makes them more productive. Again, they've shifted from "I have to go and do this process" to "I have to let some tool do the process," whatever it might be.

It might be gathering leads, getting quotes, or doing takeoffs from a drawing, and going, "I'm not going to have to do that manual process anymore. I'm going to look at the results of it and validate it." Experts are critical for a couple of reasons. One, they're able to check whether the output is correct or incorrect. Two, when it comes to liaising with whoever your company's AI provider is for that tool, they can say, "This is wrong."

As an expert, you're probably going to have some ideas about why it's not getting something. If we're talking about takeoffs, it could be that the way something's laid out is always a bit different, because maybe the subcontractor in that domain does things a bit differently. Experts are able to liaise and help explain the process of how they do it, and that can often help the development.

The expertise then is in play when you are interpreting the information the AI gives you. How about the expertise in play in terms of prompting the AI initially so that you get some meaningful results from it?

A good example of that is around language models, so ChatGPT. You need two bits of experience there. One, you need domain experience: "What does good language look like around the question I'm asking?" One of the things these models do is pattern match to where they've probably seen examples of this text before. If you ask in a very open-ended way, maybe as a child would ask, it's going to give you a childish answer back.

If you ask in a very formal and technical way, it will pattern match back to technical literature it has seen. You'll often find, and they've done research on this, that if you're asking something like ChatGPT questions, you want to match the tone of the answer that you want. If you write the question the way a formal paper would ask it, you're going to get a very formalized answer back. It's going to try to match back to that. That's where the second expertise comes in: understanding a little bit about how these systems work.

AI In Construction: If you’re asking something like ChatGPT questions, you want to very much match the tone of the answer that you want.

 

Once you understand that ChatGPT is a language model, what it's doing is trying to pattern match bits of text it has seen before. Therefore, if I want a formal, detailed answer because we're trying to drill out some technical information, I'm going to ask it in that way and it's going to give that back to me. On the other side, if I'm trying to use it for marketing or trying to spice up a report, again, understanding how it works, you can shift the language of your prompt so you get a more marketing-oriented response.

You know how these things go: the hype comes and AI is going to change everything. What exactly, in terms of construction, are the areas where AI is going to have, or is already having, a noticeable impact?

There are probably 3 or 4 areas where it's going to have a big impact. Some of the examples are around site progress and site monitoring. A lot of companies are looking at ideas like, "We can put cameras on site and different field devices. We can have drones do walkthroughs and capture camera footage." That can be used in a couple of ways. It can be used to work out where we are in terms of site progress, and it can be used to identify potential safety hazards.

That's one area where we've got a lot of data. We're gathering all this information, all this video and camera footage from site, and AI is able to look at that and process it to get lots of insights for people. That's an interesting application. It's still very much early days; this isn't a simple problem and it's not going to be solved quickly, but it's starting to generate some momentum. Another one, which is what I've been looking at, is around the planning space: the idea that you go away and develop a plan for how you're going to execute a project.

The idea is to use tools that will look at your project plan and say, "We think you're overestimating or underestimating how long it might take to do certain activities." As humans, we tend to be very optimistic about how long things should take. There are ideas around using artificial intelligence to look at previous projects very similar to the one you're working on, pull out how long those activities took historically, and use that information. Maybe you have a good reason for saying that a scope of work is only going to take a certain number of months when historically it took you three months; the tool can then flag, "This is probably going to delay your project." We're seeing it very much around the planning space as well.
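The historical-comparison idea can be sketched simply: compare each planned duration against the median of similar past activities and flag the optimistic ones. The activity names, durations, and the 0.8 tolerance below are illustrative assumptions, not the logic of any real planning tool:

```python
# Flag schedule activities whose planned duration looks optimistic
# against the historical record of similar activities.
from statistics import median

def flag_optimistic(planned_days, history_days, tolerance=0.8):
    """Return activities planned at less than tolerance x historical median."""
    flagged = []
    for name, days in planned_days.items():
        past = history_days.get(name)
        if past and days < tolerance * median(past):
            flagged.append(name)
    return flagged

planned = {"earthworks": 20, "piling": 30}
history = {"earthworks": [28, 35, 30], "piling": [31, 29, 33]}
flags = flag_optimistic(planned, history)  # ["earthworks"]
```

Real tools would match activities across projects far more loosely than an exact name lookup, but the principle is the same: history anchors the estimate.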

Let me ask you about that because immediately, I thought of some limitations. For instance, every construction project is unique in one way, and it's done in a different geography. That geography then has a lot of determining factors associated with it in terms of supply and the quality of your subs. How do you interpret the value of the data of previous projects that may be similar to the one that you're thinking about, in terms of using that data to help you with the planning?

Some things are different and some things are the same. Let's say you're doing a railroad, or a better example would be tunneling. If you're doing tunneling, the rocks and the geology of where you're working are going to be radically different and are going to enforce constraints on your program. If you look at a schedule and you see they didn't do a geotechnical study, you already know that there is going to be a high risk of delay, because they haven't done a geo-tech study. That's usually how the story starts.

There was a tunneling project and it got delayed by twelve months because they didn’t do that geo-tech study. It’s in things like that where the outcome of that geo-tech study will be different, but you can already highlight that a large percentage of time you’re at risk of delay because you haven’t done it. It could be that there might not be a delay because you’re in good soil. It’s not a concern. By looking at that, we can say, “This is something that’s going to increase the risk.”

In other things, you're right. It very much depends on the industry or what exactly you're doing. I'm from Australia, so if we're working in Central Australia, your supply chain is radically different because everything has to be freighted in. To get it in there, if you miss certain things, you're talking huge delays. Whereas if you're working on a coastal project in Australia, where you're near all your concrete fabrication and everything's built nearby and shipped to you, you're going to have a very different picture.

That's something where you are able to learn lessons where things are similar, but you also need to feed in not just a schedule; you need that meta information. You need to be telling the tool that, or it needs to be picking up where the project is happening. It doesn't have to be precise geography. It just has to be able to relate it to similar projects of that kind.

You talked about site progress, site monitoring, and planning space. What’s another application that people are looking at in construction?

The other ones are around safety monitoring. The example I gave was in site monitoring, but you can also see it in different tools being used to do quality checks. I don't know what the term is in American industry; we have safe work method statements, checking that you're identifying all the potential hazards.

AI is very good at identifying patterns and finding things that should be there. This is a perfect application. Usually you have to go through and identify certain hazards; some are very common, or maybe someone misses one. That's the thing it can pick up. It's also being used around adjacent parts of construction, like accounting: doing auditing, checking that there's no funny business going on in terms of subcontracts, and identifying fraudulent payments.

There are probably two other areas. One is around the tools being built now that are powered very much by AI. There are lots of tools leveraging things like ChatGPT to go, "If you're going to review this contract, you can review it, but why not use a language model to pull out bits of information?" You can imagine that you drop in that contract or document and you can ask a question of it.

It's tooling that's not construction specific, but it's helping construction. We're under the pump; we often don't have the time to review things as in-depth as we need. Having these tools that can speed up that process and enable people to ask a question and pull out, "Does this talk about when I'm meant to be getting to site? When am I meant to be doing this? What spec do they want for this?" Being able to pull that out quickly, those tools are helping a lot.

The last one is probably for larger companies, which is the idea of data analytics. A lot of companies are getting a bit more switched on in terms of, "This AI thing seems to be gathering steam. We're going to record some information." Having your analysts pull things out, which could be simple things like basic statistical analysis: "Every time we don't do this on a project, we end up with that." It could be the geotechnical study example. Doing that analysis means looking back over past data on ordering and planning, being able to model it out, and getting some ideas of what caused us to do poorly or to do well.

What are some of the limits of AI that aren’t going to be overcome anytime soon?

This is a hard one to pick because they've been advancing in leaps and bounds. I'll go out on a limb on a few things. The most obvious one is the idea of moving from a process that's going to be right every time to one where you're going to have to check the outcome. That's fundamentally built into the DNA of AI, or machine learning. What you'll see is it'll get a lot more accurate, so it's not going to be as much of an issue, but it is still very much going to be there.

Another one: I'm quite dubious around language models, things like ChatGPT. There are a lot of people that'll say, "It's learned this and it's understood it." That's not true. They're incredible models that are amazing, but all they're doing is pattern matching and trying to predict what the next likely word is.

Now, to do that, they've built impressive models of the world, but they don't understand things. That's an area where we're going to get better at using them, but I don't think language models in their current form are going to truly understand things. They're still going to be quite limited. From that point of view, that's quite good: it means they're not going to magically learn to understand things and end up automating everything. It's about acknowledging what they can do and understanding that people are still going to have to be in the loop to validate what they're doing.

Language models in their current form are not going to truly understand things. They're still going to be quite limited.

That leads me to my next question, the one of misconceptions. Obviously, you get the whole spectrum: AI is going to change everything for the better, change everything for the worse, and everything in between. What are some of the common misconceptions that people have about AI?

There are a few of them that I'd like to go through. One of them is when companies decide to embark down the path of developing it in-house for some application. It could be, for example, that you're a company that does installations of power poles or something like that. For some reason, some of them are not done correctly and that leads to a failure.

You want to go, "We're going to take photos. We're going to try to identify it and automate the process." What people don't realize is that when you go through the journey of doing AI, only about 5% to 10% is the part of building the model and doing the actual modeling. About 60% to 70% is just gathering and cleaning the data. Companies get on this path of, "I'm going to take this bit of information, put it into a model, and it will learn things." That's simply not true.

What happens when you start looking at the data is that, because data is generated by humans, we're very clever at solving problems but very terrible at following processes and procedures. What you find is the data often isn't standard; there are bad examples in it, and cases that are just flat-out wrong. This applies to whatever application, because I've done a few of these myself.

Whatever application you're looking at, you'll find that a lot of the examples you have are wrong. Maybe you're monitoring some process and the sensor failed; now that data is complete garbage. Most of the time is spent cleaning it and, beyond cleaning it, trying to understand what the data is telling you. At the end, you're modeling it.
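The failed-sensor example translates into the kind of filtering that dominates that 60% to 70% of the effort. A minimal sketch, assuming hypothetical record fields, a status flag, and made-up plausibility bounds:

```python
# Drop readings from failed sensors and physically implausible values
# before any modeling happens. Fields and bounds are illustrative.
def clean_readings(readings, low=-40.0, high=85.0):
    cleaned = []
    for reading in readings:
        if reading.get("status") != "ok":
            continue  # the sensor itself reported a fault
        value = reading.get("value")
        if value is None or not (low <= value <= high):
            continue  # missing, or outside any plausible range
        cleaned.append(reading)
    return cleaned

raw = [
    {"value": 21.5, "status": "ok"},
    {"value": 999.0, "status": "ok"},    # spike from a failing sensor
    {"value": 22.1, "status": "fault"},  # sensor flagged its own fault
    {"value": None, "status": "ok"},     # dropped sample
]
clean = clean_readings(raw)  # keeps only the 21.5 reading
```

Real cleaning pipelines go much further (deduplication, unit reconciliation, human-entry errors), but even this toy filter shows why most of the time goes into the data rather than the model.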

What I’m hearing there is that if you’re going to use a tool, you should be looking for a tool that has had a tremendous amount of upfront work put into it to make it useful. Is that right?

Yes, and the good tools will be very transparent about that: you're going to have to go away and do some data cleaning. Maybe they have a lot of these processes automated, but there is significant work in making sure the tool can understand real-world datasets, because real-world data is very messy.

This is important. When it comes to technology, construction contractors have vendors knocking on their door every single day saying, "Here's my easy button, and if you hit it, your life will change." Now, we have these AI vendors coming in with various solutions. I'm sure there's, again, a spectrum of how effective they are. Let's say you're the president of a $100 million contractor. What are some of the key questions you should be asking these people to begin to understand the quality of the system you're being presented with?

There are several layers to that. You want to start off by going, "You're saying this is a magic AI thing. What part of this is AI, and why does it need it?" That often takes a lot of people by surprise: it's got AI; the marketing people were sure you wanted it. Then you say, "We know a bit about AI, so why does it need it? How confident is it? What's the failure rate?"

Start asking how often it is correct and how often it is incorrect. You're not going to be able to know just from the answer, but the way in which they go about answering that question is going to tell you a lot. If they go, "Ours works all the time," well, I don't know any tool that works all the time, so I'm already a little bit intrigued.

The other thing I look at is, "How often does it fail? What data prep do we need to do? How do we need to make sure it's going to be able to understand our data? Are there some areas where it's not performing well?" It's all about feeling out: what have your previous clients been doing, and how similar is that to what I'm doing? Does it not handle this area well? Again, how they answer that question is going to tell you a lot.

The final thing I look at, particularly if it's an AI tool, is: what research have you done? It's a very technical field. Have you published papers? Do you have any white papers? What literature do you have around your tool showing that it's had a real-world benefit? You'll find that a few of the vendors we've looked at, in terms of partnering, will have white papers and research showing, "We used this over this many projects and improved outcomes by so much percent."

It doesn't have to be an amazing technical paper, but as soon as you see that, they're fairly legitimate. They've gone away and tried to measure it. They have some way of assessing, "People used this tool and it did improve things." They're not just saying, "It's AI. Press here and it solves your problem." It's, "Here's how it did it." You might find that it doesn't work all of the time, but 50% of the time it improves the outcome. That's a great thing. By understanding that, you can go, "This tool is going to improve things, and we understand how realistically it's going to improve them."

You have to take a skeptical approach to it and ask those tough questions. I'm curious, if you could wave a magic wand, let's say five years from now, and again, we're not going to hold you to this in any way, where do you think the construction industry will be in terms of the use of AI, realistically speaking? Is it going to be something where there's been an exponential adoption of it, or are we still going to be in the early stages?

We'll be coming out of the early stages. At the moment, we're very much in the part where there are lots of startups trying to get ideas working. They're at a very early stage and in need of a bit of polish. Five years is a good timespan for a lot of these companies to work out their various teething issues. In five years' time, it's going to be a much more significant factor. It's not going to be everywhere, and it's not going to have solved every problem, but things like site progress are going to be in a much better state.

A lot of the processes are going to have finished being digitized. I know, having worked in construction at the start of my career as an engineer and not just a data scientist, it was very common that you'd get emailed scanned PDFs of an actual site plan, which is not great when you're trying to do take-offs. Things like that are probably still going to be the case for smaller contractors, but most people are going to be moving more into a digital workflow.

Once we have that, you're going to see these tools start to bear fruit. The AI landscape is going to have improved quite a lot. The hype over the last few years has mostly been around language models, and the direction they're going in is towards what are called agents. That's what they're trying to work on. The idea is you have a language model and you can ask it questions, but if you ask it to do things like mathematics, a lot of the time it'll get it wrong, because it's not built to do that. It's built to predict words, not do actual calculation.

The idea around an agent is going, "These tools can't do these complicated technical tasks, but they can give us good plans." If you ask it, "Give me a plan to do this," it can do that. The idea is, what if we ask them, "Give me a plan to do it," and then give them tools to execute each of those bullet points? What you're going to see is tools like ChatGPT and other language models developing this agent concept.

You're going to find that a lot of bits of software are going to be able to integrate with it. The example we give is to imagine our products around project management. You might be able to go to an agent, a chatbot basically, ask questions about your project, and it will be able to answer them. You won't have to go and harass your planning guy with, "When does this bit of work start? Why can't I get access to the site yet? What's driving this?"

You'll be able to ask it, and it's going to use those tools to go away and say, "This is what's happening on this date, and it's being held up by this task." Not all of the tools, but certainly in five years' time, a lot of them are going to be integrating with this. The way in which we use our tools will change quite a lot. We'll be doing it in a more conversational way.
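The plan-then-tools loop Daniel describes can be sketched as a toy dispatcher. A real agent would get its plan from a language model and call real project-management APIs; the plan format, schedule, and lookup tool here are all hypothetical:

```python
# A toy version of the "agent" idea: a plan is executed step by step by
# dispatching each step to a tool the agent has been given.
def tool_lookup_start(schedule, task):
    # One "tool": look up when a task starts in the project schedule.
    return schedule.get(task, "unknown")

def run_agent(plan, schedule):
    answers = []
    for action, arg in plan:
        if action == "lookup_start":
            start = tool_lookup_start(schedule, arg)
            answers.append(f"{arg} starts {start}")
    return answers

schedule = {"site access": "2024-07-01", "piling": "2024-07-15"}
plan = [("lookup_start", "site access"), ("lookup_start", "piling")]
answers = run_agent(plan, schedule)
```

The key design point is the separation: the language model only produces the plan, while deterministic tools do the actual lookups and calculations it is unreliable at.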

Tell us a little bit more about yourself and the work that you do at Elecosoft.

I'll give you a brief rundown. I did a master's degree in engineering and started working as a mechanical engineer. I worked on projects doing backup power and renewables. After a few years of doing that, I decided I wanted to be a bit more technical, so I went back and upskilled in data science.

I did a little bit of work with the Australian Navy on predictive maintenance and tried to do a PhD, but worked out that PhDs aren’t for me. I came back and started looking at this problem of project planning, where I met the guy I currently work with, Mark Chapman, who’s head of innovation for Eleco itself. I started somewhere else, but we worked together. I left, and he eventually headhunted me.

What we’re doing is trying to understand how we can help our clients and improve the process around planning at the moment, but not just planning, tendering and asset management as well. How can we distill that expertise we were talking about earlier? The idea is that we can look at a bunch of previous schedules, talk to planning experts, and go, “We know there are lots of things that drive a project’s success or failure.” I gave the example of a geotechnical survey.

It’s a case of going, “How can we put some intelligence in our products that’s going to let people make smarter decisions and identify those risks?” We’re focused on improving that planning process and putting more knowledge in the hands of our users so that they can make better decisions. Eventually, the end goal is that we’ll use artificial intelligence not just to highlight risk, but to suggest ways people can get out of problems.

The idea is that if we identify that your project is at risk, maybe we’ll suggest, “You’re doing some tunneling work, so you should probably do a geotechnical survey.” We’ll be suggesting things like that. My work is very much about understanding the roadmap and what things we should do.

There are so many potential things you can do with artificial intelligence. At the moment, it’s a matter of going, “We have finite resources. What things will probably deliver the best value to our clients early?” Right now, we’re using tools like ChatGPT to build a better onboarding process. We’ve built a prototype chatbot that has access to our help and support documentation.

When users have problems, they can go on our site and it can answer them: “Here’s how you do this. Here’s how you do that.” I used that tool quite successfully to build our AI roadmap. We’re also looking at partnering with construction companies, saying, “We have a product that helps you plan projects, but it’s your planning data. We’d like to work with you, get some of that planning data, and start building these AI features.”

It’s a matter of wanting to partner with them because we need their data, but these are the guys that are on the ground doing these projects. They often know what they should be looking at, but they don’t have time to look at it. We’re trying to partner with people and work with them to understand what checks they do, what things they miss, and how we can help automate that for them.

It’s a very interesting opportunity for construction companies in that realm of partnering because they’re not just the customer in a certain way. They have that vital information that you, as a software-developing company and a data company, need. That leads me to one of my final questions. As the CEO of, let’s say, a commercial construction company that turns over $100 million plus a year, how should I begin to approach this journey into using AI? Partnering with companies is one way. Can you suggest some other ways that would be beneficial?

One thing that the construction industry in general needs to look at is this idea of working groups and consortiums. I had a meeting with a consultancy firm in Australia that had been doing lots of work for the oil and gas industry. We talked through various things like the examples I gave you, such as knowing they had to go and do geotechnical surveys.

That would basically be a thing that they would check. What they’d done is they had formed a working group with BP, Shell, and Chevron. All of these massive conglomerates had formed a working group and shared project data in a very high level of detail. From that, they had hundreds of thousands of examples of previous projects: the ways in which they were planned, where they were located, and what the specifics or nuances of each project were.

They had a very robust data set from which they could make statistical judgments, saying, “Historically, there is some variance between different companies because there are different management strategies, but we can tell, across all of you, if you’re doing this wrong and falling behind the average.” By having that working group and sharing data, they were able to improve their practices. That’s a big thing that the construction industry needs to look at.

When I go around and do talks with many of the ministers, they look at it and go, “Data is the new oil, and we own the data. We don’t want to share this.” It’s like, “If data is the new oil, are you Shell or BP? What are you going to do with it?” One of the things there is getting them open to the idea that we need to partner with people. We need to understand that this is going to be an evolutionary change, a process where we move toward validating what the AI is telling us rather than just automating a thing.

AI In Construction: We need to partner with people. We need to understand that this is going to be an evolutionary change.

 

Also, understanding that we probably don’t have enough data on our own to answer all the questions we may have. We might have to share and work with our competitors in some sense. It’s not going to help them any more than it’s going to help us. It’s a way of going, “We need to start considering doing this. What data could we share? What wouldn’t be giving away our secret sauce? How could sharing help us learn where the real gaps are in our knowledge and the mistakes that we’re making?”

That’s probably one of the biggest ones, I would say: being open to the idea. You don’t have to show them everything. You don’t have to share something like, “Here’s our internal process. Here’s how we win all these jobs.” That’s ridiculous, but be open to going, “What are the key areas where we want to establish some standards and share information?” A lot of companies in Australia and the UK are moving to BIM, and I think America is starting to get there in terms of integrating BIM information, going, “Let’s make sure that we’re standardizing how we do these things so these AI tools can use that data.”

It’s very interesting, that analogy of oil, because just as you have to extract oil, refine it, and convert it for use, you have to do the same thing with data. Tell us a little more, Daniel, about how people can get in touch with you.

To get in touch with us, the best way is probably reaching out through email, so [email protected]. Another good way would be to reach out to our team in America; David Hernandez heads that up. So probably through email, or by calling the American branch, though we don’t have that number at the moment.

That’s fine. We’ll have links to the website. Daniel, who would you like to get an email from in terms of the work that you’re doing from the construction industry?

For me, it would be an email from, ideally, that CEO we talked about before, or the head of planning, saying, “We heard you on the show. We have a lot of data. We want to try to improve the way we’re doing things. Can we work together?” That, to me, would be a dream. We’re actively looking for companies to work with.

We’re not going to solve this in five seconds. We want to work with you and build this up. It’s going to improve our product’s value, but it’s also going to improve the efficiency with which you do things. Being able to partner with some larger companies in America, to say, “Here’s how we ran projects previously. Here’s our planning data. What can you tell us about what we did before? How is this going to help us going forward?” That would be fantastic.

That’s tremendous. I appreciate you joining us on the show, Daniel. Thank you for taking the time. I appreciate the clear explanation of some of these AI matters that people may be a little cloudy on. Thanks very much.

Thank you very much for having me, Eric.

Thank you for reading my interview with Daniel. Reach out to him on his email, [email protected] and feel free to go to Elecosoft.com and learn more about their business. I hope you found this interview useful. I know I found it very interesting. Feel free to share it with other people. Thanks again for reading the show. I’ll catch you in the next episode.

 


 

About Daniel Hewson

Daniel Hewson is the data capability manager at Elecosoft. In his role, he oversees the development of overall data and AI strategy and focuses on discovering how AI can be leveraged to improve project planning and to identify inherent project risk.

His current focus is on Elecosoft’s flagship product, Asta Powerproject, to understand how AI can be leveraged to improve the way project planning is done. The focus is on leveraging AI to allow for better understanding of risk inherent in projects.

Daniel comes from a strong technical background with degrees in Mathematics and Mechanical Engineering (with honors). He has also spent time as a PhD candidate.