The Smart City Podcast

How AI Is "Easy-To-Use" & How That Leads to Profitability and Sustainability - A Talk with Kevin Smith of Canvass

July 26, 2022 The Smart Cities Team at ARC Advisory Group Season 7 Episode 3

Kevin Smith of Canvass AI discusses how his company has made AI easy to use so that the industrial sector can accelerate its transformation into a profitable and environmentally sustainable industry. This starts with empowering industrial workforces to leverage data themselves, not data scientists or third parties.

Thus the power of AI is put directly into the hands of engineers. The processes for industrial engineers to build, deploy, and scale AI across their operations have been greatly simplified, so that they can augment their expertise with data-driven insights.

Kevin also talks about some of the world's largest industrial companies and how they use Canvass AI to optimize processes, reduce waste, and cut their CO2 emissions.

  

Would you like to be a guest on The Smart City Podcast?

If you have an intriguing, thought-provoking topic you'd like to discuss on our podcast, please contact our host, Jim Frazer.

View all The Smart City Podcast episodes here: https://thesmartcitypodcast.buzzsprout.com/

ARC Advisory Introduction:

Broadcasting from Boston, Massachusetts, The Smart City Podcast is the only podcast dedicated to all things smart cities. The podcast is the creation of ARC Advisory Group's Smart City practice. ARC advises leading companies, municipalities, and governments on the technology trends and market dynamics that affect their business and the quality of life in their cities. To engage further, please like and share our podcasts, or reach out directly on Twitter at Smart City Viewpoints or on our website at www.arcweb.com/industries/smart-cities.

Jim Frazer:

Welcome to another edition of The Smart City Podcast by ARC Advisory Group. I'm thrilled this afternoon to be joined by Kevin Smith, the Chief Commercial Officer for Canvass AI. Welcome, Kevin, how are you today? I'm very good. How are you doing? Great. Well, it's great to have you. Can you tell us a little bit about yourself and your organization?

Kevin Smith, Canvass AI:

Like you said, I'm the Chief Commercial Officer for Canvass AI, based out of Toronto. Canvass is an industrial AI company founded in 2016, and it really got started with an initial vision of enabling industrial users to get full value from their data. As they evolved, they built an AI and machine learning platform for their clients that ultimately turned into what the platform is today, which is all about enabling the end users, so process engineers, unit engineers, reliability engineers, to get value on a day-to-day basis from that data, because we're grounded in the belief that the real adoption of AI is going to be driven by those types of users in the plants.

Jim Frazer:

Great. Well, Kevin, before we jump into Canvass AI, what attracted you to Canvass AI? And can you tell us a little bit about your background in the oil and gas and chemical industries?

Kevin Smith, Canvass AI:

Yes, I've spent the majority of my career, about 25 years now, as an executive consultant in the oil and gas industries, and I have literally worked all over the world: six continents, 42 countries total. Interestingly enough, I didn't start on the software side. I didn't come from a technology background; I came from a technical background. I started my career as a unit engineer a long time ago. So I came at this world from a technical consulting and an organizational consulting standpoint. A lot of what I've done over the years for clients is help people, starting with big operational excellence programs 20 years ago, and that's evolved over the years into helping people with digital transformation programs. To give some interesting examples: a number of years ago I worked with a large Australian gas company, helping them set up a central control facility for all of their coal seam gas wells. Their GM liked to joke, "We have a plant the size of Delaware, and we're going to control it 200 miles away in Brisbane." I had a team on the ground that helped them with all of the processes, all the technology selection, all the roles and responsibilities, all the organizational work to get that control center up and running. Another good example, in the Middle East, was helping a company set up an optimization facility for all of their chemical facilities spread out across the kingdom, linking that concept of a central optimization facility with the interface required to actually get value from that guidance into the plants themselves. And then, a little closer to home and a little more recent, another good example is working with a corporate business process team at a large chemical company.
Their job was really to link what the corporate digital teams were doing with the sites, to ensure that everything the digital team was doing added value in the immediate term, not just some future promise of value when the technology is brought to bear. When I came to Canvass, it was actually through a mutual connection, someone who used to work for me at another company. They originally came and were talking to me just about my oil and gas experience as they were scaling up into the industry. What I found very exciting about Canvass is that they are at this great part of their journey. As a company that is six years old, they are still very much growing, which creates lots of opportunity and the ability to shape the offering and the approach to the industry in a way that adds value at every stage. It's not just a big solution with a big promise, but something that becomes real and tangible every day. So it's been very exciting with Canvass. I started working with them at the start of this year, and it's been fantastic.

Jim Frazer:

You know, we have a broad range of listeners here on our podcast. For those who might not be terribly familiar with AI, particularly industrial applications of AI, can you help us with a foundational perspective on that, and maybe how it supports the range of digital transformation efforts?

Kevin Smith, Canvass AI:

It's a fantastic question. Let me give you the real short version of artificial intelligence. A couple of things: first off, in real simple terms, you're just teaching a computer to learn, very much the way humans learn. It takes all the data and learns based on the relationships it sees across multiple variables, and with no preconceived notions it says, well, if I see this variable responding this way, I see these others responding accordingly. It learns by those correlations how to analyze and interpret a system. From that standpoint, it learns very much the way children learn; children learn as they interact with their world. One of the great things it brings to an industrial setting is that it takes out the biases that people like myself, as engineers, often bring with us. It can look at large quantities of data at the same time, with no bias; it just says, hey, I saw these things reacting together. That insight in itself opens up lots of opportunities. Now, one of the things that becomes interesting is that we have been conditioned by the media, and quite frankly by movies and TV, to view AI in a very specific way. We think of artificial intelligence as a replacement for people. We get movies like The Matrix, we get Skynet, we get all those things that are great for movies, and we get this perception of what AI is. What we miss is that AI, when it's done well, isn't about replacing people. It's about augmenting people. It's about enabling people to get real value from their data in a way that allows them to think, and think creatively, because that's one of the things AI does not do. It's very, very concrete in its thinking; it says, if I see this, I expect this. Humans have the ability to take that analysis, apply a level of creativity that you cannot recreate otherwise, and really start shifting how we work in our industrial space.
So when we think of AI, and machine learning with it, it really underpins a lot of other technologies. But it's really about how we get analytics from the data we have.
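Kevin's point about learning from correlations, with no preconceived notions, can be sketched in a very simplified way. The sketch below computes a Pearson correlation between two hypothetical process sensors (the sensor names and readings are invented for illustration, not from any Canvass system); a real industrial AI platform would be surfacing relationships like this across thousands of variables at once.

```python
from math import sqrt

# Hypothetical readings from two process sensors, sampled at the same times.
feed_temp = [180.2, 181.0, 179.5, 183.4, 185.1, 184.0, 186.7]
outlet_pressure = [42.1, 42.5, 41.8, 43.9, 44.8, 44.2, 45.5]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A value near +1 means the two variables move together: the unbiased
# "I saw these things reacting together" relationship Kevin describes.
r = pearson(feed_temp, outlet_pressure)
print(round(r, 3))
```

With these made-up readings the two sensors track each other almost perfectly, so the coefficient comes out close to 1; an engineer's next step would be deciding whether that relationship is causal or coincidental, which is exactly the creative judgment Kevin says AI leaves to the human.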

Jim Frazer:

Thank you for that. You know, as we think about AI, often it's thought of as not necessarily a near-real-time or real-time application. The only real-time or near-real-time application I often think of, of course, is autonomous vehicles, and that's not exactly there yet. What part of the ecosystem does Canvass AI play in? Because AI is so broad in its application areas, it touches just about everything we could think of.

Kevin Smith, Canvass AI:

So that's a great question. A lot of people, when they initially approach Canvass, or quite frankly other AI companies, approach it with the idea of "I want AI to solve my most complex problems. I want to apply something that can look through these mountains of data and come back with answers and insights that my people just can't come up with on their own, because there's so much data involved." And this is where you get this not-real-time approach to things, because I've got to deal with mountains of data, I have to bring them into this system, et cetera. While those problems are interesting, those are not what's going to change and transform how people work in our industry. Where Canvass has really come in is focusing on more discrete problems. On one hand, we do get involved in applying AI to deal with bad actor lists on the reliability side, so very much an offline application to support real-time troubleshooting of bad actors. But we also get involved with setting up AI engines to help move things along more quickly, just to help improve people's efficiency. As a good example, in the refining space, yield accounting is something refiners will talk about a lot. Simply put: do I know where every molecule in that barrel of oil is going, and does it balance out, so that in real simple terms I'm accounting for everything I can sell? You'll find that a regular activity for unit engineers, on a weekly basis, is just closing a material balance. Right now we are working with a customer to build an AI solution that doesn't do the material balance for them, but highlights when the balance can't get to close, when there's an issue. It finds which sensor is off, which sensor is first creating the problem and not allowing the material balance to close. And then from there, using the data that's available, it makes basically an educated prediction.
So, a soft sensor to create the data needed to close the balance. This is a good example of a real-time application, something that people are doing week in, week out; from an engineer's standpoint, it just runs. To take that all the way to another example: a client built an AI engine for their production runs. At the very beginning, they were using it to check quality, because there was a four-hour delay from the lab analysis, and they wanted to get to a real-time soft sensor so they don't lose four hours of production if the process is off. When we very first started, it really was running behind the scenes on data. Then they started to put it online; we still had an operator involved, getting people to the point of looking at the data but not really taking the advice, until they started developing real trust in the analysis. Then you have something that becomes integrated into how they work. So now there's a soft sensor that becomes a data point the operators use day in, day out to control the unit. We've even had clients take it all the way to closing the loop and dropping that soft sensor created by the AI into their control system. So we have the whole range. What we find to really be successful: while the big, complex problems are interesting, the things that impact how people work day in, day out are the ones that really put companies on a transformation journey and really start getting value from that data.
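The soft-sensor idea Kevin describes can be illustrated with a deliberately minimal sketch: fit a model on historical pairs of a live process reading and a delayed lab result, then use the fitted model to estimate the lab value immediately. This is a toy linear fit under invented numbers (Canvass's platform would use far richer models and data); the variable names and values are hypothetical.

```python
# Minimal soft-sensor sketch: estimate a delayed lab measurement in near
# real time from an always-available process reading. All names and
# numbers here are hypothetical, for illustration only.

def fit_line(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Historical pairs: reactor temperature (live) vs. lab purity (hours late).
temps =    [250.0, 252.0, 255.0, 258.0, 260.0, 262.0]
purities = [ 90.0,  90.8,  92.0,  93.2,  94.0,  94.8]

slope, intercept = fit_line(temps, purities)

# The soft sensor: an immediate purity estimate from the live temperature,
# available long before the lab result comes back.
estimate = slope * 256.0 + intercept
print(round(estimate, 1))
```

The trust-building sequence Kevin outlines maps onto how such an estimate would be deployed: first run it behind the scenes against lab results, then show it to operators alongside the lab value, and only once it has earned trust let it feed the control system.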

Jim Frazer:

Yeah, I mean, just thinking about the three pillars of digital transformation: of course there's the technology; there's the new business processes that evolve as we're using that technology; and then the third one, and perhaps the thorniest, is training your staff to embrace both the technology and those new business processes. I could see AI helping to be a guide to folks on both of those issues: learning the technology by experiencing and working with it, as well as knowing what to do with the evolved business process, say, evolving from that four-hour feedback from the lab to a near-real-time, few-minute response. That's interesting. So what's the current situation regarding AI and machine learning? Are we in the infancy of this industry? Are we nearing some inflection point where there will be mass adoption? Where do you think we are?

Kevin Smith, Canvass AI:

It's interesting. I'd say we're in an infancy that has been going on for a couple of decades. When I very first got involved, the discussions were around what at the time they were calling early event detection, which stepped in to fill some of that space. That was probably 20 to 25 years ago, with companies like Exxon and Shell and Chevron. So as an industry, we've been at least thinking about this for a long time; the problem has been having the computing horsepower, and the data, to really make it a real-time application. So I would still say we're very much in infancy, but we're in infancy from an adoption standpoint. Some of that is a reflection of the technology. Some of that is a reflection of the computing horsepower. Some of that, frankly, is a reflection of the people, because we're looking at industrial applications for AI. It's like thinking about a self-driving car. That's something everybody thinks about when they think of AI these days, because it's present in the media; we see the reports of car crashes because of the AI. And quite frankly, to be fair to that space, it's less the AI and more the variability from all the human drivers. But that ultimately gives us a view of where people's minds are at with respect to trust: do I really put a lot of stock in what a computer is telling me to do, and do I want to take action based on that? Can a computer actually be right? So you start thinking about that adoption piece, and there's also this idea of the risk analysis we all go through in looking at what we adopt and what we don't quite make use of. One of the big mistakes I see, and quite frankly I think it is keeping the industry very much in its infancy, is that we're focusing on the wrong problems. We're focusing on very complex issues, and sometimes complex.
From a technology side, that means I need lots and lots of data, and a particular type of data; more often than not, it's complex from a change management side too. As an example, a lot of people approach AI applications in the industrial space from a predictive maintenance standpoint. On paper it's a perfect application: if I can predict a piece of equipment's failure, then I can start planning for the repair ahead of the curve, take it down, repair it, lots of small fixes versus it failing catastrophically. It makes perfect sense. The problem we start running into when we go down the predictive maintenance path, if we think holistically, is that it's a significant change in how people work. We're not talking about a small shift in how I think about my job or how I manage my plant; you're fundamentally shifting from a reaction-based, or even a time-based, approach to maintenance to one where I'm trusting a computer to tell me when the right time is. And in the back of my mind, I'm thinking: well, what if it's wrong? What if it's wrong, and that piece of equipment fails catastrophically because I didn't take care of it appropriately? Those are all very real, valid concerns people have, and those are some of the things that keep us in this infancy. Now, a few things have happened recently. One, COVID, over the last couple of years, has caused industries to start thinking about technology a little bit differently. As we see the Great Resignation, and we see expertise leaving our companies, we are asking the questions: how do we augment the new employees coming in? How do we lower operating costs so we're more competitive, or more sustainable if something happens to the demand for our products? All of these things are shifting how companies think about technology, and recognizing there's a real business need is pushing this.
So now I have, on one hand, an industry that is risk-averse for very good reasons, but also these external pressures, everything from cost to sustainability to maintaining that experience and expertise in the company. And so it's getting people to really put energy and investment and time into the whole digital transformation, and that's where it all comes together. It quite frankly is one of the mistakes I still see people making. I think we're starting to move out of the infancy because we're starting to walk a little bit, but we're still walking from a digital perspective. When does the industry really start running? I think we're probably looking at somewhere in the next two to five years, when we start approaching it from an end-user perspective and always asking the question: how do I get value today? Not just how do I get value sometime in the next three to five years, or sometime when I get the data in place, or when I get this application in place, but how do I get value for the users today in a way that sets me up for the next step tomorrow? That's when we go from crawling to walking to a dead sprint, and we really see the industry take off in terms of its adoption of technology.

Jim Frazer:

Kevin, it's interesting, because I come from the transportation space, and there have been some substantial psychological studies about autonomous vehicle driving. Arguably, at present, it is safer to have a computer drive your car today; that's pretty well documented. But mass adoption requires the average person to embrace the idea of the computer driving the car. And for that reception to take hold, the results of some studies have been that the safety factor needs to be not just incrementally better, but better by orders of magnitude, so that the population believes it is in fact far, far safer and not just safer.

Kevin Smith, Canvass AI:

You've raised a really, really good point on this one, and it actually applies when you're talking about AI or, quite frankly, any change to how people work. There is an interesting psychological effect we talk about in organizational change, in change principles in general. When people make a decision about their personal risk, their risk tolerance is very, very different than when they feel like somebody else is making the decision for them. Think about a simple example; we talked about driving a car. How many of us are probably guilty of driving a little too fast, occasionally maybe having too much fun because we own a sports car? Now think about the exact same driving style when somebody else is behind the wheel. We put our hand up on the dashboard, we grip the armrest a little bit tighter, and they're not taking any risks that we wouldn't take ourselves. The difference is who's in control; the feeling of control is huge. Now, it's different if you're with somebody you've driven a lot with, and you know they're as good as or better than you are. Then you don't grip the armrest really tight, you don't double-check that your seat belt is fastened; you relax a whole lot more, because your perception of risk has changed, because you trust that individual. We forget that when we start talking about things that make a disruptive change to how we work, something that fundamentally shifts how we work, how we think about our jobs, and, quite frankly, how we add value in our jobs. Those things challenge our perception of control, and that's where we get hung up.
And so one of the big things in how Canvass approaches a client is bringing that organizational change perspective in, because it's not really about the technology from that standpoint; the technology does lots of great things. From a real adoption standpoint, it is about addressing the organizational change aspect. It's helping people feel, and remember, that they are still in control; we're just changing how they control. In a lot of adoption journeys, what people are sometimes not thinking enough about is how the adoption journey, and those initial entry points into it, allow the organization to trust the AI, or whatever the technology is, for that matter. You build that trust over time to the point where I'm willing to take that sensor at face value, and I'll react to that sensor at face value without needing it to magically prove to me that it's actually right. You have to let the organization go through all those change processes. Take the example of self-driving cars: if every car on the road is self-driving, you get rid of most of your accidents, you get rid of most of your traffic jams, because you let the technology sort itself out and you take out the variability that comes from all the people making good and bad decisions while they're driving. But you have to get through that trust barrier before you have enough people willing to let the car drive itself, because in the back of everybody's mind are the pictures they've seen in the media of, frankly, Teslas (they seem to get the media attention more than others) where the car has had a horrific accident. It doesn't matter if, statistically speaking, those are rare; that's what's burned into people's minds. You have to get past that point, to where people say: you know what, I've learned that the AI is doing as good a job as I would have in driving the car.
So you have to think about that in steps, because, I don't know about you, but I'm probably not going to get in a self-driving car on the first day, lay the seat back, and go to sleep. That probably will not happen. It will come with steps along the way.

Jim Frazer:

Exactly. Kevin, we've talked about AI at a very granular level, but how about more foundationally, operations across a facility or a plant? In particular, I remember some insights from Gartner: it found that 85% of machine learning and AI projects fail, and only a little over half make it into production. What does AI look like today at the organizational level, in terms of adoption and, of course, perceived success?

Kevin Smith, Canvass AI:

When you start looking at those statistics, I think they're ringing true for a lot of people, and the basic question is: why do so many fail? You have a couple of questions. Sometimes, quite frankly, it's the platform. There are a lot of people coming into the space, and not all platforms are created equal, so there is that piece of it. But if you assume that your base platform, like Canvass's, is sound, proven technology, then it really comes down to two basic questions, and to me both of them come to the same spot. One is: do you have sufficient data for the AI to work? Because, remember, we talked about this earlier: AI learns very much the way children learn early on, in terms of how they experience the world; they interact with it and learn, basically, lots of if-then relationships. So AI needs a sufficient amount of data to do that. Now, to be clear, that doesn't mean AI needs hordes and hordes of data; it needs data that's sufficiently descriptive, and enough time history to actually learn from. As an example, we have a client who actually built a very successful AI model off a single data point; it was just a very rich data point, very descriptive about the process, and they had a long time history for that data. So that's your first question: do you have sufficient data? The next piece is the level of engagement of the people who are directly involved in the project, and that goes back to the discussion we were just having around the change management side.
One of the things that I've told clients for years, from an organizational change perspective, is: if you're really looking at engaging people and getting people invested in any project (it doesn't really matter what it is), find things that have emotional value to them, things that impact their day-to-day life from a work standpoint and make their job easier, that deal with a problem that's been nagging them for a long time, that allow them to accomplish something they couldn't before; something that has value to them, not just value to the business, but value to the employees. Now, from an AI perspective, this comes in handy in two places. One: inevitably, if you embark on a project, you're going to have things come up along the way that you did not anticipate, whether it's data streams you were expecting to be good that weren't, whether it's a redefining of the problem you started with because you found something better, or you found that your first problem didn't quite work out the way you thought. Things will change. If people are invested in what you're trying to solve, because they know the solution has value to them, they're going to adapt to that change a whole lot better; they're going to react to things differently. That involvement becomes huge on the adoption side. The other side of this: look at it from a data perspective. It's interesting; we started down this path of collecting more and more data a good 20-plus years ago. To give you a really good example of this at industry scale: people started giving field operators handheld devices to go collect reams of information about their equipment, with the idea that it would help improve reliability. The problem was, we were collecting all this data, but we didn't really know what we were going to do with it.
And so, again, the data did not have value to the employees. Oftentimes the data went into a black hole, because it sounded good: we'll collect the data, it'll be great if we have it, just in case. We never really thought about how we created value for those individuals. What we did was teach people: we really don't care about the data, we just care about you getting out in the plant and looking at stuff, and the data is secondary. As a result, the data streams got corrupted. Or, when we look at the instrumentation on the plants and we're prioritizing what we fix, if we don't value the data, it won't be that instrument. So you always want that focus on what's going to add value to the end users, the people who are charged with building, maintaining, and then using the technology. Then a few things happen: again, they'll change more readily when things come up, they'll adapt; but also, now the data streams coming in have value. So now I can not just figure out alternatives if there are gaps in the short term; I have a real interest in making sure the newest data streams I add on enrich the AI as I go forward. Now I know what's being done with the data, so I'll make sure the quality is there, I'll make sure the instrumentation is maintained. So that focus on people, and on things that have emotional value to people, becomes important. And just to close out on that one, go back to our self-driving car analogy: what is your interest? If you like driving a car, the chance of you getting in a self-driving car and letting it drive itself is probably relatively low, because you're enjoying the process of driving. However, if you're sitting stuck in traffic on the 405 in California, and you're doing nothing, and a car can basically drive itself while you pull your computer out and get other work done, and you trust it to do it, now that capability has value to you.
You're much more likely to adopt it in that scenario. So the more immediate value something has, the more readily people will adopt it. Companies really benefit from focusing on what has value to their employees and starting there, recognizing that over time that will create value for the business, versus thinking about what has the most value to the business and starting there, independent of the value it has for employees.

Jim Frazer:

Fascinating. Let's say my organization is embarking on an AI or, more generally, a digital transformation project. How does Canvass approach that project from the beginning? How does your value proposition play into that project? And how do you differentiate yourselves from, say, the other competitors in the ecosystem?

Kevin Smith, Canvass AI:

So I'll start with your last question first, because that kind of defines a lot of the rest of the approach. On the Canvass team, we have a fairly broad background. We clearly have the traditional data scientists, but then we have people who bring in controls experience, people like myself who bring in organizational experience, and people who bring reliability experience into the company. One of the things that differentiates us is not the services we offer, because we're a software company primarily, but that end-user experience: we approach it from a user standpoint. Oftentimes when people come to Canvass, the initial discussions are around this promise of what AI can do. They've heard podcasts like this, or they've read an article, and something says: well, what if I could know two weeks in advance that a piece of equipment is going to fail? Or even better, if I knew a month in advance, I could change my production plans accordingly. And they get enamored with that. So we'll get calls like that: okay, can you help us on this journey? And the answer is: yes, we can. Canvass was built as an enterprise-level platform, so it can support that type of solution, and we do get involved in building some of those out. But where we start engaging initially is understanding the big picture, then starting with more simple things that add value to people: understanding how the organization wants to use it, understanding what the organization's vision is, and helping them think through how to build that vision. Instead of thinking through "what's the quickest way I can get you using the platform on an enterprise basis," it's "what's the quickest way I can get you to see value from the platform," because those are two different things.
If I'm approaching it from an enterprise-sale standpoint, I'm thinking about how to get the broadest application possible as quickly as possible. We're approaching it from "how do I add value as soon as possible?" Instead of thinking "okay, I can solve big problems," ask whether there are other things that add more day-to-day value. Take the example I referenced earlier, where we're using AI just to help clients close material balance issues in their process. As a business case, if I run the numbers on that, it's interesting. Does it fundamentally change the business in terms of performance? No, it doesn't. Does it impact the engineers' time? It does: it's anywhere from four to eight, even 12 to 16 hours a week that they're getting back. Starting with something small that adds value makes the rest of the adoption journey a whole lot easier. So when we start with a client, we start with that initial focus: what's going to add the most value to the end users first? How do you balance that against the longer-term vision? That way you're mapping out the adoption journey over time.

Jim Frazer:

So I think what I'm hearing, what I'm surmising, is that when you enter a discussion with a potential client, you catalog and document a comprehensive set of user needs, probably over the entire lifecycle of the project, but then what you choose to implement first is really the quickest-ROI projects. Is that a fair assumption?

Kevin Smith, Canvass AI:

It's a fair assumption, though I would use the word "value" rather than "ROI." Here's why. We do have some clients who are very mature in their AI journey and are looking for a very specific application, and we'll approach those engagements very much focused on that application, because we recognize the client is fairly far down their journey. But for people who are a bit newer to this world, the reason I use "value" versus "ROI" goes back to what we were talking about a second ago: I want the employees to get value from it. A focus on ROI will always put the business first and the employees second. If you're really thinking about AI as a journey over time, your employees are ultimately the ones who are going to take you down that path. So focus on applications that immediately have value to them and show them the real potential of AI, because what you want to do is create an environment where your employees are saying, "Hey, can we use AI to solve this problem? What if we did this? This is great, but what if I brought this new data stream in?" When you start getting people excited about this new tool and how they can interface with it, they'll start coming up with use cases you didn't think of. They'll approach it with a level of passion and energy that they won't if it's just a project going after business value. So a lot of the initial discussions are: okay, if we're focusing on reliability engineers or unit engineers, what's going to add value to their day right off the bat? What's the quickest time to value for them as individuals? And then how do we grow that into what you want to do from a business perspective?
So it really is blending that organizational change background into how we map out the approach with a client.

Jim Frazer:

Well, that's great. That leads to my next question, which really was: what role do people play in the adoption of your technology, and of AI in general? And perhaps a little more broadly, who are the stakeholders you might prefer to participate? Is it just the plant engineers, or is there a range of stakeholder communities that might influence what you do first, second, and third?

Kevin Smith, Canvass AI:

The answer, ultimately, is very much a range. First and foremost, you ideally want sponsors at the site management level, so they're prioritizing things accordingly and making sure people have the time to work on projects. So again, the project has to add value to them. But then you're very much engaging with the users, and sometimes that's two different groups: we have people who build and maintain the models, and sometimes those are different from the people using the analysis from them. Oftentimes you'll see layers of users. A lot of times our models are built with the reliability engineers or the process and unit engineers, and they'll be one recipient of that information; they'll use the insights to guide their jobs and decisions, to support troubleshooting, to support optimization efforts. And then sometimes you'll also see another community coming from the operations staff that is actually managing the day-to-day plant, so the insights will feed to them as well. So it's both how you engage people early, so they're being educated on what the AI is looking at, how it's approaching the problem, and how the AI actually learns to "think" about what they're thinking about, and also how you provide the insights to them, in what form the information is presented, because the way I present to an engineer is different than the way I present to an operator. As an example, we'll see a lot of people take the AI engine, take the results from it, and then integrate those through a different dashboarding tool, something people are already familiar with. If they're using something like Power BI to bring in data analysis for their engineers or operators, they'll just feed the analysis from the AI engine into Power BI to make it usable.
So it's understanding who those users are and who's going to help develop and maintain the models, and always starting with the user base: what's going to deliver value for them? Because if technology is not being used to transform how people work, it doesn't really have value. So I always come back to those users and start there, thinking about how they get value from it, and only then think about the people who have to build and maintain it, how they get value from it, and how to make it easy for them. That's really where we start mapping out the journey.

Jim Frazer:

Wow, that's interesting. You talked about Power BI and simplified interfaces for, say, operators. Is there a spectrum of display devices? I'm thinking here of augmented reality situations and things like that.

Kevin Smith, Canvass AI:

That's a great question. One of the things we find is that a lot of people in industry view AI as a standalone solution. We have a slightly different view. While Canvass is built as a platform and can be a standalone solution, because we recognize some people use it that way, we actually encourage people to integrate it with their other types of technology. Our long-term vision is that AI becomes the behind-the-scenes layer connecting lots of different pieces of technology. For instance, you mentioned augmented reality: the AI can do the analysis that feeds a data stream into augmented reality, because AR often involves more than just the analytics; it gets into information access in the field, or just being able to pull up real-time engineering diagrams so I have access wherever I am. So it's a good example. Think of AI as more than just a standalone tool; think about how it connects different technologies, whether that's 3D virtual plants, augmented reality through a heads-up display or an iPhone, the displays I'm using on the console, or how I'm connecting and augmenting my first-principles models around it. AI really should be seen as a connecting technology, because it allows you to do more with the data than a standalone platform would, and that's where there's a lot more value. Take your example of the self-driving car: the AI is behind the scenes connecting all the different systems to the user.

Jim Frazer:

So that leads me to believe that there's a large assemblage of interoperable API's that allow me to plug into all these different user interfaces.

Kevin Smith, Canvass AI:

The short answer is yes. We're making an open platform so you can connect with anything that's out there, and for a company like Canvass it's an ever-growing library, because you're always looking at how to stay connected to whatever software technology other people are using. That's exactly right.

Jim Frazer:

You know, the whole concept of AI really is a bit daunting to the uninitiated. What can you say to those who might be in plant operations or facility management, or the financial managers of those properties, about what's real in adopting AI today? And where are we evolving to in the short term? Because I know Moore's Law doesn't slow down. So where are we today in terms of the business case and the value proposition for Canvass AI in particular, and for AI in plant operations in general?

Kevin Smith, Canvass AI:

So I guess two things. The first thing I tell anybody is: start small, and as you get started, have realistic expectations. Going back to your earlier question about the Gartner study, I think a lot of initiatives fail because people start with expectations that are too big. They create problems that are very difficult for people to get their heads around, those problems start consuming too much time, and people aren't getting value for that energy right away. So it goes back to the common thread throughout all this: if you start small with things that have value today, you're in a much better position to tackle more complex problems later, because people now have faith that the energy is building toward a solution that will make sense for them. On the other side, I say start with realistic expectations. I have a good friend who's the Chief Digital Officer for an industrial power company. We were catching up a few weeks ago, and he made a comment that stuck with me. He said, "You know what? If I get an AI solution that gets my engineers 50% of the way to the right answer, that does half the work for them and takes on that initial analysis and thinking through the different connections to solve the problem, to me that's still value added. I've saved half their time on that solution." So again, it comes back to realistic expectations as you get started. It's like your self-driving car: you don't expect to go from driving a stick shift to a self-driving car all in one step. You start with cruise control, then adaptive cruise control, and you build from there. Approach AI from the same standpoint: start small, with realistic expectations. Quite frankly, we have clients who started with AI applications that honestly could have been done with a different system.
We had somebody who started with a set of boilers and was essentially using AI as a burner management system. Now, burner management technology has been around a long time; it's not something that by itself needed AI to solve. But their view of it was twofold. One, when they got started, it was: "Hey, this conditions my organization to start using AI. I'm starting to learn what AI is, and that puts me on the path I know I want long term." But then they built on it. Once they had the basic burner management system running, they did something their existing systems couldn't do: they brought in ambient conditions, because ambient conditions are very hard to model with other technology platforms. So they started with something very manageable, that quite frankly could have been done with a lot of other types of technology. Then, once people were used to that, they brought in another layer of data that the existing systems couldn't process. Now they've done two things: they've conditioned people to AI, and they've shown them real value they couldn't get otherwise, and they've started that journey. So as you get started, think through small things that impact people's day-to-day, to start conditioning them to use it, and then work your way up to more complex solutions.

Jim Frazer:

I'm sensing that this is far more of a marathon than a sprint, and that at every iteration there will be a delivery of increasing value as each of the algorithms compounds upon the others.

Kevin Smith, Canvass AI:

That's it exactly. And it's one of two approaches. In one, people start with more limited datasets and a smaller problem, and then, as they mature in their use of AI, they add additional datasets and enrich the problem they can solve. Other people will apply AI to discrete problems, lots of similar discrete problems, to start building an overall ecosystem of AI use and data, and they start connecting the dots later on. Different people's AI journeys are different, but starting smaller, and really creating a pull from the people in the company, is where you're going to get value. So, to use your analogy, remember you are running a marathon: you don't want to exhaust yourself at the beginning. You want to build that endurance into the organization, build that vision of where you're going, start with those first few steps, and add to it as you go. And realistically, what I tell anybody is: yes, you start with a clear vision of where you want to end up with AI, but I guarantee you the path you end up taking is not going to be the path you think you'll take. Things will change, something will come up, you'll learn something, you'll fail somewhere, and those things will cause you to change your path a little. If people are engaged, they'll ultimately navigate all those twists and turns. But if people are just rolling something out because, quite frankly, it's in their objectives to roll it out, and it doesn't have value to them, then when they hit a roadblock they throw their hands up: "Hey, we've done the best we could." And that's when you see those failure rates. So engage people early, start small, and make sure people's expectations are realistic.
But again, focus on things that have value, and ideally things that have emotional value to your employees. At the end of the day, that's going to yield the best results for you.

Jim Frazer:

Well, Kevin, this has been very enlightening today. We're nearing the end of our time together. Do you have any last comments for our audience out there?

Kevin Smith, Canvass AI:

For last comments: certainly we're happy to follow up with anybody, whether you want a discussion about Canvass or a discussion about some of the organizational components of this. You know, we approach AI the same way we approach our customers: we're looking to add value. So if you approach your project the same way, think about how you add value to people. That's embedded in everything we do, and Canvass is very much happy to help in any way we can.

Jim Frazer:

Great. Well, lastly, Kevin, if someone would like to reach out and speak with you, can you share some contact information?

Kevin Smith, Canvass AI:

Yes, you can reach me at ksmith@canvass.io, or my phone number is 720-257-4053. And you can certainly do an internet search for Canvass AI and you will find our website.

Jim Frazer:

That's great. Well, once again, today our guest on the Smart City Podcast was Kevin Smith, the Chief Commercial Officer of Canvass AI, based out of Toronto. It was a great 45 minutes. Thanks, Kevin, and hopefully we can have you on again soon.

Kevin Smith, Canvass AI:

Great. Thanks a lot, Jim. Appreciate it. Take care.

Jim Frazer:

Thank you. Bye.

ARC Advisory Introduction:

Broadcasting from Boston, Massachusetts, the Smart Cities Podcast is the only podcast dedicated to all things smart cities. The podcast is the creation of ARC Advisory Group's Smart City practice. ARC advises leading companies, municipalities, and governments on technology trends and market dynamics that affect their business and the quality of life in their cities. To engage further, please like and share our podcast, or reach out directly on Twitter at @SmartCityViewpoints or on our website at www.arcweb.com/industries/smart-cities.