The Sustainability Podcast

AI for Good: IBM Chief AI Officer Seth Dobrin and Urban Institute CIO Graham MacDonald

February 25, 2022 | The Smart Cities Team at ARC Advisory Group | Season 6, Episode 7

What are the practical applications of AI in smart cities today that can actually benefit citizens? In our latest Smart City Podcast, we interviewed Seth Dobrin, Chief AI Officer for IBM, and Graham MacDonald, CIO and VP of Technology and Data Science at the Urban Institute, about their collaborative AI-based approach to mitigating the adverse effects of gentrification in American cities and communities.

The theme of this podcast was "AI for Good." There's a lot of hype surrounding AI, and it can be difficult to find truly practical and actionable use cases, but this is a great one, with direct benefits for people affected by increasing gentrification in many US cities and communities. It's also a great example of a public-private partnership that is producing results. The focus of this project was to put the tools and technologies offered by companies like IBM into the hands of nonprofit research organizations like the Urban Institute, so they can do research and execute projects that provide the greatest benefit to the people who need it most.

Seth describes the technology side of the solution, and Graham describes the outcomes. Here's what Seth had to say about the project:

"How do we help the Urban Institute better manage gentrification in the communities? How do we ensure that it's done in a trustworthy manner? How do we ensure that it's fair and free of specific types of biases, or in this case, how do we help them identify specific types of biases? How do we make the models transparent? How do we make them robust? And how do we ensure that we're preserving the privacy of people that are being impacted by these models? What we did was we brought our tools, our Cloud Pak for Data with Watson Studio, to the Urban Institute, along with some folks from the data science and AI elite team, to work on this problem."

Armed with these tools, Graham MacDonald and the Urban Institute team were able to develop a solution to measure and mitigate the adverse effects of gentrification, in which affluent residents move into long-established middle- and working-class neighborhoods whose longtime residents often find themselves priced out of their own homes. According to Graham:

"We initially worked with HUD to define neighborhoods that were gentrifying, which were declining, and which were inclusively growing. The vast majority of neighborhoods happened to be in the unchanging or inclusively growing category. And there are a small number of neighborhoods that are in the declining and gentrifying category. And what we were trying to do was to find those neighborhoods, and then use machine learning, taking the data that we have to project or predict, in real-time, what is happening right now."

You'll have to listen to the rest to get the details, but it's an excellent real-world example of a practical application of AI that provides true economic and social benefits to citizens and communities.  

--------------------------------------------------------------------------

Would you like to be a guest on our growing podcast?

If you have an intriguing, thought-provoking topic you'd like to discuss on our podcast, please contact our host, Jim Frazer.

View all the episodes here: https://thesustainabilitypodcast.buzzsprout.com

Transcript:

Announcer: Broadcasting from Boston, Massachusetts, the Smart Cities podcast is the only podcast dedicated to all things smart cities. The podcast is the creation of the ARC Advisory Group Smart City practice. ARC advises leading companies, municipalities, and governments on technology trends and market dynamics that affect their business and the quality of life in their cities. To engage further, please like and share our podcasts, or reach out directly on Twitter at Smart City Viewpoints, or on our website at www.arcweb.com/industries/smart-cities.

Larry O'Brien: Hello, and welcome to another installment of the ARC Smart City Podcast. I'm Larry O'Brien, Vice President of Research at ARC Advisory Group, and today we have a couple of very interesting guests with us. We have Seth Dobrin of IBM. Good morning, Seth.

Seth Dobrin: Good morning.

Larry: And we have Graham MacDonald of the Urban Institute. How are you doing, Graham?

Graham MacDonald: Great, great to be here.

Larry: We were talking a little bit about what we're going to call this episode. It's a little different from what we normally do. Normally, if you've tuned into our podcast, we talk a lot about technology; we don't talk a lot about the benefit of technology. So today we're going to do something a little bit different. We're going to talk about AI, but we're also going to talk about how we can make neighborhoods better, how we can use AI for good. These are two things I really like to talk about: practical applications of AI, which we don't see a lot of, and how we can use AI to actually do some good. So why don't we start by introducing you guys, and we can start with Seth. Seth, why don't you tell us a little bit about your role at IBM, what you do, and how you became involved in this project we're going to talk more about?

Seth: Yeah, thanks. I appreciate the opportunity to talk to everyone about this really important topic. So, Seth Dobrin, I'm the Global Chief AI Officer here at IBM. My responsibility is multifold. First and foremost, I'm responsible for defining the AI half of IBM's hybrid cloud and AI strategy across the whole company. So what does that mean for IBM? How do we go to market? How do we do communications? And, in fact, how do we engage with folks like the Urban Institute around AI for good? How do we bring the power of not just IBM technology, but the people inside IBM who are very passionate about helping NGOs implement AI, to make themselves better and to make the world a better place? And that's how we started interacting with Graham and the Urban Institute.

Larry: I think it's great that you're doing this partnership; these public-private partnerships really produce a lot of interesting results. And Graham, I'll have you introduce yourself, since you're from the Urban Institute.

Graham: Thanks, Larry. Yeah, my name is Graham MacDonald, and I'm at the Urban Institute. As you said, the Urban Institute is a nonprofit, nonpartisan research organization, and our goal is to elevate the debate. What we try to do is bring evidence and data into the public debate, to help folks make better decisions and to improve outcomes for communities, governments, and what we call changemakers, who could be folks from the corporate sector, from the government sector, individual citizens, advocates trying to make change, and hopefully we make that debate a little bit more evidence-based. And I lead the technology and data science team here at the Urban Institute.
We really try to take leading-edge technologies and use them in our data systems to help drive that change. So we often rely on folks like Seth and the great people at IBM, who are really at the cutting edge of this work, to help us push even further into that domain. So I'm really excited to be here, and to be here with Seth, to talk about this project.

Larry: Yeah, we're glad to have both of you. So Seth, maybe you can explain a little bit about what this project is and how we're using this artificial intelligence technology to produce better results in neighborhoods and make lives better for people. When we talk about smart cities at ARC, we usually talk about the benefits to certain groups of people, right? There are economic benefits, there are social benefits, there are environmental benefits. You know, I usually find that whatever benefits the citizens and the people usually provides a lot of other benefits too. But let's talk a little bit about how we're using technology in this case to drive better results and make lives better for people.

Seth: Yeah, so maybe I'll talk about the technology and how we're approaching this, and let Graham talk about the outcomes, since he's closer to them. You know, I think it's important to remember a couple of things. First, a lot of nonprofits, in fact most nonprofits, don't have access to the same resources and people that for-profit companies often do. It's harder for them to attract talent, because they don't pay as much, even though they may be working on something that's socially good, and there are people, like Graham, who will give up some salary in order to work on things that benefit society overall. So how can we as companies better engage with these nonprofits to help fill that gap, so that folks like Graham can have help in executing projects, because he can't do it all on his own? And equally important: how do we provide access to the tools and technologies that are required to not only build AI, but to operationalize and productionize AI models? Because it's one thing to build a model; it's another thing to put it into production and govern it appropriately. So we brought tools and technologies like that to bear here. Plus, in today's world, especially when we start looking at things like this, we ask: how do we help the Urban Institute better manage gentrification in communities? How do we ensure that it's done in a trustworthy manner? How do we ensure that it's fair, free of specific types of biases, or in this case, how do we help them identify specific types of biases? How do we make the models transparent? How do we make them robust? And how do we ensure that we're preserving the privacy of the people being impacted by these models? What we did was bring our tools, our Cloud Pak for Data with Watson Studio, to the Urban Institute, along with some folks from the Data Science and AI Elite team, to work on this problem, which I'll let Graham get into now.

Larry: Yeah, Graham, tell us a little bit about the problem here. We're talking about gentrification; I think most people probably have some idea what that means. I witnessed it personally, living in Austin, Texas, for several years. I definitely saw that happening there. So tell us about your side of this.

Graham: Sure.
So, you know, although we're called the Urban Institute, we study every single place in the US, rural and urban, but many of our projects have a special focus on cities. And this is an issue we've been studying for a long time. The upward pressure on rents, and folks with more resources moving into neighborhoods, is often what we refer to as gentrification. And the results of gentrification we're usually most interested in mitigating are those where folks who have lived there for a long time call the neighborhood their home, but for whatever reason can't stay. Maybe they're retirees who can't afford the rapidly increasing property taxes. Or maybe they're folks who have a strong family network there and rent there, but the rents are just rising too rapidly, and they need that childcare to stay in that neighborhood, and those resources, the opportunities and jobs that are nearby, whatever that may be. We want to preserve that. We want to ensure that everyone has access to those types of resources and neighborhoods, because we thrive as a country when we all have access to those resources. Similarly, we're also focused here on what I would call the anti-gentrification side a little bit as well, Larry, which is the other side of that coin. Folks like us, and the Department of Housing and Urban Development, and local governments, as well as state governments, are very interested in preventing decline, because we're not only talking about really hot cities like Austin, as you said; we're also concerned about Rust Belt cities, where neighborhoods have less opportunity, and we want to make sure there is intervention so that those folks also have access to opportunity. So when we're talking about what the "there" there is, it's to understand what is happening to these neighborhoods in real time, and to intervene, to somehow tip the scale so that there is equitable growth, because we know that when all people have access to opportunities, people do better. And the real core of this project is that we don't know where that's happening right now. That's the big puzzle piece we tried to solve.

Larry: Yeah, you can't control what you can't measure, right? So how are we using AI to measure this? And what are we measuring? How does this whole thing work? Can we talk a little bit about that?

Graham: Sure. I'll talk a little bit about the measures, and Seth is more the expert on the AI.

Larry: I think it's interesting, too, that you bring up that this isn't just about anti-gentrification; this is about giving people access to resources so they can stay where they are and be successful in their communities long term, right? Not just pushed out, whether that's by gentrification or because, like you said, you live in a Rust Belt town. Or, coming from Texas or driving through Colorado, there are tons of little towns that are struggling. Those people need help, too. So I think that's a good way to balance it out.

Graham: That's right.
That's actually the problem the Department of Housing and Urban Development originally approached us with. Where we started experimenting on this AI journey with them was this: the data we have right now on neighborhood change, on which neighborhoods this is occurring in, is anywhere from two to seven years old. And the problem is, we want to know where it is now, right? We want to make a difference. We can't say, oh yeah, that's great, it was that way five years ago. As we all know, whether you lived in Austin, or, as I can see living here in DC, or whatever city you live in, we can all point to neighborhoods that have changed massively in the last few years. And then we put COVID on top of it, where the most recent data we have is from before COVID, and now we're talking about housing markets that have been upended as a result of COVID. And we're trying to figure out, well, where do we intervene now, not where should we have intervened two years ago, as the government policy response. So we initially worked with HUD, there's my DC acronym coming out, the Department of Housing and Urban Development, to define which neighborhoods were gentrifying, declining, inclusively growing, which is what we like, or unchanging, meaning there just wasn't a ton of change happening. The vast majority of neighborhoods happened to be in the unchanging or inclusively growing categories, and a small number of neighborhoods are in the declining and gentrifying categories. What we were trying to do was find those neighborhoods, and then use machine learning, taking the old data that we have, to project or predict, in real time, what is happening right now. And that's where the AI came in. We had done a lot of work with HUD to get a model to where we wanted it, where it could be accurate at what we're doing. But what we actually found out through this process, which is really fascinating, is that accuracy was almost not important at all in this project. The two most important things were, first, precision, the relative accuracy with which the correct ones were predicted, which matters because that's where government cares about spending its money; they want to spend the money in the most efficient way possible. And second, we did really well at predicting unchanging and inclusively growing neighborhoods, but we did a terrible job at predicting the declining and gentrifying neighborhoods, which are the ones we care most about intervening in. So we worked with the IBM team to construct different measures of success for this project. The technology was both about getting a better AI model and about better defining it to meet the goals we wanted to meet.

Larry: Well, let's talk about that. Let's talk about the model and how we developed those goals to arrive at an outcome. Seth, do you want to comment on that?

Seth: Yeah, I also want to jump back to something that Graham mentioned. And since you said this is usually a more technical podcast, maybe I'll dig a little deeper. So Graham was talking about the metrics we were using to measure the model's performance.
Oftentimes we think about accuracy. And if you think about what accuracy is, it's: did I hit the bull's-eye? Not necessarily did I hit the bull's-eye every time, but did I hit the bull's-eye? Precision looks at the grouping. Are we precise? Are the darts going in the same spot, or about the same spot, every time, regardless of whether they're accurate? In a case like this, where accuracy is less important, you want to make sure you measure positive predictive ability, or precision rather, because it's easier to move a cluster to be more accurate than it is to tighten up a dispersion around some level of accuracy. So I guess the point is, the metric is not always "am I accurate?" The metric is use case by use case, and you need a thoughtful design of what your metric is going to be. There also needs to be some business metric associated with it, some of the things that Graham talked about.

Graham: Real quick there, Seth, I'll say that in this case I'll define precision the way the government defines it. Working with HUD, it's: are we most efficient with our money? Which is different, because you can be very accurate but have to spend a trillion dollars to get to all the gentrifying neighborhoods. What they care about is: how can we spend less money and still hit the same number of neighborhoods? And that's where increasing the precision with Seth's team was really key, making these models more efficient in the way we target and spend that money. That's much more important to them than whether we move a point in accuracy.

Larry: Okay. We're getting a little introduction to data science here, I feel. Good.

Seth: Actually, it's interesting. Sometimes we go in and help customers, whether they're nonprofits like the Urban Institute or others, and we solve, or help them solve, the wrong problem. So in this case, we could have helped them solve how to spend a trillion dollars and help one neighborhood, right? But we actually did a good job of defining the outcome up front. You can solve the wrong problem using the tools of data science and AI if you don't do a good job up front.

Larry: That's a good point, and I think it's something everybody should take away from this podcast.
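For listeners who want to see that metric point in code: in the toy example below, a model that misses most of the rare "gentrifying" tracts still posts high overall accuracy, while precision and recall on the rare class tell the real story. The numbers are invented for illustration; this is not the project's code.

```python
# Toy numbers, invented for illustration. With rare classes, overall accuracy
# can look fine while the tracts you care about are missed; precision tells
# you how efficiently flagged tracts would use intervention dollars.
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 0 = unchanging / inclusively growing (common), 1 = gentrifying (rare)
y_true = [0] * 16 + [1] * 4
y_pred = [0] * 16 + [1, 0, 0, 0]   # model finds only 1 of the 4 rare tracts

print(accuracy_score(y_true, y_pred))    # 0.85 -- looks respectable
print(precision_score(y_true, y_pred))   # 1.00 -- every flagged tract is real
print(recall_score(y_true, y_pred))      # 0.25 -- but most are still missed
```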
Larry: So let's talk about how you're doing it. How does this play out on the ground? We're measuring different variables in different neighborhoods and putting that into a model, I assume, and trying to use that model to predict whether a neighborhood is headed in the right direction?

Seth: Yeah. So, you know, AI doesn't happen in a vacuum; data science doesn't happen in a vacuum. In order to do this, you need data, and Graham mentioned the data. We have pretty extensive data going back a few years; we have less data more recently. COVID kind of made the data that did exist pre-COVID somewhat irrelevant. Maybe "irrelevant" is not the right word, but less...

Graham: I think it's probably a pretty good word.

Seth: "Less informative" is probably a better word, because it didn't completely throw everything away, but it changed the game. It was an outlier, right? A couple of outlier years. But it's the world we live in now, the impacts of COVID. And so, how do you take that data, and how do you leverage it in order to make predictions? In this case, the group looked at eight metropolitan areas along the East Coast and central US, and they analyzed the data at the census-tract level. We also pulled in additional data to help, so Zillow home value index and rent index data was used. There's Housing Choice Voucher program data that's publicly available. And there are other data sets that can be pulled in that can help inform us. So the point is, it's not one data set, and it's not necessarily only data that an entity or a company owns; there's lots of public data out there, and third-party data, that can help you fill in gaps around what's going on. For instance, the Zillow data very much informed the changes that were happening due to COVID, because it was more real-time, more up to date, than the data the government provided to the Urban Institute.

Larry: Yeah, and as an analyst covering the smart city space, I see that a lot. There's a lot of data from a lot of different sources. I cover smart city platforms; a lot of my previous podcasts have been around that smart city platform topic, which does tend to get kind of technical. And we end up talking about things like taking data from all these different sources, whether it's publicly available government data or, like you said, Zillow data; there are a lot of different data sources out there. And the government doesn't provide real-time data, right? All the data you're getting from the government is going to be a year old, two years old. And they provide great data, I go to government data all the time when I'm doing research, but it's that dynamic, real-time aspect. And with COVID, and climate change, and everything else, things are changing a lot faster in the urban landscape, and even in rural cities and communities. One of the ways I put it is that it feels like everything is happening at once, because it is, right? You've got COVID and supply chain problems and climate-related issues and things like that. So we're taking this data from all these different sources and combining it to form a common model. I'm not sure how deep you want to get into that, but I assume you have this common data model that you've put together. Can we talk a little bit about that, in layman's terms our listeners can understand?

Seth: Graham, I'll let you take that one.

Graham: Sure, that sounds great. So essentially, what we're doing is taking the ground-truth data the government provides. We're taking the American Community Survey data, the most recent we have, and using it as the base. We recognize that that's what governments, local activists, and other folks are using now to determine neighborhood change. They're looking at the average, and in this case, because we're looking at the neighborhood level, and because the census combines years, we're talking about a 2015-to-2019 average of the American Community Survey that currently serves as the picture of what is happening in 2021. So that's how we start, and because that's what people are using, that's the baseline we use to measure our progress.
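As a rough sketch of the data-assembly step Seth and Graham describe, lagged "gold standard" data can be joined with fresher signals at the census-tract level. The frames and column names below are hypothetical stand-ins for the ACS, Zillow, and voucher data mentioned above.

```python
# A hedged sketch of the data-assembly step: join the lagged "gold standard"
# with fresher signals at the census-tract level. All names are hypothetical.
import pandas as pd

acs = pd.DataFrame({            # lagged survey baseline (e.g. ACS five-year)
    "tract_id": ["T1", "T2"],
    "median_rent_2015_2019": [1200, 950],
})
zillow = pd.DataFrame({         # fresher market signal (e.g. home value index)
    "tract_id": ["T1", "T2"],
    "home_value_index_now": [410_000, 220_000],
})
vouchers = pd.DataFrame({       # fresher program signal (e.g. voucher counts)
    "tract_id": ["T1", "T2"],
    "voucher_count_now": [35, 110],
})

# One feature row per tract: the lagged baseline plus the recent signals.
features = acs.merge(zillow, on="tract_id").merge(vouchers, on="tract_id")
print(features)
```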
Graham: And then we say: let's take all these other sources Seth described. Let's take Zillow, let's take the current Housing Choice Voucher data from today, let's take Home Mortgage Disclosure Act data, let's take whatever data we have available that has more recency to it, and let's use it, starting from the gold standard, to predict what is happening now. And we can basically back-test it: we can use previous years of data, knowing the gold standard from the later years. So we can use 2012 to predict 2016, or something like that, to see how accurately our model performed in the past, and use that as a benchmark for the future. So basically, we're taking this gold-standard data, using some of the more real-time data sources, and predicting, or fast-forwarding, to the present day. And one thing to note, which I think was really interesting and a real benefit of the partnership with IBM: number one was defining the best way to do this in terms of weighting our metrics, so that we actually got gentrification and decline neighborhoods right, which we did, which I thought was awesome, because we didn't manage that in our HUD partnership. Number two was massively improving our precision, the efficiency with which government funds could be spent. And number three is that we actually didn't end up with a common model; we ended up with city-level models at the end of the day, because what we found was that a common model did well, but city models did better. There are just factors that are different about every city, which any longtime resident of a city will tell you. But as data scientists, we always want one model that describes it all, right? What happened was, we ended up focusing on these individual city-level models, and there are two benefits. One is that we're able to focus on city-level factors and make the models more accurate for each city. The second, and Seth can go into this later, is that we ended up sharing the code and the Cloud Pak starter kit online, along with our white paper and cheat sheet and things like that, and we have partners in these local cities who we hope will take the data we added to the model and add their local data sets, their local knowledge, and use that to make this more accurate for their city or their local council people. That's what we hope will happen in the future with this: A, we're more accurate by looking at the city level, and B, by open-sourcing this information, we enable those local actors who are good with data to add their perspective, because, as we all know, we all know things about cities that aren't described in these data sets, and they can tell that story and help us predict neighborhood change better.
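Graham's back-testing idea amounts to a temporal split: train on older features paired with later gold-standard labels, then score on a held-out, more recent period before trusting the nowcast. A minimal sketch, with invented data:

```python
# Minimal back-testing sketch with invented data: train on older features
# paired with later gold-standard labels, then score a held-out, more recent
# period before trusting the model to "fast-forward" to the present.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score

panel = pd.DataFrame({
    "feature_year": [2012] * 4 + [2016] * 4,
    "rent_change":  [0.01, 0.18, -0.04, 0.02, 0.00, 0.21, -0.06, 0.03],
    "gentrifying":  [0, 1, 0, 0, 0, 1, 0, 0],  # later gold-standard label
})

train = panel[panel.feature_year == 2012]   # past features, known outcomes
test = panel[panel.feature_year == 2016]    # held-out, more recent period

model = RandomForestClassifier(random_state=0)
model.fit(train[["rent_change"]], train["gentrifying"])
pred = model.predict(test[["rent_change"]])

# Benchmark on the held-out years; if this is acceptable, apply the same
# model to today's features to nowcast current neighborhood change.
print(precision_score(test["gentrifying"], pred, zero_division=0))
```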
Seth: Yeah, and I think there are a few underlying points in there that are good to pull out. One is that we need to look at, again, what problem we're trying to solve. It's not uncommon that we, the Data Science Elite team or IBM Consulting, go in to try to help a customer, and they're trying to solve a big problem, in this case housing, and they're trying to solve it globally, globally meaning not on a granular scale. The same thing happened when we went in with an insurance company. They were trying to do stuff with Medicare, trying to predict which Medicare claims were going to get rejected across the whole US. And really, you need to look state by state, because every state has its own way of doing Medicare, its own way of incorporating Medicaid, its own requirements. So in that case you actually need to build 50 models, because everyone's different, and they had been trying for years to build one model, and it just doesn't work. A similar thing happened here: the Urban Institute was trying to solve this big problem, and the Data Science and AI Elite team helped Graham and the data scientists at the Urban Institute focus down and say, look, the problem is really at the community level, or the city level, so let's build a model that solves that, and build a pipeline of data. So the features, what we call features, which are the data elements that go into the model: the way this project accelerates things is that the same types of data, the same features, are brought together and transformed, and you're creating a data pipeline, so you can now bring in information from a specific city to feed the model for that specific community or city. It helps accelerate it, and that's the cookbook, if you will, the set of Jupyter notebooks, which are basically the documented home of the code. Now anyone can take that for any community, pick it up, insert their data, build the right features, and have the model get trained on them, so it delivers a good outcome for them. We also need to remember that the outputs of these models are probabilistic, not deterministic, meaning they're going to give you a likelihood. The model is not going to give you the answer; it's going to say, this is what's likely to happen. We see this with elections, right? Nate Silver is this great data scientist who predicts elections, but even if he's 90% accurate, he's still going to be wrong 10% of the time. So understand that AI is a prediction, and a prediction is probabilistic, not deterministic: we're not determining the answer, we're telling you the answer that's likely to happen. And probabilistic, in this case, is probably the better thing, because now it's going to give the city planners, and the nonprofits involved in the communities, a set of possible outcomes that they can look at and put in the context of their community. You know: here are the five things you should be looking at if you're trying to manage community change in Austin. And now the city and community leaders in Austin can go in and say, well, you know what, this one at the top really doesn't make sense for us right now, and let the humans get involved and decide that the second-best and third-best ones are really the levers we need to pull to make this a better place. Having humans in the loop, and helping them reduce the near-infinite possibilities down to something a human brain can get its head around, is really the value of something like what we've built with the Urban Institute here.
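Two of Seth's points, city-level models and probabilistic rather than deterministic outputs, can be sketched in a few lines: fit a separate model per city, then report a likelihood rather than a verdict. The data and model choice below are hypothetical.

```python
# A few-line sketch of both ideas, on hypothetical data: one model per city,
# and a probability for planners to weigh rather than a hard verdict.
import pandas as pd
from sklearn.linear_model import LogisticRegression

data = pd.DataFrame({
    "city":        ["austin"] * 4 + ["cleveland"] * 4,
    "rent_change": [0.02, 0.19, 0.01, 0.22, -0.05, 0.01, -0.07, 0.00],
    "gentrifying": [0, 1, 0, 1, 0, 0, 0, 0],
})

models = {}
for city, rows in data.groupby("city"):
    if rows["gentrifying"].nunique() < 2:
        continue  # this toy sample has no positive examples for Cleveland
    model = LogisticRegression()
    model.fit(rows[["rent_change"]], rows["gentrifying"])
    models[city] = model

# Probabilistic output: a likelihood to review, not a deterministic answer.
austin_now = pd.DataFrame({"rent_change": [0.20]})
proba = models["austin"].predict_proba(austin_now)[0, 1]
print(f"Austin tract: P(gentrifying) = {proba:.2f}")
```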
Larry: Yeah, I mean, you're enabling humans to make better decisions. At the root, I think that's what a lot of this is about. We're enabling people to make better decisions; we're not just relying on the technology to make the decisions for us.

Seth: Yeah. And the value of AI is to augment human intelligence, not necessarily to replace it.

Larry: And I think that's a great point to bring out in this podcast; thank you for doing that. I do want to talk about resources later, because you mentioned open source, and people can take this technology and use it in their own communities. But let's talk first about outcomes. We've talked a little bit about data sources, building models, and how we can drive good results by having the human in the loop. But what are some of the results, some of the outcomes, you're seeing right now with this project?

Graham: As I mentioned, this is something we released a couple of months ago, so I think we're still working with folks in the field on this one, but I think it's very promising. We've talked to some people who are very interested locally. We have local data intermediaries in cities across the country; not surprisingly, as the Urban Institute, we have a vested interest in knowing what's going on with our data people in different cities, and they've been using a lot of different pieces of our work throughout the pandemic to try to get a better sense of what's going on in real time. Because what we hear in the field is that everyone wants real-time data right now, every government entity, every local data player. You have something from before COVID? That'll be a good story we can tell our kids one day, right? But right now, we want to know how we can improve the situation, because things are changing dynamically, in real time. And I think if there's one thing this pandemic has shown us, it's that we didn't have good real-time data systems before, especially in our sector. And there is a ton of demand for this. We at Urban have been advising state, local, and federal governments on this issue, because our goal of elevating the debate means providing better data, and often our answer is: we don't have it. But solutions like this are very promising, and people are really interested in taking it to the next level. What we've had to work through, though, is that, as Seth said at the beginning, there's not a lot of data capacity in our sector. The federal government, big city governments, maybe big state governments have some data analysts, but even then, you'd be surprised how few can just take this and run with it as much as we'd like them to. So we've been very focused on our data intermediaries, folks who are university partners or city partners, who are really data-savvy, across cities. They are the ones who I think are most able and most interested in taking this, tweaking it to their local needs, and then helping their local folks understand what's going on. We're also directly engaging with folks at the federal level, like I mentioned with HUD, but also with other federal agencies, on ways they can think better about the data infrastructure they currently have and how it might play into machine learning or AI in the future.
Because some of them are at that stage, and others are just at the very beginning of their data maturity, and really need folks to help them move up that ladder. So right now, we're really excited about the excitement out there about helping people adopt this. We obviously haven't seen the effects of this going forward yet, because it's only three months old, so we're looking forward to that. I think the demand for this real-time data is almost infinite at the moment.

Seth: Yeah, data is a huge problem for governments in general, right? Look at the US during COVID. There was no standard for COVID data, for public health data. Every state had its own way of collecting data; they owned the data sets they collected, so not every state collected the same data, and there were disparities in what was reported and how it was reported. Even the CDC data during COVID was typically, at best, seven days out of date, if not more, and WHO data was even more outdated. So we have government officials making decisions on data that we'd want to be within a day old, and instead we're talking about weeks between the data and the decisions and actions they're taking. I think we're lucky in the housing area that it's a little more standardized, because HUD has done a good job of saying, okay, if you want to participate in these kinds of loan programs or these kinds of access programs, you need to report this data, and you need to report it in this way. So they've done a really good job. But with government data in general, and not just in the US, we need to do a better job of enabling governments and municipalities to collect better data, so that we can make better decisions across the board.

Graham: Yeah, I agree. And COVID was really the catalyst for all of this. There wasn't much real-time data at all. The only real-time data was coming from us and Johns Hopkins. We had 50 people working literally around the clock trying to pull data together for COVID, and Johns Hopkins had a similar level of activity. And these are 50 highly skilled people; that's what they were doing, literally 24 hours a day, around the clock.

Larry: Yep. And I witnessed a lot of cities and communities feverishly trying to put together, you know, Power BI-based data dashboards, trying to glue everything together themselves, or, as I was urging, maybe invest in a smart city platform that lets you take all this data from different sources and combine it into a unified environment. But then you have that trick of visualization, right? How do you look at the data, and how do you make those decisions about the data?
I think it's important to talk about the work process aspect of that. Like you mentioned, Graham, you have these data intermediaries on the ground who can actually take this data and start to drive some decision-making, maybe public policy change; the humans who are going to close the loop on this. And I think that's a pretty common problem for a lot of people implementing this technology. It's one thing to put a model together, get the data flowing into the model, and actually look at the data; closing the loop and deciding what we're going to do with it is always a problem. I've covered manufacturing for 25 years, and they still have that problem.

Seth: Yeah, there are a few aspects to that. One is that if you're trying to automate a process or workflow, you don't really need to worry about the visualization; you just plug it into that process or workflow. However, if you're building a new kind of capability, or a new way of interacting, you need some design involved, you need some storytelling ability. In fact, when I initially built the Data Science Elite team, I brought in a new skill set, and people asked, what is this? I started hiring data journalists. Think of the people who build the data stories at The New York Times, The Wall Street Journal, FiveThirtyEight. They provide a story around the business outcome that's desired, the data that was used, and the model that was built, such that it elevates the insights into the language the business is used to using, with visual tools and visual storytelling capabilities, and sometimes just text. That's data journalism, telling a story with data, and it's really important. And it comes out here with the Urban Institute in terms of how they surface this information to people who more than likely have no technical background, who don't necessarily understand the data, and who are certainly unlikely to know what a random forest or a gradient-boosted model is, or what an F1 score is. So how do you take that information and give them something meaningful?

Graham: Yeah, I want to jump in there really quick, because those are all things we do a lot of at Urban; I could talk a while about this. But what I appreciate about the IBM process, and what we appreciate as a nonprofit, is that we're never going to be the expert on any one of the machine learning methodologies Seth just mentioned, or on the right metrics, or things like that. We have really great data scientists, but we're never going to be at the very leading edge of everything. We like to adopt things that maybe Seth's team has built, or that others have built in the open source community, and apply them, because we know our audiences, we know the technology, and we can make that interchange work. So what was really great about this project is that the IBM Data Science Elite team came in and showed us some of these resources, where you can go with an AutoML solution.
So essentially, the idea is that you start with a tool that does all the fancy choosing of a model for you, picking the best model for the metric you're trying to optimize, which previously took a ton of time and expertise. We don't really have many people on our staff who can say, oh, for this particular problem you really need a regularized model, or actually boosting; of course not, we don't have that. So you have this automated tool that says, okay, here's the most accurate model, and now we can focus on the metrics and the communication, and those are really the most important parts of the problem. As a nonprofit, we don't want to hire really expensive data scientists just to figure out the other part; we can't, first of all. But it really opens us up to do the things we're actually good at, which is: okay, we understand a policymaker is saying, I want to optimize government resources, and I want to focus on these neighborhoods; so we need other metrics, and we'll work with the team on that. And we really know how to visualize and communicate to folks, through our data visualization and storytelling teams, but also through our contacts; we have tons of contacts, and we understand how to talk to them. So let's do what we're good at, and not what isn't our specialty, right?

Seth: And what Graham just described is using AI to help accelerate, and to democratize a little bit, the process of building AI. That's a really great use of AI: to help provide resources and capability, kind of like Graham described. But even in places that have a lot of data scientists, how do you get more capacity out of them? How do you let them focus on the things that are bespoke to their business, as opposed to things that are essentially commodity? And that's what this AutoAI tool inside Watson Studio that Graham described does.
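In miniature, the AutoML step Graham describes boils down to trying several candidate models and keeping whichever scores best on the metric you chose to optimize. The loop below is a simplified stand-in for that idea, not Watson Studio's actual AutoAI API.

```python
# Simplified stand-in for the AutoML idea, not Watson Studio's actual AutoAI
# API: try several candidate models and keep whichever cross-validates best
# on the metric chosen for optimization (here, precision).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic imbalanced data standing in for tract features and labels.
X, y = make_classification(n_samples=300, weights=[0.85], random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

scores = {
    name: cross_val_score(model, X, y, cv=5, scoring="precision").mean()
    for name, model in candidates.items()
}
print(scores)
print("best by precision:", max(scores, key=scores.get))
```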
Larry: Yeah, I always thought that was one of the great promises of AI in these types of applications: letting people do what they do best. We're not trying to make data scientists out of everybody; we just want to let them do their jobs better using this valuable information. So we're totally on board with that. AI for good, right? That's the tagline for this podcast: AI for good. Can you talk about some resources? We talked about open source, and other people being able to use this. Where do people go to find out more about what you're doing, and where would they go if they're interested in using this open-source model and information?

Seth: Yeah, so we have this in a public GitHub repository that anyone can access. As I mentioned, it's in the form of a set of Jupyter notebooks that people can insert their own data into, as I described, and we can provide the exact link to you. We also have a white paper that describes all of this; it's not coming out, it is out. You can actually just Google "IBM Urban Institute" and it'll be the first hit that comes up; that's probably the easiest way to find it. And then, if people want to learn more about it and start using it, I'll volunteer Graham and myself: reach out to us, and we'll make sure you get access to the right resources and can dig a little deeper if you want.

Graham: A couple of things to add to that. One is that, on our end, we've written blog posts detailing the policy background, the initial efforts we had with HUD, and how that evolved into these efforts with IBM. And I'll also say that one of the things we worked on with the IBM team, which I thought was really cool: obviously, putting it on GitHub and making it open source was great, but there's also a starter kit in IBM Cloud Pak for Data. So if you don't want to worry about setting up your own resources, or figuring out how to install the right libraries in Python or whatever it may be, there's a pretty easy quick start for people who want to get up and running on the model really quickly. That was one of the things I really appreciated about the team: they really wanted to make this easily available.

Seth: Thanks for doing the marketing plug for me, Graham.

Graham: I thought it was a good idea.

Seth: Yeah. We build starter kits of all kinds into our products, and this is just another one, one that's for social good. And the Urban Institute is not the only AI for social good project we have going; we have half a dozen going at any one time. I think one of the things that differentiates us, as I said at the beginning, is that not only do we provide technology, free for a little while and then at cost, or close to at cost, in perpetuity; we also provide people to help deliver projects, to help them get real value, and to show them how you can use a combination of proprietary and open-source tools to actually execute these projects and get extra capacity. That's the whole AI for social good program we run at IBM. And we have something like 250,000 people at IBM now, most of whom are very passionate about some form of community service, about doing good for the communities they live in, and when we reach out for volunteers, usually within minutes we already have too many. So, not even talking about IBM products, IBMers in general are passionate about helping the communities they live in and making society a better, more inclusive place for everyone. And that's something I'd like to point out.

Larry: I think it's definitely worth pointing out; thank you for doing that. So if you want to find out more, just Google "IBM" and "Urban Institute" and you'll find a lot more; I just did that myself, and a bunch of IBM and Urban Institute links came up, so it does work. And if you want to find out more, you can email me, Larry O'Brien, that's O-B-R-I-E-N, at arcweb.com; I'd be happy to help our listeners out. Of course, if you want to find out more about ARC, you can go to www.arcweb.com. But we are just about out of time today, guys. Is there anything else we want to add before we wrap things up? It might be good to do a follow-up on this a few months from now, to see how this is working out on the ground and what kind of results we're getting.
Seth: Yeah, I'd be interested to hear from Graham how communities are using this and how they're getting value from it, for sure.

Larry: Well, I'm happy to do it, and happy to hear Graham talk.

Graham: I appreciate it. Can I make my field-building plug here, Larry?

Larry: Yeah.

Graham: All right. Everyone on my team will turn this off now if they're listening, because they've heard this too many times. But I'm really passionate about the field of AI and data science for social good, and I think there are so many ways to get involved, whether it's, as Seth said, IBMers raising their hands and volunteering, folks working at the Urban Institute, or folks who are civic data scientists in their own communities. I'm really passionate about building up those skills and capabilities, and I'd encourage people to follow along and get involved in this project or other projects that IBM or our team here at the Urban Institute have done. Take a look at them, take a look at the code, and get involved in this field. Because, as Seth said at the beginning, and I agree completely, we need more folks focused on how to apply this amazing technology and have it benefit society in general, and benefit NGOs or governments or other organizations. We really need those folks; we really need you to get involved.

Seth: Yeah, and just to build on that a little bit: I often get asked, "I want to be a data scientist, how do I do that? I've taken this MOOC or that MOOC, this online course or that online course, and gotten a certification." My response is: you need to actually do projects. Many of these people are skilled either in statistics or in coding, and they just want to bring those together. I tell them: find something you're passionate about, get the data, pull it together, and do a project. One thing I started saying over the course of the last year is: reach out to your favorite nonprofit. They are probably looking for people to help them, and you can help them with what you've learned and what you already know, actually execute projects, and put what you've learned in these classes to work. That's really when you learn it. So I encourage people who want to get into the field of data science or AI: like I just said, reach out to the Urban Institute, reach out to your local diaper bank, reach out to whatever your favorite nonprofit is, and see if there's help they need in this space. Because I bet there is.

Larry: That's great advice, thank you. We definitely do need more people, and we need more people to volunteer for things like this. Growing up, my family was always very service-oriented, with a lot of community service, so it's good to hear that this is going on. It's a refreshing change for this podcast that we're talking about stuff like this and talking about technology at the same time; usually we talk about the economic benefits and things like that. But this has been a great discussion, and I want to thank you guys for joining me today. Again, we've been talking to Seth Dobrin and Graham MacDonald; Seth is with IBM, and Graham is with the Urban Institute.
Hopefully we'll be doing this again in a few months to track how this project is going. But I want to thank you for joining us on this latest edition of the ARC Smart City Podcast, and we will see you next time. Thanks.

Announcer: Broadcasting from Boston, Massachusetts, the Smart Cities podcast is the only podcast dedicated to all things smart cities. The podcast is the creation of the ARC Advisory Group Smart City practice. ARC advises leading companies, municipalities, and governments on technology trends and market dynamics that affect their business and the quality of life in their cities. To engage further, please like and share our podcast, or reach out directly on Twitter at Smart City Viewpoints, or on our website at www.arcweb.com/industries/smart-cities.