October 21, 2021 By Seth Dobrin 3 min read

Cities across the United States are increasingly seeing their local communities affected by widespread neighborhood change, whether from blight or from the adverse effects of gentrification. While gentrification can often increase the economic value of a neighborhood, the threat it poses to local culture, affordability, and demographics has created significant and time-sensitive concerns for city governments across the country. Many attempts to measure neighborhood change have so far been backward-looking and rules-based, which can lead governments and community groups to make decisions based on inaccurate and out-of-date information. That’s why IBM partnered with the Urban Institute, a Washington, D.C.-based nonprofit research organization that for more than 50 years has led an impressive range of research spanning social, racial, economic, and climate issues at the federal, state, and local levels.

Measuring neighborhood change as, or before, it occurs is critical for enabling timely policy action to prevent displacement in gentrifying communities, mitigate depopulation and community decline, and encourage inclusive growth. The Urban Institute team recognized that many previous efforts to measure neighborhood change relied on nationwide administrative datasets, such as the decennial census or the American Community Survey (ACS), which are published with considerable time lags. As a result, the analysis could only be performed after the change had happened and displacement or blight had already occurred. Last year, the Urban Institute worked with experts in the US Department of Housing and Urban Development’s (HUD) Office of Policy Development and Research on a pilot project to assess whether novel, real-time HUD USPS address vacancy and Housing Choice Voucher (HCV) data could be combined with machine learning methods to accurately now-cast neighborhood change.

Together, the IBM Data Science and AI Elite team and the Urban Institute team built on that pilot to develop a new method for predicting local neighborhood change from the latest data across multiple sources, using AI. The approach began by defining four types of neighborhood change: gentrifying, declining, inclusively growing, and unchanging. IBM and the Urban Institute then leveraged data from the US Census, Zillow, and the Housing Choice Voucher program to train individual models across eight different metropolitan core-based statistical areas (CBSAs), using model explainability techniques to describe the driving factors behind gentrification.
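To make the modeling step concrete, here is a minimal sketch of that kind of setup: a multiclass classifier over tract-level features, followed by a simple explainability step. This is not the project’s actual code; the file path, feature names, and the choice of permutation importance as the explainability technique are illustrative assumptions.

```python
# Minimal sketch of a multiclass neighborhood-change classifier (illustrative only).
# The CSV path and feature names below are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical tract-level features derived from Census/ACS, Zillow, and HCV data
tracts = pd.read_csv("tracts_dc_cbsa.csv")
features = ["median_rent_change", "zillow_home_value_index",
            "hcv_household_share", "vacancy_rate", "median_income_change"]
X = tracts[features]
y = tracts["change_type"]  # gentrifying / declining / inclusively_growing / unchanging

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# One simple explainability technique: permutation importance shows which
# inputs most strongly drive the predicted neighborhood-change class.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```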

The IBM Data Science and AI Elite team is dedicated to empowering organizations with the skills, methods, and tools needed to embrace AI adoption. Their support enabled the teams to surface insights from housing and demographic changes across several metropolitan areas in a collaborative environment, speeding up future analyses in different geographies. The new approach demonstrated a marked improvement over older rules-based techniques in both precision (from 61% to 74%) and accuracy (from 71% to 74%). The results suggest a strong future for applying data to improve urban development strategies.
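For readers comparing the two figures: accuracy is the share of tracts labeled correctly overall, while (macro-averaged) precision reflects how often each predicted class is correct. A toy example of computing both for a rules-based labeling and a model’s predictions, using made-up labels:

```python
# Illustrative only: comparing a rules-based labeling and a model on the same tracts.
# The labels below are fabricated for demonstration, not project results.
from sklearn.metrics import accuracy_score, precision_score

y_true        = ["gentrifying", "unchanging", "declining", "gentrifying", "inclusively_growing", "unchanging"]
y_rules_based = ["gentrifying", "gentrifying", "declining", "unchanging", "unchanging", "unchanging"]
y_model       = ["gentrifying", "unchanging", "declining", "gentrifying", "unchanging", "unchanging"]

for name, y_pred in [("rules-based", y_rules_based), ("model", y_model)]:
    acc = accuracy_score(y_true, y_pred)
    prec = precision_score(y_true, y_pred, average="macro", zero_division=0)
    print(f"{name}: accuracy={acc:.2f}, macro precision={prec:.2f}")
```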

The partnership put an emphasis on developing tools that enabled collaborative work and asset production, so that policymakers and community organizations could leverage the resulting approaches and tailor them to their own communities.

IBM Cloud Pak® for Data as a Service was used to easily share assets, such as Jupyter notebooks, between the IBM and Urban Institute teams. During the engagement with Urban Institute, the teams leveraged AutoAI capabilities in Watson Studio to rapidly establish model performance baselines before moving on to more sophisticated approaches. This capability is especially valuable for smaller data science teams looking to automatically build model pipelines and quickly iterate through feasible models and feature selection, which are highly time-consuming tasks in a typical machine learning lifecycle.
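The AutoAI experiments themselves are configured through Watson Studio, but the underlying idea, fitting a fast baseline first and only then investing in more sophisticated models, can be sketched generically. The sketch below uses scikit-learn as a stand-in (it is not the AutoAI API) and reuses the same hypothetical tract data as above.

```python
# Generic baseline-first workflow (a stand-in for what AutoAI automates; not the AutoAI API).
# "tracts_dc_cbsa.csv" and its columns are the same hypothetical placeholders used earlier.
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

tracts = pd.read_csv("tracts_dc_cbsa.csv")
X = tracts.drop(columns=["change_type"])
y = tracts["change_type"]

# Step 1: a trivial baseline sets the floor any real model must beat.
baseline = DummyClassifier(strategy="most_frequent")
print("baseline accuracy:", cross_val_score(baseline, X, y, cv=5).mean())

# Step 2: a quick off-the-shelf model establishes a realistic starting point
# before investing in feature engineering or more sophisticated approaches.
candidate = GradientBoostingClassifier(random_state=0)
print("candidate accuracy:", cross_val_score(candidate, X, y, cv=5).mean())
```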

Together, this engagement and collaboration aim to empower the field to use publicly available data to provide a near real-time assessment of communities across the country. In addition to providing insights on existing data, the project can help uncover shortcomings in available data, enabling future field studies to fill the gaps more efficiently.

For more details on the results, check out our assets, which provide an overview of how the different pieces fit together and how to use them. If you want to dig deeper into the methods, read our white paper.

IBM is committed to advancing tech-for-good efforts, dedicating IBM tools and skills to the toughest societal challenges. IBM is pleased to showcase this project as a powerful example of how social sector organizations can harness data and AI to address society’s most critical challenges and create impact for communities at scale. IBM’s Data and AI team will continue to help nonprofit organizations accelerate their mission and impact by applying data science and machine learning to social impact use cases.

Interested in learning more? Discover how other organizations are using IBM Cloud Pak for Data to drive impact in their business and the world.
