February 11, 2022 | Written by: IBM Academy of Technology
Categorized: Data Science | Technology
Figure: two maps showing flood-risk results from the data science and machine learning hackathon project.
Of the competing projects, two were published on the Cloud Pak for Data Gallery:
1. Flood risk
Floods are the most frequent type of natural disaster and can cause widespread devastation, resulting in loss of life and damage to personal property and critical public health infrastructure. Flooding occurs in every U.S. state and territory and is a threat anywhere in the world that receives rain. According to NOAA, floods kill more people in the U.S. each year than tornadoes, hurricanes, or lightning. Understanding flood risk matters so that people don't ignore warnings sent out by agencies like the National Weather Service (NWS), and warnings need to be more specific, pinpointing particular areas and exposed locations.

This project extrapolates the FAIR model for flood analysis. FAIR, short for "Factor Analysis of Information Risk," is the only international standard quantitative model for information security and operational risk. As described on Wikipedia, FAIR emphasizes that risk is an uncertain event: one should focus not on what is possible, but on how probable a given event is. This probabilistic approach is applied to every factor analyzed. Risk is the probability of a loss tied to an asset; in FAIR, it is defined as the "probable frequency and probable magnitude of future loss." FAIR decomposes risk into the factors that make up probable frequency and probable loss, including Threat Event Frequency, Vulnerability, Threat Capability, Primary Loss Magnitude, and Secondary Risk. The project calculates the Loss Event Frequency based on how vulnerable and susceptible the flood location is, and finally derives the Final Threat Level from the severity of the flood alert and the Loss Event Frequency.
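The FAIR-style calculation described above can be sketched in a few lines. The function names, weights, and thresholds below are illustrative assumptions for exposition, not the project's actual implementation:

```python
# Illustrative sketch of the FAIR-style decomposition described above.
# All thresholds and scales are assumed values, not the project's own.

def loss_event_frequency(threat_event_frequency, vulnerability):
    """LEF = how often a threat event occurs per year, times the
    probability (0-1) that it produces a loss at this location."""
    return threat_event_frequency * vulnerability

def final_threat_level(alert_severity, lef, high=5.0, medium=1.0):
    """Combine an alert severity score (0-1) with the Loss Event
    Frequency into a coarse threat level."""
    score = alert_severity * lef
    if score >= high:
        return "HIGH"
    if score >= medium:
        return "MEDIUM"
    return "LOW"

# Example: 10 flood threat events/year at a 70%-vulnerable location,
# under a severe alert (severity 0.9).
lef = loss_event_frequency(10, 0.7)   # 7.0 expected loss events/year
print(final_threat_level(0.9, lef))   # score 6.3 -> "HIGH"
```

The key design point is that every input is a probability or a frequency, so the output stays interpretable as "probable frequency and probable magnitude of future loss."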
2. Site search
The site search recommender improves search relevancy using user behavior data from ibm.com search, de-identified for public consumption. It is built with open-source deep learning libraries (TensorFlow and Keras) and implements a collaborative filtering algorithm to make meaningful recommendations to users based on their search terms and historical search behavior. Benefits of this project include helping data scientists improve the relevancy of corporate site search results, serving as boilerplate with out-of-the-box support for the search use case, and leveraging data and AI to solve real-life search and discovery challenges.
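To make the collaborative filtering idea concrete, here is a minimal item-based sketch using NumPy rather than the project's TensorFlow/Keras stack: search terms are recommended to a user based on the similarity between terms' user-interaction vectors. The tiny interaction matrix is invented for illustration only:

```python
import numpy as np

# Item-based collaborative filtering sketch (NumPy stand-in for the
# project's TensorFlow/Keras model). Rows are users, columns are
# search terms; 1 means the user engaged with results for that term.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
], dtype=float)

# Cosine similarity between term columns.
norms = np.linalg.norm(interactions, axis=0)
sim = (interactions.T @ interactions) / np.outer(norms, norms)

def recommend(user, k=1):
    """Return the top-k unseen term indices for a user, scored by
    summed similarity to the terms the user has already engaged with."""
    seen = interactions[user] > 0
    scores = sim[:, seen].sum(axis=1)
    scores[seen] = -np.inf  # never re-recommend seen terms
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # user 0 engaged with terms 0 and 1; term 2 is closest
```

A neural variant would replace the cosine-similarity matrix with learned user and term embeddings, which is the shape of model the TensorFlow/Keras implementation takes.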
You can try the Flood risk and Site search projects yourself.
With many thanks to the data science community in the IBM Academy of Technology for their energy, dedication, and determination, and to the two teams who created these projects.
Thomas Schaeck, email@example.com
Susan Malaika, firstname.lastname@example.org
The content in this blog post is the opinion of the author. For more on the IBM Academy of Technology, see these posts:
A Path to the Open Organization – Academy of Technology President Julie Schuneman
About the Academy