In a typical data science workflow, the initial steps are to identify relevant data sources and request data from various departments and source systems. The bulk of time is then spent cleaning and transforming the data before models can be built.
Feature engineering pain points
In most environments, data scientists have to wait for experts to provide the requested data. Depending on the effort involved and on competing priorities, this process can take weeks or months before data is available for analysis. The business impacts of these delays include crippled analytics projects, missed windows of opportunity, and unmet deadlines.
Taking a shortcut
In the ideal scenario, the data scientist has tools to engineer features and can build models without waiting on other teams. IBM offers agile, interactive self-service data preparation services for data scientists.
Self-service data preparation
IBM Watson Studio and IBM Watson Knowledge Catalog include Data Refinery for self-service data preparation. Data Refinery puts data pre-processing and feature engineering in the hands of data scientists, enabling faster data insights. Data visualization aids in tedious and iterative data cleansing and shaping tasks.
Watson Knowledge Catalog enables data scientists to access curated data sets that are known and trusted by the organization. Immediate access to ‘source of truth’ data can simplify or eliminate a number of process steps.
Data Refinery key capabilities include:
Intuitive data transformation — Shape and clean data through a graphical interface, or draw on a library of templates for powerful code-based transformation operations
Rapid feedback on the data shaping process — Data visualization and profiles help to guide data preparation steps, reveal data quality issues, and avoid missteps
Increased confidence in data quality — Incremental snapshots enable data scientists to gauge success over time. Visualization and data-shaping tools make it easy to iteratively remediate data quality issues.
Support for large data sets — Data-shaping steps can be saved in projects, edited, and rerun against larger data sets
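The "save steps, then rerun on larger data" pattern above can be sketched outside Data Refinery as well. The following is a minimal pandas illustration (not Data Refinery's own API; the data and column names are hypothetical): the shaping steps are captured in one reusable function, developed against a small sample, and the same function can later be applied unchanged to the full data set.

```python
import pandas as pd

def shape_sales_data(df: pd.DataFrame) -> pd.DataFrame:
    """A reusable 'flow' of shaping steps: clean, type-cast, derive a feature."""
    out = df.copy()
    out["customer_id"] = out["customer_id"].str.strip().str.upper()  # normalize IDs
    out = out.dropna(subset=["amount"])                              # drop rows missing the sale amount
    out["amount"] = out["amount"].astype(float)
    out["order_date"] = pd.to_datetime(out["order_date"])
    out["order_month"] = out["order_date"].dt.to_period("M")         # derived feature for analysis
    return out

# Develop the steps on a small sample; the same flow can be rerun on the full data set.
sample = pd.DataFrame({
    "customer_id": [" c001", "C002 ", "C001"],
    "amount": [120.0, None, 75.5],
    "order_date": ["2024-01-15", "2024-02-03", "2024-03-22"],
})
clean = shape_sales_data(sample)
print(clean[["customer_id", "amount", "order_month"]])
```

Keeping the steps in a single function mirrors the saved-flow idea: iterate quickly on a sample, then apply the identical transformations at scale.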
Use case example: Identify potential repeat customers from sales history data
In the following video, I’ll demonstrate the visual capabilities of IBM’s data shaping tools using an end-to-end example:
Refine depersonalized, sensitive data in Watson Knowledge Catalog
Clean and refine data using quick, built-in operations or code
Leverage incremental snapshots, data profiles, and visualizations to guide the preparation of data for analysis
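For readers who prefer code to video, the repeat-customer idea can be sketched in a few lines of pandas. This is an illustrative assumption, not the exact operations shown in the demo; the sales-history data and column names (`customer_id`, `order_id`) are made up, and "repeat customer" is defined here as anyone with two or more distinct orders.

```python
import pandas as pd

# Hypothetical sales-history data; columns are assumptions for illustration.
sales = pd.DataFrame({
    "customer_id": ["C001", "C002", "C001", "C003", "C002", "C001"],
    "order_id":    ["O1", "O2", "O3", "O4", "O5", "O6"],
    "amount":      [120.0, 80.0, 75.5, 200.0, 60.0, 45.0],
})

# Count distinct orders per customer, then keep those with two or more.
order_counts = sales.groupby("customer_id")["order_id"].nunique()
repeat_customers = order_counts[order_counts >= 2].index.tolist()
print(repeat_customers)  # → ['C001', 'C002']
```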
Stay tuned for Part 2 of this blog, where I’ll show examples of combining financial data with sales data through the power of collaboration.
Part 3 of this blog will provide a deeper dive into preparing and analyzing a mixture of structured and unstructured data sources.