DataOps is a discipline focused on delivering data faster, better, and cheaper to derive business value quickly.
It closely follows DevOps best practices, although applying DataOps to data is quite different from applying DevOps to code.
Data is unique in many respects. Data quality, for example, is key to a data monetization strategy, and data governance is necessary to enforce data privacy. DataOps excels at automation and orchestration across an interoperable, hybrid-cloud, distributed data landscape. Whether the use case is artificial intelligence, machine learning, or business intelligence, all depend on governed, high-quality data delivered quickly. This “How and Why to DataOps” paper provides a prescriptive approach to implementing a data pipeline using a DataOps discipline, written for data practitioners. It also serves as a point of reference for business executives who wish to understand the level of effort and scope of a DataOps-based organization.
For a more in-depth introduction to DataOps, refer to the DataOps flipbook.
Authors: Sonia Mezzetta, Patrick O’Sullivan, Anandakumaran Suthanthirabalan, Christopher Grote, Karina Kervin, Rajesh Yerragunta, Aishwarya Bhupatiraju, Sukumar Beri, Sunny Anand, Mohammed Abdul Qadeer Moini, Jo A. Ramos
The opinions expressed in this post and the accompanying document are those of the authors and not necessarily those of IBM.