The problem with data

As any business leader will tell you, data is the lifeblood of organizations operating in the 21st century. A company’s ability to effectively gather and use data can make all the difference in its success. But a number of factors can compromise data’s health, making it unmanageable and therefore unusable for today’s businesses. Specifically, data professionals face a dramatic increase in data complexity, variety and scale.

Here, we explain the three categories that keep data professionals awake at night, and why traditional data management practices and methods won’t help. 

The three factors derailing your effective data use
Data sprawl, data drift and data urgency conspire against all data professionals. Data sprawl refers to the dramatic growth in the variety and volume of data sources. Consider systems such as mobile interactions, sensor logs and web clickstreams. The data those systems create changes constantly as their owners adopt updates or re-platform. Modern enterprises constantly encounter new data in different formats, from different technologies and from new locations.
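As a toy illustration of coping with sprawl, the Python sketch below coerces records arriving in three different source formats into one common shape. The formats and field names ("user", "event", "ts") are invented for illustration, not taken from any particular system:

```python
import csv
import io
import json

def normalize(raw, fmt):
    """Coerce a record from one of several hypothetical source formats
    into a single common dict shape."""
    if fmt == "json":       # e.g. a mobile interaction payload
        rec = json.loads(raw)
    elif fmt == "csv":      # e.g. a sensor log line: user,event,ts
        user, event, ts = next(csv.reader(io.StringIO(raw)))
        rec = {"user": user, "event": event, "ts": ts}
    elif fmt == "kv":       # e.g. a clickstream line: user=alice event=click ts=1700000000
        rec = dict(pair.split("=", 1) for pair in raw.split())
    else:
        raise ValueError(f"unknown source format: {fmt}")
    # Emit one canonical shape regardless of how the record arrived.
    return {"user": rec["user"], "event": rec["event"], "ts": int(rec["ts"])}

print(normalize('{"user": "alice", "event": "tap", "ts": "1700000000"}', "json"))
print(normalize("bob,click,1700000001", "csv"))
print(normalize("user=carol event=scroll ts=1700000002", "kv"))
```

Each new source type becomes one more branch to maintain, which is exactly why sprawl grows costly as sources multiply.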


Data drift is the unpredictable, unannounced and unending mutation of data characteristics caused by the operation, maintenance and modernization of the systems that produce the data. It is the impact of an increased rate of change across an increasingly complex data architecture. Three forms of data drift exist: structural, semantic and infrastructure. Structural drift occurs when the data schema changes at the source, such as application or database fields being added, deleted or re-ordered, or their data types changed. Semantic drift occurs when the meaning of the data changes, even if the structure hasn’t. Consider the evolution from IPv4 to IPv6. This is a common occurrence for applications that produce log data for analysis of customer behavior, personalization recommendations, and so on. Infrastructure drift occurs when changes to the underlying software or systems create incompatibilities. This includes moving in-house applications to the cloud or moving mainframe apps to client-server systems.
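Structural drift, at least, can be checked mechanically. The minimal Python sketch below compares an incoming record against an expected schema and reports fields that were added, removed or re-typed; the schema and record are hypothetical, and a production system would use a schema registry rather than hand-written dicts:

```python
def detect_structural_drift(expected_schema, record):
    """Flag structural drift against an expected schema.

    expected_schema maps field name -> expected Python type.
    Returns fields added, removed, or re-typed at the source.
    """
    added = sorted(set(record) - set(expected_schema))
    removed = sorted(set(expected_schema) - set(record))
    retyped = sorted(
        field for field, typ in expected_schema.items()
        if field in record and not isinstance(record[field], typ)
    )
    return {"added": added, "removed": removed, "retyped": retyped}

schema = {"order_id": int, "amount": float, "currency": str}
# The upstream system started sending amount as a string and grew a new field.
drifted = {"order_id": 7, "amount": "19.99", "currency": "EUR", "channel": "web"}
print(detect_structural_drift(schema, drifted))
# prints {'added': ['channel'], 'removed': [], 'retyped': ['amount']}
```

Semantic and infrastructure drift are harder to automate precisely because the structure may look unchanged while the meaning or the platform underneath has shifted.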

Data urgency is the third factor. It’s the compression of analytics timeframes as data is used to make real-time operational decisions. Examples include Uber ride monitoring and fraud detection for financial services. IoT is also creating an ever-increasing stream of transactions that need immediate attention: for example, doctors are demanding input from medical sensors connected to their patients.
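The essence of urgency is deciding per event rather than per batch. The toy Python sketch below alerts the moment a rolling mean of sensor readings crosses a threshold; the readings, threshold and window size are illustrative, not drawn from any real monitoring product:

```python
from collections import deque

def alert_on_stream(readings, threshold, window=3):
    """Flag each reading the moment its rolling mean crosses a threshold.

    Decisions are made per event as data arrives, not in an
    overnight batch; parameters here are purely illustrative.
    """
    recent = deque(maxlen=window)   # only the last `window` readings matter
    alerts = []
    for i, value in enumerate(readings):
        recent.append(value)
        rolling = sum(recent) / len(recent)
        if rolling > threshold:
            alerts.append((i, rolling))   # act now, not at end-of-day
    return alerts

# e.g. patient temperature readings; the final one pushes the rolling mean over 100.0
print(alert_on_stream([98.6, 99.1, 99.0, 101.2, 102.4], threshold=100.0))
```

The same loop structure scales up conceptually to stream processors, where the "act now" branch becomes a page, a blocked transaction or a clinical alert.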

Anatomy of past data service incident resolutions
You might be thinking “The issues of data sprawl, drift and urgency aren’t new and have been around for years,” and you would be correct in your assessment. But their increased frequency and magnitude are new. In the past, these issues were generally isolated and could be dealt with using standard exception-handling methods. Let’s look at how data service incidents were resolved in the past (and how they are still resolved in many enterprises).

First, an exception event occurs. It may be flagged by a batch job that ends with an error and is noticed by a data center operator; a business owner may see odd results in the monthly sales performance report; or a customer may call the service desk to complain about a slow website. In any event, someone on the incident management team or help desk is notified of the exception.

Second, the help desk gathers as much information as it can and assesses the severity level. For a low severity, it sends an email to the application owner asking them to look into the issue when they can. For a high severity, it takes more dramatic action and initiates the “Severity 1 Group Page,” which notifies dozens of staff to organize a conference call.

Third, the staff on the conference call works to understand the current issue and its impact, analyzes the problem to determine the root cause, and figures out how to correct the situation and return to normal operations. Dozens of staff are involved because it’s not clear up front what the precise problem or correction is, so anyone who might be able to help is required to attend. The incident recovery often is not a permanent solution, so the company still needs to determine the root cause and how to avoid future occurrences.

Fourth, a postmortem process is initiated to fully understand the root cause and how to avoid it in the future. It can take several weeks to understand what happened, followed by a group review meeting with multiple SMEs and managers, and then a formal report and recommendations for division leaders, internal audit or senior management. Hopefully, the defined recommendations are approved and a permanent resolution is implemented.

Clearly, this four-step process is tedious and expensive, and simply won’t work in today’s reality of increasing data complexity, data variety and data scale. A better approach is required — one that is built on the assumption that data sprawl, data drift and data urgency are the new normal. 

DataOps: A new approach for the new normal
Built on DevOps principles, DataOps is a fundamental change in the basic concepts and practices of data delivery, and it challenges the accepted way of integrating data. DataOps expedites the on-boarding of new and uncharted data and moves that data into effective operation across an enterprise and its partners, customers and stakeholders, all while preventing data loss and security threats. Unlike traditional point solutions, DataOps applies “smart” automation and monitoring to data in motion: capturing operational events, timing and volume; generating reports and statistics that provide global visibility of the entire interconnected system; and notifying operators of significant events, errors or deviations from the norm. Monitoring is especially important now because the data landscape is more fluid than ever and continues to evolve dynamically.
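To make the monitoring idea concrete, here is a minimal Python sketch of one such check: comparing a pipeline’s current record volume against its recent history and flagging large deviations. It is an assumption-laden illustration of the concept, not the API of any real DataOps platform, and the hourly counts are invented:

```python
import statistics

def check_volume(history, current, z_limit=3.0):
    """Flag a record count that deviates from its recent baseline.

    Uses a simple z-score against historical per-window counts;
    real monitoring would track many more signals than volume alone.
    """
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return "ok" if current == mean else "deviation"
    z = abs(current - mean) / stdev
    return "deviation" if z > z_limit else "ok"

hourly_counts = [1020, 980, 1005, 995, 1000]   # records per hour, illustrative
print(check_volume(hourly_counts, 1010))   # prints ok
print(check_volume(hourly_counts, 120))    # prints deviation -> notify operators
```

The point of such checks is global visibility: when an upstream system silently drops or floods a feed, operators are notified from the pipeline’s own telemetry rather than from a customer complaint.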

The nature of data and its never-ending creation demands a new approach to its management. Businesses can no longer afford the time and resources that reactive handling of data issues consumes. Rather, DataOps presents a new approach that addresses the complexities of the new normal in data management.

The post The problem with data appeared first on SD Times.

anita
