The Gartner Data Center, Infrastructure & Operations Management Conference 2016 produced some great takeaways for IT teams looking to address the critical challenges that have emerged at the intersection of explosive data growth and legacy storage practices.
The accelerating trajectory of data growth has become a defining feature of the new enterprise technology landscape. Worldwide, the amount of corporate data is doubling every 14 months, a trend that shows no signs of slowing. For IT teams tasked with protecting and managing this data, the sheer quantity puts tremendous pressure on them to scale their solutions to meet the challenge. Hindered further by organizational silos, expanding use cases, and the need to provide additional levels of user access, these teams face a level of complexity that demands far more hours from each individual, stretches department budgets, and pushes available tools to their limits.
“Storage management alone cannot solve storage problems,” says Garth Landers, Research Director at Gartner, speaking at the 2016 Gartner Data Center, Infrastructure & Operations Management Conference in Las Vegas. He continues, “There’s a lot that can be done from a storage management perspective, but it’s not enough.” So if our traditional go-tos such as thin provisioning, deduplication, and compression are not enough, what’s next?
Landers explains that there are three distinct roles involved in handling these storage challenges: storage management, storage administration, and data management.
Storage management, he says, refers to roles akin to engineering or mathematics, involving items like storage capacity planning, performance monitoring and reporting, and SLA monitoring. These encompass the key planning and measuring components that allow teams to provide the necessary resources when needed and determine if these resources are delivering on what is promised.
Storage administration, on the other hand, relates to operational tasks performed by team members that can be viewed somewhat more like those of a mechanic. These activities involve configuring devices, provisioning storage, performing upgrades and maintenance, and managing backup and restore processes. Together, storage management and administration ensure an organization has the storage infrastructure it requires, and that the day-to-day requirements of handling stored data are met.
Management and administration, however, are not enough. Simply storing the data doesn’t solve the vast majority of business needs, nor does it add any real value to operations. With the ever-expanding volume of information coupled with the pressures of regional privacy regulations, organizations can no longer store data in the dark, without the visibility necessary to meet even their most basic requirements. This, according to Landers, is where the third role comes in: data management. Data management refers to those on IT teams responsible for tasks such as creating and enforcing policy, collaborating with other roles such as legal and end users to understand the real value of stored data and how it is used (and reused), and responding to governance events like legal requests and audits. The data management role should help the organization respond proactively to anything involving data once it has been created, allowing the organization to begin to leverage the full value of its data.
Unfortunately, there isn’t a complete playbook for enterprise data management. A truly comprehensive approach requires a great deal of collaboration between business and IT, and there is no clear roadmap. Your business will have its own objectives and its own data management requirements, demanding a customized application of these general best practices. Retention policies, for example, vary dramatically from organization to organization and can be enormously complicated. According to Landers, this is an area where Gartner receives a large number of inquiries from companies looking to understand who owns the process, how to build their own policies, and what others in their industries are doing. Unfortunately, he says, the answer is often “It depends,” and requires an understanding not just of the composition of the organization’s data but also of the external (e.g., privacy or industry regulations) and internal (e.g., data reuse) forces that will ultimately determine how the information must be governed.
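To make “It depends” a little more concrete, one way to reason about a retention policy is as a schedule mapping data categories to retention periods, with unclassified data escalated for human review rather than deleted automatically. The categories and periods in this sketch are purely illustrative assumptions, not recommendations; real schedules come from legal counsel and applicable regulation:

```python
# Illustrative retention schedule -- categories and periods are assumed
# examples only; actual retention requirements vary by organization,
# industry, and jurisdiction.
RETENTION_DAYS = {
    "financial_record": 7 * 365,
    "employee_record": 6 * 365,
    "marketing_asset": 2 * 365,
}

def disposition(category: str, age_days: int) -> str:
    """Return the policy action for a record of a given category and age."""
    limit = RETENTION_DAYS.get(category)
    if limit is None:
        return "review"  # unclassified data is escalated, never auto-deleted
    return "retain" if age_days <= limit else "dispose"
```

The key design point is the `review` branch: a policy engine should fail safe on data it cannot classify, which is exactly why the classification problem discussed below matters so much.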
Building an effective retention policy, and then enforcing it, are the two critical first steps toward good data governance within an organization. But how can a policy be created and enforced without understanding what kind of data you have? In an audience poll, Landers asked attendees to identify the issue they struggle with most in dealing with aging data. 71% of respondents indicated that their main challenge is understanding what kind of data they have, what they need to retain, and what they should discard. This result isn’t unexpected: Gartner estimates that 70–80% of data is dark, dormant, and old.
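At its core, the “what data do we have” question is an inventory problem. As a rough illustration (a minimal sketch, not any particular vendor’s product), a first profiling pass might summarize a directory tree by file type and size, and flag files untouched beyond a dormancy threshold; the 365-day cutoff here is an assumed example:

```python
import time
from collections import Counter
from pathlib import Path

# Assumed dormancy threshold -- illustrative only; real policies vary.
DORMANT_AFTER_DAYS = 365

def profile_files(root):
    """Summarize a directory tree by file extension and flag dormant files.

    Returns (by_type, dormant): by_type maps each extension to a
    (file_count, total_bytes) tuple; dormant lists paths not modified
    within DORMANT_AFTER_DAYS -- candidates for retention review.
    """
    cutoff = time.time() - DORMANT_AFTER_DAYS * 86400
    counts, sizes = Counter(), Counter()
    dormant = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        ext = path.suffix.lower() or "(none)"
        stat = path.stat()
        counts[ext] += 1
        sizes[ext] += stat.st_size
        if stat.st_mtime < cutoff:
            dormant.append(str(path))
    return {ext: (counts[ext], sizes[ext]) for ext in counts}, dormant
```

Even a crude report like this turns dark data into something a policy can act on, which is the role commercial file analysis tools fill at enterprise scale.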
Dark data doesn’t have to stay that way, thanks to the capabilities of file analysis software. This type of software can help organizations understand the big picture around their data: where it is, how large it is, and what type of content it contains, along with additional details that enable teams to build effective policies for key issues like data retention. Given the current state of enterprise data, it’s no surprise that file analysis is the fastest-growing segment of storage management software. Companies can rely on file analysis functions like detailed reporting to provide new data insights, enabling teams to create and enforce key retention policies. But going beyond this, they can also use the data classification functionalities, such as