True North.


Posted by Shelley Podolny on Monday, Apr 15, 2013

Q&A with Information Governance "Guru" Barclay Blair

Barclay, welcome to True North. Thanks for taking the time to answer some questions about information governance.

As an information governance guru, you come in contact with many organizations that are grappling with oversized unstructured data environments and yet continue to keep everything. Are we getting anywhere near a tipping point in the C-suite that will drive better data governance strategies, or has that imperative not yet reached the top?

I challenge executives on this all the time, and tell them that, regardless of their intentions, their organization is effectively keeping everything forever. Their answer is no, we are just keeping everything for now. Right. So, when does “for now” end? It never ends. Have you ever watched an episode of “Hoarders?” A hoarder doesn’t fill their house with newspapers and broken appliances overnight – it starts innocently with a George Foreman grill missing its power cord bought for $2.75 at a garage sale down the road, then grows to a mound of faded costume jewelry and rotting clothes from there. It is the same thing with information. We don’t intend to keep it forever, but then we wake up one day with a 300 TB email archive full of emails with Backstreet Boys MP3 attachments.

There is a root cause for this, and until that root cause is addressed, this behavior will not change. What is it? Nobody owns the information problem. The CIO owns the problem of information infrastructure – the tanks and pipes that house and carry the information, but he/she does not own the information itself. Who does? The typical answer is, “the business.” Okay, so who is that? Which person with power and budget is that? Who is the one person held accountable when things blow up?  Many executives continue to believe this is the CIO, despite what the CIO says.

Until we solve this problem, we cannot totally solve the information hoarding and mismanagement problem. We can make incremental improvements, yes, but we cannot achieve the full IG vision.

I do think that awareness of this ownership problem, and other information problems, is starting to reach the C-suite, but because of opportunity, not risk. The buzz around big data has reached the C-suite and they are commissioning projects that are data-driven, which does require some of these sticky questions about governance to be asked, if not answered. Of course, big data has pretty much entered Gartner’s “Trough of Disillusionment” now, so we will need to wait to see what effect it really has in the long term.

Regardless, more organizations are asking the right questions about information, either because they want to monetize it, or because of external events like archive migration, corporate mergers and acquisitions, major enterprise system implementations, and so on.

In your insightful white paper, “The Total Cost of Unstructured Information: Decoding Information Governance, Big Data & eDiscovery,” you urge information professionals to engage in “creative thinking” in calculating the risks and benefits of too much information as well as in driving desirable user behavior. From our experience, this is a tough nut to crack. What do you think is the best way to jumpstart such an effort?

By personalizing the problem. We like to operate on the myth that organizations are rational entities. They are not. They are like people in that they are mostly driven by irrational and paradoxical emotions and desires. Every person says one thing and does another in some aspect of their life. Every organization says one thing and does another in some aspect of their business. Once we take this as a baseline for trying to understand and change organizations, life gets much easier.

So, the best advice I can give practitioners trying to build support for an IG program is that they need to make the problems and benefits real. Telling a COO that you have 200 TB of shared drive content that costs blah blah blah a year to maintain is useful, but not powerful. When we do this, we actually interview real users about their “information day” and then tell the story of why for “Anne in Accounting” there is massive inefficiency and risk in her workday that could be really improved in very specific ways through implementation of an IG program.

For many years I have been trying to appeal to logic, rationality, economics, basic fear, and other things that you would expect in my clients. Sometimes this appeal works, sometimes it does not. That is why I am interested in approaches – like the ones I discuss in my paper – that change the conversation and get people thinking about the problem in a new way. I think this maximizes the likelihood that change will happen.

In your paper, you discuss the multitude of costs associated with having and maintaining information—which is counterintuitive to the low cost of storage—and it seems that decision-makers haven’t quite parsed that out to understand the full economic impact on the organization. Executives and managers are usually very attuned to costs. Why are these costs not fully recognized and integrated into the cost models currently used in the enterprise?

Quite simply, we have separated the benefit of IT from its harm. I borrow this language from the world of waste management (which I also analogize to in the paper), where it is used to describe how cities get people to recycle. Where there is no limit on the amount of garbage you can put out on the curb, there is no incentive to recycle. The “harm” or cost of waste management is completely separated from the benefit of the service. This is how much of IT operates.

Does the business or the user see, experience, or pay for the “harm” that is done by creating and saving the same file dozens of times; by not bothering with version control; by not imposing some kind of basic file plan and retention schedule on unstructured content; by using email as a catch-all for ad hoc nonsense and the company’s most valuable information? No. Even chargeback schemes, which are partially informed by this idea, target raw storage volume, which isn’t that compelling as the basic cost of the storage commodity continues to drop.

This is why I propose things like Full Cost Accounting for information, which borrows again from waste management and other non-IT worlds to make the case that we ought to understand the problem with some precision before we try to solve it. There are many, many factors (which I list in my paper) that together drive the total cost of owning unstructured information, and we need to account for these in our planning.
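To make the Full Cost Accounting idea concrete, here is a minimal sketch of how such a model might be assembled. The cost categories and dollar figures below are illustrative assumptions, not figures from Blair's paper; the point is simply that the total cost of owning unstructured information is the sum of many factors, of which raw storage is only one.

```python
# Hypothetical Full Cost Accounting sketch. All cost factors and per-TB
# figures below are illustrative assumptions, not data from the paper.

UNIT_COSTS_PER_TB = {           # assumed annual cost per TB, in dollars
    "raw_storage": 300,         # the commodity cost everyone already counts
    "backup_and_dr": 450,       # replication, backup media, disaster recovery
    "migration": 200,           # periodic hardware refresh and data migration
    "ediscovery_exposure": 900, # expected collection/review cost per TB kept
    "lost_productivity": 600,   # time users spend searching cluttered stores
}

def total_cost_of_ownership(terabytes: float) -> float:
    """Annual cost of owning `terabytes` of unstructured information,
    summed across every cost factor rather than storage alone."""
    return terabytes * sum(UNIT_COSTS_PER_TB.values())

if __name__ == "__main__":
    tb = 200  # e.g. the 200 TB of shared drive content mentioned above
    print(f"Raw storage only: ${tb * UNIT_COSTS_PER_TB['raw_storage']:,.0f}/yr")
    print(f"Full cost:        ${total_cost_of_ownership(tb):,.0f}/yr")
```

Even with made-up numbers, the gap between the storage-only figure and the full-cost figure is the argument: accounting only for the commodity cost understates the problem severalfold.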

Of course, that all assumes that we want to act rationally, and I told you earlier to forget that assumption, so maybe I am contradicting myself…

We love your idea of “information calories” – positing the analogy that increasing the user’s sensitivity to their own data volumes may reduce overconsumption, just as posting calories on menus seems to do. But losing weight is hard, nonetheless. What advice do you have for the “calorie conscious” user?

There are a number of fascinating studies showing that we quite simply eat more when more food is available. In one such study, volunteers were able to take as many free M&Ms as they wanted from a container. One group was given a large scoop and the other a small scoop. Guess who ate more? These kinds of studies are the basis for much of the lawmaking that requires some restaurants in New York, California, and other jurisdictions to post calorie counts for meals. Behavioral science says that increased awareness reduces consumption.

Is it a panacea? Does it affect everyone equally? Clearly it does not. But it does help. In a world where so many IG practitioners tell me that “there’s no way we can restrict email mailbox sizes” or in fact control any aspect of what their employees do in the information environment, it might be all some immature organizations can do – create awareness of the problem in clever and visible ways.

Defensible data reduction is becoming a hot topic and you note in your paper that the economics in getting rid of unnecessary data are compelling, albeit often unrecognized. It also takes significant information retrieval expertise to undertake a methodical reduction process that will be accurate enough to minimize risk, but there are sound methodologies and tools available. Do you think we will see a gain in momentum in the marketplace to undertake a data reduction effort?

Defensible deletion is the IG community trying to find something as compelling as e-discovery costs to hang their hat on. In other words, when awareness of (or at least, fear of) eye-popping e-discovery costs crossed a certain threshold, a gold rush emerged for almost anyone who could rub a one and a zero together. This gold rush isn’t completely over, but perhaps it is becoming more of a silver or bronze rush as many previously exotic culling and processing technologies have become commoditized, along with their pricing. True innovation in both software and process has now streamlined and modernized the e-discovery process.

Despite the ups and downs of the e-discovery market, almost everyone selling into the IG market looked at the e-discovery market with great lust and jealousy because it offered a selling point that did not exist in IG: a crystal-clear ROI. Better software and process drove down collection, processing, review, and production costs in very real, “hard number” ways.

For someone selling IG based on soft concepts like better knowledge management, efficiency, and risk reduction, this is like manna from heaven. So, defensible deletion offers some of the same sex appeal in that it can drive a hard dollar ROI.

I’m not trying to be cynical here, just pragmatic. Defensible deletion does offer hard dollar savings and real risk reduction, and I try to sell it to every single one of my clients, and I relentlessly promote it as a place to get started in IG. I love it as a project and always will.

However, we need to also put it in context. Defensible deletion is a tactical, not a strategic activity. What I mean by this is that it does not fundamentally address the problem. It does not change anything. We still have benefit separated from the harm. If we could magically vaporize all the garbage at the Fresh Kills landfill, that would be amazing, but would it strategically change NYC’s waste management calculus? (If anything, it might inspire us to create more garbage, not less!)

Also, defensible deletion projects are not a slam dunk. Lawyers who work for organizations that have basically never thrown anything away are wrapped in a tremendously soft and comforting safety blanket. They believe, wrongly, that they are in a protected place regarding spoliation because they have never thrown anything away. While this may technically be true, it is a practical lie for large organizations in that inaccessibility by obscurity is an increasing reality. What is the benefit of having everything when it takes weeks to run a simple search on your archive (true story)? So, convincing the lawyers to shrug off the safety blanket can be hard, even if it is the, cough cough, rational thing to do.

Finally, even relatively big ROI numbers, like avoiding a few million dollars in storage spend for another year, may not be compelling to the best audience for this pitch, i.e., really big companies.

Right now, several large players are spending millions of dollars to market the concept of defensible deletion. I expect them to be successful. After all, even if defensible deletion, i.e., cleaning up the past, does not solve the foundational problem, it checks enough boxes for most organizations to be a compelling project.

You’ve also suggested the interesting idea of “cap and trade” instead of “command and control” when it comes to data volume, an approach which rewards innovation and management discipline and controls information pollution. The enterprise would set an information target, or volume quota for each department, which would then find whatever creative ways they could to not exceed it. Great idea, for sure – do you think it’s got a future in the enterprise?

Cap and trade as a concept really gained traction in the fight against airborne pollutants, specifically “acid rain,” and has recently entered the discussion about carbon emissions and global warming. In the 1980s, the US federal government created a regulatory regime that “capped” the amount of sulfur dioxide and related pollutants that a factory could emit. It was largely up to the factory to figure out how to hit that target. If the factory could figure out a way to come under the cap, it was free to “trade” this unused quota to other factories. This created a kind of free-market incentive to innovate.

I think the same kind of model should be applied to information. Some information is, quite simply, a harmful byproduct of a business process, and its emission and existence drive unnecessary cost and risk. Inside your company, departments that figure out how to reduce this dangerous emission via innovation should be rewarded.

This depends on two things: one, a consistent method for calculating how the “cap” should be set, and two, a system for trading unused credits for something of value.
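Those two ingredients can be sketched in a few lines. The department names, caps, and credit price below are hypothetical, invented purely for illustration; a real scheme would need the consistent cap-setting method Blair describes.

```python
# Hypothetical cap-and-trade sketch for information volume. Department
# names, caps, usage, and the credit price are illustrative assumptions.

def settle_quotas(usage_tb: dict, caps_tb: dict, price_per_tb: int = 100) -> dict:
    """Return each department's credit (+) or debit (-) in dollars.

    Departments under their cap earn tradeable credits; departments over
    their cap must buy credits from the others at `price_per_tb`.
    """
    return {
        dept: (caps_tb[dept] - usage_tb[dept]) * price_per_tb
        for dept in usage_tb
    }

if __name__ == "__main__":
    usage = {"Finance": 40, "Marketing": 75, "Engineering": 55}
    caps  = {"Finance": 50, "Marketing": 60, "Engineering": 60}
    for dept, balance in settle_quotas(usage, caps).items():
        verb = "earns" if balance >= 0 else "owes"
        print(f"{dept} {verb} ${abs(balance)}")
```

The design choice mirrors the acid-rain regime: the enterprise only sets the cap and the price; how a department gets under its cap (deletion, deduplication, better retention schedules) is left to the department, which is where the innovation incentive comes from.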

I have some specific ideas on how this can be implemented, and I have a couple of clients trying it, but I want to leave it to the market to take the idea and adapt it to their enterprise.

You’ve written in the past about the battle between Big Data and e-discovery. Who do you think is winning?

Yes, I have, but frankly I don’t think that these two kingdoms even realize that they are at war, much less have any idea who is winning. The battle, simply put, is between the (e-discovery) impulse to get rid of information before it causes problems versus the (Big Data) impulse to keep everything because you cannot predict its future value. In the Big Data world, all information is good, and more information is better. In the risk-focused IG world, that clearly is not the view. If I had to guess who is going to win, I would probably put my money on the Big Data view, given the pots of gold that are currently being promised at the end of the analytical rainbow.

Barclay T. Blair is an advisor to Fortune 500 companies, software and hardware vendors, and government institutions, and is an author, speaker, and internationally recognized authority on information governance. Barclay has led several high-profile consulting engagements at the world’s leading institutions to help them globally transform the way they manage information. He is the president and founder of ViaLumina.
