P3: The Project Privacy Podcast
The Podcast That Helps You Understand The Evolving Data Privacy Landscape
Why stop at one podcast when you can host two?! Introducing P3 – the Project Privacy Podcast – the show that helps you understand the constantly changing data privacy landscape.
The podcast – available now on Spotify, Transistor, Apple Podcasts, and everywhere you listen to your podcasts – brings you valuable, actionable insights on privacy, compliance, and governance, straight from the experts themselves. Every other week, we’ll bring you interviews with privacy pioneers, compliance experts, and governance specialists.
Active Navigation’s CEO, Peter Baumann, moderates our first episode, highlighting the ROI of Proper Data Management. Joining the discussion are Joe Ponder and Jason Shelton of InfoCycle – a consultancy providing services around data governance, privacy and compliance, and legal ops. We dig deeper into this important subject, answering questions such as “what are the true costs of incorrectly managing data?” and “how can businesses begin to see their data as an asset?”
Joe and Jason recently merged their respective companies to collectively form InfoCycle. Together, they bring a tremendous amount of experience and knowledge around information security and information governance to the table. Click below to tune in.
On this episode, you’ll learn about:
- The true costs of storing unnecessary data and how to reduce those costs
- The argument for cost avoidance versus cost reduction
- The importance of proper governance when/after migrating to the cloud
- How proper data management plays into data privacy
Show Links:
Full Transcription:
*We’re only human. We hope you’ll forgive us if we missed a few “um’s” here or there!
Marissa Wharton: Hello and welcome to P3 – The Project Privacy Podcast. I’m Marissa Wharton with Active Navigation and with me today is Peter Baumann, our CEO and Co-founder. Welcome Peter, thanks for joining us for our inaugural episode!
Our guests today are Joe Ponder and Jason Shelton, both senior partners at InfoCycle, a consultancy providing services around data governance, privacy and compliance, and legal ops. We have a great conversation in store today, and to kick it off, I’m going to hand it over to Peter!
Peter Baumann: Thank you very much Marissa, thank you. Welcome, Joe and Jason, delighted to have you here today! I understand, given that you’re both based in Nashville, you had some bad storms overnight – so it’s an extra privilege that you could get online. Thank you.
Joe Ponder: Absolutely. Thanks for having us, guys!
Jason Shelton: Thanks, Peter.
Peter Baumann: So let me just give a little bit of background for our audience on the tremendous experience and knowledge that you guys have in this space. So I’ll start with you, Joe.
Joe has executed enterprise programs and projects for numerous Fortune 500 companies, primarily focused on information security and information governance. These include the healthcare company HCA. Joe was also an early adopter of the HITRUST security framework, and he holds numerous certifications, including the Certified Information Systems Security Professional (otherwise known as the CISSP), Six Sigma, and PMP.
Joe, can you just tell me a little bit about HITRUST? I’m not familiar with that particular framework.
Joe Ponder: The HITRUST framework came out in – gosh, I’m probably going to get it wrong, but the early 2000s, 2003, 2004 timeframe – and has matured a lot over the years. Really it’s a conglomeration of federal requirements, but the focus is on general security compliance to help organizations establish a baseline of security when faced with different challenges like PCI, SOX – those kinds of things. And so it helps organizations establish and identify baseline security to be compliant across a number of different federal regs.
Peter Baumann: That’s great. And on to you, Jason. You also founded the Better Information Governance Consulting Group back in 2018, and you’ve got a wealth of experience partnering with clients on numerous legal operations and enterprise content management analysis projects.
And, I think, of note for our audience today, you’re also a graduate-level lecturer and teacher. And if I recall correctly, Jason, I think that’s in the field of economics and mathematics. Is that correct?
Jason Shelton: So first it was at the high school level in mathematics, and then I moved on to the graduate level with the Regents Online Degree Program – an online program offered in the state of Tennessee for nursing students and informatics professionals – so healthcare informatics was where my coursework and education were based.
Peter Baumann: Well, I’m suitably intimidated now on this call. I’m glad you didn’t mention the trigonometry, otherwise, I would have left the call! But that’s very helpful, Jason – particularly in this world where really it’s all about data and the ability to analyze data and derive value from large sets of data. So I can imagine that plays very helpfully in your business and your day-to-day activities.
So we, Active Navigation, have worked with you both individually and, more recently, as a team, as you’ve merged your two respective businesses in just recent times. Now, if I recall as well, I think you managed to merge the businesses – was it on the same day that the United States shut down due to COVID? Was that the beautiful timing you managed?
Joe Ponder: It was. Hindsight’s 2020, I suppose! But we do feel strongly that the issues we’re talking about today, and data governance in general, are pervasive throughout many organizations. And we’ve seen time and time again these use cases play out where companies really struggle with the challenge of enterprise data management. Things for us are still continuing to move forward and we’re excited to partner up and join forces. The timing is a bit challenging, but nevertheless we’re still excited and continue to move forward.
Peter Baumann: I think it’s great. I think, you know, my own experience is that these things are set in front of us as challenges and you overcome them and you’ll be a better, stronger organization as a result.
It’s actually quite easy to build a business in good times. In challenging times, it’ll make you a leaner, better business, and so I wish you the best with that. It’ll be fantastic.
So let’s jump straight in and talk about the return on investment of proper data management – which was actually the topic of one of your recent blog posts. One of the first statements you make in that post is that the storage conundrum abounds in almost every business.
Can you expand a little bit on what you mean by that, please?
Jason Shelton: Sure, Peter. So the storage conundrum is really specific to how businesses have typically just allowed data to build and build and build – specifically on the unstructured side. And that data is essentially kept by users and/or departments in a long-term fashion.
Because, what’s the risk of it? They don’t see why not to keep it – because, what if we need it? What if we need to go back and reference something that we created back in 2001 – so keep it just in case. And then IT teams are left with the conundrum of: how do we tackle the storage issue we have?
We’re running out of storage. It’s very expensive. Do we add more storage? Do we try to get users to delete things? And typically we run into IT teams who don’t want to face the challenge of getting rid of data. So they essentially either buy more storage or, as we have mentioned in some of our posts, move everything to the cloud where it’s a little cheaper to keep things.
Peter Baumann: So I’m interested in this argument around the ROI… You talk about how the average cost of storing unnecessary information can range – it’s quite a broad range – from $3,000 to $5,000 per terabyte. And what we see with many of our clients is that they’re storing everything. Because basically, to your earlier point, storage is seen as cheap – it used to be seen as quite expensive and certainly now it’s seen as cheap – and $3,000 to $5,000 per terabyte clearly isn’t a cost that should be ignored.
But these build up over time, don’t they? What kind of ROI do you expect businesses, maybe with a hundred terabytes, to realize in managing their data better?
Joe Ponder: So we think that the return is anywhere from 30 to 40% – somewhere in that range – when they look at managing data in a more methodical way. I think the actual ROI is probably a lot higher, but in terms of what we’ve been able to get organizations comfortable with, it’s around that 30 to 40% range. And the $3,000 to $5,000 per terabyte figure, that’s a rough estimate.
It’s more focused on the on-prem storage challenges – the cost of adding net new storage per terabyte is really the source of some of those figures. The reality is, it’s about getting organizations to understand that there is a cost associated with storing all the data that Jason was speaking to.
And at times what we find is that, until organizations are nearly running out of space, they don’t think too much about it, right? Until it becomes time to have a big new capital expenditure to upgrade storage devices. And then this becomes a hot topic.
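For readers who want to make those figures concrete, here is a minimal back-of-envelope sketch, assuming the hypothetical 100-terabyte estate Peter mentions and treating the per-terabyte figure as a fully loaded cost (an assumption for illustration; the episode does not specify how that figure is amortized):

```python
# A back-of-envelope sketch of the savings Joe describes. The 100 TB estate
# and the way the per-terabyte cost is applied are illustrative assumptions;
# only the $3,000-$5,000/TB and 30-40% ranges come from the conversation.

def storage_cost(terabytes: float, cost_per_tb: float) -> float:
    """Fully loaded cost of keeping `terabytes` of unstructured data."""
    return terabytes * cost_per_tb

def projected_savings(terabytes: float, cost_per_tb: float, reduction: float) -> float:
    """Savings if `reduction` (e.g. 0.30-0.40) of that data can be defensibly removed."""
    return storage_cost(terabytes, cost_per_tb) * reduction

if __name__ == "__main__":
    for cost_per_tb in (3_000, 5_000):
        for reduction in (0.30, 0.40):
            saved = projected_savings(100, cost_per_tb, reduction)
            print(f"100 TB at ${cost_per_tb:,}/TB with a {reduction:.0%} reduction: ~${saved:,.0f}")
```

Under those assumptions, better management of a 100 TB estate works out to roughly $90,000 to $200,000 of avoided storage cost.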
Peter Baumann: So I guess what you’re saying there, Joe, is that the argument for cost avoidance is an easier and stronger one for the enterprise to work with than the cost reduction argument – is that fair?
Joe Ponder: I think the cost avoidance is a lot of times where the discussion starts, from our perspective. Again, it can be difficult for an organization to muster the desire to move forward on a mass data reduction project without that catalyst. I know we’ve spoken a lot in the past about having that burning platform or that catalyst that’s helping drive things.
Sometimes it’s a large litigation activity or a merger or acquisition, but in this space, a lot of times, it’s also just running out of storage. What do we do? We’re going to have to make a net new capital expense. In today’s business climate, that’s not good news for many CFOs.
Peter Baumann: Can I dig a little bit deeper into this cost per terabyte? Are you talking about all the aspects of storage in – let’s call it old-school storage, when it’s still on-prem or pre-cloud – and by that I mean primary disk, backup disk, maybe some kind of archiving (whether it be Commvault or Enterprise Vault), backup software, storage management tools, administration costs, and software licenses.
Does that form part of your $3,000 to $5,000 estimate?
Jason Shelton: Peter, I think you’re hitting the nail on the head. The $3,000 to $5,000 is such a wide range because you have so many variables that play into that number. Whether it’s disk space and just disk space only, or the entire architecture and the upkeep of that architecture from a people perspective, as well as keeping the lights on, powering it, racking it, wherever it’s sitting – whether it be internally within your own data center or externally in a hosted data center…
Where you’re having to pay specifically for the space you’re taking up, that number can range because of those variables. So for sure, all those things do play into the numbers.
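As a rough illustration of why that range is so wide, here is a hypothetical tally of the kinds of components Peter and Jason list; every line item and dollar figure below is an illustrative placeholder, not a number from the episode:

```python
# A minimal sketch of how the variables discussed above might add up to a
# per-terabyte figure. All dollar values are hypothetical placeholders; only
# the idea that the number is a sum of many components comes from the episode.

on_prem_cost_per_tb = {
    "primary_disk": 1_200,                 # hypothetical hardware cost
    "backup_disk": 800,                    # secondary copies
    "archiving_and_backup_software": 600,  # licensing for archive/backup tools
    "storage_management_tools": 300,
    "administration_labor": 700,           # the people keeping the lights on
    "power_rack_and_space": 400,           # data center or hosting overhead
}

total = sum(on_prem_cost_per_tb.values())
print(f"Illustrative fully loaded cost: ${total:,} per terabyte")  # lands in the $3,000-$5,000 range
```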
Peter Baumann: It must be really hard for most organizations to have a true steer and handle on the actual cost of storage, I imagine.
How do you work from there? Who owns it? If it’s not the CIO or the CTO office, who owns it and how do you help them determine true costs?
Jason Shelton: I think a lot of that has to do with where you can get the most bang for your buck in terms of leadership, and who can drive the message down not only to the leadership within each department and business unit, but also down to the users.
Joe and I both had really good experience in driving that message down from the legal department. And typically the general counsel is the person that can have the best voice to drive the discussion around not only reducing the cost of the storage, but also reducing the risk associated with keeping the storage.
As Joe mentioned, litigation can be one of the driving factors, and if that is the case, that is for sure one of the places we start: with legal and with counsel, saying, hey, users, we’ve had a lot of spend and a lot of unnecessary exposure to our data because of keeping it around forever.
So, therefore, we’re moving to a new standard and a new policy that’s going to allow us to retain things per the retention schedule and get rid of things that are no longer necessary and no longer have business value – and therefore our risk is reduced and our cost of storing that data is reduced as well.
Peter Baumann: No, that’s very good.
I could discuss this for a long time, but before we move on to another subject from your blog – obviously a lot of customers have moved their data into the cloud environment… What’s your experience of the ROI with the cloud?
Customers do it because they see an immediate bottom-line improvement on the quarter practically. But are they really building out the longer-term potential costs and challenges they’re going to have with recalling that data if and when it’s required? What’s your take on that?
Joe Ponder: So a couple of thoughts there, and actually, tune into our upcoming blog as well because it’s on this exact topic. You’ve migrated to the cloud for a particular business function or a particular service, but oftentimes not everything gets included in that migration – in the case of unstructured data or any data – structured or unstructured.
You now have the challenge of trying to manage the data in two locations. Typically there are data remnants that are left behind in the on-prem setting that continue to need governance, continue to need to be backed up. And you’re also trying to manage it in the cloud as well.
So we’ve seen that time and time again, where companies do this migration but don’t cut the cord, so to speak, still maintaining some of that legacy file residue on-prem. In terms of the cloud itself and the costs, I kind of liken it to a lot of the box storage vendors, right? It’s cheap to get it in there – it’s cheap to onboard. It’s a lot more expensive to get your data out. Is there an immediate opex reduction when transitioning to the cloud? Yes. But it’s also one that you have to be mindful of, in that while it is cheap, organizations seem to be growing their cloud footprint at an even quicker rate than the on-prem footprint.
And you know, the storyline there is that it’s almost out of sight, out of mind, right? The storage is so cheap and there’s so much of it, we’re never going to fill it up. So feel free to keep as many copies or files as you need. We see that argument a lot, especially on the collaboration services and email front, you know? No longer are users kind of corralled, if you will, by a terabyte limit or a megabyte limit in their inbox – it’s keep everything for a period of time.
So it’s just a different way of thinking about the storage management side of things. And a lot of organizations that are transitioning to the cloud really aren’t there yet, from a maturity standpoint, to be able to put in the right rules to govern the data once it gets there.
Peter Baumann: Yeah. So it could be a bit of a misnomer that you’re actually making true savings… you’re probably storing up additional risks and therefore potential costs for later on.
So to that point, clearly it’s all about data and information governance. And one of the other interesting stats I pulled out from your recent paper: you say that there’s only a 1% chance that a file will ever be needed. 1%?!
Jason Shelton: So Peter, that’s really kind of a number where – I think I was mentioning this earlier – the thinking is “let’s just keep it, just in case,” and that 1% is really just around the usability and the usefulness of that file over the long term. And when we’re saying 1%, we’re saying the recall of that file is going to be close to 1%, because as you move on and that file ages even into two, three years – which is a fairly short time period – what is the likelihood of you going back and trying to retrieve that file? And if you do need it, depending on your current technology footprint – if it’s a Windows file share base – what’s your likelihood of being able to find it?
Which is also another reason why people are moving to the cloud – to make things a little more searchable and findable because there are so many data silos in each of these businesses that we work with, and individuals store things in their own silos, and when they leave the company, that silo’s essentially now sitting there stale with content in it that no one can really access or know what’s there.
So again, the 1% number is a number that is fairly close to accurate, but again, it’s based upon users leaving. It’s based upon data being in locations that are not accessible as well as the usability. So that’s just one part of it, one aspect.
Joe Ponder: I’ll add to Jason’s explanation… You know, Peter – let me turn the question to you – we talked about email a bit earlier, right? How many emails does the CEO of your organization get every single day?
Peter Baumann: Too many!
Joe Ponder: Correct! You’re getting hundreds of emails a day. And so what we’re saying is, roughly one out of every hundred of those emails is an important business record that has long-term retention potential to it. I think that resonates, right? Because that’s the reality: we get so much noise and so much information thrown at us on a daily basis that’s transient in nature – that isn’t a record, that isn’t an official document or something that we need to refer back to as it relates to retention schedules.
Yet we just continue to keep it for the sake of what if we need to recall something?
Peter Baumann: Just to kind of pull on that thread a little bit… this is kind of a joint question, if you like. So again, in your paper, you have this nice phrase – unstructured content is a necessary evil for all businesses.
So that plays into the 99%. You’re only going to find the 1% by receiving a hundred percent, so I guess that’s what you mean by it being a necessary evil. What businesses need to do is see the unstructured data as less of an evil and more of an asset. You’ve got to work your way through it, and kind of in line with that, moving towards: okay, we understand the problem, we understand there’s a cost associated with the problem – and we haven’t really touched on it.
We haven’t even gotten into the litigious side of the cost base – e-discovery and counsel costs, and what have you – but how do you get a cultural shift in the business? It’s something we constantly hear. How do you build the C-level – the executive level’s – energy, buy-in, and commitment to execute on these information and data governance programs?
Joe Ponder: Yeah, it’s one that doesn’t happen overnight, would be my perspective. You know, Jason and I, through many projects, continue to focus on the quick wins of managing this unstructured content, right? Let’s do a small project and show the benefit of pulling back particular record types and getting them classified.
Let’s show the benefit of now moving those records, once classified, into a cloud system such as Office 365 or Google Vault, and let’s show how the operational upkeep and the ongoing management of those records can become second nature to the business.
And so you start to tease out those kinds of quick wins, you know, you start with a department, you start with two departments, and all of a sudden you’re changing the business workflow. You’re changing the process. And that’s where the real value comes into play here.
You know, it is a necessary evil and it’s evil in a sense because so much of it is noise, as we noted. When I think about the problem from a security standpoint and the role of a CISO, you’re now trying to govern and manage and secure all of this data that you really don’t know what it is. And so how can you manage it properly if you don’t know which of these unstructured files, which of the million files have PHI or PII present in them?
Jason Shelton: To add to Joe’s point, I think it’s becoming ever more apparent that privacy departments have long been shut in the corner and almost forgotten, just doing their own little privacy policy work. But with a lot of federal, and now state-based, privacy regulations starting to pop up – I think three or four states in the United States are actively going through the process of implementing privacy regulations on how you must protect users and their data – it’s becoming more apparent to organizations. I think some of the fines are going to start raising some eyebrows, and as users and people start requesting companies to show them where their data is held, it’s going to be pretty easy and apparent when they start looking at structured data. But how do you find, locate, and identify that PII within your unstructured data stores?
And if you’ve got these data silos, as we mentioned earlier, it’s going to be really hard for you as an organization to respond to some of those queries from individuals asking about their data.
And as these privacy regulations start spinning up, the fines are going to start spinning up and it’s going to become more and more apparent to C-level executives that, hey, we’ve got to focus some attention here and get our data in order – especially our unstructured data.
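To make that “find the PII in your unstructured stores” problem concrete, here is a minimal, hypothetical Python sketch of the kind of scan involved. Commercial data discovery tooling does far more (document parsing, classification, many more entity types); the patterns, file extension, and mount path below are illustrative assumptions only.

```python
# A minimal, hypothetical sketch of locating likely PII in a file share.
# The regex patterns, the .txt extension, and the path are illustrative
# assumptions, not anything described in the episode.

import re
from pathlib import Path

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_file(path: Path) -> dict:
    """Return a count of matches per PII pattern found in one text file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return {}
    return {name: len(pat.findall(text)) for name, pat in PII_PATTERNS.items() if pat.search(text)}

def scan_share(root: str) -> None:
    """Walk a file share and report files that appear to contain PII."""
    for path in Path(root).rglob("*.txt"):
        hits = scan_file(path)
        if hits:
            print(path, hits)

if __name__ == "__main__":
    scan_share("/mnt/department_share")  # hypothetical mount point
```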
Peter Baumann: No, that’s very helpful actually gentlemen. Clearly the privacy space is driving this, whether it’s CCPA, GDPR, New York Cyber Regulations and the like. I’m interested though, in your view on the balance of driving requirements between risks versus value. And so when we talk privacy, you know, pre-breach or post-breach work, clearly we were on the risk side of the pinwheel, if you like.
Are you seeing many customers start to look at their unstructured data assets from a value perspective as well, or do you think those are still few and far between?
Jason Shelton: I think, Peter, that’s a trend that is starting to grow as companies try – and we’ve got a client that’s looking and asking specifically, hey, can you help us drive more business-decision-based value out of our unstructured content?
We’re doing a great job with some of our Hadoop and Power BI tools as we start looking at our structured data and being able to drive business decisions and those types of projects around that data. But how do you drive more value around your unstructured content?
And I think that is a trend that has started to grow because data volumes are growing exponentially, and there’s no way to really curb that growth, as we were talking about earlier. But as that data grows, the volumes of unstructured content are also growing, and how do you turn that data into value for the decision-makers of the company? That’s somewhere I think companies are going to continue to move toward as they try to get value out of what they have.
Peter Baumann: Yeah, and I agree wholeheartedly with that. I think the only time that large organizations, whether they be public or private, will really get their arms around this data challenge will be when they align the risk with the value. It feels like we’re some way off that happening, but as it happens, and as we derive new information from the unstructured assets, it can drive both sides of the business, and that will be very helpful.
So, I’m going to close out there if I may gentlemen. Jason and Joe, thank you so much for your time today. I’ve really enjoyed our discussion. Sorry, I won’t be able to visit you in the great home city of Nashville anytime soon, but as soon as I can, I will, and I’ll pass back to Marissa!
Marissa Wharton: Thanks Peter! That was great. And when you guys were talking about people hanging on to files even if there is only a 1% chance they will ever be needed, I started thinking about what I keep and what I hang on to and what I can start deleting. So I think I might have to go do a cleanup of my own after this!
Thank you guys again, Joe and Jason for joining us. It was a pleasure speaking with you and thank you Peter. And thank you, of course, for listening! We will see you next time on P3 – the Project Privacy Podcast, produced by Active Navigation.