Master the complexities of cloud compliance with expert resources and relevant insights.
Understanding the Total Cost of Ownership of Cloud Compliance
This guide offers tested formulas and directional advice from the compliance and cloud experts at Datica on how to measure and manage the total cost of ownership of achieving compliance in the cloud.
Our goal in building the Datica Portfolio of products was to reduce the barriers of compliance in the cloud for makers of digital health products. We believe it should be as simple to deploy a healthcare application that handles PHI to the cloud as it is a less-regulated consumer application. It is our ongoing commitment to simplify healthcare’s onramp to the cloud that enables our customers to focus their efforts away from compliance and cloud management and toward developing digital health applications that improve patient outcomes and move healthcare forward in this digital age.
There’s much to know about both compliance and cloud management, and that knowledge requires a significant investment in time, education, personnel, and money. The intent of this document is to help makers of digital health products make informed decisions about whether to “build” cloud compliance themselves or “buy” it in the form of Datica. In the following pages, we lay out a framework and cost estimates to give you a better understanding of the total cost of ownership of building and maintaining compliant cloud infrastructure yourself (on AWS, for example) and the value that Datica provides.
This document explains the considerations of compliance and cloud management in healthcare and offers a framework to understand the activities, resources, and cost estimates in each of those domains. We show how Datica addresses those requirements in some detail and how you as a customer benefit from building your application on Datica. We also attempt to provide you a model to estimate costs and resources based on our extensive experience working with multiple third-party auditors and multiple cloud infrastructure providers (like Amazon Web Services, Microsoft Azure, or IBM Softlayer).
Understanding Compliance in the Cloud
Healthcare is slightly behind the curve when it comes to the massive market shift from on-premise servers to the cloud but, in healthcare’s defense, it’s not as simple a decision as just choosing a cloud deployment model and provider. The complexities of adoption and migration are different for healthcare than other industries because of strict regulations about handling PHI, most notably, HIPAA. Yet, healthcare is being driven forward by the relentless pursuit of interoperable data. And that requires interoperable infrastructure. Cloud infrastructure. With that said, there are multiple options in public and private clouds available to healthcare teams.
In a private cloud, the cloud infrastructure is operated for a single organization. The dedicated data center can be managed by the organization or by a third-party service provider and may be on-premise or off-premise. In a private cloud, the infrastructure is dedicated to the use of that one identified entity and the organization’s data is physically segmented from all other organizations using the provider. Private clouds are often the most expensive cloud alternative due to the costs associated with running a physical environment, including the actual data center and maintenance of hardware. In the past, private clouds were the premier choice for security and compliance for organizations in highly regulated industries. Today, security and compliance are just as achievable on public clouds.
In a public cloud, such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud, or IBM Softlayer, the data centers are owned and managed by the cloud provider and are made available through a shared service model to the general public or a large industry group. Pay-as-you-go models benefit organizations with economies of scale, and the same levels of security and compliance can be achieved today as on private clouds. Public clouds are the preferred option for most makers of digital health products and the foundation of the Datica product family.
In a hybrid cloud, a combination of both public cloud and private infrastructure are typically connected and run by a Managed Service Provider (MSP). This combined approach is often used by companies who value owning storage but prefer to outsource computing costs. Splitting deployment models between public and private options increases internal costs. We often see hybrid environments used as a strategy of compromise in larger organizations that already have infrastructure, yet are migrating workloads and applications to the cloud.
80% of healthcare CIOs know migration to the cloud is inevitable, yet most don't have a migration plan in place
One important consideration is the topic of multi-tenancy in the cloud, whether the cloud is private, hybrid, or public. In a multi-tenant cloud environment, multiple customers, organizations, or applications share the same resources (i.e., infrastructure, data stores, virtual components, etc.).
An important note to remember about multi-tenancy in the cloud is that customers generally have no knowledge of or insight into the other customers with whom they are sharing resources. Additionally, cloud customers have no knowledge of or insight into how those other organizations secure the internal environments they use to access the cloud.
Healthcare compliance in the cloud is possible in any cloud deployment model as long as the implementation addresses controls in the five main HIPAA Omnibus categories:
Administrative Safeguards (§ 164.308)
Physical Safeguards (§ 164.310)
Technical Safeguards (§ 164.312)
Organizational Safeguards (§ 164.314)
Policies and Procedures and Documentation Safeguards (§ 164.316)
Additional security provisions within Section 13402 of the HITECH Act.
Being on the cloud is critical today, and critical for the future of healthcare data interoperability. In general, the public cloud is emerging as the better choice due to pay-as-you-go pricing, better usability, increased security, and scalability.
Architecting for HIPAA Compliance in the Cloud
For the purposes of this document, let’s assume you’ve chosen the AWS public cloud as the infrastructure on which to build your compliant application. There are some important things you need to know about HIPAA compliance before you make the decision about whether to deploy your application directly on AWS or on top of a platform like Datica (which sits on top of AWS).
Foreshadowing: It’s not as easy or affordable as it seems to go directly to AWS. At the end of the day, it costs less, is less risky, and is a better experience to use Datica to reap AWS’s benefits while reducing risks of non-compliance.
AWS does not take on 100% of HIPAA Compliance
Here’s where it gets complicated. AWS is HIPAA compliant exactly to the extent it is required to be at the level of abstraction of the AWS service you are using. You could be using a vanilla virtual machine or a fully managed service for ML, each with a unique set of layers of responsibility. With thousands of services to choose from (and we’re still only talking about AWS), the multitude and complexity of compliance requirements is hard to manage.
Regardless of the AWS services you’re using (we’ve found that most people use about 5-6 services in a typical cloud environment), using AWS does not make you HIPAA compliant. You’re not building infrastructure; you’re deploying technology and data to AWS infrastructure, and that adds greatly to the list of HIPAA controls that apply to you. Those additional controls vary depending on your specific case but generally include additional infrastructure-level, application-level, network-level, and administrative controls at the company level. In other words, AWS has a shared responsibility model, which means that when you build your application or store your data directly on top of AWS, you have to take on the remaining controls required for HIPAA compliance.
Datica’s portfolio includes AWS and takes you the rest of the way down the path toward full HIPAA compliance, and further down the path toward compliance at the company and application levels, so you can focus on the functionality of your application and not on compliance. Datica has mapped the layers of abstraction for various cloud services, ensuring there are no gaps. With Datica, you get a compliant platform for deploying and managing critical healthcare applications in the cloud.
Let’s take a deep dive into understanding HIPAA controls.
Does HIPAA Matter?
HIPAA kicks in when a digital health product handles Protected Health Information (PHI). There are several different categories of PII, like someone’s name, home address, or phone number. Once this PII is tied to health information, it is PHI. When a technology environment stores, processes, or transmits PHI, HIPAA asserts rules as to how it should handle a multitude of security, privacy, and policy procedures, called “controls”. In HIPAA terms, there are physical, technical, and administrative “safeguards”. Datica manages the physical and technical safeguards of HIPAA, leaving you the administrative safeguards, which are almost always custom to your organization. Thus, Datica provides more than two-thirds of what it takes to be HIPAA compliant. Demonstrating that a company and its digital health product meet all those controls is how it can call itself compliant.
Another way for a digital health organization to look at HIPAA controls is by categorizing them into three simple levels: infrastructure, application, and company.
At the infrastructure level, the organization needs to meet certain controls around encryption, backup and disaster recovery, OS hardening, and so on. It’s a robust list.
At the application level, the organization needs to follow basic security and privacy best practices, e.g., don’t store plaintext passwords. Some products exist to help with these controls but, for the most part, it’s up to the organization to do the right things and to coordinate an external audit to prove compliance at this level. There is also the broad concept of “access” that fits into this level: Does the product ensure that only authorized people have access to only certain sets of data? Oftentimes this is implemented using Access Control Lists, or ACLs. It’s a broad topic but an important component of HIPAA compliance as well. Often a health organization (like a hospital trying to buy a digital health product) will do its own security audit to assess this level.
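The ACL concept above can be sketched in a few lines. This is a hypothetical illustration, not a prescribed HIPAA access scheme; the roles and record types are invented for the example, and a real product would back such a check with its authentication system and audit logging.

```python
# Hypothetical sketch of role-based access to data sets. The roles and
# record types below are invented for illustration only.

ACL = {
    "physician": {"clinical_notes", "lab_results", "demographics"},
    "billing":   {"demographics", "claims"},
}

def can_access(role: str, record_type: str) -> bool:
    """True only if the role's approved set includes the record type."""
    return record_type in ACL.get(role, set())

print(can_access("physician", "lab_results"))   # True
print(can_access("billing", "clinical_notes"))  # False
```

The point is less the mechanism than the guarantee: every data access path should pass through a check like this, and the check itself should be auditable.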
At the company level, it’s about implementing administrative policies. Some products exist to establish and then continuously administer these controls. Datica open sourced our company policies under a Creative Commons license, which hundreds of organizations have used as a starting point for their own company-level policies in their own audits. For the purposes of this TCO guide, the compliance requirements at the company level are not included, as these tend to vary widely from organization to organization. That said, developing and implementing policies should be the first step on your compliance journey before following this guide.
We open sourced our policies to help jumpstart the process for organizations new to the cloud. You can find them here.
AWS offers thousands of different services that provide a great amount of flexibility, making it possible for any developer to bundle what they need for their data and workloads. Datica does this step for digital health applications. We package a subset of those services (the HIPAA-eligible ones, like CloudTrail for logging and S3 for object storage) into the Datica platform to address the specific use case of hosting an application in a HIPAA compliant manner.
So, if you choose to build out the requisite infrastructure for your application yourself vs. buying a pre-built product with compliance baked in, here are the major points to keep in mind:
Building all of this yourself is possible, and setting up the individual services is, in fact, not the hard part. Orchestrating the DevOps between all the components on an ongoing and compliant basis with every deployment IS the hard part.
It’s important to understand this key point: AWS’s shared responsibility model grants excellent security for the security OF the cloud but customers (you) are still responsible for security IN the cloud. Orchestrating DevSecOps is just as challenging as orchestrating DevOps.
Making each AWS service HIPAA compliant is challenging on its own, but if you plan to employ any modern environment paradigms (namely cloud native technologies like Kubernetes), you face an added level of development and maintenance that isn’t achieved by simply making a single service compliant.
If you build everything yourself, you also shoulder the risk. Read business associate agreements (BAAs) carefully, as they tell you the areas in which you are accountable. Risk reduction should be part of the TCO calculus, much as you would never process your own credit cards.
The Importance of Proving Compliance with HITRUST
HITRUST is an industry-led initiative that defines a common, prescriptive framework and associated requirements for HIPAA compliance. The tricky thing about HIPAA is that there is no exact definition of what it means to be “compliant”; it’s one entity’s lawyer, or Privacy Officer, arguing with another. As you can imagine, this creates business inefficiencies as buyers (like a hospital) try to ensure that a seller (like a telemedicine solution) is compliant.
HITRUST aims to fix that. It lists hundreds of controls that map to HIPAA’s rules, as well as to standards like NIST, in a format it calls the Common Security Framework, or CSF. Any company being audited by a third-party auditor, like Coalfire, which audits Datica, can request that the auditor use the HITRUST CSF as the basis of the audit. Results are then exchanged with the HITRUST Alliance, with the eventual outcome being a HITRUST CSF Certification, good for two years. (Datica has been HITRUST CSF Certified three times.)
The value of the HITRUST CSF Certification is that organizations can use it as a way to accelerate relationships with other healthcare organizations. Let’s take a basic example: Say you are a telemedicine solution and you want to sell to a giant healthcare system. Before, you’d spend months working with the health system, which wants to audit your tech stack, controls, etc. It could take six months or more to get past that stage. But with a HITRUST CSF Certification, the compliance officer at the health system will oftentimes effectively sign off that you are indeed compliant in an accelerated fashion. An auditing process truncated from months down to weeks helps everyone involved not only with costs, but with speed to market.
A great feature of HITRUST is that it now supports inheritance. Going back to the three buckets in the previous segment, an organization that wants to be HITRUST CSF Certified and is also a customer of Datica can effectively inherit all the infrastructure-level controls from the Datica platform. Therefore, the cost and time requirements for your own HITRUST certification are much lower than they would be if you built your own compliance layer on top of AWS.
It is important to remember that HITRUST is not a requirement for everyone. But, since time is money and digital health initiatives are usually short on both, HITRUST can be a critical accelerant to success. It’s much easier to backfill the occasional custom audit with controls from your HITRUST certification than it is to argue in every audit that your one-off policies are sufficient. Therefore, we consider the costs of HITRUST compliance in our TCO framework.
The Total Cost of Ownership Framework
You should understand whether you intend to build a small proof-of-concept application which would likely need to be re-written in order to scale, or if you intend to build, manage, and support multiple applications and enterprise-class workloads. If it is the former, then the TCO calculations will be different. This TCO framework is intended to address the needs of mid-to-large organizations serving large markets with requirements for multiple applications.
In addition to one-time costs and time for setup, there are many activities that need to be done on an ongoing basis to ensure a fully compliant and certified infrastructure. To enable you to better understand the scope of effort you’re undertaking by building your own compliant healthcare environment directly on AWS, we’ve categorized the work efforts into the following five buckets:
Design (i.e. Designing for compliance). This is the series of steps you need to take to ensure that both the infrastructure you deploy applications on and the applications themselves are built with a clear set of security principles and associated documentation. This is the high-level step of ensuring the overall design of your cloud footprint is in line with your organizational policies and procedures. In many cases, this mandates updating or re-writing policies and procedures for the cloud. Depending on the industry you operate in, it is also important to design with the relevant compliance frameworks in mind. In healthcare, most of the industry is increasingly aligning behind HITRUST as the authoritative trust framework.
Implement (i.e. IaaS-specific implementation): If you’re building your infrastructure from the ground up, your choice of IaaS provider will necessarily force you down a particular path. This is because each IaaS provider has a different set of services and, more importantly, a specific set of APIs. That decision is likely to lock you into that IaaS provider, removing the option to stay IaaS-neutral and making it challenging to migrate or deploy workloads across providers. This may or may not be critical to you immediately but is likely to become a factor as you scale.
Monitor (i.e. Application deployments and scaling): Once your infrastructure design and implementation are in place, you have to monitor your cloud workloads to ensure they stay in compliance with the approved configurations from the design phase above. Compliance is not simply a zero-day challenge; it is ongoing.
Prove (i.e. Ongoing proof of compliance): Compliance is not a one-time activity. Frameworks such as the HITRUST CSF require annual reassessment (“re-ups”), which means you need to be able to show proof of ongoing compliance for your application as well as adherence to policies and procedures. Policies and procedures might seem annoying, but they lay the foundation on which everything else runs, so having fully documented policies and processes with associated proof is the key to re-upping your audit quickly. Also note that compliance frameworks will evolve. This is necessarily so, as security threats change and as frameworks borrow best practices from one another.
Given this framework, we will now look at the specific set of activities under each bucket and provide a model to estimate costs.
The Total Costs of Infrastructure for Digital Health
Designing for compliance requires a clear understanding of the compliance requirements (in our case the HITRUST CSF) and translating that into both technology and policy / procedures and associated documentation.
The core set of things that need to be done as part of the design phase are outlined below.
Choose your approved set of cloud services. There are thousands of cloud services to choose from and, just like the non-cloud world, there should be certain technologies that are approved for production use in your organization. This list does not have to be static (and on the cloud, it definitely should not be), but you need to have an approved list and a process for amending it.
Create approved configurations for each cloud service. Once a list of approved cloud services is built, each service needs to have an approved configuration. Similar to having an approved list of versions for an operating system, the details of how to configure the parameters of each cloud service need to be documented.
Publish your list of approved cloud services. The cloud provides easy access to deploying new infrastructure. Non-admins and non-operators can now deploy and configure technical resources independently. Ensuring infrastructure is deployed in accordance with approved services and in approved configuration states requires education.
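The approved-list idea in the steps above lends itself to automation. Below is a minimal sketch, assuming a hypothetical baseline of approved services and settings; the service names and parameters are illustrative, not a real Datica or AWS schema.

```python
# Hypothetical sketch: validate a deployed cloud service's configuration
# against an approved baseline. Service names and parameters are
# illustrative, not a real provider schema.

APPROVED_SERVICES = {
    "s3":  {"encryption": "aws:kms", "public_access": False, "versioning": True},
    "rds": {"encryption": "aws:kms", "multi_az": True, "backup_retention_days": 30},
}

def validate(service, config):
    """Return a list of deviations from the approved configuration."""
    if service not in APPROVED_SERVICES:
        return [f"{service} is not an approved service"]
    baseline = APPROVED_SERVICES[service]
    return [
        f"{key}: expected {expected!r}, found {config.get(key)!r}"
        for key, expected in baseline.items()
        if config.get(key) != expected
    ]

# Example: a bucket that was deployed without encryption
issues = validate("s3", {"encryption": None, "public_access": False, "versioning": True})
print(issues)
```

In practice, checks like this would run continuously against configuration data pulled from the provider's APIs, which is exactly the monitoring work described later in this guide.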
Create encrypted networks for your cloud environments. The way you go about this differs based on the cloud services in use, but the requirement is the same: you need to ensure your data is encrypted at all times when in transit on the cloud. This can be as simple as a service configuration parameter, but it has to be done on every service and in every one of your cloud accounts.
Ensure data is encrypted at rest. All data should be encrypted at rest. Fortunately, there are a lot of good tools to automate this process and to ease the process of managing encryption keys.
Centralized logging, monitoring, and metrics. Whatever services you deploy, it is essential to set up central logging and monitoring. The first step is simply logging the events. The follow-on step that makes logging valuable is developing tooling to filter out the noise and isolate the signal.
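As a toy illustration of that noise-versus-signal step, here is a hedged sketch; the event shapes and filtering criteria are invented for the example, and a real pipeline would use a dedicated platform (CloudWatch, an ELK stack, etc.).

```python
# Hypothetical sketch: a first pass at separating signal from noise in
# centralized logs. Event shapes and "interesting" criteria are
# illustrative only.

NOISY_EVENTS = {"health_check", "metrics_scrape"}

def interesting(event):
    """Keep auth failures, config changes, and anything flagged high severity."""
    if event.get("type") in NOISY_EVENTS:
        return False
    return (
        event.get("severity") == "high"
        or event.get("type") in {"auth_failure", "config_change"}
    )

events = [
    {"type": "health_check", "severity": "low"},
    {"type": "auth_failure", "severity": "medium", "user": "admin"},
    {"type": "config_change", "severity": "low", "resource": "s3-bucket"},
]
signal = [e for e in events if interesting(e)]
print(len(signal))  # 2 of the 3 events survive the filter
```

The rules themselves are the hard part: they have to be tuned continuously as services, threats, and log volumes change.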
Vulnerability scanning, intrusion detection, and virus scanning. This can be lumped with logging into the umbrella term of event management but we broke it out because we have found it’s rare to have a mature, centralized eventing platform in place. At the very least, each of these types of monitoring tools should be running and monitored on a regular basis.
Patch management. Security vulnerabilities continue to be discovered and, as they are, CVEs are created and associated patches developed and released. This is an additional process that needs to be created to ensure systems are kept up to date and not vulnerable to known exploits.
Backup and disaster recovery. The principles of backup and disaster recovery are easy to understand. The specifics of how to design an effective program on the cloud can be tricky. As a part of this step, it is essential that you test your DR plan and ensure you document the test.
Gap / Bridge Assessment. As a starting point for compliance, this is the right time to do an initial gap assessment of the security design to ensure you’ve identified gaps. A risk assessment should be a part of this process as well.
Based on the design decisions laid out above, the next stage is to implement your approved configurations.
Implement a POC. The design principles above should guide all cloud implementations. As a first step, a POC, whether a real use case or a made-up one, should be implemented. Once implemented, the POC cloud environment should be assessed in a retrospective to ensure the proper processes were followed and the resulting cloud services and configurations are within the body of approved cloud configurations.
Create, deploy and scale cloud workloads. The goal of the design phase, and the associated hefty investment in time and resources, is to enable rapid, repeatable implementations. Once a POC has been completed and assessed, the job of implementing and scaling is an ongoing effort.
Compliance is not only a zero-day challenge. While you can automate many of the design steps above, there is still a need to continuously monitor your cloud workloads to make sure they stay within the approved configurations and parameters. There are some third-party tools that can help with this but, ultimately, there is a lot of work that has to be done on a regular basis to ensure continued compliance.
Develop and implement a process for event review. Events include system, application, and network logs, IDS records, vulnerability scans, and a slew of other data sources. It is imperative that you establish a process both to review events and to document event reviews, findings, and any required mitigations. This review process should include multiple people, ideally from different teams (i.e., engineering and security). Pre-scale, you can get away with having this data in different places, but as you scale, the amount of event data will grow exponentially and the review process will grow with it.
Monitor cloud workloads for compliance with approved services and configurations. Similar to leveraging a cloud financial platform like Cloudability for price transparency on the cloud, everybody wants one pane of glass into the security and compliance states of their cloud services. There are tools and services to do this, but the gap we still see at Datica is the mapping of those views onto the approved services and configurations from the design stage above. This requires the additional step of either creating custom rules in your monitoring platform or spending time manually reviewing configuration states. Still other companies outsource this work to a managed services firm. Unfortunately, there is no way to get around doing this work. We have spent years at Datica building tooling and processes to monitor our cloud services, and it often feels like a never-ending effort.
Create long-term artifact storage. The amount of data intentionally tracked for events and unintentionally created by cloud services via APIs is massive. All of this data serves as artifacts for your cloud compliance program and should be stored for some period of time to provide a historical trail. This data can be used both for audits and for investigations of security incidents and breaches. A big part of compliance is not simply prevention but also identification, investigation, containment, and remediation. The more data available, the better an organization will be able to quickly and confidently correct security and compliance issues. For these purposes, the data does not necessarily need to be readily available, so longer-term, or cold, storage options like AWS Glacier are good options to consider. The length of time to store artifacts should be dictated by organizational policies.
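As one concrete example of cold artifact storage, an S3 lifecycle rule can transition artifacts to Glacier automatically. The sketch below builds such a rule in Python; the bucket prefix and the seven-year retention figure are assumptions for illustration, not a Datica recommendation, and actual retention should follow your organizational policies.

```python
import json

# Hypothetical sketch: an S3 lifecycle rule that moves compliance
# artifacts (logs, scan results) to Glacier after 90 days and expires
# them per an assumed 7-year retention policy. Prefix and retention
# period are illustrative.

RETENTION_YEARS = 7

lifecycle_rule = {
    "ID": "compliance-artifacts-cold-storage",
    "Filter": {"Prefix": "artifacts/"},
    "Status": "Enabled",
    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
    "Expiration": {"Days": RETENTION_YEARS * 365},
}

# A payload in this shape could be applied with boto3's
# put_bucket_lifecycle_configuration; here we just render it.
print(json.dumps({"Rules": [lifecycle_rule]}, indent=2))
```

Automating the transition keeps hot storage costs down while preserving the historical trail auditors and incident responders need.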
Track security exceptions. Often overlooked, security exceptions are a crucial part of any effective compliance program. Compliance is not black and white, and sometimes the risks of being out of compliance are minimal and outweighed by the requirements of the organization. In these cases, such as deploying a non-approved cloud service required for a specific use case, there needs to be a process for documenting these exceptions along with justifications and mitigations of any residual risk.
After designing, implementing, and creating the tools and processes to monitor the compliance of your cloud services, the last step is to use a third party to prove your compliance. We’ve written extensively on the cost of a HITRUST Certification and included the relevant content below.
The direct costs for this include both fees to HITRUST and to your auditor or approved assessor. The direct cost, at the low end, is about $60,000-$120,000 but costs can be much higher for larger organizations.
Indirect costs are harder to quantify. For the Datica HITRUST assessment, we estimate the total time spent across all employees at 400 hours. Also necessary to consider is the time spent between audits to address issues and solidify compliance and information security programs. Though not captured for our HITRUST assessment, this contributes to the overall cost of compliance.
Conservatively estimating the cost of an hour of work at $100/hour, which accounts for salaries, benefits, and the opportunity cost of work not performed simultaneously (writing code, customer support, sales, marketing, etc.), a rough calculation can be tallied: 400 hours adds roughly $40,000 in indirect costs. Based on those numbers, the total cost of the HITRUST assessment is appraised at $100,000 - $160,000.
The above cost is for a full assessment year. The level of effort during an interim assessment, which occurs every other year, is about 50% of the full assessment cost. To arrive at an average ongoing cost, we split the difference and land at $75,000.
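The arithmetic above can be captured in a small model. The dollar figures and hour estimates are the guide's own; the helper functions are just illustrative bookkeeping.

```python
# Rough HITRUST assessment cost model from the figures above.
HOURLY_RATE = 100                 # conservative fully loaded cost per hour
INTERNAL_HOURS = 400              # estimated employee time, full assessment
DIRECT_FEES = (60_000, 120_000)   # HITRUST + assessor fees, low/high

def full_assessment_cost():
    """Direct fees plus indirect employee time, as a (low, high) range."""
    indirect = INTERNAL_HOURS * HOURLY_RATE
    low, high = DIRECT_FEES
    return (low + indirect, high + indirect)

def average_ongoing_cost():
    """Average a full year with an interim year (~50% effort), low end."""
    low, _ = full_assessment_cost()
    return (low + low * 0.5) / 2

low, high = full_assessment_cost()
print(f"Full assessment: ${low:,} - ${high:,}")            # $100,000 - $160,000
print(f"Average ongoing: ${average_ongoing_cost():,.0f}")  # $75,000
```

Plugging in your own hourly rate and hour estimates makes this a quick first-pass budget for the "prove" phase.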
TCO to Build Your Own Compliant Cloud Summary
The overall costs based on the above calculations are:
These costs are only for security and compliance on your chosen cloud provider. The tasks and associated costs in this guide are in addition to the cost of purchasing your preferred cloud provider’s services.
Compliance matters because it establishes the credibility of your product within the industry. In healthcare, data is an increasingly large risk vector for enterprises, so they need assurances before they feel comfortable sending their data to the cloud. Without proof of compliance, you’ll never get a foot in the door. Satisfying compliance without sacrificing the benefits of the cloud is how you get to market faster while reducing costs.
Building compliance in the cloud yourself is an expensive effort that requires a specialized skill set. Datica was designed from the ground up to be the cloud enablement layer for all healthcare applications. The Datica portfolio of products enables you to focus on building and securing your application by managing all compliance and security obligations on and in the cloud.
Speed time to market: Prototyping in healthcare is hard. Datica dramatically decreases a team’s time to market by removing compliance as a blocker during development and deployment.
Make life easier with a single BAA: Aligning Business Associate Agreements (BAAs) amongst all technology partners is a full-time job. You sign one BAA with Datica to cover the entirety of compliance in the cloud.
Empower technology teams: Engineers are liberated to focus on problems and features more closely related to products, and not on reinventing the wheel on compliance, resulting in happier, more productive teams.
Compliance on the cloud isn’t a one-size-fits-all kind of challenge. Maybe you don’t have the expertise, time, and resources to build your own compliant infrastructure and want a solution that picks up where your cloud provider leaves off. Or, maybe you do and just need help with compliance monitoring and reporting for your complex environments.
Wherever you are on the road to healthcare compliance, we can help.
Need Compliance Help?
Talk to the experts.