Our friends at Gartner predict that 80% of businesses will overspend their cloud infrastructure budgets. Behind compute, storage is the second largest spend item in the cloud (yeah, obvious, I know), so it’s time to take a look at how to contain those costs.
Interestingly, what Gartner don't tell you is how many of those IT teams will try to hide the budget disasters in other spend areas rather than address the issue.
Simoda are working in partnership with the fantastic Komprise, who provide intelligent data management solutions that help you manage your data across multiple platforms, giving you visibility that leads to actionable results.
OK then, what are these four ways you speak of?
1, Gain accurate visibility across cloud accounts into actual usage
Often cloud administrators are the custodians, not the users so they don’t know how the data’s being used. So first you need to get your true cloud picture across all your cloud accounts and services no matter what vendors and cloud services you use.
You need to know your data usage, growth, and costs so you can optimise your cloud data. The more granular the insight, the better. Ideally, you should know how much data you have, what storage classes are being used, who’s using the data and when it was last used/accessed, how it’s growing, and the probability of future data access.
With this information, you can understand how your data is actually being used to develop a cost-saving strategy. But this information is not readily available in the cloud, as it is strewn across multiple accounts and buckets, and some data, such as when an object was last used/accessed, is not even reported.
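To make this concrete, here is a minimal Python sketch of the kind of inventory roll-up you're after. The object records are hypothetical stand-ins; in practice they would come from a storage inventory report or a data management tool, and note that last-accessed time often isn't reported at all, which is exactly the gap described above:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical inventory records; real ones would come from a cloud
# inventory report (last-accessed time is often not reported at all).
objects = [
    {"bucket": "logs",    "size_gb": 120, "class": "STANDARD", "last_modified": datetime(2023, 1, 10)},
    {"bucket": "backups", "size_gb": 900, "class": "STANDARD", "last_modified": datetime(2022, 6, 1)},
    {"bucket": "assets",  "size_gb": 40,  "class": "GLACIER",  "last_modified": datetime(2022, 3, 15)},
]

def summarise(objects, now, cold_after_days=90):
    """Total size per storage class, plus how much of the estate looks cold."""
    per_class = defaultdict(float)
    cold_gb = 0.0
    for obj in objects:
        per_class[obj["class"]] += obj["size_gb"]
        if now - obj["last_modified"] > timedelta(days=cold_after_days):
            cold_gb += obj["size_gb"]
    return dict(per_class), cold_gb

per_class, cold_gb = summarise(objects, now=datetime(2023, 6, 1))
print(per_class)  # total GB per storage class
print(cold_gb)    # GB not touched within the threshold
```

Even a rough roll-up like this, across every account and bucket, is the baseline you need before any cost-saving decision.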
2, Forecast Savings & plan data management strategies
Once you can see your true cloud picture, you need to understand your current cloud costs and establish a baseline. Tools that allow you to examine “what-if” scenarios are valuable for seeing exactly how much you’ll save with different data management policies.
This information is critical to have before you start moving your cloud data. The ability to set policies on when your cold data will get archived across storage classes will help you accurately project your company’s savings.
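A what-if forecast can be as simple as comparing a baseline against a policy scenario. The sketch below uses illustrative per-GB monthly prices (hypothetical figures, not any provider's actual rates) to project the saving from archiving a given fraction of data:

```python
# Illustrative per-GB monthly prices (hypothetical; check your provider's rates).
PRICE_PER_GB = {"standard": 0.023, "archive": 0.004, "deep_archive": 0.001}

def monthly_cost(total_gb, cold_fraction, cold_class="standard"):
    """Project monthly storage cost if cold_fraction of the data sits in cold_class."""
    hot_gb = total_gb * (1 - cold_fraction)
    cold_gb = total_gb * cold_fraction
    return hot_gb * PRICE_PER_GB["standard"] + cold_gb * PRICE_PER_GB[cold_class]

baseline = monthly_cost(10_000, cold_fraction=0.0)  # everything in standard
scenario = monthly_cost(10_000, cold_fraction=0.7, cold_class="deep_archive")
print(f"baseline ${baseline:.2f}/mo, scenario ${scenario:.2f}/mo, "
      f"saving ${baseline - scenario:.2f}/mo")
```

A real forecast would also fold in retrieval and API fees, which is why access patterns (covered next) matter so much.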
3, Archive data based on actual data usage to avoid cost surprises
If you want ongoing savings, you need to manage the lifecycle of your cloud data. That requires the right information about your data on which to base your decisions.
Last-modified time. Many cloud solutions base policies on when data was last modified (or written). But that doesn’t capture the most prevalent data usage pattern: data that is created once and then read frequently, again and again, which makes it hot data.
Using last-modified time can result in erroneously archiving hot data to lower storage classes, which can reduce performance, cause disruption, and in some cases break applications. It can actually increase costs, because frequently accessing hot data from a lower tier incurs much higher access fees.
Last-accessed time. Basing management policies in the cloud on when data was last accessed (last read or written) is far more accurate than last-modified time. Archiving based on the last-accessed time of objects gives a more reliable prediction of which objects will be accessed in the future.
This helps intelligently archive the data in a cost-effective and efficient way without disrupting users and applications. Decisions based on data access allow archiving to the most cost-effective storage classes, including S3 Glacier and S3 Glacier Deep Archive, without the risk of increased costs from accessing hot data in cold storage.
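The difference between the two policies is easy to see for a write-once, read-often object. This small sketch (timestamps are made up for illustration) shows a last-modified policy archiving an object that was read yesterday, while a last-accessed policy correctly keeps it hot:

```python
from datetime import datetime, timedelta

def should_archive(last_time, now, threshold_days=90):
    """Archive if the given timestamp is older than the threshold."""
    return now - last_time > timedelta(days=threshold_days)

now = datetime(2023, 6, 1)
# Write-once, read-often object: created a year ago, read yesterday.
last_modified = datetime(2022, 6, 1)
last_accessed = datetime(2023, 5, 31)

# A last-modified policy archives this hot object; a last-accessed policy keeps it.
print(should_archive(last_modified, now))
print(should_archive(last_accessed, now))
```

The same threshold gives opposite answers depending on which timestamp drives the policy, which is why access-based decisions are safer for Glacier-class tiers with high retrieval fees.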
4, Simplify migrations
Migrating data in the cloud is no one’s idea of a good time. If it is yours, please reach out and tell us why 😂
Migration tools need to simplify the time-consuming, error-prone task for you in the following ways:
Easily pick your source and destination even if they’re on different clouds and create migration or copy tasks.
Run dozens or hundreds of migrations in parallel, and automate with multi-level parallelism that exploits the inherent parallelism of each data set, which is key to finding the most efficient way to migrate data.
Reduce the babysitting by adjusting to network unavailability and other issues and retrying automatically, so you don’t have to.
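The parallelism-plus-retry pattern above can be sketched in a few lines of Python. The copy function here is a hypothetical stand-in that simulates transient failures; a real migration would call your cloud provider's copy API in its place:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def copy_object(key, attempt):
    """Stand-in for a real cloud-to-cloud copy; simulates a transient
    network failure on the first attempt for half the objects."""
    if attempt == 0 and key.endswith(("0", "2", "4", "6", "8")):
        raise ConnectionError(f"transient network error on {key}")
    return key

def copy_with_retry(key, attempts=5, backoff=0.01):
    """Retry transient failures with exponential backoff, so no one babysits."""
    for attempt in range(attempts):
        try:
            return copy_object(key, attempt)
        except ConnectionError:
            time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"gave up on {key}")

keys = [f"bucket/object-{i}" for i in range(20)]
# Run many copies in parallel; failed attempts retry automatically.
with ThreadPoolExecutor(max_workers=8) as pool:
    done = [f.result() for f in as_completed(pool.submit(copy_with_retry, k) for k in keys)]
print(f"migrated {len(done)} objects")
```

Every object lands despite the simulated flakiness, which is the property you want from a migration tool: failures are absorbed, not escalated to you.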
Cloud budgets are straining, and the second biggest line item is storage. The biggest challenges to controlling these costs are poor visibility into cloud data and the complex multiple factors involved in managing it.
As a result, over 80% of organisations miss out on cost-saving options available in the cloud.
We’ve identified four proven ways to reduce cloud storage costs. And given the cloud’s pay-as-you-go model, an analytics-driven, automated data management approach is the best way to implement them, removing cloud management complexity and keeping costs down.
This is where we recommend Komprise Intelligent Data Management for Multicloud because it provides unprecedented cloud data insight within and across clouds.
Because data usage patterns are in constant flux, automated policies that adapt as access patterns change are a crucial capability. They enable data storage to be continuously optimised, dramatically reducing costs based on actual data use. They also mitigate risk by simplifying the migration of data between cloud providers and between on-premises and the cloud.
With so much data in the cloud, an analytics-driven approach to cloud data management is a strategic move to simplify your multicloud strategy and save substantial costs.
If this all makes sense to you and you’re interested in how Simoda & Komprise can help deliver true value, reach out to us today.
0114 553 3600