
The AWS dilemma - Stability & performance vs cost efficiency

If I said that we could save you up to 50% of your EC2 spend, would you be interested in reading the rest of this blog post?

OK, so you're interested?

Let me set the scene first.

If given the choice between stability, performance, and cost efficiency, DevOps engineers will always prioritise stability and performance. They will do everything in their power to avoid application degradation or failure including allocating more cloud storage or compute than they need in order to account for future scale (overprovisioning).

Since applications will cease to function properly without the right amount of cloud resources, overprovisioning has become a necessity. Yet despite the fact that these resources sit unused in the user’s account for the vast majority of time, they are still costing organisations billions of pounds per year.

To combat the costs associated with overprovisioning, DevOps teams need to predict how many resources they'll use, how much storage they'll need, and how their architecture will change far in advance. This is a challenge - ask any DevOps team member. These factors are difficult to predict as they depend on how many customers are onboarded, which new features are launched, and a variety of other elements that are hard to forecast, even if the leadership team tells you it's easy.

DevOps forecasting - now that is a tough Olympic sport.

Furthermore, in order to keep their cloud running at optimum efficiency, engineers also need to constantly perform manual and repetitive processes such as monitoring, adjusting, and managing their cloud infrastructure. These tasks are frustrating, time consuming, and not the best use of engineering resources.

For these reasons, cloud costs are often put on the back burner as technical teams prioritise application performance and product development as their core business objectives, preferring to spend their time adding value to the business rather than babysitting the cloud.

The EC2 challenge

Businesses moved to the cloud to gain greater flexibility, scalability, and cost efficiency. But as we know from the interesting conversations we are having, they soon found that the benefits of flexibility and cost savings were at odds with one another.

EC2 On-Demand Instances provide optimum flexibility as they can be spun up at a moment's notice in response to increased demand. But while their flexible, pay-as-you-go model makes them the easiest option in dynamic, unpredictable cloud environments, they are also the most expensive way to pay for EC2.

AWS also offers commitments such as Reserved Instances, which provide savings of up to 72% off On-Demand prices. However, in order to leverage Reserved Instances, users must commit to a certain number of instances 1-3 years in advance, which is nearly impossible to do as it requires stable, reliable forecasts - a challenge, as we know, even for cloud veterans.

Due to the difficulty in forecasting workloads, coupled with the threat of over-provisioning, many teams that buy commitments manually will put a maximum threshold for coverage in place. Without this threshold, they run the risk of buying too large a commitment, and if their workload decreases and/or they migrate to a newer server, they risk under-utilising the commitments they purchased. With AWS estimating that 35% of all cloud spend is waste, this is a legitimate fear.

As a result, the majority of users have no more than 40-80% of their compute environment covered with commitments. The rest of their environment is, by definition, running On-Demand, which can cost twice as much as they could otherwise pay for the same capacity. In addition to the cost-related challenges, EC2 is quite burdensome to manage.
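The coverage maths above can be sketched in a few lines. This is an illustrative model only - the instance counts, hourly rates, and discount are hypothetical round numbers, not real AWS prices:

```python
def blended_hourly_cost(total_instances, coverage, on_demand_rate, committed_rate):
    """Blended EC2 hourly cost when only part of the fleet is covered by commitments.

    coverage is the fraction (0.0-1.0) of instances running on a commitment;
    everything uncovered runs at the On-Demand rate.
    """
    covered = total_instances * coverage
    uncovered = total_instances - covered
    return covered * committed_rate + uncovered * on_demand_rate

# Hypothetical fleet: 100 instances, On-Demand at $0.10/hr,
# commitment at $0.05/hr (i.e. a 50% discount).
no_coverage = blended_hourly_cost(100, 0.0, 0.10, 0.05)    # all On-Demand
typical = blended_hourly_cost(100, 0.6, 0.10, 0.05)        # 60% covered
full_coverage = blended_hourly_cost(100, 1.0, 0.10, 0.05)  # fully committed
```

With these assumed rates, the 40% of the fleet left On-Demand pays double per hour for the same capacity, which is exactly the gap the coverage threshold leaves on the table.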

Adjusting capacity and reducing cloud spend requires human effort and technology. Engineers need to manually calculate and forecast usage far in advance in order to maximise their commitment utilisation. This takes time away from their traditional tasks of building new products and features, which are prioritised by most engineers. In addition, it is difficult to accurately estimate usage as engineers do not usually have access to their real-time data.

So those are the challenges - how do we help solve them?

Now bear with me as I am going to go all 'have you been injured at work through no fault of your own' salesperson on you…

We are committed to a real POV (Proof Of Value) when it comes to saving you money on your AWS environment.

In other words:

No Win, No Fee (see, I told you).

We will use AI-driven automation and real-time data from your environment to advise on commitment utilisation and performance optimisation.

We commit that this solution is 100% hands-free, and we have seen savings of up to 50% on EC2.

One of the most compelling parts of our solution is that all we ask is a percentage of the savings. So a lack of budget for new tools is no reason not to talk to us - with our pricing model we technically do not require any budget whatsoever.

We will only invoice after you have already experienced monetary gains from using our service. Therefore, you will always see a net positive gain from working with Simoda.
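The pay-from-savings model above reduces to simple arithmetic - note the fee rate and savings figure below are purely hypothetical illustrations, not our actual pricing:

```python
def net_gain(monthly_savings, fee_rate):
    """Net monthly benefit under a pay-from-savings pricing model.

    The fee is charged only as a fraction of realised savings, so as long as
    fee_rate < 1 the customer's net position is always positive.
    """
    fee = monthly_savings * fee_rate
    return monthly_savings - fee

# Hypothetical example: $4,000/month saved, 25% fee rate
gain = net_gain(4000, 0.25)  # fee is $1,000, customer keeps $3,000
```

Because the fee is only ever a slice of money already saved, there is no scenario in this model where the customer ends up worse off.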

What are the basic requirements?

  • Minimum of $5,000 of EC2 spend per month.

Yes that is it.

If you fit the criteria, what is stopping you from engaging with us?

Contact a member of our team ASAP and let's run a POV for you.

I am so proud of our approach to public cloud solutions - this is just another one of the amazing solutions in our tool bag to help our customers.

Thanks for reading


A proud MD of a fantastic technology-first company.

