Jani K. Savolainen

FinOps as a Management Practice in Private and Public Clouds

FinOps as an organisational management practice is just as well suited to managing the costs of your own data centres as it is to managing the costs of public clouds. However, the methodologies used differ somewhat, in some cases considerably. It is therefore useful to identify both the commonalities and the differences in order to properly assess the investment and life-cycle costs of private cloud, public cloud and possible hybrid solutions. In my blog series, I take a three-pronged approach to this topic from a business perspective:

  • Pain Point
  • Solution
  • Result

Application of FinOps in your own data centres (private cloud)

FinOps management principles can be used in your own data centres, even if the approach and tools differ somewhat from the public cloud. Although FinOps is traditionally associated with cloud platforms, its three basic principles can be applied to an organisation’s own data centres:

  • Financial responsibility of the different parties
  • Cost optimisation
  • Generating value

Visibility of use and costs

Pain point: apportioning the operational costs of in-house data centres to the entities using the services may not be very accurate or fair.

Solution: Intelligent and real-time monitoring of the use of data platforms in data centres by user and taking into account all variable costs (including CPU, memory, network and storage capacity usage) provides a good starting point for true cost allocation.

Result: the organisation is able to allocate data centre costs for data platforms to the right users and business activities.
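As a minimal sketch of this kind of usage-based allocation, the following Python snippet splits a platform’s monthly variable cost across teams in proportion to their metered CPU, memory, network and storage usage. All team names, figures and resource weights here are illustrative assumptions, not real data.

```python
# Hypothetical sketch: allocating a data centre's monthly variable cost to
# consumers in proportion to their measured use of variable resources.
# All numbers, weights and team names are illustrative assumptions.

# Measured monthly usage per team (arbitrary units per resource).
usage = {
    "team_bi":      {"cpu": 400, "memory": 900, "network": 50,  "storage": 2000},
    "team_webshop": {"cpu": 900, "memory": 600, "network": 300, "storage": 500},
    "team_hr":      {"cpu": 100, "memory": 300, "network": 20,  "storage": 300},
}

# Relative cost weight of each resource class (sums to 1.0).
weights = {"cpu": 0.40, "memory": 0.25, "network": 0.10, "storage": 0.25}

TOTAL_COST = 30_000.0  # monthly variable cost of the platform, EUR


def allocate(total_cost, usage, weights):
    """Split total_cost across teams by their weighted share of each resource."""
    totals = {r: sum(u[r] for u in usage.values()) for r in weights}
    bill = {}
    for team, u in usage.items():
        share = sum(weights[r] * u[r] / totals[r] for r in weights)
        bill[team] = round(total_cost * share, 2)
    return bill


bill = allocate(TOTAL_COST, usage, weights)
print(bill)
```

In practice the weights would come from the platform’s actual cost structure (hardware, licences, energy) and the usage figures from real-time telemetry rather than hand-entered constants.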

Allocation of costs

Pain point: in data centres, many of the costs that could be disaggregated by user and business function are in practice allocated using fixed allocation ratios. This can distort profitability and competitiveness calculations and cause significant drawbacks, for example when comparing alternative ways of delivering the required services or, for businesses that resell those services, when analysing production costs.

Solution: the chargeback and showback approach enables an organisation to allocate all costs associated with data platforms fairly and transparently to the right users and businesses. Chargeback functionality allocates costs to the right parties and bills accordingly, while showback functionality allocates costs but does not bill. Internal organisational policies guide the use of the appropriate practice.

Result: the costs of data platforms are correctly allocated to those who use them and the different parts of the organisation get a true representation of the services they use. This is a good way to encourage different teams to optimise the use of resources. For organisations to be competitive and cost-effective, it is essential that all costs are accurately quantified, as this is the most effective way to identify efficiency gains. If costs are allocated at a fixed ratio, inefficient and costly details are hidden in a flat grey.
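The distortion caused by fixed allocation ratios can be illustrated with a small, hypothetical comparison: the same total cost split by an old fixed ratio versus by metered usage shares. All team names and figures are invented for illustration.

```python
# Hypothetical sketch: how a fixed allocation ratio hides real cost differences
# compared with metered, usage-based allocation. The teams, the fixed ratio and
# the metered shares are all invented for illustration.

TOTAL_COST = 90_000.0  # monthly platform cost, EUR

# Fixed ratios agreed long ago (e.g. by headcount) vs. actual metered share.
fixed_ratio   = {"analytics": 1 / 3, "erp": 1 / 3, "intranet": 1 / 3}
metered_share = {"analytics": 0.60,  "erp": 0.30,  "intranet": 0.10}

# Positive distortion: the team is overcharged; negative: it is subsidised.
distortion = {t: TOTAL_COST * (fixed_ratio[t] - metered_share[t])
              for t in fixed_ratio}

for team in fixed_ratio:
    print(f"{team:9s} flat={TOTAL_COST * fixed_ratio[team]:8.0f}  "
          f"metered={TOTAL_COST * metered_share[team]:8.0f}  "
          f"distortion={distortion[team]:+8.0f}")
```

In this toy example the analytics team’s heavy usage is silently subsidised by the intranet team, which is exactly the kind of detail that stays hidden in a flat grey under fixed-ratio allocation.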

Resource optimisation and capacity planning

Pain point: One major difference between in-house data centres and public clouds is the flexibility of resources. In-house data centres are typically much smaller and designed around predictable workloads, growth projections and risk margins. The life cycle of a data centre is typically around five years, and significant changes within that cycle are costly and therefore avoided.

Data centres require up-front investment, making scalability a significant concern. In-house data centres are often over-allocated and acquired resources under-utilised due to static up-front capacity planning. Over-allocation of up to 50% easily occurs because of imprecisely defined safety margins; under-allocation is much less frequent.
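A toy example of how a generous static safety margin compares with sizing from an observed utilisation distribution. The workload trace below is synthetic, and the 50% margin and percentile thresholds are illustrative assumptions, not recommendations.

```python
# Hypothetical sketch: static up-front sizing ("peak + generous safety margin")
# versus sizing from an observed utilisation distribution. The workload trace
# is synthetic and purely illustrative.

import random

random.seed(0)
# Simulated hourly CPU demand (cores) for one month: base load + noise
# + a weekly batch spike during eight hours of each week.
trace = [40 + random.gauss(0, 5) + (30 if h % 168 < 8 else 0)
         for h in range(720)]

peak = max(trace)
static_capacity = peak * 1.5            # peak plus a 50 % safety margin

p99 = sorted(trace)[int(0.99 * len(trace))]
right_sized = p99 * 1.1                 # 99th percentile plus 10 % headroom

print(f"static sizing : {static_capacity:6.1f} cores")
print(f"right-sized   : {right_sized:6.1f} cores")
print(f"over-allocation avoided: "
      f"{100 * (1 - right_sized / static_capacity):4.1f} %")
```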

Many investments in data centre resources are significantly cheaper at the beginning of the life cycle than in the middle. This is a major reason for over-allocation.

Even if planning is done correctly for current operations, changes in the business may lead to over- or under-allocation. Mergers and acquisitions, other corporate restructurings and, most recently, the technological and capacity requirements brought by artificial intelligence are good examples.

Solution: sizing and planning should use tools and practices that take into account the seasonality of data platforms, trends, outages, service times, and business requirements such as scalability needs, database criticality levels and required service levels. When all these elements are in the same equation, optimal capacity planning and resource utilisation can be achieved.
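As an illustration of sizing that accounts for both trend and seasonality, the sketch below fits a linear growth trend to a synthetic daily usage series, extracts an average weekly pattern, and provisions for the projected peak plus headroom. A production tool would of course use real telemetry and more robust forecasting models; every figure here is an assumption.

```python
# Hypothetical sketch: a capacity forecast that separates a linear growth
# trend from weekly seasonality. The input series is synthetic.

# 8 weeks of daily storage usage (TB): growth trend + weekday/weekend pattern.
history = [100 + 0.5 * d + (8 if d % 7 < 5 else -4) for d in range(56)]

n = len(history)
# Least-squares linear trend over the day index.
mean_d = (n - 1) / 2
mean_y = sum(history) / n
slope = (sum((d - mean_d) * (y - mean_y) for d, y in enumerate(history))
         / sum((d - mean_d) ** 2 for d in range(n)))
intercept = mean_y - slope * mean_d

# Average seasonal deviation per weekday, after removing the trend.
seasonal = [0.0] * 7
for d, y in enumerate(history):
    seasonal[d % 7] += (y - (intercept + slope * d)) / (n / 7)

# Forecast the next 28 days and size for the projected peak + 15 % headroom.
forecast = [intercept + slope * d + seasonal[d % 7] for d in range(n, n + 28)]
required = max(forecast) * 1.15
print(f"projected peak: {max(forecast):.1f} TB, provision: {required:.1f} TB")
```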

Result: the most critical thing for an organisation is to ensure business continuity. Just as important as ensuring the sizing of data platforms at the beginning, is to keep them optimal throughout their lifecycle. This ensures the best possible cost-effectiveness for critical services.

Workloads on an organisation’s data platforms are in a constant state of flux, and therefore need to be kept under review not only for quality of service but also for cost-effectiveness. Correct sizing, virtualisation, containerisation and consolidation of workloads ensure optimal cost-effectiveness. By monitoring and predicting the evolution of data platform usage, an organisation can also optimise future acquisitions and replacement investments, both in quantity and in timing.

Automation

Pain point: data centre services related to data platforms are, so to speak, resource hogs. Data platforms consume on average a third of data centre energy, and as the volume of data continues to grow rapidly, so will the energy and other resources they require. In this equation, it is worth turning over even the small stones when looking for cost-effective ways of delivering services to organisations.

Manually optimising cost-effectiveness, quality and performance is a daunting, virtually impossible task. Personnel costs are too high and the risk of errors increases.

Solution: when it comes to analysing and optimising data platforms, there are so many data points and factors to analyse that the only way to deal with them is advanced automation. Automation by itself does not provide the solution: it requires deep knowledge of how services and data platforms have behaved in the past and behave in the present and, most challenging of all, the ability to predict the future. All this is backed up by experienced experts with the right technical and support skills, and by predictive analytics models that analyse workloads both proactively (pre-migration) and on a needs basis (post-migration).

Result: applying automation helps an organisation turn over every stone in its cost optimisation challenge. Capacity planning with the right information through automation helps organisations minimise investment without compromising service quality. The data platform lives and changes with business needs, and automating resource allocation in a cost-optimised way can deliver significant savings. More advanced solutions also optimise the timing of an organisation’s workloads by exploiting the cyclical nature of resource usage. Every low cycle is a waste and a high cycle is a potential service level risk – and in the worst case, a trigger for new investment.
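The idea of exploiting the cyclical nature of resource usage can be sketched as follows: given an average daily utilisation profile, a flexible batch job is placed into the quietest window of the day. The hourly profile and the job length are illustrative assumptions.

```python
# Hypothetical sketch: placing a flexible batch workload into the daily
# utilisation trough, exploiting cyclical resource usage. The profile and
# job length are illustrative assumptions.

# Average platform CPU utilisation (%) per hour of day:
# busy office hours, quiet nights.
profile = [20, 15, 12, 10, 10, 14, 25, 45, 65, 75, 80, 78,
           76, 79, 81, 77, 70, 60, 50, 42, 35, 30, 26, 22]

JOB_HOURS = 3  # the batch job needs three consecutive hours


def best_start_hour(profile, length):
    """Return the start hour that minimises total utilisation over the
    job's window, wrapping around midnight."""
    def window_load(start):
        return sum(profile[(start + i) % 24] for i in range(length))
    return min(range(24), key=window_load)


start = best_start_hour(profile, JOB_HOURS)
print(f"schedule the job at {start:02d}:00")
```

More advanced schedulers would add constraints such as maintenance windows and job deadlines, but the principle is the same: every filled trough is capacity that does not need to be bought twice.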

Governance, common practices and cooperation

Pain point: the true costs, optimisation opportunities, savings and improvement potential of the services provided to businesses by their own data centres’ data platforms are often overlooked because they cannot be drilled down to a sufficiently detailed level.

Solution: In order to maximise its cost-effectiveness and competitiveness, it is useful for an organisation to establish common policies between data centres, service units and business teams. At a minimum, these policies should cover models for exchanging information between teams, sharing future ideas and needs, and encouragement to seek trade-offs: maximising cost savings and maximising the speed of response to business needs, for example, can each lead to extreme solutions in the overall picture and thus to unfavourable outcomes. The right compromise lies somewhere between these two extremes.

Result: the organisation is able to analyse the right costs, optimisation opportunities and cost-saving potential of all services using data platforms and allocate costs to the right business activities, optimise resources for maximum benefit and realise optimal savings on data platforms.

Closing words

SQL Governor is the only tool on the market that combines cost management (FinOps) of both in-house data centres and the public cloud into a single, managed entity, enabling performance, capacity and quality optimisation in a single package. Our AI-powered software holds several international patents for predictive capacity planning. The tool can be used to automate and significantly improve the cost optimisation of Microsoft-based data platforms. At the time of writing this blog, we have served medium-sized and large enterprises across a wide range of industries in 12 countries. Interested? Read more here: www.sqlgovernor.com