
Scooby-Doo and the Mystery of Cloud Costs (Let’s have some fun!) – Scooby Dooing Episode 1

If there’s one thing I’ve learned in consulting, it’s that performance tuning, in SQL Server and other databases alike, isn’t just about faster queries; it’s directly tied to your bottom line in the cloud. Because of their large memory and I/O footprints, databases are some of the most expensive cloud resources. Every extra read, every bloated execution plan, every oversized tier you’re running? That’s money disappearing faster than Scooby Snacks at a midnight ghost chase.

So, grab your Scooby-Doo hat, because it’s time to solve the mystery of runaway cloud costs.

Clue 1: Start with Query Store – Your Map of Clues

Every good mystery starts with clues, and in SQL Server (and Azure Postgres and MySQL), Query Store is your lantern in the dark hallway. It can quickly show you your top resource consumers—those suspicious queries lurking in the shadows. Think of it like Shaggy and Scooby stumbling across a half-eaten sandwich: it points you in the right direction. Use this list as your breadcrumb trail to start digging.
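On SQL Server, a hunt through Query Store for the top consumers might look something like this sketch (the metric, time window, and TOP count are up to you; Query Store stores times in microseconds, hence the conversion):

```sql
-- Top 10 historical CPU burners from Query Store.
-- Swap the ORDER BY for reads or duration to chase a different suspect.
SELECT TOP (10)
    q.query_id,
    qt.query_sql_text,
    SUM(rs.count_executions) AS executions,
    SUM(rs.avg_cpu_time * rs.count_executions) / 1000.0 AS total_cpu_ms,
    SUM(rs.avg_logical_io_reads * rs.count_executions) AS total_logical_reads
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_ms DESC;
```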

Clue 2: Unmask the Villains – High Resource Consumers

Once you’ve got your suspects lined up, it’s time to unmask them. High CPU queries, storage-heavy reads, expensive writes—these are the villains behind inflated cloud bills. By tuning them, you’re not just improving performance—you’re reducing the resources you need and, in turn, the tier you’re paying for.
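Where Query Store isn’t enabled, a similar lineup can be pulled from the plan-cache DMVs; a rough sketch (keep in mind these numbers reset when a plan is evicted or the instance restarts):

```sql
-- Top statements by logical reads from the plan cache.
SELECT TOP (10)
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.total_logical_reads,
    qs.total_logical_writes
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```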

Clue 3: Chase Down the Culprits – Tuning Opportunities

This is where the real chase begins:

  • Key Lookups: Sneaky little monsters that pile on I/O. Cover them with the right index and watch them vanish.
  • Implicit Conversions: Ghostly cost-drivers that slow everything down. Fix the data types and bust them for good.
  • Large Joins and CTEs: Sometimes they’re too heavy for the scene. Breaking them into temp tables can lighten the load.
  • Logical and Physical Reads: The bigger they are, the more they haunt your storage bill. Tune queries to shrink that footprint.
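A few of these busts can be sketched in T-SQL (table, column, and index names here are hypothetical, just to illustrate the patterns):

```sql
-- Key lookup: a covering index carries the extra columns in INCLUDE,
-- so the per-row lookup (and its I/O) vanishes.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, TotalDue);

-- Implicit conversion: an N'' literal against a VARCHAR column forces a
-- conversion on every row and can block an index seek.
-- Bad:  WHERE o.AccountNumber = N'AW00012345'
-- Good: WHERE o.AccountNumber = 'AW00012345'   -- matches the column's type

-- Heavy CTE referenced several times: materialize it once in a temp table
-- so the work is done once, not once per reference.
SELECT CustomerId, SUM(TotalDue) AS TotalSpend
INTO #CustomerSpend
FROM dbo.Orders
GROUP BY CustomerId;
```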

Clue 4: Reveal the Real Villain – Cloud Costs

As you piece the clues together and clean up each problem, you’ll start to notice something magical: your cloud bills shrinking. By reducing I/O, storage, and the need for higher compute tiers, you’re not just making your SQL Server faster—you’re saving real money.

And just like every Scooby-Doo episode, the big reveal isn’t shocking: inefficient queries were the villain all along. But with the right sleuthing, you and your team can say, “We would’ve paid for that oversized tier too… if it weren’t for those meddling DBAs!”

Monica Morehouse (Rathbun), a Microsoft MVP for Data Platform, resides in Virginia and brings two decades of experience across various database platforms, with a particular focus on SQL Server and the Microsoft Data Platform. She is a frequent speaker at IT industry conferences, where she shares her expertise on performance tuning and configuration management for both on-premises and cloud environments. Monica leads the Hampton Roads SQL Server User Group and is passionate about SQL Server and its community; she is dedicated to giving back in any way she can. You can often find her online (@sqlespresso) offering helpful tips or blogging at sqlespresso.com.

3 Comments

  • Kenneth Fisher

    I’ve always found the idea “Move to the cloud, you won’t need DBAs” to be hilarious. The work may have shifted some, but performance-related tasks are FAR more important than they used to be.

    Great post!

  • Mark Freeman

    Having gone through all that query and index tuning, what is hurting us most is having to pay for more vCores than we need in Azure SQL Database to get the storage we need. For example, we have an Elastic Pool that needs almost 3 TB of storage and therefore requires 12 vCores, even though our maximum CPU usage is 20%.

    We plan to test Hyperscale, which is supposed to let us set CPU and storage capacity separately without any linked caps.

  • Sean Redmond

    Mention the importance of datatypes to the people developing the database.

    If Unicode can be done without, then do without it. As an exercise once, I took a database and created a new one based on its structure, but with VARCHARs rather than NVARCHARs. All of our text fits nicely into the Latin1 collation. The new database after migration was 6 GB rather than almost 10 GB, over 30% smaller. That is a big reduction.

    I also questioned why we really needed DATETIME2(7) when DATETIME2(3) was more than adequate. These are small things but make differences in tables with many hundreds of millions of rows. It turns out that the developers had let EF design the database for them.
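    As a quick illustration of the byte math (per the documented storage sizes; the string here is arbitrary):

    ```sql
    -- VARCHAR stores 1 byte per character for Latin1 text; NVARCHAR stores 2.
    SELECT DATALENGTH(CAST('Espresso' AS VARCHAR(50)))  AS varchar_bytes,   -- 8
           DATALENGTH(CAST('Espresso' AS NVARCHAR(50))) AS nvarchar_bytes;  -- 16

    -- DATETIME2(3) stores in 7 bytes, DATETIME2(7) in 8: one byte per column,
    -- per row, multiplied by hundreds of millions of rows.
    ```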

    If you have a DB bloated with JSON text, check to see how irregular the schema in the JSON is. If it is very regular, then these columns are good candidates for saving as relational data.

    Page (or row) compression might be an idea: much less space is used, but at a higher CPU cost. It all becomes a resource-management problem. The fewer resources the DB uses, the more efficiently it runs, and the lower the costs in the cloud should be.
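    That trade-off can be estimated before committing to it (the table name here is hypothetical):

    ```sql
    -- Estimate the space savings first...
    EXEC sp_estimate_data_compression_savings
         @schema_name = 'dbo',
         @object_name = 'Orders',
         @index_id = NULL,
         @partition_number = NULL,
         @data_compression = 'PAGE';

    -- ...then rebuild with compression if the numbers look good.
    ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
    ```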