Under The Hood With SQL Server Batch Mode Part 3: Sequential Scan and Join Scalability

In this post I want to take a look at how batch mode and column store indexes perform in two scenarios:

  • Sequential scans
  • Fact table to dimension table join

Sequential scans demonstrate some interesting nuances depending on the type of hardware setup being used; for this very reason, I have two distinct types of storage:

[Figure: the two hardware / storage configurations used for the tests]

For all the tests performed, an artificially inflated FactInternetSales table is used:

[Figure: the artificially inflated FactInternetSales table]
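The post does not include the inflation script itself; a minimal sketch of how such a table might be built by repeated self-insertion (the table name, source database and number of doublings are assumptions, not the author's actual script):

```sql
-- Hypothetical inflation script: copy FactInternetSales into a heap,
-- then double the row count on each pass. Ten doublings (roughly
-- 1000x the original row count) is an assumption.
SELECT *
INTO   dbo.FactInternetSalesBig
FROM   AdventureWorksDW2012.dbo.FactInternetSales;

DECLARE @i int = 0;
WHILE @i < 10
BEGIN
    INSERT INTO dbo.FactInternetSalesBig WITH (TABLOCK)
    SELECT * FROM dbo.FactInternetSalesBig;
    SET @i += 1;
END;
```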

Let's now look at how a straight sequential scan performs with my slow and fast storage using the pre-SQL Server 2012 forms of compression:

[Figure: sequential scan elapsed times for no, row and page compression on slow and fast storage]
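A sketch of how each compression variant could be applied and scanned (object names are assumptions; `DBCC DROPCLEANBUFFERS` forces the scan to come from storage rather than the buffer pool):

```sql
-- Apply one of the pre-2012 compression settings to the heap,
-- then time a full sequential scan of it.
ALTER TABLE dbo.FactInternetSalesBig
    REBUILD WITH (DATA_COMPRESSION = PAGE);   -- also tested: ROW, NONE

DBCC DROPCLEANBUFFERS;                        -- cold buffer pool

SET STATISTICS TIME ON;
SELECT COUNT_BIG(*) FROM dbo.FactInternetSalesBig;
```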

Elapsed time decreases as the compression ratio goes up for the slower storage; for the faster storage the reverse is true, the theory being that the storage is feeding the server's CPU cores at such a rate that the system is now CPU bound. The cost of decompressing a page-compressed heap during a sequential scan, versus that of an uncompressed heap, is quantifiable by looking at the call stack as obtained through xperf:

[Figure: xperf call stack quantifying the CPU cost of page decompression]

Sequential scans of (clustered) column store indexes exhibit similar behaviour:

[Figure: sequential scan elapsed times for column store compression]

As before, the CPU cost of decompression during the scan is quantifiable:

[Figure: xperf call stack quantifying the CPU cost of column store decompression]
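For the column store scans, a clustered column store index (SQL Server 2014 onwards) replaces the heap; a minimal sketch, with the index name as an assumption:

```sql
-- Convert the heap into a clustered column store index.
-- COLUMNSTORE_ARCHIVE (also available in 2014) trades extra CPU
-- for a higher compression ratio, the same see-saw trade-off
-- discussed below.
CREATE CLUSTERED COLUMNSTORE INDEX ccsi_FactInternetSalesBig
    ON dbo.FactInternetSalesBig;
```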

These sequential scan findings are best illustrated with a see-saw analogy: there is a tipping point between the balance of a server's CPU capacity and its IO subsystem bandwidth which determines whether or not compression is beneficial for a sequential scan. If the IO subsystem cannot feed the available CPU capacity, compression works in your favour:

[Figure: see-saw tipped towards IO bandwidth: compression works]

Conversely, if your IO subsystem can keep up with, or indeed swamp, your available CPU resources, compression yields no benefit for sequential scans:

[Figure: see-saw tipped towards CPU capacity: compression does not work]

When joins are introduced into the equation, prior to column store indexes, this is the best that is achievable for a simple fact table to dimension table join:

[Figure: join scalability with row stores, pre SQL Server 2012]

In the SQL Server 2014 world with a clustered column store index, the graphs look like this:

[Figure: join scalability with a clustered column store index, SQL Server 2014]

If we compare the two slides, several conclusions can be drawn:

  • The use of column store indexes and batch mode results in better scalability, up to a degree of parallelism of around 10, as opposed to 8 without column store indexes.
  • Column store indexes and batch mode reduce elapsed time by an order of magnitude compared to row stores and row mode.
  • Here is the really interesting thing: the best sequential scan for the row store on the flash storage takes place when no compression is used. This begs the question of what sort of performance could be obtained if SQL Server allowed column store indexes without any run length compression. It is my belief that for storage that can match or beat a server's aggregate CPU core consumption rate, this would result in the fastest possible sequential scans and the best all-round query performance.
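A representative shape for the simple fact table to dimension table join used in the scalability tests; the aggregate, column choices and DOP value are assumptions:

```sql
-- Hypothetical test query: scan the inflated fact table and hash
-- join it to a dimension table, varying MAXDOP per test run to
-- produce the scalability curves.
SELECT   d.CalendarYear,
         SUM(f.SalesAmount) AS SalesAmount
FROM     dbo.FactInternetSalesBig AS f
JOIN     dbo.DimDate              AS d
         ON f.OrderDateKey = d.DateKey
GROUP BY d.CalendarYear
OPTION   (MAXDOP 10);   -- vary the degree of parallelism per test run
```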

In the next post I will cover some of the fundamentals of modern CPU architectures which are the drivers behind why the batch mode execution engine exists.

2 thoughts on “Under The Hood With SQL Server Batch Mode Part 3: Sequential Scan and Join Scalability”

  1. Bit late to reading this. I’m so used to thinking of storage as the bottleneck that I never thought to consider that compression could hurt when CPU is the bottleneck. Would be cool to have some control over the compression algorithms applied to column store indexes. This may be required as faster storage technologies arrive. Nice post!

    • Column store compression / decompression is a totally and utterly different beast to row and page compression, in that you do not have the massive CPU overhead of working out where columns begin and end as you go across the row; with column stores all you do is scoot down the blobs your column(s) is/are stored as. Where the bottleneck lies with the batch engine is hash joins, relevant because you are going to want to join your column store fact tables to dimension tables; under the covers a hash join will throttle a column store scan back by making system calls to sleep, if the data is cached in the column store object pool or if we have really fast storage. Taking a step back, the whole art of tuning will shortly be turned on its head: historically most database professionals' attention has been focused on indexing strategies, PLE, minimizing fragmentation . . . we are standing at a massive inflection point at which disk will become the new tape.
