Scaling Out Singleton Insert Workloads Using Containers: Part II

In the previous part of this blog post I discussed how containers could be used to scale out a singleton workload. Whereas my attempts to get my experiments to work ran into difficulties with Ubuntu Linux, Docker community edition on Windows Server 2016 had no such problems.

Anything that relies on a single point of serialization, to ensure that an entity can be recovered to a consistent state in the event of a crash, incurs a scalability penalty because of the synchronization that needs to take place around this mechanism. This is just one of three main areas that inhibit transactional throughput:

  1. XDESMGR
    Unless you are using SQL Server’s in-memory engine, you are likely to encounter contention around the spinlock which synchronizes access to the part of the engine that gives out transaction ids.
  2. LOGCACHE_ACCESS
    Under a heavy OLTP load across two CPU sockets, spinlock contention will be encountered on LOGCACHE_ACCESS, the entity which serialises access to the log buffer. This is not so much of a problem for the in-memory engine, because it does away with write-ahead logging and is extremely aggressive in batching together the logical operations which are logged.
  3. LOGFLUSHQ
    The spinlock which serialises access to the log flush queue, the structure through which log blocks are queued up to be flushed to disk.
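The effect of serialising access to the log buffer can be sketched with a toy model (plain Python, not SQL Server internals; the names are illustrative). Every commit must acquire the single "log cache" lock once, so batching k inserts per commit divides the number of lock acquisitions by k:

```python
from threading import Lock

class ToyLogBuffer:
    """Toy stand-in for a log buffer guarded by a single lock
    (loosely analogous to LOGCACHE_ACCESS; not SQL Server code)."""
    def __init__(self):
        self._lock = Lock()
        self.acquisitions = 0
        self.records = []

    def append(self, batch):
        # One lock acquisition per commit, however many records it carries.
        with self._lock:
            self.acquisitions += 1
            self.records.extend(batch)

def run_workload(total_inserts, commit_batch_size):
    buf = ToyLogBuffer()
    for start in range(0, total_inserts, commit_batch_size):
        batch = list(range(start, min(start + commit_batch_size, total_inserts)))
        buf.append(batch)
    return buf.acquisitions

# Batching 10 inserts per commit needs a tenth of the lock acquisitions.
print(run_workload(10_000, 1))   # 10000
print(run_workload(10_000, 10))  # 1000
```

This is the same trade-off the @CommitBatchSize parameter in the stored procedures below exposes.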

Cache Line Ping Pong

A cache line is the unit of transfer between main memory and the CPU cache. I could write an entire blog post on the nuances of the CPU cache and cache coherency; suffice it to say there are two pieces of information which are vital to know. Firstly, each spinlock has a cache line associated with it: to acquire a spinlock, that cache line has to travel to the CPU core running the thread that wishes to acquire it, a compare-and-swap operation is performed on the cache line, and it is then returned to the CPU core from where it came. Secondly, the further these cache lines have to travel, the more CPU cycles are burned; remember that all memory access is CPU intensive. Code and software that cause cache lines to bounce from CPU core to core (hence the term "cache line ping pong") are highly inefficient.

Why Not Just Use The In-Memory Engine?

You will note that I have mentioned that the in-memory engine solves a lot of the spinlock problems the legacy engine encounters, so a reasonable question might be: why not just use the in-memory engine and end the blog post here? Consider a scenario in which you have to ingest a large amount of data, IoT telemetry for example. You may soon run out of memory: as a rough rule of thumb, each memory-optimized table requires twice the amount of memory as the data being stored.
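Using that rough 2x rule of thumb, it is easy to estimate how quickly an ingest workload exhausts memory. A quick back-of-the-envelope sketch (the row size and ingest rate below are illustrative assumptions, not figures from this post's tests):

```python
def days_until_memory_exhausted(rows_per_second, row_bytes,
                                memory_bytes, overhead_factor=2.0):
    """Rough estimate: memory-optimized tables need roughly
    overhead_factor x the raw data size (the 2x rule of thumb)."""
    bytes_per_second = rows_per_second * row_bytes * overhead_factor
    return memory_bytes / bytes_per_second / 86_400  # seconds -> days

# 100,000 rows/s of 100-byte rows against 512 GB of memory.
print(round(days_until_memory_exhausted(100_000, 100, 512 * 2**30), 2))  # 0.32
```

In other words, even a well-specified server can fill up in a matter of hours under a sustained IoT-style ingest rate.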

Scaling A Singleton Workload With A Single Instance

For this test I will be using:

  • A two socket Dell R720 with 12 Xeon 2.7 GHz v2 cores per socket and 512GB of memory.
  • Windows Server 2016
  • SQL Server 2017 CU3
  • Hyper-threading turned off
  • Socket 0 core 0 removed from the CPU affinity mask
  • Trace flags 1117, 1118 and 2330 enabled

Each thread in the workload performs 250,000 inserts into the following table:

CREATE TABLE [MyBigTable](
     [c1] [bigint] NOT NULL
    ,CONSTRAINT [PK_BigTable] PRIMARY KEY CLUSTERED ( [c1] ASC )
)

CREATE PROCEDURE [dbo].[usp_InsertSpid]
     @TransactionsPerThread int = 500000
    ,@CommitBatchSize       int = 1
AS
BEGIN
    SET XACT_ABORT ON;
    SET NOCOUNT ON;

    DECLARE  @i    INTEGER = 0
            ,@j    INTEGER = 0
            ,@base BIGINT  = @@SPID * 10000000000;

    WHILE @i < @TransactionsPerThread
    BEGIN
        BEGIN TRANSACTION
            WHILE @j < @CommitBatchSize
            BEGIN
                INSERT INTO dbo.MyBigTable (c1)
                VALUES (@base + @i);
                SET @j += 1;
                SET @i += 1;
            END;
        COMMIT;

        SET @j = 0;
    END;
END;
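The @base = @@SPID * 10000000000 trick deserves a note: it gives every session its own disjoint key range, so concurrent threads never collide on the primary key and each session inserts into its own region of the clustered index. A quick sanity check of the arithmetic (plain Python, with an assumed spread of session ids):

```python
def key_range(spid, inserts_per_thread=250_000):
    """First and last key a session generates with base = spid * 10^10."""
    base = spid * 10_000_000_000
    return (base, base + inserts_per_thread - 1)

# e.g. 24 concurrent sessions with arbitrary session ids 51..74
ranges = [key_range(spid) for spid in range(51, 75)]

# Ranges are disjoint because 250,000 is far smaller than the 10^10 stride.
for (lo1, hi1), (lo2, _) in zip(ranges, ranges[1:]):
    assert hi1 < lo2
print("no overlapping key ranges")
```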

And this is the throughput of the workload:

[Graph: insert throughput for the sequential key workload]

Repeating this exercise with a GUID-based key and this code:

CREATE TABLE [MyBigTableGuid] (
     [c1] [uniqueidentifier] NOT NULL
    ,CONSTRAINT [PK_BigTableGuid] PRIMARY KEY CLUSTERED ( [c1] ASC )
)

CREATE PROCEDURE [dbo].[usp_InsertGuid]
     @TransactionsPerThread int = 250000
    ,@CommitBatchSize       int = 2
AS
BEGIN
    SET XACT_ABORT ON;
    SET NOCOUNT ON;

    DECLARE  @i INTEGER = 0
            ,@j INTEGER = 0;

    WHILE @i < @TransactionsPerThread
    BEGIN
        BEGIN TRANSACTION
            WHILE @j < @CommitBatchSize
            BEGIN
                INSERT INTO dbo.MyBigTableGuid DEFAULT VALUES;
                SET @j += 1;
                SET @i += 1;
            END;
        COMMIT;

        SET @j = 0;
    END;
END;

[Graph: insert throughput for the GUID key workload]

Where Is The Bottleneck?

A stack trace gathered for the singleton insert workload with a clustered GUID key, viewed through Windows Performance Analyzer, reveals that most of the CPU time is burned in the SQLServerLogMgr::AppendLogRequest function:

[Screenshot: Windows Performance Analyzer stack trace]

Drilling down into the call stack reveals more information about this function:

[Screenshot: call stack drill-down into SQLServerLogMgr::AppendLogRequest]

It appears that this has something to do with the LOGCACHE_ACCESS spinlock:

[Image: the LOGCACHE_ACCESS spinlock]

Tuning For Scalability 101

When faced with contention on a singleton resource, the obvious solution is to create more of that resource. However, there are no knobs, levers or dials that will allow us to create more instances of the LOGCACHE_ACCESS (or XDESMGR) spinlocks. What to do?

. . . Sharding To The Rescue !!!

Using containers we can create a number of SQL Server instances and shard the singleton workload across all of our available CPU cores. Below is the PowerShell script which allows us to do this:

Clear-Host
docker ps -a -q | ForEach-Object { docker rm -f $_ }
Get-Job | Stop-Job
Get-Job | Remove-Job

$HostBaseDirectory = "D:/mssql-data"
erase D:/mssql-data*/*

$MaxContainers = 24
$CpuSets = @( "0,1"  ,  "2,3" , "4,5"  , "6,7"  , "8,9"  , "10,11", "12,13",
              "14,15", "16,17", "18,19","20,21" , "22,23", "24,25", "26,27",
              "28,29", "30,31", "32,33", "34,35", "36,37", "38,39", "40,41",
              "42,43", "44,45", "46,47" )
$Key = "SPID"

for($i=0; $i -lt $MaxContainers; $i++) {
    $CpuSet = $CpuSets[$i]

    $DockerCmd = "docker run -v " + $HostBaseDirectory + $i +`
                 ":/mssql-data --cpuset-cpus=$CpuSet -e `"ACCEPT_EULA=Y`" -e `"MSSQL_SA_PASSWORD=P@ssw0rd!`" " +`
                 "-p " + [string](60000 + $i) + ":1433 --name SqlLinux$i -d microsoft/mssql-server-linux:2017-latest"
    $DockerCmd
    Invoke-Expression $DockerCmd
}

Start-Sleep -s 20

for($i=0; $i -lt $MaxContainers; $i++) {
    $DockerCmd = "docker exec -it SqlLinux$i /opt/mssql-tools/bin/sqlcmd " +`
                 "-S localhost -U SA -P `"P@ssw0rd!`" " +`
                 "-Q `"CREATE DATABASE SingletonInsert " +`
                 "ON PRIMARY " +`
                 "( NAME = `'SingletonInsert`', FILENAME = N`'/mssql-data/SingletonInsert.mdf`', " +`
                 " SIZE = 256MB , FILEGROWTH = 256MB ) " +`
                 "LOG ON " +`
                 "( NAME = N`'SingletonInsert_log`', FILENAME = N`'/mssql-data/SingletonInsert.ldf`'," +`
                 " SIZE = 256MB , FILEGROWTH = 256MB );" +`
                 "ALTER DATABASE SingletonInsert SET DELAYED_DURABILITY=FORCED`""

    $DockerCmd
    Invoke-Expression $DockerCmd

    if ($Key -eq "GUID") {
        $DockerCmd = "docker exec -it SqlLinux$i /opt/mssql-tools/bin/sqlcmd " +`
                     "-S localhost -d SingletonInsert -U SA -P `"P@ssw0rd!`" " +`
                     "-Q `"CREATE TABLE [dbo].[si] ( " +`
                     " c1 UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY);`""
    }
    else {
        $DockerCmd = "docker exec -it SqlLinux$i /opt/mssql-tools/bin/sqlcmd " +`
                     "-S localhost -d SingletonInsert -U SA -P `"P@ssw0rd!`" " +`
                     "-Q `"CREATE TABLE [dbo].[si] ( c1 BIGINT NOT NULL );`""
    }

    Write-Host "Creating table" -ForegroundColor Yellow
    Invoke-Expression $DockerCmd

    if ($Key -eq "GUID") {
        $DockerCmd = "docker exec -it SqlLinux$i /opt/mssql-tools/bin/sqlcmd " +`
                     "-S localhost -d SingletonInsert -U SA -P `"P@ssw0rd!`" " +`
                     "-Q `"CREATE PROCEDURE [dbo].[usp_Insert] AS " +`
                          "BEGIN " +`
                          "    SET NOCOUNT ON " +`
                          "    DECLARE @i INTEGER = 0; " +`
                          "    WHILE @i &lt; 250000 &quot; +`
                          &quot;    BEGIN &quot; +`
                          &quot;        INSERT INTO si DEFAULT VALUES; &quot; +`
                          &quot;        SET @i += 1; &quot; +`
                          &quot;    END; &quot; +`
                          &quot;END;`&quot;&quot;
    }
    else {
        $DockerCmd = &quot;docker exec -it SqlLinux$i /opt/mssql-tools/bin/sqlcmd &quot; +`
                     &quot;-S localhost -d SingletonInsert -U SA -P `&quot;P@ssw0rd!`&quot; &quot; +`
                     &quot;-Q `&quot;CREATE PROCEDURE [dbo].[usp_Insert] AS &quot; +`
                          &quot;BEGIN &quot;  +`
                          &quot;    SET NOCOUNT ON &quot; +`
                          &quot;    DECLARE  @i INTEGER = 0 &quot; +`
                          &quot;            ,@base BIGINT = @@SPID * 10000000000;&quot; +`
                          &quot;    WHILE @i &lt; 250000 &quot; +`
                          &quot;    BEGIN &quot; +`
                          &quot;        INSERT INTO si VALUES (@base + @i); &quot; +`
                          &quot;        SET @i += 1; &quot; +`
                          &quot;    END; &quot; +`
                          &quot;END;`&quot;&quot;
    }

    Write-Host &quot;Creating stored procedure&quot; -ForegroundColor Yellow
    Invoke-Expression $DockerCmd
}

$RunWorkload = {
    param ($LoopIndex)
    $DockerCmd = &quot;docker exec SqlLinux$LoopIndex /opt/mssql-tools/bin/sqlcmd &quot; +`
                 &quot;-S localhost -U sa -P `&#039;P@ssw0rd!`&#039; -d SingletonInsert &quot; +`
                 &quot;-Q `&#039;EXEC usp_Insert`&#039;&quot;
    $DockerCmd
    Invoke-Expression $DockerCmd
}

$TruncateTable = {
    param ($LoopIndex)
    $DockerCmd = &quot;docker exec SqlLinux$LoopIndex /opt/mssql-tools/bin/sqlcmd &quot; +`
                 &quot;-S localhost -U sa -P `&#039;P@ssw0rd!`&#039; -d SingletonInsert &quot; +`
                 &quot;-Q `&#039;TRUNCATE TABLE si`&#039;&quot;
    $DockerCmd
    Invoke-Expression $DockerCmd
}

$InsertRates = @()

for($i=0; $i -lt $MaxContainers; $i++) {
    for($j=0; $j -lt ($i + 1); $j++) {
        Start-Job -ScriptBlock $TruncateTable -ArgumentList $j
    }
    while (Get-Job -state running) { Start-Sleep -Milliseconds 10 }

    $StartMs = Get-Date

    for($j=0; $j -lt ($i + 1); $j++) {
        Start-Job -ScriptBlock $RunWorkload -ArgumentList $j
    }
    while (Get-Job -state running) { Start-Sleep -Milliseconds 10 }

    $EndMs = Get-Date

    $InsertRate = (($i + 1) * 250000 * 1000)/ (($EndMs - $StartMs).TotalMilliseconds)
    Write-Host &quot;Insert rate for &quot; $j &quot; containers = &quot; $InsertRate
    $InsertRates += $InsertRate
}

Clear-Host

for($i=0; $i -lt $MaxContainers; $i++) {
    Write-Host &quot;Inserts`/s using &quot; ($i + 1) &quot; containers = &quot; $InsertRates[$i]
}</pre>
<pre>

What Does The Script Do?

The script creates 24 containers, each with its own SingletonInsert database; inside each SingletonInsert database it creates a table with a clustered index and a stored procedure to perform inserts. 250,000 singleton inserts are performed inside each container using a single session, and the penultimate loop in the script executes this workload with 1 through to 24 sessions. The --cpuset-cpus flag is of particular interest because it provides the means by which the script pins containers to specific logical processors. In this example each container is effectively bound to a single CPU core, meaning there is no cache line ping pong around the XDESMGR, LOGCACHE_ACCESS and LOGFLUSHQ spinlocks. Two types of clustered index are provided by the script: one using a GUID-based key and the other using a sequential key.
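The hard-coded $CpuSets array simply pairs up consecutive logical processors; if the CPU topology changes, the same mapping can be generated rather than typed out. A sketch of that mapping (shown in Python for brevity; the PowerShell equivalent is a one-line loop, and the two-logical-CPUs-per-container choice mirrors the array above):

```python
def cpu_sets(containers, logical_per_container=2):
    """Recreate the $CpuSets array: container i gets logical CPUs
    [i*k, i*k+1, ...) for k logical CPUs per container."""
    return [
        ",".join(str(logical_per_container * i + j)
                 for j in range(logical_per_container))
        for i in range(containers)
    ]

sets = cpu_sets(24)
print(sets[0], sets[23])  # 0,1 46,47
```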

The Results

Firstly, for the sequential “SPID offset” key we have this graph:

[Graph: insert throughput scaling, SPID offset key]

For the GUID key we have:

[Graph: insert throughput scaling, GUID key]

In both cases, whilst the throughput of the single instance plateaus, the containerized approach keeps scaling along a respectable curve.
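One way to reason about why the single instance plateaus is Amdahl's law: the serialized portion of the work (the spinlock-protected log buffer access) caps how far extra threads can take you, whereas each container brings its own serialization point. A toy calculation, using an illustrative serial fraction rather than anything measured in these tests:

```python
def amdahl_speedup(threads, serial_fraction):
    """Amdahl's law: speedup with a fixed serialized fraction of the work."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / threads)

# A 5% serialized portion caps speedup below 20x, no matter the thread count.
for n in (1, 4, 12, 24, 1_000_000):
    print(n, round(amdahl_speedup(n, 0.05), 2))
```

The sharded setup sidesteps this limit because the serialized fraction is per instance, not per workload.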

Where To Next ?

I would like to repeat this exercise with the Windows container image and compare the results with those for the Linux image. I would also like to see what difference changing the number of logical CPUs per container makes to the throughput curves.

 
