SQLGrillen

I’m sitting at Schiphol airport in Amsterdam contemplating my midsummer weekend. I was accepted to speak at SQL Grillen, a free one-day (plus one day of paid pre-cons) event in the metropolis of Lingen, Germany. Never heard of Lingen? Well, neither had I, before this event. When William Durkin, the main organizer of the event, gave me the news that I had been accepted, I took the opportunity to ask him where Lingen was, and he told me that it was fairly near “the ass end of nowhere”. Having been to Lingen, I would have to agree.
Actually getting there turned out to be a bit of a challenge – in the end, the amazing Andrew Pruski told me that he was driving there from Schiphol, that his flight landed five minutes after mine and that he’d love travel company. No sooner said than done – we found each other and went to pick up the car. We then proceeded to repeatedly park our way to Germany, as there was a lot of traffic. Some 2.5–3 hours later we arrived in the sleepy town of Lingen.

After the speaker dinner I found myself talking to the heavy hitters of the #SQLFamily, and they pointed out that with one track being newcomer-only, there were 28 sessions in total. Then I was told that they had had over 200 session submissions. Take a second to consider that: the ratio of available slots to submitted sessions was almost 1:10. And here I was. I looked around and marvelled at the company I was in. I had been chosen when so many people were not, and for the first time in a long, long time I felt a very real sense of pressure. I was here, so many people were not – time to really, REALLY get my head in the game.

The event itself has been running for just a few years, growing from very few speakers and a small number of attendees to a hard cap of 200 attendees and 35 sessions. New for this year was that one of the tracks was given over to newcomers – people who had never given a session at anything bigger than a usergroup or similar. This is simply a stroke of genius. The catch-22 of speaking is as simple to understand as it is depressing in practice – in order to be selected you must be a somewhat known entity. In order to get known as a good speaker, you need to – yeah, you guessed it – speak. The newcomers were, in a word, exceptional. Without exception, all of them performed brilliantly. I had the pleasure of attending a session on “soft skills” by Robert French, a gentleman from Scotland. I do a fair amount of speaker mentoring and presentation skill training, so I tend to keep track of small things that can be improved in pretty much every speaker I listen to, but Robert was … as close to absolutely brilliant as I’ve ever seen. I can’t remember the last time I left a session with such a feeling of disbelief at having watched a master at work. I can’t wait to see him present again, and I sincerely hope the rest of the speaker circuit sits up and takes notice, because this man can teach something to EVERYONE. Each of the newcomer speakers had a mentor assigned to them, and Robert was paired with the equally brilliant Cathrine Wilhelmsen, so he could not have been in better hands.

My session was the first one after lunch. At the same time as Grant Fritchey. Grant. Fritchey. Yes, THAT Grant Fritchey. The good thing about being on at the same time as Grant is that I wouldn’t have him in the audience, but the other side of that coin is that I wouldn’t have much of anyone else either, as they’d all be at Grant’s session (or in one of the other three tracks). Now, Grant is an amazing guy – exceptionally skilled, always eager to help, kind, open, super-helpful, approachable and all the other good things – but he’s one of the very few people who genuinely scare me. The thought of having him in the audience would take some self-control to handle.
The session I presented this time was “The Force Awakens – Azure SQL Server for the on-prem DBA”, a new version of one of my original Azure sessions. All in all it went well, but I’ve got a list the length of my arm of things that I want to improve, and it is so interesting to see how easy it is to give advice to others and completely miss it yourself – I *know* I have a stoneface and a deadpan demeanor worthy of the highest echelons of the British royal family and that I *need* to loosen up. Did I? I’ll let you take a guess. It’s yet again on the list. At the top. In red. Underlined three times.

When the sessions were done for the day, the other two words of the SQLGrillen logo came into play – “beers” and “bratwurst”. The evening consisted of talking to all the amazing sponsors, attendees and speakers – all while eating excellent German sausages and grilled meat and drinking different kinds of German beers. In short – it was amazing. Social interaction ALWAYS leads to new friends, new insights and new ideas. On the surface, “databases, beer and bratwurst” doesn’t sound like a very serious event, but this has been one of the best events I’ve ever had the pleasure to attend. During the evening I was asked if I would consider being a mentor for a new speaker at DataMinds in Belgium later this fall. I did not have to think about that for more than a second, as I would be absolutely honored to help out any way I can. And so it was that I became one of the newcomer mentors for DataMinds.

I would like to finish with a heartfelt “thank you” to William, Ben and all the other organizers, sponsors, speakers and attendees. This was one of the best run events I’ve ever had the pleasure to speak at, and definitely one of the most fun. I would like nothing more than to get to come back next year, even though the chances are slim. I’m so happy to have been given the opportunity, and I’m looking forward to the next opportunity to meet my #SQLFamily.

Going to unexpected places

A week ago I woke up in Tel Aviv, Israel, the day after I gave my presentation “Speak your hands – using body language for effective communication” at SQL Saturday in Israel. Despite feeling the onset of a sore throat, I contemplated how I had gotten here.

I did not expect to find myself in Israel doing what I love – speaking at conferences and sharing knowledge – when I first started working with databases back in 1997. In fact, I didn’t expect to get very far from my birth town at all. That turned out to be rather far from the truth.

The event in Israel was the fourth SQL Saturday in Israel and had some 400 attendees registered. The speaker lineup was quite impressive, with speakers from Portugal, Brazil, the US, the UK, Canada and Israel. Two minutes before I was due to take the stage I had a grand total of two (2) people in the room. I asked one of them about the Israeli view on time, and was told not to worry, as Israelis in general have a very flexible view on this “time” thing. This turned out to be quite correct, as five minutes later I had 40-ish attendees, eagerly awaiting my session on presentation skills.

The session went very well despite some technical issues in the beginning, and I was very happy to receive very good feedback both immediately after the session and all through the day. While presentation skills might seem like a weird topic for a SQL Saturday (especially since far from everyone there is a presenter), my opinion is that almost every interaction with other people is a kind of presentation. My session covered gestures, body movement, facial expressions and use of voice – all of which are just as useful in a discussion with the boss or the significant other as they are on stage delivering a presentation.

I’m on a kind of mission to help technical presenters in general to up their game – the vast majority of the presenters I meet at SQL Saturdays and other conferences are VERY good at the tech but can benefit from learning a thing or two about presentation skills. It’s not hard, but it is a skillset that needs to be learned just like the technical aspects. At the end of the day, everything is about people, so it might not matter if you have the best demos, the coolest tech or the niftiest scripts – if you can’t explain or disseminate what you want others to understand, the whole thing falls flat on its face. I hope to get the opportunity to give this presentation at multiple technical conferences around the world in the future.

This year alone I’ve spoken at seven conferences in as many countries. I have four conferences in three countries in the pipeline, as well as scores of abstracts waiting to be reviewed at conferences all over the world. It started with SQL Server in the small town of Linköping, Sweden, and now it’s taken me all over the world. The #SQLFamily is truly amazing.

I crawled out of bed and made ready to go to Jerusalem with four of the Israeli organizers and three of the international speakers. We had an amazing day in Jerusalem and I am forever grateful to Michelle, Schmuli, Adi and Maria for setting it all up. It takes a special kind of crazy to organize SQL Saturdays and I have the utmost respect and admiration for the team behind SQL Saturday Israel. I was so very glad to be invited to the beautiful country of Israel, and I very much hope to be back next year!

Basic settings for SQL Server 2012/2014/2016, part 5

Part 5: updating the basic settings for SQL Server 2016/2017

It’s been quite a while since I penned my best practices posts for SQL Server 2012/2014 and I’ve been saying for ages that I should update them for SQL Server 2016. Well, it would seem that hell finally froze over as here we are!

Let’s start out with the Query Store. This is a new contraption introduced in SQL Server 2016. It is turned off by default, but in my opinion you should always turn it on. This is done per database, either via the GUI or with:

ALTER DATABASE [DatabaseName] SET QUERY_STORE = ON;
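
If you want to go beyond the defaults, the same command takes a number of options. A minimal sketch, assuming a 1 GB store, automatic capture and 30 days of retention – the database name is a placeholder and the values should be tuned to your workload:

ALTER DATABASE [DatabaseName] SET QUERY_STORE
(
    OPERATION_MODE = READ_WRITE,                         -- collect data and allow reads
    MAX_STORAGE_SIZE_MB = 1024,                          -- cap the size of the store
    QUERY_CAPTURE_MODE = AUTO,                           -- skip trivial, infrequent queries
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),  -- keep 30 days of history
    SIZE_BASED_CLEANUP_MODE = AUTO                       -- purge old data before the store fills up
);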

The query store needs to be monitored, and I highly recommend that you read this post about Query Store best practices.

For previous versions I highly recommend the Open Query Store – this is an open source alternative (run by Enrico van de Laar and William Durkin) that enables Query Store-style functionality for versions below 2016.

SQL Server 2016 does away with many of the previously more or less “standard” trace flags: 1117, 1118, 2371 and 4199 are all folded into per-database settings or COMPATIBILITY_LEVEL 130. The behavior of 1117 and 1118 is now the default for TEMPDB and is handled per database with ALTER DATABASE for user databases, while 2371 is automatically enabled for databases with COMPATIBILITY_LEVEL 130. If you have databases on older compatibility levels on the same server, you might still want to keep the trace flags on for their benefit, though.

References: https://blogs.msdn.microsoft.com/sql_server_team/sql-server-2016-changes-in-default-behavior-for-autogrow-and-allocations-for-tempdb-and-user-databases/
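
For reference, the per-database equivalents of 1117 and 1118 look roughly like this. Treat it as a sketch – [DatabaseName] is a placeholder, and in SQL Server 2016 the defaults for new databases are already sensible:

-- Uniform extent allocations, roughly what trace flag 1118 used to do server-wide:
ALTER DATABASE [DatabaseName] SET MIXED_PAGE_ALLOCATION OFF;

-- Grow all files in a filegroup together, roughly what trace flag 1117 used to do server-wide:
ALTER DATABASE [DatabaseName] MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;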

SQL Server 2016 also introduced database scoped configurations, enabling settings like MAXDOP to be set per database. This does away with the clunky need to set potentially far-reaching settings at the server level. At the time of writing the following settings can be scoped (see the example after the list):

  • Clear procedure cache
  • MAXDOP
  • Cardinality estimation model, independent of the compatibility level
  • Parameter sniffing
  • Query hotfix settings
  • Optimize for ad-hoc workloads
  • Identity cache
Reference: https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql
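
A few hedged examples of what the syntax looks like – run them in the context of the database you want to configure, and note that the values here are purely illustrative:

ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;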

I’ve taken these blog posts and turned them into a presentation that covers them. I will be presenting “Boring stable, stable is good – best practices in practice” at the Nordic Infrastructure Conference in Oslo as well as at SQL Saturday #704 in Reykjavik, Iceland.

    T-SQL Tuesday #96: Folks Who Have Made A Difference

     

    This T-SQL Tuesday was an interesting one, and I’ve been spending quite a few hours thinking about how to formulate my thoughts. Unfortunately the end of said Tuesday is nearing with unreasonable speed, so I’ll have to be brief. This month’s challenge comes from Ewald Cress and is to recognize those who have made a meaningful contribution in your life in the world of data.

There is a saying that “all good deeds are only made possible by people standing on the shoulders of giants”. That in turn requires an environment of sharing and nurturing – the more the better. There are many giants in our community; way more than I could possibly write about in a single blog post.

While the number of people having made meaningful contributions to my life in data is huge, two names stand out. The first is Jes Borland (@grrl_geek) who, despite having yet to actually talk to me in person, gave me the inspiration to stop just thinking about speaking and actually do something to get there. As is quite common, the small, at the time seemingly insignificant actions often turn out to be the pivotal ones. There is another saying that “no raindrop ever believes it is to blame for the flood”, but the short words of encouragement from Jes started the trickle that became the stream and is heading full tilt toward becoming a flood.

And “flood” is a good way to describe the other person: Cathrine Wilhelmsen (@cathrinew). From the moment I bumped into the Norwegian Whirlwind at Summit 2016, my life has literally not been the same. She introduced me to every hot shot in the business, all while my eyes were as large as saucers and I had a seriously hard time keeping up.

This year at Summit, I had a fair amount of steam going in. Thanks to the many acquaintances I made the year before, it was easier to socialize and make even more friends. I did what Jes and Cathrine had done for me, each in their own way, and tried to help first-timers – and people eager but lacking direction – to connect with people who might get them further than I could. This way I got to meet Laura Muise (@Laura_SQL, an amazing woman who will undoubtedly go far in our world of data) and maybe, hopefully, gave her something that will in turn enable her to pay it forward whenever the time comes. I can’t wait for her to speak at her first SQL Saturday.

Thinking about this topic while teaching a SQL Server course this week, I came to the realization that the folks who have made the biggest difference are the #SQLFamily itself. The giants I wrote about earlier are all regular people – people just like you and me. For some people (albeit a very small number) I am the giant. Many, many others are my giants. The point is that a community as focused on giving as the SQL Server community becomes self-sustaining in giants.

    I’d like to give the biggest shout-out to you.

    All of you.

    You make meaningful contributions to my life of data every day.

    Thank you for being part of the most awesome community there is.

     

     

    Another kind of contention in TEMPDB

I’ve been helping troubleshoot a SQL Server 2014 instance that kind of tips over every hour, when a job is run on one of the application servers.

    All eight cores of the server go through the roof together with the counter for batch requests/sec – and then everything grinds to a halt. This behavior continues for about ten minutes (until the application servers are done with their horrors) and then the server recovers back down to 5-10% CPU load and everyone breathes a sigh of relief. For about an hour.

So how does one troubleshoot an issue like this? Well, the first thing I did was to look at the wait statistics during one of these windows of disaster. This can be done in one of several ways, but I ran a script that takes a snapshot of sys.dm_os_wait_stats, waits a number of seconds (30 in my specific case), takes a new snapshot and calculates the difference (I used Paul Randal’s script HERE). This way I’m not burdened with whatever the server has been doing up to this point in time.
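
The idea is simple enough that a stripped-down sketch fits in a few lines. Paul Randal’s script is far more thorough; this just illustrates the snapshot-and-diff principle:

-- Snapshot the cumulative wait statistics, wait 30 seconds, then diff the two snapshots.
SELECT wait_type, wait_time_ms, waiting_tasks_count
INTO #waits_before
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:00:30';

SELECT ws.wait_type,
       ws.wait_time_ms - wb.wait_time_ms AS wait_time_ms_delta,
       ws.waiting_tasks_count - wb.waiting_tasks_count AS waiting_tasks_delta
FROM sys.dm_os_wait_stats AS ws
JOIN #waits_before AS wb ON wb.wait_type = ws.wait_type
WHERE ws.wait_time_ms - wb.wait_time_ms > 0
ORDER BY wait_time_ms_delta DESC;

DROP TABLE #waits_before;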

    From the result one thing sticks out somewhat – PAGELATCH_EX. So what IS PAGELATCH_EX anyway? A full explanation is beyond the scope of this blog post, but Paul Randal (as usual) has a great writeup about the wait HERE.

    The next step was to find out what query was causing this wait using sys.dm_os_waiting_tasks and filtering on wait_type=’PAGELATCH_EX’.

SELECT *
FROM sys.dm_os_waiting_tasks
WHERE wait_type = 'PAGELATCH_EX';

     

I ran this several times to see if there was a pattern to the madness, and it turned out there was. All waits were concentrated in database ID 2 – TEMPDB. Many people perk up at this point and jump to the conclusion that this is your garden-variety SGAM/PFS contention – easily remedied with more TEMPDB files and a trace flag. But alas – this was further inside TEMPDB. The output from the query above gave me the exact page number, and plugging that into DBCC PAGE gives the metadata object ID.

DBCC TRACEON(3604)   -- send DBCC output to the client instead of the error log
GO
DBCC PAGE(2,7,1639,0)   -- arguments: database_id, file_id, page_id, print option
GO

     

     

    This ID can then be run through the OBJECT_NAME function to get the name of the object in question.
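
OBJECT_NAME takes an optional second argument for the database ID, so there is no need to switch database context. A small sketch – the object ID below is just an illustrative placeholder; use whatever “Metadata: ObjectId” value DBCC PAGE reported:

SELECT OBJECT_NAME(34, 2);   -- object_id, database_id (2 = TEMPDB)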

     

     

     

It turned out to be sys.sysschobjs, which can be read about in detail HERE. In short, it is a system base table inside TEMPDB that contains one row per object in TEMPDB.

Thus there is apparently contention on an object that tracks the existence of tables in TEMPDB. This pretty much has to do with the server creating and demolishing TEMPDB tables at a rate that is simply too high for the poor server to handle. To see just how many TEMPDB tables were actually created and destroyed during these windows of despair, I took a look at the performance monitor counters SQL Server: General Statistics/Temp Tables Creation Rate and Temp Tables For Destruction. During most of the day these were running at around 1-2 tables created per second and 7-8 in the queue for destruction. Then this happened.
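
If you prefer querying the counters from T-SQL instead of firing up Performance Monitor, something along these lines works. Note that the creation rate counter is cumulative, so you need to sample it twice and diff the values to get a per-second figure:

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%General Statistics%'
  AND counter_name IN ('Temp Tables Creation Rate', 'Temp Tables For Destruction');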

     

    It is kind of obvious WHEN this window opens, yes? The numbers soar to 27 tables created per second and some 280 queued for destruction.

    So, to recap: the job on the application servers is creating temporary tables at a rate with which the database simply can’t keep up. This leads to PAGELATCH_EX waits on sysschobjs and this shows as high CPU.

    The million dollar question boils down to “what can be done about it?”. Not much, it turns out. In this case I was assisting the vendor and they have some homework to do with their code. There are essentially two things that can be done from their end: either cut down on the use of TEMPDB tables or implement an in-memory table instead. Now, this behavior might very well be due to a bug where the TEMPDB table caching is not working as expected or someone has mistakenly put a CREATE TABLE #WHATNOT on the wrong side of a loop, but I’ll have to get back to you on that one.
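
As a hedged illustration of the in-memory option: a non-durable, memory-optimized table in the user database could take over the role of a heavily churned #temp table, sidestepping TEMPDB (and sysschobjs) entirely. This is only a sketch – it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, and the table and column names are made up:

CREATE TABLE dbo.WhatnotStaging
(
    Id      INT IDENTITY(1,1) NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload NVARCHAR(200) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

SCHEMA_ONLY means the data is never written to disk, which matches the semantics of a temporary work table – just remember that the data also disappears on a restart.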

    From the customer’s side there is nothing to do but throw hardware at the problem and impatiently wait for a fix from the vendor.

     

    Azure database maintenance using Azure automation

One of the things often overlooked with Azure SQL databases is the fact that they need love in exactly the same way as their on-premises brethren. Sure, Azure will keep an eye out for the most blatant issues, but the old adage “an ounce of prevention is worth a pound of cure” still holds water. Pieter Vanhove did a great blog post a few years back, but since then Larry Silverman has made some updates to the original script, so I decided to revisit the whole thing and put everything in the same place. A big thank you to both Pieter and Larry for the excellent groundwork.

The trouble with Azure SQL Database is the lack of a SQL Agent to schedule the usual scripts – like Ola Hallengren’s excellent Maintenance Solution. Another issue is the lack of cross-database scripting support in Azure, which makes it impossible to execute Ola’s scripts from a central DBA database.

    There is a way to have Ola’s script maintain each and every database on your Azure SQL Database server, but due to the lack of cross-database scripting mentioned earlier, it requires that the procedures IndexOptimize and CommandExecute as well as the table CommandLog be set up in each and every database you deploy to your server. I’d recommend putting them in their own schema to be easily distinguished from the rest of the database, and in the example below I use the schema “dba”.

    Download IndexOptimize.sql, CommandExecute.sql and CommandLog.sql from Ola’s site here.

    Open them up in a text editor and run a search/replace for all instances of dbo and change that to dba. This is to make sure they end up where they are supposed to. Then let’s get cooking:

    1. Create a DBA schema in the Azure SQL Database.

    CREATE SCHEMA dba
    GO

     

    2. Run the three scripts you downloaded earlier, making sure that the two procedures and the table end up in the DBA schema.
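
A quick way to verify that everything landed in the right place (assuming you went with the schema name dba):

SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE s.name = 'dba';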

    At this stage you can test the scripts by running the following code and taking a look in the dba.CommandLog table:

    EXECUTE dba.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = 'INDEX_REORGANIZE',
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL',
    @LogToTable = 'Y',
    @Execute = 'Y'

     

    3. Let’s create an automation account.


    Click “Add” to create one.

    Then we input a name for the account, a resource group and a location. For performance reasons it’s best to let the automation account be placed in the same datacenter as the database itself.

    With the account set up, go ahead and click on “credentials” on the left. We need to set up a stored credential to enable the runbook to log into the database server.


Set a name for the credential (this will be referenced later), input the administrative user for the database server and the password for said user.


Go ahead and click on “overview”, followed by “runbooks”.

    On the runbook blade, click “Add a runbook” and fill out the name and the runbook type. The name must be the same as in the code we’re going to paste in and the type must be PowerShell Workflow.

    Paste in the following code, overwriting the text that’s already there:

<#
.SYNOPSIS
Perform index maintenance

.DESCRIPTION
This runbook uses Azure Automation to perform reindexing and statistics maintenance of all databases on a target server.
As prerequisite, please create an Azure Automation credential asset that contains the username and password for the target Azure SQL DB logical server ($SqlServerName).
Make sure that you have installed the scripts IndexOptimize.sql, CommandLog.sql and CommandExecute.sql of Ola Hallengren (https://ola.hallengren.com/downloads.html).
Make sure to get Ola's modified scripts which work on Azure here: https://ola.hallengren.com/downloads.html

.EXAMPLE
SQLServerIndexMaintenance

.NOTES
AUTHOR: Original author Pieter Vanhove: http://pietervanhove.azurewebsites.net/?p=14137
Heavily modified by Larry Silverman, CTO, TrackAbout
Very slightly modified by Alexander Arvidsson: http://www.arcticdba.se
#>
    
    workflow AzureMaint
    {
    param (
    # Fully-qualified name of the Azure DB server
    [parameter(Mandatory=$true)]
    [string] $SqlServerName,
    
    # Credentials for $SqlServerName stored as an Azure Automation credential asset
    # When using in the Azure Automation UI, please enter the name of the credential asset for the "Credential" parameter
    [parameter(Mandatory=$true)]
    [PSCredential] $Credential
    )
    
    inlinescript {
    # Set up credentials
    $ServerName = $Using:SqlServerName
    $UserId = $Using:Credential.UserName
    $Password = ($Using:Credential).GetNetworkCredential().Password
    $databases = @()
    
    # Create connection to Master DB
    Try {
    $MasterDatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
    $MasterDatabaseConnection.ConnectionString = "Server = $ServerName; Database = Master; User ID = $UserId; Password = $Password;"
    $MasterDatabaseConnection.Open();
    
    # Create command to query the name of active databases in $ServerName
    $MasterDatabaseCommand = New-Object System.Data.SqlClient.SqlCommand
    $MasterDatabaseCommand.Connection = $MasterDatabaseConnection
    $MasterDatabaseCommand.CommandText =
    "
    select name from sys.databases
    where state_desc='ONLINE'
    and name <> 'master'
    "
    
    $MasterDbResult = $MasterDatabaseCommand.ExecuteReader()
    
    while($MasterDbResult.Read()) {
    $databases += @($MasterDbResult[0].ToString())
    }
    }
    # Catch errors connecting to master database.
    Catch {
    Write-Error $_
    }
    Finally {
    if ($null -ne $MasterDatabaseConnection) {
    $MasterDatabaseConnection.Close()
    $MasterDatabaseConnection.Dispose()
    }
    }
    
    # Create connection for each individual database
    # Iterate through each database under $ServerName
    foreach ($DbName in $databases) {
    Try {
    # Setup connection string for $DbName
    $ChildDatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
    $ChildDatabaseConnection.ConnectionString = "Server=$ServerName; Database=$DbName; User ID=$UserId; Password=$Password;"
    $ChildDatabaseConnection.Open();
    
    # Create command for a specific database $DBName
    $DatabaseCommand = New-Object System.Data.SqlClient.SqlCommand
    $DatabaseCommand.Connection = $ChildDatabaseConnection
    
    Write-Output "Performing index and statistics maintenance on $DbName"
    
    $DatabaseCommand.CommandText ="
    EXECUTE dba.IndexOptimize
    @Databases = '" + $DbName + "',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y',
    @LogToTable = 'Y'
    "
    $DatabaseCommand.CommandTimeout = 0
    # Write-Output $DatabaseCommand.CommandText
    $NonQueryResult = $DatabaseCommand.ExecuteNonQuery()
    }
    
    # Inner catch for individual database failures.
    # We want to keep processing the next database.
    Catch {
    Write-Error $_
    }
    
    Finally {
    if ($null -ne $ChildDatabaseConnection)
    {
    $ChildDatabaseConnection.Close()
    $ChildDatabaseConnection.Dispose()
    }
    }
    }
    }
    }

     

    I’ve made some very, very slight modifications to the script that Larry Silverman in turn modified from Pieter Vanhove’s original script. My modifications consist of changing schema name from DBO to DBA and turning on logging.


At this stage it’s a good idea to test the runbook, so click “Test pane” up top. Input the parameters (the fully qualified server name and the name of the credential we set up previously) and make sure everything runs as it should.

    If everything worked as expected, let’s go ahead and schedule the runbook.

    Click schedule.

    This part looks a lot like scheduling a job via the agent.

    This is the only difference from using the agent. Input the parameters required (database server and credential for the server administrator) and you’re good to go.

A word of caution: as each and every database will hold its own copy of the scripts and the table, it’s imperative to keep them at the same version. That is, when updating the scripts in one database, make sure you do it in all the other databases on the same server.

    “File is read only” when trying to change TCP settings in SQL Server Configuration Manager

    Today I hit upon this bug that’s apparently been around for ages. In short, sometimes SQL Server Configuration Manager works as expected when it comes to enabling and disabling specific IP addresses for the instance to listen on, and sometimes you are greeted with a very irritating message telling you that “the specified file is read only”. Not very helpful, unfortunately.

    Digging deeper into this, there is a fairly simple way to get around the issue, provided one is not afraid of dipping into the registry. All of the settings in the configuration manager are stored in the registry and can be edited there without the pesky error message. Go to the following key:

    HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\***INST***\MSSQLServer\SuperSocketNetLib\Tcp\

    where ***INST*** is the name of the instance you want to edit.

Under this registry folder in the left tree you will find subfolders for IP1, IP2 and so on – these correspond to what can be seen in the configuration manager. The next step is simply to click on the folder for the IP you wish to edit and change whatever settings you want. In my case I wanted the instance to only listen on a specific IP, so I set all the other IPs to inactive (i.e. set the “Active” key to 0). A quick restart of the SQL Server service later, and I was able to verify via the error log that the instance was indeed only listening on that specific IP.

    PASS Summit and the #SQLFamily

    PASS Summit 2016 was a week ago, and I’ve tried (unsuccessfully) to put my thoughts into words. I’ve read many excellent blog posts about the conference and I wholly agree with them all, but I wanted to articulate my feelings in a blog post of my own.

    PASS Summit 2016 was my third Summit on paper, but was more like my first in reality. The previous two times, I went to the sessions, spent time on the exhibition floor, ate my lunch with other attendees and then I went back to my hotel to read up, do research, work or just watch TV.

    The Summit lends itself to this kind of activity very well, as the sessions and speakers are generally excellent, there is fun stuff to be had, done and seen on the exhibition floor and the other attendees are very friendly. It has consistently been one of the best conferences I’ve been to, and I’ve been to quite a few by now.

    But this year was to be something different. This summer I held a presentation at SQL Saturday #536 in Gothenburg and there I had the pleasure of meeting a Norwegian MVP by the name of Cathrine Wilhelmsen. Not only was she firmly established in the PASS community, but she thrives on bringing new people in and helping others grow their network. We kept in contact and she promised to introduce me to other members of the PASS community and I went to Seattle with an open mind.

I had just picked up my badge when Cathrine happened. She grabbed hold of me and proceeded to physically haul me around the hall and introduce me to a multitude of people, several of whom I recognized from the community and whose blogs I frequently read. I must have looked like a deer in headlights, much to the delight of Cathrine. Among the people I was introduced to was the guy I went on to spend the week with – Adam Saxton (of Guy in a Cube fame). We hit it off immediately and hung out almost every night. He in turn introduced me to more people than I can remember, and for this I will be forever thankful to them both.

It proceeded to become the best conference week in my professional career. Let that sink in for a moment. The people I met and my growing network were the best thing that could have happened. I keep going on about how I’ve been doing this for close to 20 years, and while that is all good and dandy, it also means that *I’ve* been doing this. Singular. The magic happens when there’s more than one person, when ideas get exchanged and networks grow. The discussions I’ve had with amazing people over the week – over karaoke, drinks, food, pool or just hanging out at the conference center – have done more to boost my career to the next level than a lot of the technical work I’ve done the last couple of years. Technical stuff is extremely important, but it is only part of the equation – the other parts are a willingness to share, wide-ranging contacts and a network of like-minded people.

That’s where the SQL Family comes into play, and the whole reason I needed some time to put my thoughts in order. At Ignite in Atlanta I had the idea to have silicone wristbands with the text #SQLFamily made up to give to people in the community. I got a good deal on 200 bands, and kind of expected to give out some 30 or so. This turned out to be the single best idea I’ve had in years – they took off on Twitter like crazy, and people from near and far sought me out to get one. I met even more amazing people this way, and the reaction of everyone who saw the band was the same: “I love my #SQLFamily! Where can I get a band like that?” At the end of the conference I had fewer than 80 left.

    wristband

    The SQL Family is unlike anything I’ve seen – a collection of like-minded individuals who live and breathe the Microsoft data stack, ranging from wide-eyed newbies to hardened veterans such as Kalen Delaney or Bob Ward. Everyone is invited to come play, and everybody takes care of everyone else. This community is the reason I think PASS Summit 2016 was the best conference of my career, and this community is the reason I strive to share my knowledge through teaching, blogging and presenting. Together we are strong and together we can grow exponentially. Come join us!

    SQLHangout #38 and Microsoft TechDays

My life is … hectic, to say the least. I just finished a talk at Atea IT-arena in Karlstad last week, as well as a 60-minute recap of Microsoft Ignite in Atlanta that I held yesterday for 40 people at AddSkills in Linköping. The crowd was very attentive and lots of good questions were raised. Hopefully even more Swedes will go to Ignite next year! I’m preparing to speak at the Atea IT-arena in Norrköping in late November as well as teaching two courses in December: 10986 (Upgrading your administration skills to SQL Server 2016) and 10989A (Analyzing data with Power BI). Full speed ahead!

At 18:00 CET on Monday the 17th of October I have the pleasure of joining MVP Cathrine Wilhelmsen for a chat about career transitions in IT. I’ve spent close to 20 years deep under the hood of database systems and only recently crawled up and decided to tackle something new – data visualization and the cloud. Or is it really new? That’s what we’ll cover on Monday, so be sure to tune in and don’t hesitate to tweet us if you have any questions!

    SQLHangout #38 live stream

    In November it’s time for Microsoft TechDays in Stockholm. I’m happy to say that I’ve received a speaking slot where I will be talking about Azure SQL Database – the cloud awakens. Or, as attendees will find out – what REALLY happened on the Death Star and what led to the downfall of the Empire. Few people know the inner workings of these events, and even fewer realized that it has a lot to do with IT…

    SQL Database: the Cloud Awakens

     

    Reducing SSIS anxiety

Story time again. This Tuesday I was to create a set of import packages in SQL Server Integration Services (SSIS) to automate data loading from a couple of my customers’ storage systems. Said data comes in the shape of several text files – some delimited, some fixed length, all of varying size and shape. Anyone who’s worked with SSIS knows how “fun” it is to click-click-click your way through the incomprehensibly boring GUI – a GUI that does NOT lend itself to any efficient work at all. We had been struggling with this for a couple of days before summer already, but the data format has changed and it’s basically the same amount of work to just redo the whole thing from scratch as it is to try to edit all the tiny details in all the gazillion boxes everywhere.

    I can’t really say that I was looking forward to this work. Again. And I knew that if we made this proof-of-concept work, we’d have to do it several times over for each and every customer that wanted in on this idea. Ugh. Then something interesting happened.
    This weekend I had the honour of presenting at SQL Saturday #536 in Gothenburg and I had the pleasure of meeting several amazing people. Among these was a Norwegian MVP named Cathrine Wilhelmsen, whose presentation “From accidental to efficient BI developer” I decided on a whim to attend. She showed some interesting things about the SQL Server Management Studio that I had no idea about, and I felt that those tidbits alone were well worth my time listening to someone talk about something I know basically nothing about.

Then she whipped out BIML and everything I thought I knew about SSIS went out the window. BIML stands for Business Intelligence Markup Language and is basically an XML-based language for generating SSIS packages. This way I can write a bunch of XML code and generate a package – or a hundred – over and over again. I don’t have to click in forty different places to change one thing in forty packages; I just change one thing in the XML and regenerate the packages. This. Is. Awesome. She showed us some websites and her blog, and home I went to study.

Tuesday came around, and I first spent about 20 minutes outlining BIML for my colleague, and then we set off – basically from scratch. We are both old hands at programming in different languages, so that part was nothing new for either of us. Using different websites, examples, blogs and the like we managed to do in about six hours what we had more or less failed to do in three days. We now have a few hundred lines of code that does exactly what we want, is well documented(!), is easy to read and above all – is extensible. When I need to add a new file or two to the input – I copy and paste some code. Need to add a derived column? Sure thing, a tad more XML. And so on and so forth. We’ve barely scratched the surface of what BIML can do, but the key takeaway is this: I went from zero knowledge of BIML and a VERY difficult relationship with SSIS to a fair grasp of the basics of BIML and a newfound appreciation of the power of SSIS in about six hours. When was the last time you had such a return on invested time? Exactly.
Now, BIML is definitely not new, but as a DBA I had not come across it earlier. Now I will sing the praises of BIML to all my DBA colleagues.

Now – I would not have attended Cathrine’s session had I not been invited to SQL Saturday in Gothenburg. I would not have been invited to Gothenburg had I not written an abstract and sent it for review to the organizers of said SQL Saturday. I would not have had the idea to send in an abstract had I not met representatives of PASS (the association for SQL Server and BI professionals) at PASS Summit in Seattle and seen first-hand what the community can do. I’ve only been a part of the PASS community for a couple of years, but I can’t praise it enough. The community is about sharing above all – knowledge, contacts, the works. The people I’ve met have all been friendly, caring, eager to help and very, very good. I rate getting involved in the community as one of the best and most pivotal choices of my professional career. My one regret is that I’m a bit late to the party. But I’m bringing cookies and 20 years’ worth of experience…

    Useful links:
    http://www.cathrinewilhelmsen.net/
    http://bimlscript.com
    http://www.sqlservercentral.com/stairway/100550/
    https://bidshelper.codeplex.com/