The other day I tweeted that it was time to retire one of my most successful sessions - the Untruthful Art - Five Ways of Misrepresenting Data. This resulted in some curious questions from the community about why I would retire such an important and obviously successful session. I decided to write a blog post on the state of this session and where I'm going with it next.

A little bit of background

First of all - "retire" (in this case) does not mean "put it on a shelf never to be used again". Quite the opposite; the session will be retired, but not the content.
I was setting up an Azure Synapse Serverless Pools demo environment based on the excellent data lakehouse architecture originally created by Andy Cutler. I inadvertently created a shared access signature (SAS) token to reference my data lake storage that expired the very next day. When I went to update it, everything went pear-shaped. I'll show you what happened to me so you can avoid it:

Creating the original credentials

I started with creating a database scoped credential that referred to my original SAS token.

CREATE DATABASE SCOPED CREDENTIAL [SasTokenAA]
WITH
    IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '?sv=2021-06-08&ss=b&srt=co&sp=rwdlacx&se=2022-06-16T09:07:00Z&st=2022-06-16T09:00:00Z&spr=https&sig=xxxxxxxxxx';
GO

As you can see, that token (the se parameter) was set to expire on the 16th.
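Once a token like that expires, the credential's secret has to be swapped out for a fresh one. A minimal sketch of what that rotation could look like, assuming a freshly generated token (the token value below is a placeholder, not a real secret):

```sql
-- Replace the expired SAS token on the existing scoped credential.
-- The SECRET value here is a hypothetical placeholder - generate a
-- new token in the Azure portal and paste it in, including the '?'.
ALTER DATABASE SCOPED CREDENTIAL [SasTokenAA]
WITH
    IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '?sv=2021-06-08&ss=b&srt=co&sp=rwdlacx&se=2023-06-16T09:07:00Z&st=2022-06-16T09:00:00Z&spr=https&sig=yyyyyyyyyy';
GO
```

Note that the credential name must match the one referenced by your external data sources, or they will keep pointing at the stale secret.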
I’ve been back from SQLBits for a few days and things are slowly starting to settle. It’s been quite a long while since I went to a conference this large in person. A friend of mine commented on being tired but couldn’t grasp why on earth she’d be this tired from just talking to people. I responded that it is probably because of exactly that - talking to people, in person, is not something most of us have done for the past couple of years. I’ve been on stage many, many times over many years and I like to think I’m fairly accustomed to it.
The upcoming week is a hectic one for me. On Tuesday I will be speaking virtually at the Global Power BI Summit - the brainchild of Reza Rad and Leila Etaati of New Zealand. This is an online conference literally spanning the globe. It starts on the 7th and continues to the 11th, moving with the time zones as the world turns. The list of speakers is, put simply, huge, and it feels like every speaker in the Power BI world is present. I’ll be delivering my favorite session: “the Untruthful Art - Four Ways of Misrepresenting Data” at 12:00-13:00 (CET) in Room 6 on the 7th of March.
In 2019 I spoke at 12 conferences outside of Sweden. 2020 was looking up with not only a lot of conferences planned, but also training as well as consulting all over the Nordics. It was not to be. The pandemic hit hard, and just about everything I did stopped in its tracks. When Benni de Jagere told me that they hoped to run DataMinds Connect in Mechelen, Belgium, in person in October, I was elated. The thought of getting to travel again made it an easy choice to send in a completely new abstract. I’m extremely happy to say that the abstract was accepted, and I’m excited to share my new session.
A few weeks ago I created a data lake in Azure and filled it with some CSV files. Then I spun up a Synapse Analytics Workspace and queried the CSV files using Azure Synapse Analytics on-demand pools via Synapse Analytics Studio. This works great - if you haven't tried running SQL on text files in Azure Data Lake, stop reading and go check it out. Next, I created a database in the on-demand pool and added a view to it, referencing the OPENROWSET SELECT statement. That view can now be used in, say, Power BI or other tools that can connect to the on-demand pool endpoint.
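A sketch of what such a database and view might look like, assuming CSV files with a header row; the database name, view name, and storage URL are illustrative placeholders, not taken from the original setup:

```sql
-- Create a database in the serverless (on-demand) pool, then wrap
-- an OPENROWSET query over the data lake CSV files in a view.
-- Storage URL and object names are hypothetical.
CREATE DATABASE DemoLake;
GO
USE DemoLake;
GO
CREATE VIEW dbo.SalesCsv AS
SELECT *
FROM OPENROWSET(
    BULK 'https://mydatalake.dfs.core.windows.net/files/sales/*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',   -- parser 2.0 is required for HEADER_ROW
    HEADER_ROW = TRUE
) AS rows;
GO
```

With the view in place, a client such as Power BI can simply connect to the serverless SQL endpoint and query dbo.SalesCsv like any ordinary table.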
Microsoft Business Applications Summit (MBAS) turned out to be a veritable goldmine for Power BI. The announcements are out in force, and Marc Lelijveld (Twitter|Blog) has penned an excellent summary of the features. I’d like to give my two cents on two of the features I personally find the most exciting: hybrid tables and streaming datasets. Hybrid Tables Let’s start with hybrid tables - they’re what we’ve been wishing for ever since Direct Query and Composite Models came out. This will give us the ability to combine imported data with Direct Query data in a seamless fashion. I have a use case for it right now: I have an application that logs a lot of data from an integration platform.
Thanks to Kendra Little’s blog post Moving from Wordpress to an Azure Static Site with Hugo I was inspired to try the same. Since I’ve already experimented with Hugo for some time, the move to Azure Static Sites was dead simple - and I love the GitHub integration. I save my markdown file, I push to GitHub, and a few minutes later my changes are up there. Fantastic!