Hi everyone,

This is the third instalment in my series on reporting with SQL Server Reporting Services. In the last section we looked at the data warehouse. In this section we will look at loading the DW with all the data we need to effectively monitor our SSRS environment.

The ETL process consists of two steps.

Step 1 – Back up the existing ReportServer database and restore it to a new database, “ReportServer_Clone”. There are several reasons why you might want to do this; the main one is to remove any strain from your production server. Microsoft also advises against making alterations to the ReportServer database, so we are good on that front too. The final stage is to add a view and persist that view to a table. The view is very expensive to run, and because it is used for most of the load process, persisting it to a table enables the DW to load faster.
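The backup, restore and persist steps can be sketched in T-SQL. The file paths, logical file names and the persisted table/view names below are illustrative placeholders, not the exact names from the download; substitute your own.

```sql
-- Back up the live ReportServer database (COPY_ONLY so the
-- regular backup chain is not disturbed).
BACKUP DATABASE ReportServer
TO DISK = N'D:\Backups\ReportServer.bak'
WITH COPY_ONLY, INIT;

-- Restore it as ReportServer_Clone on the reporting server.
RESTORE DATABASE ReportServer_Clone
FROM DISK = N'D:\Backups\ReportServer.bak'
WITH MOVE 'ReportServer'     TO N'D:\Data\ReportServer_Clone.mdf',
     MOVE 'ReportServer_log' TO N'D:\Data\ReportServer_Clone.ldf',
     REPLACE;

-- Persist the expensive view to a table so the DW load reads a
-- materialised copy rather than re-evaluating the view each time.
IF OBJECT_ID(N'dbo.ReportServerData', N'U') IS NOT NULL
    DROP TABLE dbo.ReportServerData;

SELECT *
INTO dbo.ReportServerData
FROM dbo.vw_ReportServerData;  -- hypothetical name for the added view
```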

Step 2 – The ETL phase. The ETL has five phases.
Phase 1 – Load variables. The SSIS package is designed to be as dynamic as possible and to fit your environment. When you create the SSRSDB there is a table called “Configuration” under the “Support” schema. This holds all the connection and dynamic variables.


Phase 2 – Load the dimensions.
You will notice that fewer dimensions are loaded here than were created in the build process. This is because some are enumerator tables whose contents will not change over the life of a given version of SSRS, so there is no need to load them daily.


All other dimensions are loaded using a TSQL MERGE; to enable this process we need a staging table to make the comparison. Essentially, a TSQL MERGE performs an UPSERT (an UPDATE and an INSERT in one statement). The dimensions are all type 2 slowly changing dimensions, so when a change is registered we close off the old row and add a new one.
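As a rough sketch, the close-off-and-insert pattern for a type 2 dimension can be written as a single MERGE whose OUTPUT feeds the insert of the replacement rows. The table and column names here (Dim_Catalog, Staging.Stg_Catalog, Item_bkey and so on) are illustrative, not the exact names in the download:

```sql
INSERT INTO dbo.Dim_Catalog
    (Item_bkey, Item_Name, Is_Current, Active_From, Active_To)
SELECT Item_bkey, Item_Name, 1, CAST(GETDATE() AS DATE), NULL
FROM
(
    MERGE dbo.Dim_Catalog AS tgt
    USING Staging.Stg_Catalog AS src
        ON  tgt.Item_bkey = src.Item_bkey
        AND tgt.Is_Current = 1
    -- Brand new item: open its first row.
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (Item_bkey, Item_Name, Is_Current, Active_From, Active_To)
        VALUES (src.Item_bkey, src.Item_Name, 1, CAST(GETDATE() AS DATE), NULL)
    -- Changed item: close off the current row...
    WHEN MATCHED AND tgt.Item_Name <> src.Item_Name THEN
        UPDATE SET tgt.Is_Current = 0,
                   tgt.Active_To  = CAST(GETDATE() AS DATE)
    OUTPUT $action AS Merge_Action, src.Item_bkey, src.Item_Name
) AS changes
WHERE changes.Merge_Action = 'UPDATE';  -- ...and insert its replacement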

Each sequence container contains a truncate-staging Execute SQL task followed by a data flow task to move the data from the ReportServer_Clone database to the staging tables. A degree of transformation also happens here, mainly around expanding coded INT and BIT columns into readable values. For example, IsHidden is a BIT value (0 or 1); when filtering on this in a report it is not obvious what 0 and 1 relate to, so it is transformed to “Hidden” and “Not Hidden”.
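That BIT-to-description transformation is just a CASE expression. A minimal example against the ReportServer Catalog table (column names as in a standard ReportServer database, but verify against your version):

```sql
SELECT ItemID,
       [Name],
       [Path],
       CASE Hidden
            WHEN 1 THEN 'Hidden'
            ELSE 'Not Hidden'
       END AS Hidden_Description
FROM ReportServer_Clone.dbo.[Catalog];
```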

The dimensions being loaded here are Dim_Catalog (report details), Dim_Format (format of the report), Dim_User (who has been using the report), Dim_Role (what role the user has) and finally Dim_Dataset.


Once the data has been loaded into staging it is passed into the TSQL MERGE.

Phase 3 – Fact loading. The facts are loaded using the LOOKUP design pattern. When you set up the SSRSDB you will have a set of historic data, and on the first load you want all of it. After that there is no point reloading everything, as this is just a waste of IO. Instead, once the ETL has run once, the configuration will set the “full load” option to “Incremental”. From then on only the data loaded that day will be removed and reloaded.
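A sketch of how that switch might work in T-SQL; the configuration key and column names are assumptions for illustration, not the exact names in SSRSDB:

```sql
DECLARE @LoadType VARCHAR(20);

-- Illustrative key/column names for the Support.Configuration table.
SELECT @LoadType = Config_Value
FROM Support.Configuration
WHERE Config_Key = 'FactLoadType';

IF @LoadType = 'Incremental'
BEGIN
    -- Remove only today's rows so the day can be reloaded cleanly.
    DELETE FROM dbo.Fact_Transaction_Execution
    WHERE Execution_Date = CAST(GETDATE() AS DATE);
END
ELSE
BEGIN
    -- First run: clear everything, pull the full history,
    -- then flip the flag so subsequent runs are incremental.
    TRUNCATE TABLE dbo.Fact_Transaction_Execution;

    UPDATE Support.Configuration
    SET Config_Value = 'Incremental'
    WHERE Config_Key = 'FactLoadType';
END
```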

Three fact tables are loaded: Fact_transaction_Execution, Fact_Factless_PUR and Fact_Factless_Dataset_fields.


The first step is to remove either today’s data or all the data (depending on whether this is the first time you’re loading the SSRSDB). Once the data is removed we move the data from the ReportServer_Clone database to the SSRSDB.

(1) A query to extract and transform the data, (2) look up the current Item_skey, (3) look up the current Format_skey, (4) look up the current User_skey, (5) look up the current Action_skey, (6) look up the current Source_skey and finally load to the SSRSDB.
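In the SSIS package these are Lookup transformations, but the same surrogate-key resolution can be expressed in T-SQL as joins from staging to the current dimension rows. All table and column names here are illustrative:

```sql
INSERT INTO dbo.Fact_Transaction_Execution
    (Item_skey, Format_skey, User_skey, Action_skey, Source_skey,
     Time_Taken_ms, [Row_Count])
SELECT i.Item_skey,
       f.Format_skey,
       u.User_skey,
       a.Action_skey,
       s.Source_skey,
       stg.Time_Taken_ms,
       stg.[Row_Count]
FROM Staging.Stg_Execution AS stg
-- Type 2 dimensions: join only to the current row.
JOIN dbo.Dim_Catalog AS i ON i.Item_bkey = stg.ItemID AND i.Is_Current = 1
JOIN dbo.Dim_User    AS u ON u.User_bkey = stg.UserID AND u.Is_Current = 1
-- Enumerator-style dimensions: join on the natural value.
JOIN dbo.Dim_Format  AS f ON f.Format_Name = stg.Format
JOIN dbo.Dim_Action  AS a ON a.Action_Name = stg.ItemAction
JOIN dbo.Dim_Source  AS s ON s.Source_Name = stg.Source;
```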


The same process happens for the other two fact tables.

Phase 4 – Loading and updating the metadata. Part of the power of SSRSDB is the extended metadata. This is achieved using a simple Excel spreadsheet. When SSRSDB is created a table is also created which holds the metadata, Support.Item_Metadata. The columns in this table mirror those of the Excel spreadsheet.


The metadata allows you to store additional data about each report, which is used to automate communicating changes and usage of reports. The metadata stores the style (theme management blog coming soon), the date it was created, who requested it, what type of report it is (main or drilldown), the report owner’s email (again for automation), the inactive period (how many days a report can go without being used), what happens after that time, the final period, and what happens then.

When you first use the SSRSDB this table will be blank. Rather than you populating it manually, there is a process that reads the file and writes out the existing contents of the ReportServer; any changes after this will be updated.

The final phase is to read the ReportServer logs. A Foreach Loop container loops over each log and loads it into the SSRSDB.

In the download attached you will also find a set of jobs that you can create. Once these have been created and scheduled you have a fully automated ReportServer data warehouse. I personally have this scheduled to run every 30 minutes.

Thanks for reading.


WOW! What a week. In less than 7 days I have been accepted to speak at 3 SQL Saturday conferences! The schedule currently looks like this:

July – SQL Saturday Manchester
August – SQL Saturday Oslo
September – SQL Saturday Cambridge

SQL Saturday in the UK has become something of a regular occurrence. In 2015 alone we have four (Exeter, Manchester, Cambridge and Southampton), but it all started in Cambridge with the first UK-based SQL Saturday back in 2012. I missed that event but went to the following one in 2013. I had a blast! The only downside to the conference was that I didn’t spend enough time in Cambridge! I am hoping to change that this time and take in some of the sights. What I remember from last time were bikes! Bikes everywhere! I don’t have the best luck with bicycles – ask me when you see me – I will show you the scars….

This will be the 3rd time that Mark and the team have organised SQL Saturday Cambridge and they have certainly gone big! 2 full days of training and 6 tracks on the Saturday! That is bigger than big! That is MASSIVE! In 2013 SQL Saturday Cambridge tried something different by making the event half SQL, half SharePoint. I don’t use SharePoint, but like many I administer it from time-to-time. In 2013 I went to a SharePoint session and really enjoyed it. I will do the same in September.

Looking forward to seeing you there. Register now:

Thursday 10th September 2015

Penelope Coventry – Automating business processes with SharePoint, Office 365 and Azure

Benjamin Nevarez – SQL Server Query Tuning & Optimization

Niko Neugebauer – A Day with Columnstore Indexes

Friday 11th September 2015

Rasmus Reinholdt Nielsen – Building tomorrow’s ETL architecture with BIML and MDS today

Kennie Nybo Pontoppidan – SQL Anti-Patterns

Mark Broadbent & Regis Baccaro – Implementing Business Intelligence & SharePoint High Availability Solutions

Microsoft – Building a Data Driven Strategy


A few days ago I posted that I had been selected to speak at SQL Saturday in Oslo. You know what they say: “SQL Saturdays are like buses. You wait for one and two come along at the same time”. I am really happy to announce that I will be speaking at SQL Saturday in Manchester, UK. On the 24th and 25th of July Manchester will play host to a SQL Saturday!

Not only is this the first time there has been a SQL Saturday in Manchester, this is the first time I will have been to Manchester! I live in the Southwest of the UK in Exeter and I never have a great need to head further north than Bristol (83 miles or 133 KM – terrible I know) so I am really looking forward to this event!

**Edited 10/06/2015**

When I originally wrote this post there was not a great deal of information about the training days on offer at SQL Saturday Manchester. Well, the team have pushed an update out and the sessions are great (so great I had to come back and edit the post). Manchester has 6 training day sessions.

1. Building Big Data Solutions with Microsoft Azure HDInsight – Graeme Malcolm | Microsoft
2. Machine Learning without a PhD in statistics – Andrew Fryer | Microsoft
3. SQL Server Business Intelligence – Chris Testa-O’Neill
4. SQL Server 2014 In-memory technologies – Tony Rogerson
5. Practical R: A hands-on training day – Steph Locke
6. Data Management Lifecycle by Redgate

I am hoping to extend my visit in Manchester and see Mr Fryer’s session on Machine Learning. If I could split myself I would go and see Steph Locke’s session. Steph did a similar session in Exeter and it was a great success!

Make sure you check out the event/schedule and I will see you on July 25th in Manchester!


Hi everyone, this is the second part of this series looking at creating a monitoring solution for SQL Server Reporting Services. The second instalment looks to cover the following:

  1. The process of backing up your existing SSRS instance
  2. The structure of the SSRSDB

The process of backing up your existing SSRS instance

My personal preference for getting data from the ReportServer database to somewhere you can work with it, without causing issues on your production server, is to take a backup and restore it to another database. I have called this ReportServer_Clone (original, right! :P)

The process will be as follows:

  1. Take a backup of the ReportServer – you don’t need the ReportServer_TempDB.
  2. Restore that elsewhere
  3. Load the ReportDW with your data – Via SSIS.
  4. Run the usage reports

You might be asking: why can’t I just query the SSRS ReportServer directly? Well, you can – but at your own risk. The ReportServer is a temperamental old gal. You can cause a whole lot of issues by making just the smallest of changes outside of Report Manager.

As I have stated, the backup and restore process is just my preference; it is one of the many options available to you for getting your data out of ReportServer. You might choose transactional replication of the key tables (refer to part one for a list of key tables). Other alternatives would be a HADR option, such as AlwaysOn availability groups or mirroring with snapshots. All of these methods will work well for you. Personally, for the size of my environment, I opted for a simple backup and restore.

The process will back up the ReportServer and restore it to your server of choice; you can do this locally or push the backup to a different server. Once you have a restored version we will add a view and then persist that view to a table. The view we will add is from Bret Stateham (B). Bret has a fantastic CodePlex project that does a lot of the hard work for you. I have made very minor amendments to it. Bret, if you’re reading this: thank you for your fantastic scripts. You can find Bret’s CodePlex project here: SSRS queries.

Now that we have a working restored version of the ReportServer we will want to look at the data SSRS is generating. The easiest way to do this is to query the ExecutionLog3 view. This is the monitoring that ships out of the box from Microsoft, and it gives you a very basic view of what is happening in your environment. Bret has expanded this with ‘SSRS Queries’. For me and my organisation, the users wanted something in a style they understood; they already use SSRS, so why not use SSRS as the presentation layer? Before we get into the presentation layer, though, we need to touch on where the data is going to be stored. As a data warehouse developer, creating a DW seemed the best option.
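For example, a quick look at the last week of activity through ExecutionLog3, run against the clone so production is left alone:

```sql
-- Last 7 days of report activity from the built-in view.
SELECT TOP (100)
       ItemPath,
       UserName,
       Format,
       TimeStart,
       TimeDataRetrieval + TimeProcessing + TimeRendering AS TotalTime_ms,
       [RowCount],
       Status
FROM ReportServer_Clone.dbo.ExecutionLog3
WHERE TimeStart >= DATEADD(DAY, -7, GETDATE())
ORDER BY TimeStart DESC;
```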

The structure of the SSRSDB

There are many factors that went in to the decision to model this data as a DW. The key reason is to capture history.

ReportServer will only hold history for 60 days. By modelling this into a DW we can keep the history for as long as we feel necessary. If your ReportServer has become very large and you are experiencing performance issues, or possibly issues with storage, then you could reduce the amount of history stored to 14 days and let SSRSDB handle the storage on a larger, slower disk – or even Azure.

There are so many moving parts in SSRS, and SSRS changes all the time. An example of change might be an alteration to the TSQL of a query, report styling, users who have access, new columns added, columns removed, change of shared data source etc. There is an awful lot of information inside ReportServer that is lost when changes happen. Does this scenario sound familiar? A user calls you and claims they had access to a report last week but can’t open it now. How do you prove that they did have access? Simple answer: track their access daily.

In part one of this series we looked at the main tables in the ReportServer. The ReportServer hinges on a central table – execution_log. When modelling this into a DW I was working towards a standard Kimball star schema. If you have read the Kimball Group’s books then you might be familiar with their four-step process for defining a DW.

  1. Select the business process – We want to monitor the execution of SSRS reports.
  2. Declare the grain – We will monitor at the per-execution grain.
  3. Identify the dimensions – Calendar, User, Item etc.
  4. Identify the facts – Report executions.

Now we have the four steps defined we can model them into our star schema.


We have a few different types of dimensions and facts in this project. We have our standard slowly changing dimensions and also a collection of role-playing dimensions. Role-playing dimensions are used to simplify the model where we have many different dates associated with an execution/item – when was the item executed, when was it created, when was it last updated etc. We could have physically created multiple versions of the calendar dimension; however, that would have involved maintaining multiple versions – and I really hate doing more work than I need to! So we have a series of views based on calendar.

To capture change we will be implementing type 2 slowly changing dimensions – apologies to some if it feels like I am teaching you to suck eggs. For those who are unfamiliar with Kimball DW design, a slowly changing dimension tracks history over time and comes in several flavours. You can read more about them here. SSRSDB predominantly uses type 2 SCDs; a type 2 SCD captures history by adding a new row for every changed row. To ensure that we know which rows are current we also typically add a current flag (a BIT value) plus an active-from date and an active-to date. This allows you to quickly see which dimension attributes reflect the production system while maintaining history and allowing us to track user movement.
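As an illustrative sketch, a type 2 dimension with a current flag and active-from/active-to dates might be declared like this; the names and types are assumptions, not the exact SSRSDB DDL:

```sql
CREATE TABLE dbo.Dim_User
(
    User_skey   INT IDENTITY(1,1) NOT NULL PRIMARY KEY, -- surrogate key
    User_bkey   UNIQUEIDENTIFIER  NOT NULL,             -- business key from ReportServer
    User_Name   NVARCHAR(260)     NOT NULL,
    Is_Current  BIT               NOT NULL DEFAULT (1), -- current-row flag
    Active_From DATE              NOT NULL,
    Active_To   DATE              NULL                  -- NULL while the row is current
);
```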

We also have multiple types of fact table. Fact_Executions is a standard transactional fact table; each row represents a row in the execution log tables in the ReportServer. Alongside this are two additional factless fact tables to monitor changes in report structure and user permissions. Now that we have our SSRSDB model we can begin to design a solution to load it with data from the ReportServer_Clone database.

We will look more at the ETL phase of SSRSDB in part 3.

Thanks for reading!





(Thanks to Pyramid Analytics for the amazing sketch – They accurately captured my physique!)

Before I get in to making all those who were unable to attend SQLBits peanut butter and jelly, I would first like to thank all the organisers, speakers and the wonderful helpers! Without them the conference would not be possible, or at least it wouldn’t run smoothly! A special thanks to Annette Allen for running the helpers (hard!) and for allowing me to be part of such an awesome bunch of helpers – I don’t want to say that the helpers this year were the best ever – but we were pretty damn awesome!

For those of you who do not know, SQLBits is the largest SQL Server conference in Europe, with more than 1,500 attendees. For the last few years SQLBits has been a three-day conference, with day 1 being a pre-conference training day followed by two days of hour-long sessions. This year the SQLBits organisers opted to add another day to the bill in the form of a mixed pre-conference training day / session day. I really like this idea as the conference is always over in a flash (pun intended).


Day 0: aka “Travel day” – How many bags!
I am based in the UK so didn’t quite have the same journey times as other attendees (who came from as far as America & South Africa). My journey time was a little under 5 hours by taxi, train, tube then the DLR. I was a helper last year and frankly I loved it! You get a completely different experience when wearing a green shirt; I recommend it to you all! I was helping this year too, so was eager to get to the ExCeL centre early to help out. Upon arriving at the ExCeL it was clear that SQLBits was not a little fish any more. If you have not been to the ExCeL before, it is MASSIVE! By my estimate it is half a mile long (which made finding a vending machine fun!). If that isn’t enough, there were two other conferences and a movie being filmed at the same time – massive! Over the four days I racked up 30 miles on my Fitbit. Anyway, I digress.

If you are under any idea that running a conference is easy, take a look at the amount of bags that needed to be packed (~1800). Sounds like a lot of work but it is made easy by amazing company! Upon arriving I was able to catch up with old friends and make some new ones! As always there was beer and pizza for those who helped (always a good bribe for IT nerds!).

After the bag packing we headed to the Aloft (the hotel connected to the ExCeL) for a well-deserved catch-up and some drinks. My junior developer came with me this year, which was awesome: he has had to listen to me talking about Bits for the last year and this year he was there. A particular highlight of the evening for me was spending some time chatting to Itzik Ben-Gan about TSQL – if you don’t know Itzik then you should check out his books.


Day 1: Pre-conference day & the boat ride
After a 06:15 wake-up call I was off to the ExCeL for day 1 of SQLBits. The weather was brilliant, crisp and clear, which made for a very pleasant walk from my hotel to the ExCeL. Helper briefing at 07:30 was followed by an hour and a half of checking in lovely attendees. I was monitoring Adam Jorgensen and Bradley “King of costume changes” Ball’s “Optimising your company’s data lifecycle” session at 09:00. This was a great session with a real mixture of content: DBA, BI & machine learning. I particularly enjoyed Bradley’s many costume changes!

Train hard, party harder should be the tag line of SQLBits! Following a day of training was the SQLBits boat tour of the Thames. How do you get 300 SQLBits attendees from the ExCeL to the boat? Squeeze them all in the DLR! This was a lot of fun!

This was a great evening and I got to see London from a different perspective. Back to the hotel for an early night!


Day 2: Standard sessions & the pub quiz
Another 06:15 start for another 07:30 helper briefing. This was a half pre-conference training and half general sessions day. I was too slow with my selection and missed out on some amazing sessions. My junior attended David Peter Hansen’s SSIS internals pre-con, which I would have loved to attend. Hopefully I will have the chance to see it again in Exeter in April. I spent a lot of the day hanging around the community area chatting to folks about user groups and SQL Saturday conferences – needless to say I had a blast! You guys are amazing!

When I did get to a session I attended the following:

Deadlocking for mere mortals – Jonathan Kehayias. Great session. I love deadlock graphs!
Data loading performance presentation – Matan Yungman. Matan is a gent! Great session, great speaker, great podcast!
Azure SQL DB – Nigel Ellis – Azure SQL DB 12 is funky! Really great session!

This was a very DBA-flavoured day for me. As a BI developer I typically tend to stick with what I know, which in the past has been really limiting. Stepping out of my comfort zone and expanding my knowledge in a different area allows me to understand more about the engine, which in turn lets me write better queries. By only attending three sessions I missed a lot of great content, but not to worry! SQLBits puts all its content online after the event! There is almost 12 conferences’ worth of content on the website – hundreds of hours of expert training, all FREE!

I have attended the last 3 SQLBits and every year I have been incredibly envious of the teams who win all the amazing SQLBits prizes. This year there were so many awesome prizes on offer, ranging from t-shirts to Blu-ray boxsets and large Lego sets. My team this year came 3rd (I think), which meant we missed the prizes. 🙁 Nevertheless it was a great quiz!

Day 3: Standard sessions, the Cow note (as it will hence be known) & the super party!
As a helper I spent the Friday morning posted downstairs directing people to registration and the keynote. The keynote was interesting. This year it was delivered by Rohan Kumar, Director of Software Engineering at Microsoft. In true superhero style Rohan appeared from the Batmobile! Did he go on to tell us about SQL Server vNext? No! We learnt about wi-fi cows. There were some genius tweets going back and forth throughout the session.


After the Cow note, I got back to attending sessions.

Practical T-SQL Tips and Tricks to Coding – SQL Sentry. I got to see some great tools in action. Plan explorer is great!
Query Tuning Mastery: Clash of the Row Goals – Adam Machanic. Adam is a great speaker! You don’t write sp_Whoisactive without knowing your stuff!
Deep into Isolation levels – Boris Hristov. Another great session. I love Boris’ Google hangouts, so it was great to meet him in person.
Indexing Internals – Denny Cherry. At this point in the day my brain had melted! Denny always delivers a great session!

After the last session, it was a quick dash back to the hotel before the party. SQLBits is known for its amazing parties! In 2013 I was dressed like a dragon; 2014 saw me head-to-toe as a steampunk gent. 2015 was superhero themed. I LOVE super heroes! I must have decided and then undecided 50 times before settling on my costume idea. I am a massive fan of Adventure Time, so this year I was Finn! Unfortunately 95% of SQLBits attendees had never seen Adventure Time. I ate, drank, partied, rode a zip line and even battled aliens with Steve Jones. Yet again this was an amazing event.


Day 4: Saturday sessions & the trip home 🙁
Saturday is the FREE community day and is chock-full of content. Sessions started at 08:15 and ran until 18:00 – pretty full-on!

I attended the following sessions:

How to decide if your database is a good candidate for Virtualisation – Denny Cherry
Analytics Master Class: exploiting exotic patterns in data – Mark Whitehorn. This session was the best session at SQLBits and possibly the best session I have attended at any conference. Mark is a professor at Dundee University and his teaching style was spot on. The content was tricky but demonstrated in an accessible way. After a regular session there are typically a few people left asking questions. Testament to Mark, half the room were waiting to ask more. I really hope you all watch this session when it is available.
The BI Power Hour – Matt Masson. They say what happens in the Power Hour stays in the power hour – It was awesome 😛
Introducing the Reimagined Power BI Platform – Jen Underwood. Power BI is great!

And with that, SQLBits was over for another year and this year was great! I had a blast and met some really amazing people – especially those who knew me through this blog!

See you all next year!


In recent months Big Data seems to have hit Exeter. With the announcement of a £97 million investment in a new big data “super computer”, the Exeter-based MET Office is set to make the South West a “world-leader in weather and climate change” and in Big Data [1].

This increased attention on the South West seems to have prompted a surge of Big Data related training and conferences.

Unlocking Big Data – Investing in Human Capital 28/01/2015 [2]

The aim of the event is to help the Heart of the South West realise its Big Data potential by shaping the conversation and coming together to define the focus for our investment in Big Data Human Capital. It will also discuss possible funding proposals to the European Structural and Investment Fund Strategy.

There are five sessions tackling subjects such as building a Big Data team and the Big Data revolution. This is a free-to-attend conference held at the MET Office on the 28th of January, organised by the Heart of the South West (HOSW). The HOSW expect this event to be well subscribed, so register early. Register here. [2]

Date: 28/01/2015
Start time: 08:45
Location: MET Office

Data visualisation workshop – 28/01/2015 [3]

The workshop is designed to encourage collaborations and discussions between bioscientists, mathematicians, systems biologists, physicists, geographers, medics, engineers, psychologists, computing scientists and anyone else with an interest in visualising large and/or complex data.

Unfortunately this workshop is on the same day as “Unlocking Big Data”. This workshop is being organised as part of Exeter University’s Big Data Challenges Series and The Exeter Imaging Network. This workshop is designed to promote cross-disciplinary networking regarding the challenges of visualising and interpreting data.

You can find details here.

Date: 28/01/2015
Start time: 13:30
Location: Exeter University Washington Singer 219

ExIStA-YSS: Early-Career Stats Showcase – 11/02/2015 [4]

The evening will begin with a series of eight-minute talks from eight speakers whose professions span the statistical spectrum, giving a unique insight into the variety of different careers that are available, and a flavour of the current challenges and future trends within each area.

Following on from the data vis workshop, Exeter University will be hosting an evening showcasing the work of early-career statisticians and those working with statistics in the South West. The evening will feature eight-minute talks followed by snacks. Free to attend. Register via their website [4].

Date: 11/02/2015
Start time: 17:00
Location: Exeter University, Meeting Rooms 1-3 of the Forum, on the University of Exeter’s Streatham campus

SQL Saturday Exeter 2015 – 24/04/2015 & 25/04/2015 [5]
(Disclosure, I am an organiser of SQL Saturday Exeter).

Now in its third year, SQL Saturday Exeter is a community-driven, free-to-attend event covering a huge variety of topics for every type of data professional. Sessions have not been announced, but in recent years topics have included parallel data warehousing, data visualisation, and R & Hadoop for Big Data.

24th April – Friday pre-conference training day. A whole day of training on one of a number of sessions from world-class SQL Server professionals. Sessions have not been announced yet; for more details follow #SQLSatExeter on Twitter or visit the SQLSouthWest website [5].

25th April – SQL Saturday. Approximately 30 training sessions over 4 tracks covering all areas of working with SQL Server. Speakers will be a mixture of MVPs, international speakers and people making their first appearance as speakers. Admittance to the Saturday event is free; all costs are covered by donations and sponsorship.

Please register soon as attendance is limited and tickets go fast!

Register here [6]

Date: 24/04/2015 & 25/04/2015
Start time: 08:00
Location: Jurys Inn, Exeter.

With four events in four months, Exeter is storming ahead in the Big Data sphere. If you’re having a Big Data event, let me know in the comments.