Wednesday, January 30, 2008

MSDN Code Gallery

In case you missed the news, Microsoft has added a new feature to MSDN called the "Code Gallery". It can be accessed here:

Basically, it's a single location for all official Microsoft code samples, demos, tutorials, and so on. It also lets community members post their own code samples.

Check it out!

Monday, January 14, 2008

Updating the Team Foundation Server Data Warehouse

We've recently deployed Visual Studio Team System Team Foundation Server as our project management and source control system. (Actually, my team has been using it for the last year, but we've finally deployed it to the rest of the organization.)

One of the questions that has come up is how to change the update interval for the Data Warehouse. (The update interval controls how "real time" the generated reports are; it defaults to 1 hour.)

Basically, TFS is a pretty convoluted set of parts stitched together to present a very straightforward, integrated set of tools. Because of this, though, certain tasks can be a pain in the neck. Updating the Data Warehouse configuration is one of those pains. :)

Using the Data Warehouse Configuration Web Service

The TFS Data Warehouse is controlled through a web service interface. As of now, there is no unified configuration GUI available (at least not one that I know of), so you need to use the built-in "diagnostic" capabilities of the web service. To access it, log on to the local machine (you MUST log on locally!) and, using Internet Explorer (or another browser that supports SOAP), browse to http://localhost:8080/Warehouse/v1.0/warehousecontroller.asmx to load the Warehouse configuration web service. It should bring up the following page:


In order to change the update interval, select the "Change Setting" link, which will bring up the following page:


The setting name for the update interval is "RunIntervalSeconds", and the value is, of course, specified in seconds. For example, to set the Warehouse to update every 30 minutes, enter RunIntervalSeconds in the settingID field and 1800 in the newValue field, and then select Invoke. This should open a new window with the following response:


Following the above steps will let you change the update interval of the TFS Data Warehouse.

Saturday, January 12, 2008

SQL Server Resource Governor - Understanding Pool Usage

In an earlier posting, I discussed some of the basics of configuring the SQL Server 2008 Resource Governor. Since that posting, I've been spending a fair amount of time digging into the internals of the Resource Governor and understanding exactly how it works. It's been an interesting trip considering the limited amount of information available on the subject (let's hope that by RTM they improve this!), but I think I have finally uncovered enough information to make a useful post.

Resource Governor Architecture

As discussed in my earlier post, the Resource Governor is made up of Resource Groups and Resource Pools. When a user (or process) connects to SQL Server 2008, it is "classified" (either through a built-in classifier or a user-defined classifier function) and then assigned to a resource group based on that classification. One or more resource groups are then assigned to specific resource pools. The architecture looks like this:

Resource Governor Functional Components

There are two built-in resource groups: the Internal group and the Default group. The Internal group is used to execute certain system functions (such as the lazywriter and checkpoint processes), and the Default group is used when the session does not have a defined classification. (For example, if you follow the steps in my earlier post and create a classifier function to assign SQL Server Management Studio to a limited resource pool, all sessions that originate from an application other than Management Studio are assigned to the Default group.)
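As a refresher, a user-defined classifier is just a scalar function in master that returns the name of a workload group. Here is a minimal sketch; the function name and the LimitedGroup workload group are illustrative (the group would need to be created beforehand with CREATE WORKLOAD GROUP), not the exact names from my earlier post:

```sql
USE master;
GO
CREATE FUNCTION dbo.fn_ssms_classifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    -- Route Management Studio sessions to a hypothetical limited group;
    -- everything else falls through to the Default group.
    IF APP_NAME() LIKE N'%Management Studio%'
        RETURN N'LimitedGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_ssms_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```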

Resource Governor Misconceptions

One of the things that I believe will be a contentious issue when SQL Server 2008 hits the street is the way in which the Resource Governor affects connections. If I configure the governor to limit the maximum CPU time of any request in a pool, I would expect it to honor that limit and stop anything that exceeds it. In reality, an event is fired and the application developer is expected to capture the event and react accordingly. For example, say that you set REQUEST_MAX_CPU_TIME_SEC = 5 on a workload group. You would expect a query to be terminated when it reaches 5 seconds of CPU time. What actually happens is that once the query exceeds the configured value by 5 seconds (that is, 5 seconds above the configured value, so in my example a minimum of 10 seconds of CPU time), a CPU Threshold Exceeded event is fired. You can watch for these events with SQL Trace (for example, in SQL Server Profiler).
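For reference, the setting in question is declared on the workload group rather than the pool. A sketch with illustrative names:

```sql
-- Illustrative names; REQUEST_MAX_CPU_TIME_SEC is a workload group option.
CREATE RESOURCE POOL LimitedPool;
GO
CREATE WORKLOAD GROUP LimitedGroup
    WITH (REQUEST_MAX_CPU_TIME_SEC = 5)  -- fires an event; does NOT kill the request
    USING LimitedPool;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
```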

Monitoring Resource Pools

You can use Performance Monitor (the example I am showing here is from Windows Server 2008, so it may look different in your environment) to monitor resource pool statistics. Start Perfmon and add the counters from the SQLServer:Resource Pool Stats object for the pool that you want to monitor, as shown here:


Once you start the trace, and begin to see activity within the classified connection, you will see a screen that looks similar to the following:


You can use this data to determine how well the pool is working and get a good baseline for how the applications that use that pool are consuming resources.
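If you prefer T-SQL to Perfmon, the same kind of statistics should be visible through the Resource Governor DMVs. A sketch; the column names are from the CTP and may change by RTM:

```sql
-- Per-pool statistics as seen by the engine (CTP column names).
SELECT name,
       statistics_start_time,
       total_cpu_usage_ms,
       used_memory_kb
FROM sys.dm_resource_governor_resource_pools;
```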

Tuesday, January 8, 2008

SQL Server 2008 Performance Studio

In a previous entry, I discussed how to configure SQL Server 2008's new Performance Data Collector. The data collector is responsible for maintaining performance-related information in an "easy to use" format that will allow DBAs to perform more effective performance tuning.

Configuring the Data Collectors

One of the new nodes in SQL Server Management Studio is the "Data Collection" node, which can be found under the "Management" top-level node. Under the Data Collection node, there are three system data collectors: "Disk Usage", "Query Statistics", and "Server Activity". These nodes make up the SQL Server 2008 "Performance Studio" and represent the individual data collectors that work behind the scenes to collect data and import it into the Management Data Warehouse.

Each data collector works on its own schedule, configured through its node. To access the configuration, right-click the data collector node and select "Properties". This will bring up the following dialog (this example shows the configuration for the "Disk Usage" collector):


In the General tab of the configuration, notice the cache options. By default, data is not cached and is scheduled to upload to the management data warehouse every 6 hours. The real difference between the two modes is that non-cached collection both collects and uploads the data on the same schedule, while cached collection periodically collects the data and stores it on disk (in the folder that you specified when you configured the data warehouse) and then uploads it to the warehouse on a separate schedule. Generally speaking, you would use caching to reduce the load on busy systems. There is also a configuration area for "Collection Items". In this case, the items of interest are the data and log files. They have a default collection frequency of 5 seconds and use a generic T-SQL query to populate the data. The default query is shown in the input parameters configuration (shown below):

DECLARE @dbsize bigint
DECLARE @logsize bigint
DECLARE @ftsize bigint
DECLARE @reservedpages bigint
DECLARE @pages bigint
DECLARE @usedpages bigint

SELECT @dbsize = SUM(convert(bigint, case when type = 0 then size else 0 end))
      ,@logsize = SUM(convert(bigint, case when type = 1 then size else 0 end))
      ,@ftsize = SUM(convert(bigint, case when type = 4 then size else 0 end))
FROM sys.database_files

SELECT @reservedpages = SUM(a.total_pages)
      ,@usedpages = SUM(a.used_pages)
      ,@pages = SUM(CASE
                       WHEN it.internal_type IN (202,204) THEN 0
                       WHEN a.type != 1 THEN a.used_pages
                       WHEN p.index_id < 2 THEN a.data_pages
                       ELSE 0
                    END)
FROM sys.partitions p
JOIN sys.allocation_units a ON p.partition_id = a.container_id
LEFT JOIN sys.internal_tables it ON p.object_id = it.object_id

SELECT @dbsize as 'dbsize',
       @logsize as 'logsize',
       @ftsize as 'ftsize',
       @reservedpages as 'reservedpages',
       @usedpages as 'usedpages',
       @pages as 'pages'

This is the query that is used to populate the data for disk size within the management data warehouse.
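Once uploaded, the results of this query land in the management data warehouse itself, so you can query them directly. A sketch, assuming the database name I chose during configuration and the CTP's snapshots.disk_usage table (both may differ in your environment):

```sql
-- Assumes the MDW database created during configuration and the CTP's
-- snapshots.disk_usage table; names may change by RTM.
USE MgmtDataWarehouse;
SELECT TOP (10) *
FROM snapshots.disk_usage
ORDER BY collection_time DESC;
```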

The other items to configure on the General tab are the account used to collect the data and, finally, the retention settings.

Once the general settings are configured, the data collectors are ready to use. (You do not need to configure anything in the other tabs at this time.)

Using the Management Studio Reports

Once the data collector is configured, you can either wait for a scheduled cycle or right-click the collector and choose "Collect and Upload Data Now" to get data into the Data Warehouse. Once data is present in the warehouse, you can view a report: right-click the collector and choose Reports, then Historical, and then the appropriate report for that node. (For example, to view the disk usage report, right-click the "Disk Usage" collector and choose Reports, Historical, "Disk Usage Summary".) This will produce a report similar to the following:
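If you would rather script the on-demand collection than use the context menu, the data collector exposes stored procedures in msdb. A hedged sketch; I believe sp_syscollector_run_collection_set triggers an on-demand collect-and-upload for a set, but verify the procedure name and parameters against your CTP's Books Online:

```sql
-- Scripted equivalent of "Collect and Upload Data Now" for the
-- built-in Disk Usage collection set (procedure name unverified).
EXEC msdb.dbo.sp_syscollector_run_collection_set @name = N'Disk Usage';
```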


The reports have hotlinks that allow you to drill into information. For example, you can see the details of a specific database by selecting the link in the report above, which will produce a report that looks like this:


One of the more interesting reports is the Server Activity report, which can be accessed by right-clicking the "Server Activity" node and selecting Reports, Historical, and then Server Activity. It produces a report that looks like this:



The new performance studio included with SQL Server 2008 provides a wealth of information for DBAs and simplifies the task of monitoring the server.

Sunday, January 6, 2008

Performance Monitoring in SQL Server 2008

One of the problems that SQL Server DBAs face is the fact that performance problems can sometimes be very hard to track down unless you happen to be at the right place at the right time. Microsoft made this a little bit easier with the SQL Server 2005 Performance Dashboard (which I wrote about back in May 2007 in this blog entry) but that still didn't address many of the problems.

In SQL Server 2008, Microsoft has included an entire infrastructure devoted to ensuring DBAs have the information necessary to perform adequate performance tuning. A key component of this new infrastructure is the new Performance Data Collector. The Data Collector is made up of the following components: (Yes, this is straight out of Books Online; I didn't feel like re-creating the diagram.)


One of the more interesting components is the Management Data Warehouse (MDW). The MDW is a user database that serves as the repository for collected performance data and is also used to generate reports.

Data Collector Architecture

The Data Collector itself is basically a series of SQL Agent jobs, SSIS packages, and a specific API that allows client tools to connect to and report on the collected data. Broken down into components, the architecture looks like this:


If you want to read more about the Data Collector architecture and how it all fits together, check out the following SQL Server 2008 books online (local copy) article: ms-help://MS.SQLCC.v10/MS.SQLSVR.v10.en/s10de_4deptrbl/html/b2c2bd5e-62c5-4129-a252-ef9439a05487.htm

Configuring the Data Collector

To use the Data Collector in SQL Server 2008, you must first initialize the Management Data Warehouse. You do this through Management Studio: under Management, right-click the Data Collection node and select "Configure Management Data Warehouse", which brings up the following wizard page:


Click Next, select the local instance of SQL Server, and select "New" to create a new database. Give it a name (I chose MgmtDataWarehouse) and then select a file system location to be used as a cache (to ensure that SQL Server doesn't get overloaded collecting performance data, the file system is used as a temporary cache for all "external" performance collections). The wizard page should look like this:


Click Next, and then set the appropriate permissions on the Data Warehouse (basically, you want to give system administrators full access to the warehouse and any DBAs read access), as shown in this wizard page:


Once security is configured properly, select Next and then Finish, which should report success on all tasks, as shown in the summary page:


Once you do this, data collection is working on the server. You can verify it in Management Studio under Management/Data Collection/System Data Collection Sets, as shown below:


Once the data collector is configured and operational, the system will start collecting performance information and uploading the data to the management data warehouse.
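You can also confirm the collector state from T-SQL via the syscollector catalog views in msdb. A sketch; view and column names are from the CTP and may change:

```sql
-- Lists each collection set, its mode, and whether it is running.
SELECT name,
       collection_mode,   -- 0 = cached, 1 = non-cached
       is_running
FROM msdb.dbo.syscollector_collection_sets;
```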

I will continue this blog post later after the server has time to collect some meaningful performance statistics.

Friday, January 4, 2008

TFS Times

There is a new online newsletter for Microsoft Visual Studio Team Foundation Server called "TFS Times".


Check it out here:

Thursday, January 3, 2008

Visual Studio 2005 Support for SQL Server 2008

If you've been playing around with SQL Server 2008 and have not yet upgraded to Visual Studio 2008, you might have come across several issues with database connectivity (to say the least!). Microsoft has recently addressed this with a CTP of Visual Studio 2005 Support for SQL Server 2008. Download it here:

Specific issues this CTP resolves:

This CTP resolves a problem in which, when you try to open a database connection to SQL Server 2008 by using the Visual Studio 2005 design tools, you may receive the following error message:
“This server version is not supported. Only servers up to Microsoft SQL Server 2005 are supported.”
This CTP addresses that issue and enables the following Visual Studio functionality for SQL Server 2008 CTP5:

  • Server Explorer successfully connects to SQL Server 2008, and database objects such as stored procedures and table data can be viewed and edited. Note that table schemas still cannot be viewed or edited in this release.
  • SQL CLR projects that target SQL Server 2008 CTP5 can be created and deployed to the server.
  • T-SQL and SQL CLR debugging are now enabled for SQL Server 2008 CTP5.
  • Data binding features in Client and Web Projects are enabled.

This CTP does not support the following features for SQL Server 2008 Nov CTP:
  • Creating and editing table schemas in Table Designer or Database Diagrams. The table designer feature in SQL Server Management Studio 2008 can be used to edit table schemas in SQL Server 2008 CTP5.