Saturday, September 3, 2011

SQL Server general best practices

1. Isolate OS, SQL binaries and Data/Log files.

If possible, do not install SQL Server or any other application on the C drive, and be sure to isolate the OS and the SQL Server data/log files by placing them on different physical drives. Never place data and transaction log files on the same physical disk. Here is an example configuration that helps avoid contention issues:

  • C: Operating System
  • D: SQL Server (or any other application)
  • E: SQL Server Log Files
  • F: SQL Server Data Files
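
With a layout like the one above, you can place the files explicitly at creation time. A minimal sketch (the database name, paths, and sizes are examples, not prescriptions):

```sql
-- Sketch: data and log files on separate physical drives,
-- following the example drive layout above (names are hypothetical).
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_Data, FILENAME = 'F:\SQLData\SalesDB.mdf',
     SIZE = 1024MB, FILEGROWTH = 256MB)
LOG ON
    (NAME = SalesDB_Log, FILENAME = 'E:\SQLLogs\SalesDB.ldf',
     SIZE = 512MB, FILEGROWTH = 128MB);
```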

2. Use the principle of least privilege and reduce the surface area of attack

Two general (but important) rules of security are:

  • Reduce the surface area of attack - This means installing only the REQUIRED components/features and disabling the ones that are not needed immediately. This will improve security by reducing the surface area of attack.
  • Use the principle of least privilege – This means granting just enough permissions to users and service accounts to do the job. Granting unnecessary elevated rights will increase the risk of both intentional and accidental damage to the system resources.
Also, ensure the physical security of each SQL Server instance by preventing unauthorized users from physically accessing the servers.
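
As one concrete way to reduce the surface area, features that are commonly not needed can be switched off with sp_configure. This is only a sketch; review your own application's requirements before disabling anything:

```sql
-- Sketch: disable optional features that are frequently not required.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Disable xp_cmdshell unless something explicitly requires it
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;

-- Disable OLE Automation procedures if unused
EXEC sp_configure 'Ole Automation Procedures', 0;
RECONFIGURE;
```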

3. Use a dedicated domain account for SQL Server services

For service accounts, use a dedicated user or domain account with just enough privileges to do the job (principle of least privilege). By dedicated, I mean the account should not be used by any other service on the same server. Also, use a separate account for each service, so that if one service account is compromised, only the service associated with that account is affected.

4. Leverage Instant File Initialization 

When the SQL Server service account is granted the "Perform Volume Maintenance Tasks" right, SQL Server will perform file allocation "instantly" by skipping zero initialization, the process of filling the contents of a file with zeros. Whenever a file allocation request is made, SQL Server will simply claim the disk space and overwrite the existing disk content as needed, instead of first filling it with zeros. This saves a lot of time during database creation, restores, auto-growth events, etc.
This privilege is not granted by default because the newly claimed disk space might still contain sensitive information from previously deleted files, which could be read using the (undocumented) DBCC PAGE command.

Note 1: Only data files can leverage this feature; log files cannot be initialized instantly for architectural reasons (the log must be zeroed so that crash recovery can tell valid log records apart from stale disk content).

Note2: Instant file initialization is available only on Microsoft Windows XP, Windows Server 2003, or later systems.
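
One way to verify whether instant file initialization is actually in effect is to make file-zeroing operations visible in the error log via trace flags. A sketch (the test database name is hypothetical):

```sql
-- Sketch: trace flag 3004 logs file zeroing activity; 3605 routes
-- those messages to the SQL Server error log.
DBCC TRACEON (3004, 3605, -1);

CREATE DATABASE IFI_Test;  -- hypothetical throwaway database

-- Inspect the error log: if zeroing messages appear for the .mdf,
-- the service account lacks "Perform Volume Maintenance Tasks".
-- Zeroing messages for the .ldf are expected either way (see Note 1).
EXEC sys.xp_readerrorlog;

DROP DATABASE IFI_Test;
DBCC TRACEOFF (3004, 3605, -1);
```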

5. Use SQL Server Configuration Manager (SSCM) to change service accounts

If you ever need to change the service account for the SQL Server service or the SQL Server Agent service, use SQL Server Configuration Manager. Although you can accomplish the same thing through Services (Administrative Tools -> Services), the recommended method is SQL Server Configuration Manager, because it automatically grants the new account all the required permissions and rights. If you use the Services snap-in instead, you may have to grant some additional permissions and rights to the new account manually.

6. Divide tempdb into as many files as your processors

Split tempdb into multiple data files of equal size; these files need not be on different disks/spindles (unless there are I/O bottlenecks). The number of data files should equal the number of processors (physical or virtual) on the machine. Since SQL Server allows only one active worker per scheduler at any given time, the allocations made by active workers hitting tempdb concurrently are spread across the separate files, decreasing contention on the allocation pages.

However, if you have memory-intensive workloads, writing out spilled data can be slowed by thread synchronization across that many files. In that case, the number of data files could be ½ or ¼ of the number of CPUs.
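
A sketch of adding equally sized tempdb data files, assuming a 4-CPU machine where tempdb starts with its default single data file (the paths and sizes below are examples):

```sql
-- Sketch: one data file per CPU, all the same size and growth increment.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'F:\SQLData\tempdb2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'F:\SQLData\tempdb3.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'F:\SQLData\tempdb4.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
```

Keeping the files the same size matters: SQL Server's proportional fill favors the file with the most free space, so unevenly sized files defeat the purpose.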

7. Configure default database properties using model database

Use model database to set common properties (recovery model, initial file size, auto-growth etc) across all the databases on your SQL Server instance. For example, on a test server you might want to set the recovery model to simple so that all the new databases created on this instance inherit this property.
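
For instance, the test-server scenario above can be sketched as follows (the sizes are examples; modeldev and modellog are model's standard logical file names):

```sql
-- Sketch: every database created after this inherits these settings.
ALTER DATABASE model SET RECOVERY SIMPLE;

ALTER DATABASE model
MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 128MB);

ALTER DATABASE model
MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 64MB);
```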

8. Use Windows authentication, if possible

Unless the SQL Server instance is being used by applications that can't work with Windows authentication, don't go for Mixed Mode authentication. If you do, make sure you choose a strong sa password and enforce strong password policies for all SQL Server logins. In the case of Windows authentication, setup creates a random password for the 'sa' account; change or disable the account after the installation is complete. Windows authentication is also more flexible, as your domain policies for password changes and lockouts apply automatically to SQL Server.
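
You can check the current authentication mode and lock down sa from T-SQL; a minimal sketch:

```sql
-- 1 = Windows authentication only, 0 = Mixed Mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;

-- If Mixed Mode is unavoidable, consider disabling the sa login
-- once a suitably privileged replacement login exists:
ALTER LOGIN sa DISABLE;
```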

9. Use simple recovery model whenever appropriate

Recovery model needs to be the business owner's decision. Don't be reluctant to use the simple recovery model for your databases whenever suitable. It is totally fine to have a production database in the simple recovery model as long as the users/business understand the implications: point-in-time recovery is not possible, and you can only restore to the most recent full or differential backup. The benefit of the simple recovery model (when appropriate) is that you don't have to manage transaction logs, as the inactive portion of the log is cleared after a checkpoint. If the recovery model is set to full, be sure to schedule log backups, the frequency of which should be decided by the business owners.
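
Both options can be sketched in T-SQL (the database names and backup path are hypothetical):

```sql
-- Sketch: simple recovery where the business accepts the implications.
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;

-- Sketch: full recovery paired with scheduled log backups.
ALTER DATABASE OrdersDB SET RECOVERY FULL;
BACKUP LOG OrdersDB TO DISK = 'E:\Backups\OrdersDB_log.trn';
```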

10. Allocate data/log files to a reasonable initial size and avoid using percentage for growth

Be sure that the database data files and log files are large enough that they will not auto-grow during normal daily operations (causing users/processes to wait, as file growth is a very slow process). Don't use a percentage for auto-growth; instead, monitor and grow your files manually, keeping auto-growth only as a safety net for unexpected growth. Also, grow your files (especially log files) by a substantial amount each time; otherwise, you will end up with too many Virtual Log Files (VLFs), causing fragmentation.
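
A sketch of pre-sizing files and using fixed-MB growth increments instead of percentages (names and sizes are examples):

```sql
-- Sketch: generous initial sizes, fixed-MB auto-growth as a safety net.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, SIZE = 10240MB, FILEGROWTH = 1024MB);

ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Log, SIZE = 2048MB, FILEGROWTH = 512MB);
```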

11. T-SQL best practices
i. Do NOT use sp_ as a naming prefix for user stored procedures, as this causes additional searches and added I/O. SQL Server will scan the procedure cache of the master database, no matter which database the procedure was executed from, and will then acquire an exclusive COMPILE lock to perform a second search in the local database. If a user stored procedure has the same name as an sp_ stored procedure in master, that user procedure will NEVER be used.
ii. Use SET NOCOUNT ON when creating stored procedures. SQL Server sends a "rows affected" message to the client after each T-SQL statement is executed; this option turns those messages off.
iii. Avoid using arithmetic operators/functions on columns in the WHERE clause. This prevents the optimizer from choosing an index on the column.
iv. Fully qualify database objects - this reduces name-resolution overhead and can avoid execution plan recompiles.
v. Operate on small result sets; don't use "SELECT *", and include a highly selective filter in the WHERE clause whenever possible.
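
The points above can be illustrated in one small procedure (the table, column, and procedure names are hypothetical):

```sql
-- Sketch combining the T-SQL practices above.
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer   -- not sp_..., schema-qualified
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;                           -- suppress "rows affected" messages

    SELECT OrderID, OrderDate, TotalDue       -- explicit columns, not SELECT *
    FROM dbo.Orders                           -- fully qualified object name
    WHERE CustomerID = @CustomerID;           -- selective, sargable filter:
                                              -- no function wrapped around the column
END;
```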

12. Use T-SQL for creating maintenance plans

Use T-SQL for maintenance plans as opposed to the maintenance plan wizard/designer, as T-SQL offers more granularity and flexibility. For example, using the wizard there is no way to decide when to rebuild and when to reorganize an index. With T-SQL, you can programmatically determine the level of fragmentation of each index and rebuild or reorganize it accordingly; it may even turn out that there is no fragmentation and nothing needs to be done. Also, don't forget to include the system databases in your maintenance plan. The system databases master, msdb, and model should be backed up weekly, or at least monthly.
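
A sketch of the fragmentation-driven decision described above, using the sys.dm_db_index_physical_stats DMV. The 30%/5% thresholds are common rules of thumb, not hard rules:

```sql
-- Sketch: recommend REBUILD vs REORGANIZE per index in the current database.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
           ELSE 'NONE'
       END AS RecommendedAction
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0;   -- skip heaps
```

From this result set you can build the corresponding ALTER INDEX ... REBUILD / REORGANIZE statements dynamically or run them by hand.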
