Here are some areas that degrade SQL Server performance. These are not listed in any particular order.
Improper Database and SQL Server Settings
tempdb is one of the busiest system databases. It is the place for temporary storage: temp tables, table variables, cursors, worktables for joins and sorting, row versioning, and so on. All of these operations can put pressure on IO and CPU, so it is necessary to design the storage layout of tempdb adequately.
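A common mitigation is to spread tempdb across multiple, equally sized data files. As a minimal sketch (the file name, path, and sizes here are assumptions you would adapt to your own server):

    -- Add an extra tempdb data file; name, path, and sizes are illustrative
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\tempdb\tempdev2.ndf',  -- ideally on fast, dedicated storage
              SIZE = 4GB,
              FILEGROWTH = 512MB);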
Another common problem is incorrect storage design for data files, log files, SQL Server program files, and OS files. Each should be placed on an appropriate RAID level.
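You can check where the files of every database currently live with a quick query against the sys.master_files catalog view:

    -- List every database file and its physical location
    SELECT DB_NAME(database_id) AS database_name,
           type_desc,            -- ROWS (data) or LOG
           physical_name
    FROM sys.master_files
    ORDER BY database_id;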
You should also properly set SQL Server configuration options such as memory, query timeout, and parallelism, as well as database settings such as statistics management and autogrowth of database files.
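Server-level options are set through sp_configure; the values below are only placeholders to illustrate the syntax, not recommendations:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 16384;    -- example value
    EXEC sp_configure 'max degree of parallelism', 8;     -- example value
    RECONFIGURE;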
Database Design
Bad logical database design leads to bad physical database design, which generally results in poor database performance. Both the logical and the physical design must be right before you can expect good performance from your database.
A database should be adequately normalized, and selectively de-normalized, to improve performance and reduce blocking. A normalized database reduces redundant data, which in turn reduces storage requirements. Row size should be planned carefully so that more rows fit on an 8KB data page.
Over-normalized tables are just as bad as an under-normalized database: they increase the number and complexity of the joins needed to retrieve the data.
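To make the row-size point concrete: a data page offers roughly 8,060 bytes for row data, so a 400-byte row fits about 20 rows per page while a 200-byte row fits about 40, halving the IO for the same number of rows. You can measure actual row sizes with sys.dm_db_index_physical_stats (the table name below is a hypothetical example):

    -- Average row size and page fullness for a table's indexes
    SELECT index_id,
           avg_record_size_in_bytes,
           avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'SAMPLED');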
Index and Statistics
One of the most common and biggest performance problems on SQL Server is poor or missing indexes and outdated statistics. When a query executes, the query optimizer searches for useful indexes; in the absence of a proper index, the query may cause blocking, deadlocks, and a high amount of stress on disk, memory, and CPU, because SQL Server has to process or retrieve far more data than necessary while executing it.
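SQL Server itself records indexes the optimizer wished it had; the missing-index DMVs give a rough (not authoritative) starting point for index tuning:

    -- Index suggestions recorded by the optimizer since the last restart
    SELECT d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks,
           s.avg_user_impact
    FROM sys.dm_db_missing_index_details d
    JOIN sys.dm_db_missing_index_groups g
        ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats s
        ON s.group_handle = g.index_group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;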
As for statistics: statistics describe the distribution of data in columns and indexes. The query optimizer relies on these statistics to generate an efficient plan to retrieve or update the data, so good, up-to-date statistics help it create high-quality plans. By default, SQL Server automatically creates and updates statistics. Outdated statistics give the optimizer inaccurate information about the data distribution, such as the number of rows a predicate will return, and that leads to inefficient plans.
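You can see when each statistics object was last updated, and refresh one manually; the table name is again a hypothetical example:

    -- When were the statistics on this table last updated?
    SELECT s.name,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats s
    WHERE s.object_id = OBJECT_ID('dbo.Orders');

    -- Refresh them with a full scan of the table
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;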
Excessive Fragmentation
SQL Server maintains indexes automatically: whenever an INSERT, UPDATE, or DELETE occurs on the underlying data, the change can cause page splits and fragmentation. There are two types of fragmentation. External fragmentation means the leaf level of the index is stored out of logical order. Internal fragmentation means the index pages are not being used to their full capacity. A property of an index called the fill factor decides how much free space is left on index pages; severe internal fragmentation increases the size of the index and causes additional reads to be performed to return the needed data, and these extra reads degrade query performance. You can remove fragmentation by rebuilding or reorganizing the index.
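A typical workflow is to measure fragmentation first, then reorganize lightly fragmented indexes and rebuild heavily fragmented ones (the 10%/30% thresholds are common rules of thumb, and the table and index names below are illustrative):

    -- Find fragmented indexes in the current database
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
    JOIN sys.indexes i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10;

    ALTER INDEX IX_Orders_SaleDate ON dbo.Orders REORGANIZE;   -- roughly 10-30% fragmented
    ALTER INDEX IX_Orders_SaleDate ON dbo.Orders
        REBUILD WITH (FILLFACTOR = 90);                        -- over 30% fragmented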
Blocking and Deadlocks
When a modification occurs in a transaction, SQL Server isolates it from other concurrent transactions. This prevents transactions from seeing intermediate states of the data: a transaction sees the data either in its state before the change or in its state after it. This is the default behavior of SQL Server, and it is implemented using a mechanism called locks. It is because of this isolation that blocking occurs in a database.
A deadlock occurs when two processes each hold a lock that the other needs, so neither can proceed. The query engine determines which process is the least costly to roll back and chooses it as the deadlock victim.
Thus the execution time of a query is adversely affected by the amount of blocking and the number of deadlocks it encounters.
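Live blocking is easy to spot from the DMVs; each row returned below is a request currently waiting on the session named in blocking_session_id:

    -- Requests currently blocked by another session
    SELECT session_id,
           blocking_session_id,
           wait_type,
           wait_time,       -- milliseconds spent waiting so far
           wait_resource
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;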
Poor Query Design
The effectiveness of an index depends on how you write your queries. For example, wrapping a column of the WHERE clause in a function, e.g. YEAR(SaleDate) = 2010, results in an index scan rather than an index seek. Another problem is retrieving more data from the table than is required. You must ensure that your queries are tuned so they can make use of the indexes effectively.
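The YEAR(SaleDate) example can be rewritten as a range predicate the optimizer can seek on (dbo.Orders and an index on SaleDate are assumed for illustration):

    -- Non-sargable: the function hides SaleDate from the index, forcing a scan
    SELECT OrderID, SaleDate
    FROM dbo.Orders
    WHERE YEAR(SaleDate) = 2010;

    -- Sargable: a half-open range allows a seek on an index over SaleDate
    SELECT OrderID, SaleDate
    FROM dbo.Orders
    WHERE SaleDate >= '20100101' AND SaleDate < '20110101';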
T-SQL is a set-based language, meaning it operates on sets of data rather than on individual rows, and it performs best when working with sets. A non-set-based approach uses cursors, loops, and the like instead of the more efficient joins and subqueries, and it kills performance. If you are forced to use a cursor, be sure to use an efficient cursor type such as fast-forward-only. The sketch below contrasts the two approaches.
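Here is a minimal, hypothetical contrast, applying a discount row by row versus in one statement (table and column names are assumptions):

    -- Row-by-row: a cursor touches one row per iteration
    DECLARE @id INT;
    DECLARE order_cur CURSOR FAST_FORWARD FOR
        SELECT OrderID FROM dbo.Orders WHERE SaleDate < '20100101';
    OPEN order_cur;
    FETCH NEXT FROM order_cur INTO @id;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE dbo.Orders SET Discount = 0.10 WHERE OrderID = @id;
        FETCH NEXT FROM order_cur INTO @id;
    END
    CLOSE order_cur;
    DEALLOCATE order_cur;

    -- Set-based: one statement updates all qualifying rows
    UPDATE dbo.Orders
    SET Discount = 0.10
    WHERE SaleDate < '20100101';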
Poor Execution Plans and Recompilation
When a query comes in for execution, the optimizer checks whether a plan is already available for it; if not, it generates one, and the plan is usually cached in memory for future use. Creating a query plan costs a fair amount of CPU. If a query is written so that SQL Server cannot plug different variable values into the same plan, a new plan is compiled every time the query is resubmitted with a different value. This eats up CPU cycles, so it is important to submit your queries in a form that SQL Server can cache and reuse.
One of the standard ways of ensuring a reusable execution plan, independent of the variable values used in a query, is to use a stored procedure. Executing a set of SQL queries through a stored procedure allows SQL Server to create a parameterized execution plan and reuse it across calls.
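Both of the sketches below produce a parameterized, reusable plan; the procedure, table, and parameter names are hypothetical:

    -- A stored procedure compiles once and is reused across parameter values
    CREATE PROCEDURE dbo.GetOrdersByCustomer
        @CustomerID INT
    AS
        SELECT OrderID, SaleDate
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID;
    GO

    -- For ad hoc SQL, sp_executesql gives the same plan-reuse benefit
    EXEC sp_executesql
        N'SELECT OrderID, SaleDate FROM dbo.Orders WHERE CustomerID = @cid',
        N'@cid INT',
        @cid = 42;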