SQL Server: Find fragmentation of tables and indexes


Keeping it short: since you are looking for fragmentation, you can use the procedure below.

You can create it in the same database. 

CREATE PROCEDURE usp_CheckFragmentation
    @TableName NVARCHAR(128) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    IF OBJECT_ID('tempdb..#Fragmentation') IS NOT NULL
        DROP TABLE #Fragmentation;

    CREATE TABLE #Fragmentation
    (
        DatabaseName              SYSNAME,
        SchemaName                SYSNAME,
        TableName                 SYSNAME,
        IndexName                 SYSNAME NULL, -- NULL allowed: heaps have no index name
        IndexTypeDesc             NVARCHAR(60),
        AvgFragmentationInPercent FLOAT,
        PageCount                 BIGINT
    );

    DECLARE @SQL NVARCHAR(MAX);

    SET @SQL = '
        INSERT INTO #Fragmentation
        SELECT
            DB_NAME()                        AS DatabaseName,
            s.name                           AS SchemaName,
            t.name                           AS TableName,
            i.name                           AS IndexName,
            i.type_desc                      AS IndexTypeDesc,
            ps.avg_fragmentation_in_percent  AS AvgFragmentationInPercent,
            ps.page_count                    AS PageCount
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, ''LIMITED'') ps
        INNER JOIN sys.indexes i
            ON ps.object_id = i.object_id AND ps.index_id = i.index_id
        INNER JOIN sys.tables t
            ON ps.object_id = t.object_id
        INNER JOIN sys.schemas s
            ON t.schema_id = s.schema_id
        WHERE (@TableName IS NULL OR t.name = @TableName)';

    EXEC sp_executesql @SQL, N'@TableName NVARCHAR(128)', @TableName;

    SELECT *
    FROM #Fragmentation
    ORDER BY AvgFragmentationInPercent DESC;

    DROP TABLE #Fragmentation;
END;
GO
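
A quick usage sketch (the table name here is a hypothetical placeholder):

-- Check all tables in the current database
EXEC usp_CheckFragmentation;

-- Check a single table
EXEC usp_CheckFragmentation @TableName = N'Orders';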

SQL Server Interview Questions - Set-01



If you are preparing for a SQL Server interview, it helps to run through a few mock interviews first. SQL Server is vast and employers will ask questions based on their specific requirements, but you can still prepare for some of the most commonly asked ones.

Suggestions: 

  • Avoid guessing answers. Only respond when you are confident in your answer. Incorrect guesses can frustrate the interviewer.
  • Try not to ask the interviewer to repeat the question multiple times.
  • Stay calm, even if the question is difficult or very technical. Do not question or criticize the interviewer for asking such questions.
  • Even if you are unable to answer a few questions, don’t start skipping the rest. Some interviewers assess overall knowledge and may still find value in your skillset, even if it is limited.
  • Create an accurate resume that reflects your true knowledge. Avoid exaggerating your skills and experiences.

Interview Questions:

    1. What are the different types of SQL Server backups?

    2. Explain the difference between a full backup and a differential backup.

    3. How can you improve the performance of a slow-running query?

    4. What is a deadlock, and how can you resolve it in SQL Server?

    5. What are SQL Server indexes, and what types are there?

    6. How do you monitor and optimize database performance in SQL Server?

    7. Explain the concept of isolation levels in SQL Server.

    8. What is the purpose of SQL Server Agent, and how do you use it?

    9. Describe the differences between clustered and non-clustered indexes.

    10. How would you handle database corruption?

    11. What is the SQL Server Profiler, and how is it used?

    12. How do you perform database replication in SQL Server?

    13. What are the different types of SQL Server recovery models?

    14. How can you secure a SQL Server database?

    15. Explain the difference between a stored procedure and a function in SQL Server.

    16. What is the use of the `DBCC` command in SQL Server?

    17. How do you manage database connections and sessions in SQL Server?

    18. What is a SQL Server view, and how does it differ from a table?

    19. How do you migrate a database from one SQL Server instance to another?

    20. What is the purpose of SQL Server Integration Services (SSIS)?

    21. How do you handle data archiving in SQL Server?

    22. What are SQL Server Always On Availability Groups, and how do they work?

    23. What is the purpose of SQL Server Management Studio (SSMS)?

    24. How do you implement SQL Server auditing?

    25. What are the common performance tuning techniques in SQL Server?


    These questions cover a broad range of topics including backups, indexing, performance tuning, security, replication, and more. Preparing for these questions can help demonstrate a solid understanding of SQL Server administration.

    SQL Server: Understanding Checkpoint


    The checkpoint process in SQL Server is a crucial component of the database's recovery mechanism. It plays a vital role in maintaining database consistency and durability by ensuring that data changes are properly written to disk. This process helps minimize recovery time in the event of a system failure by reducing the amount of work required during database recovery.


    What is a Checkpoint?

    A checkpoint is an event in SQL Server that writes all dirty pages (pages modified in memory but not yet written to disk) from the buffer cache to disk. It also records information about the transaction log's current state, including the location of the last checkpoint, which is crucial for recovery purposes. Checkpoints can be triggered automatically by SQL Server or manually by a database administrator.


     Types of Checkpoints

    1. Automatic Checkpoints:

       - SQL Server triggers automatic checkpoints based on the "recovery interval" configuration setting, which defines the maximum time SQL Server should spend recovering a database. The frequency of automatic checkpoints depends on the database's workload and the amount of data modification.

    2. Manual Checkpoints:

       - A manual checkpoint can be initiated by a database administrator using the CHECKPOINT command (see the example after this list). This is often done during maintenance tasks or before performing operations that could potentially impact the database's state.

    3. Indirect Checkpoints:

       - Introduced in SQL Server 2012, indirect checkpoints provide a more predictable and controlled way to manage the checkpoint process by specifying a target recovery time (also shown after this list). The database engine adjusts the checkpoint frequency to meet the specified recovery time goal.

    4. Internal Checkpoints:

       - These occur automatically during certain database operations, such as database backups or when a database is being shut down.
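
    For reference, here is a minimal sketch of both commands; the database name is a placeholder:

    -- Manual checkpoint in the current database
    CHECKPOINT;

    -- Indirect checkpoint: keep the estimated recovery time
    -- for this database under 60 seconds
    ALTER DATABASE [YourDatabaseName] SET TARGET_RECOVERY_TIME = 60 SECONDS;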


    How Checkpoints Work

    To understand the checkpoint process, it's helpful to look at the components involved:

    1. Dirty Pages:

       - These are data pages in the buffer cache that have been modified but not yet written to disk. They represent unsaved changes made by transactions.

    2. Transaction Log:

       - The transaction log records all changes made to the database, ensuring that all transactions are recoverable. The checkpoint process updates the transaction log's internal structures to mark which parts of the log are no longer needed for recovery.

    3. Buffer Cache:

       - A memory area where SQL Server stores data pages. The checkpoint process involves scanning the buffer cache to identify dirty pages and writing them to disk.


    Checkpoint Process Flow

    The checkpoint process can be illustrated with the following diagram:


    +-------------+     +--------------+     +--------------+     +-------------+
    | Start       | --> | Identify     | --> | Write Dirty  | --> | Update      |
    | Checkpoint  |     | Dirty Pages  |     | Pages to     |     | Transaction |
    |             |     |              |     | Disk         |     | Log and     |
    |             |     |              |     |              |     | Checkpoint  |
    |             |     |              |     |              |     | Complete    |
    +-------------+     +--------------+     +--------------+     +-------------+



    1. Start Checkpoint:

       - The checkpoint process begins, either automatically or manually.

    2. Identify Dirty Pages:

       - SQL Server scans the buffer cache to identify all dirty pages.

    3. Write Dirty Pages to Disk:

       - The identified dirty pages are written to the database files on disk, ensuring that the data is persisted.

    4. Update Transaction Log and Complete Checkpoint:

       - SQL Server updates the transaction log's metadata to mark the last checkpoint position. This information is crucial for the recovery process, as it defines the starting point for transaction log recovery. The checkpoint process is then complete.


    Importance of Checkpoints

    1. Data Durability:

       - Checkpoints help ensure that data changes are persisted to disk, preventing data loss in the event of a crash.

    2. Faster Recovery:

       - By writing dirty pages to disk regularly, checkpoints reduce the amount of data that needs to be processed during recovery, thus minimizing downtime.

    3. Efficient Log Management:

       - Checkpoints allow SQL Server to truncate the transaction log, freeing up space and ensuring that the log does not grow excessively.


    Configuring and Monitoring Checkpoints

    Configuring Recovery Interval:

    The recovery interval can be configured using the sp_configure system stored procedure:


    -- 'recovery interval' is an advanced option, so advanced options
    -- must be made visible before it can be changed
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    EXEC sp_configure 'recovery interval', 5; -- Sets recovery interval to 5 minutes
    RECONFIGURE;


    Monitoring Checkpoints:

    SQL Server does not expose a dedicated checkpoint DMV. Checkpoint activity can be observed through the 'Checkpoint pages/sec' performance counter in sys.dm_os_performance_counters, or traced in detail with the checkpoint_begin and checkpoint_end Extended Events:


    SELECT
        object_name,
        counter_name,
        cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Checkpoint pages/sec';



    The checkpoint process in SQL Server is a fundamental mechanism for ensuring database consistency, durability, and efficient recovery. By understanding and properly configuring checkpoints, database administrators can maintain optimal database performance and minimize data loss in the event of failures.





    SQL Server: Query Store


    The Query Store feature in SQL Server is a powerful tool for monitoring and managing database performance. It was introduced in SQL Server 2016 and has continued to be enhanced in subsequent versions, including SQL Server 2022. Query Store captures a history of queries, plans, and runtime statistics, providing deep insights into the performance of your workloads. It helps in identifying performance regressions, understanding query performance patterns, and providing a mechanism to force plan choices when necessary.


    Key Features of Query Store

    1. Query Performance Insights:

       - Query Store captures and retains the query text, query plans, and runtime statistics for all queries executed on the database. This historical data helps in understanding how queries are performing over time.

       - It stores different versions of query plans for the same query, allowing analysis of plan changes and their impact on performance.

    2. Automatic Plan Forcing:

       - If a query's performance degrades due to a change in the execution plan, Query Store can force the use of a previously good plan, thereby stabilizing performance.

       - Plan forcing can be manual or automatic, based on performance regression detection.

    3. Configuration Flexibility:

       - Administrators can configure Query Store to control the amount of data it captures and retains. This includes setting retention policies, specifying which queries to capture, and defining the frequency of data collection.

    4. Reports and Visualization:

       - SQL Server Management Studio (SSMS) provides built-in reports that visualize Query Store data, making it easier to identify problematic queries, analyze plan changes, and understand workload patterns.


    Enabling and Configuring Query Store

    To enable Query Store on a database, use the following SQL command:


    ALTER DATABASE [YourDatabaseName]
    SET QUERY_STORE = ON;
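
    To confirm that the feature is active, you can check the state the database reports through sys.database_query_store_options:

    SELECT actual_state_desc, desired_state_desc
    FROM sys.database_query_store_options;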


    You can also configure various settings for Query Store, such as the data collection mode, data retention period, and capture mode. Here's an example configuration:


    ALTER DATABASE [YourDatabaseName]
    SET QUERY_STORE (OPERATION_MODE = READ_WRITE,                        -- Enable read-write mode
                     CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30), -- Retain data for 30 days
                     DATA_FLUSH_INTERVAL_SECONDS = 900,                  -- Flush data every 15 minutes
                     MAX_STORAGE_SIZE_MB = 1024,                         -- Limit storage to 1 GB
                     QUERY_CAPTURE_MODE = AUTO);                         -- Capture queries automatically


    Analyzing Query Store Data

    Query Store stores its data in system views. Here are some useful views for analyzing query performance:

    1. sys.query_store_query: Contains information about the queries captured by Query Store.

    2. sys.query_store_plan: Contains execution plan details for each query.

    3. sys.query_store_runtime_stats: Contains runtime statistics such as execution count, duration, CPU time, and logical reads.

    4. sys.query_store_query_text: Contains the SQL text of the captured queries (used in the example below).


    Example Queries

    Finding Top 10 Longest Running Queries:

    SELECT TOP 10
        qsqt.query_text_id,
        qsqt.query_sql_text,
        MAX(qsrs.avg_duration) AS MaxAvgDuration
    FROM sys.query_store_query_text AS qsqt
    JOIN sys.query_store_query AS qsq
        ON qsqt.query_text_id = qsq.query_text_id
    JOIN sys.query_store_plan AS qsp
        ON qsq.query_id = qsp.query_id
    JOIN sys.query_store_runtime_stats AS qsrs
        ON qsp.plan_id = qsrs.plan_id
    GROUP BY
        qsqt.query_text_id, qsqt.query_sql_text
    ORDER BY
        MaxAvgDuration DESC;



    Forcing a Specific Plan for a Query:

    To force a particular plan for a query, use the sp_query_store_force_plan stored procedure:


    EXEC sp_query_store_force_plan
        @query_id = 1, -- Replace with your actual query ID
        @plan_id = 2;  -- Replace with the plan ID you want to force


    Removing a Forced Plan:

    To remove a forced plan and let SQL Server choose the optimal plan, use the following command:


    EXEC sp_query_store_unforce_plan
        @query_id = 1, -- Replace with your actual query ID
        @plan_id = 2;  -- Replace with the previously forced plan ID
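
    To see which plans are currently forced, sys.query_store_plan exposes an is_forced_plan flag:

    SELECT plan_id, query_id, is_forced_plan
    FROM sys.query_store_plan
    WHERE is_forced_plan = 1;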


    Best Practices

    1. Monitor Storage Usage:

       - Regularly monitor the storage used by Query Store to ensure it doesn't exceed the allocated space.

    2. Use Appropriate Capture Mode:

       - For OLTP systems with a large number of queries, consider using the "AUTO" capture mode to limit data collection to significant queries.

    3. Regular Cleanup:

       - Set appropriate cleanup policies to manage the retention of historical data and prevent excessive storage usage.

    4. Leverage Built-in Reports:

       - Use the built-in reports in SSMS for a visual analysis of Query Store data. This helps in quickly identifying performance issues and trends.


    Conclusion

    Query Store is a valuable feature for maintaining and optimizing the performance of SQL Server databases. It provides the necessary insights and tools to diagnose and resolve performance issues, ensuring a stable and efficient database environment.

    SQL Server: Policy-Based Management



    Policy-Based Management (PBM) in SQL Server is a feature that allows administrators to define and enforce policies for SQL Server instances to ensure consistent configuration, security, and compliance across the server environment.

    Here’s a breakdown of how PBM works and its components:

    Key Components:

    1. Policies:
    A policy defines the rules or standards you want to enforce. It is based on a condition that specifies how the policy should be applied.

    2. Conditions:

    Conditions are the criteria used to evaluate whether a policy is being met. They use a set of expressions or logic to determine compliance. For example, a condition might check if a database has encryption enabled.

    3. Facets:

    Facets are predefined groups of properties or settings that are used to define the conditions. They categorize different types of settings and properties that can be managed. For example, the "Database" facet covers properties related to databases, such as recovery models and file sizes.

    4. Policies Evaluation:

    Policies can be evaluated manually or scheduled to run automatically. When a policy is evaluated, it checks the target instances against the defined conditions to determine compliance.

    5. Target Sets:

    Policies can be applied to one or more SQL Server instances, databases, or objects. You can create target sets to group the instances or objects where a policy should be enforced.
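
    The PBM objects themselves are stored in msdb. As a quick sketch, the syspolicy catalog views list the conditions and policies already defined on an instance:

    SELECT name FROM msdb.dbo.syspolicy_conditions;
    SELECT name, is_enabled FROM msdb.dbo.syspolicy_policies;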


    Common Uses:

    1. Configuration Management:
    Ensure that SQL Server instances are configured consistently according to organizational standards, such as setting specific database options or ensuring proper file growth settings.

    2. Compliance:

    Enforce security and compliance standards, such as requiring that certain security features like encryption or auditing are enabled.

    3. Best Practices:

    Implement best practices, such as making sure that backups are being taken regularly or that databases have appropriate recovery models configured.


    How to Implement PBM?

    1. Create Conditions:

    Define the rules for your policy using the available facets.

    2. Create Policies:
    Combine conditions with appropriate targets to form policies. 

    3. Evaluate Policies:
    Run evaluations to check if the policies are being met. This can be done on-demand or scheduled.

    4. Enforce Policies:

    Some policies can be set to automatically remediate issues when they are found, while others might require manual intervention.


    Example Scenario: Ensuring All Databases Use Full Recovery Model


    1. Create a Condition:
    A condition defines the criteria that the policy will enforce. In this case, we want to check if the recovery model of a database is set to "Full."

    1. Open SQL Server Management Studio (SSMS).

    2. Connect to your SQL Server instance.

    3. Navigate to Management  > Policy Management > Conditions.

    4. Right-click Conditions and select "New Condition".

       - Name: "Database Recovery Model Full"

       - Facet: "Database"

       - Expression: Select "RecoveryModel" and set it to "Full".


    T-SQL:

    Conditions can also be scripted. Note that there is no sp_add_policy_condition procedure; under the covers, SSMS creates conditions by calling msdb.dbo.sp_syspolicy_add_condition, which takes the facet name and an XML-based expression. The most reliable way to get working T-SQL is to define the condition in the SSMS dialog and use the dialog's Script button to capture the generated statement. In the scripted expression, the RecoveryModel property value 1 corresponds to the Full Recovery Model.


    2. Create a Policy:

    Once the condition is created, you need to create a policy that uses this condition.

    1. Navigate to Management > Policy Management > Policies.

    2. Right-click Policies and select "New Policy".


       - Name: "Ensure Full Recovery Model"

       - Condition : Select the condition you created earlier ("Database Recovery Model Full").

       - Evaluation Mode: Choose how often the policy should be evaluated. Options include "On Change - Prevent", "On Change - Log", "On Schedule", etc. For this example, select "On Schedule" and set a schedule if desired.

       - Target: Set the target to apply this policy to the relevant databases. You can choose to apply it to all databases or specify particular ones.


    T-SQL:

    Similarly, there is no sp_add_policy procedure. SSMS creates policies through the msdb procedures sp_syspolicy_add_object_set, sp_syspolicy_add_target_set, and sp_syspolicy_add_policy. Here, too, the Script button in the New Policy dialog produces the exact T-SQL for the policy you have configured.


    3. Apply and Evaluate the Policy:

    1. After creating the policy, you need to evaluate it to see if any databases do not comply with the policy.

       - Go to Management > Policy Management > Policies.

       - Right-click the policy you created ("Ensure Full Recovery Model") and select Evaluate.


    SSMS will run the evaluation and show you which databases are compliant and which are not.


    2. If you have configured automatic remediation, the policy will attempt to correct any issues it finds. Otherwise, you may need to manually adjust the databases that do not comply.

    For manual adjustment, you can use the following T-SQL script to change the recovery model of a database:


    USE [YourDatabaseName]
    GO

    ALTER DATABASE [YourDatabaseName] SET RECOVERY FULL;
    GO
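
    As a quick cross-check outside PBM, sys.databases shows each database's recovery model, so you can list the non-compliant ones directly:

    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE recovery_model_desc <> 'FULL';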

    Conclusion

    In this example, you created a condition to check if the recovery model of databases is set to Full, created a policy to enforce this condition, and then evaluated the policy. This ensures that all databases on your SQL Server instance are configured with the Full Recovery Model, helping you maintain consistency and adhere to best practices.

    Have a good day!


    SQL Server: Resource Governor



    The Resource Governor in SQL Server is a feature designed to manage and control SQL Server workload resources. By providing the ability to allocate CPU and memory resources to different workloads, it helps maintain performance consistency and ensures fair resource distribution among users and applications.


    Key Functions:

    1. Workload Classification: 

    Resource Governor allows you to classify incoming requests into different workload groups based on various criteria such as user sessions or application names. This classification is defined through classification functions and helps in managing resource usage more effectively.

       

    2. Resource Pools: 

    It allocates resources to workload groups using resource pools. Each resource pool can be configured with specific limits on CPU and memory usage. This ensures that high-priority workloads receive the resources they need while preventing any single workload from overwhelming the system.


    3. Resource Limits: 

    Administrators can set maximum and minimum limits for CPU and memory usage within each resource pool. This prevents resource contention and ensures that critical processes receive adequate resources.


    4. Dynamic Management Views: 

    SQL Server provides DMVs to monitor and analyze the performance of workloads and resource pools, allowing for real-time adjustments and fine-tuning of resource allocation.


    Benefits:

    - Improved Performance: By controlling how resources are allocated, it helps maintain stable performance across various workloads.

    - Fair Resource Distribution: Ensures that no single workload monopolizes server resources, leading to more predictable and balanced performance.

    - Efficient Resource Utilization: Allows administrators to make optimal use of available resources based on workload requirements.


    How to Implement Resource Governor?

    Implementing Resource Governor in SQL Server involves several steps, including configuring resource pools, workload groups, and classification functions. Here’s a step-by-step guide to help you set it up:


    1. Enable Resource Governor

    Resource Governor is disabled by default in SQL Server. You can enable it using the following T-SQL command:

    ALTER RESOURCE GOVERNOR RECONFIGURE;
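
    To turn it back off later:

    ALTER RESOURCE GOVERNOR DISABLE;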


    2. Create Resource Pools: 

    Resource pools define how resources are allocated among different workloads. You can create custom resource pools with specific CPU and memory limits.

    Example: Create a Resource Pool

    CREATE RESOURCE POOL [CustomPool]
        WITH (MAX_CPU_PERCENT = 50, MAX_MEMORY_PERCENT = 50);


    3. Create Workload Groups:

    Workload groups use resource pools to manage how resources are allocated to different types of workloads. You can create workload groups and associate them with resource pools.

    Example: Create a Workload Group

    CREATE WORKLOAD GROUP [CustomGroup]
    USING [CustomPool];


    4. Create a Classification Function

    A classification function determines how incoming requests are assigned to workload groups. This function uses criteria such as login name, application name, or user session.

    Example: Create a Classification Function

    -- The classifier function must be created in the master database
    -- and must be schema-bound
    USE master;
    GO

    CREATE FUNCTION dbo.ClassifyWorkload()
    RETURNS SYSNAME
    WITH SCHEMABINDING
    AS
    BEGIN
        DECLARE @GroupName SYSNAME;

        -- Example classification logic: route one application login to the
        -- custom group; everything else falls through to the default group
        IF ORIGINAL_LOGIN() = 'appuser1'
            SET @GroupName = 'CustomGroup';
        ELSE
            SET @GroupName = 'default';

        RETURN @GroupName;
    END;


    5. Associate the Classification Function with Resource Governor:

    Once you have created the classification function, you need to associate it with Resource Governor.

    Example: Set the Classification Function

    ALTER RESOURCE GOVERNOR
    WITH (CLASSIFIER_FUNCTION = dbo.ClassifyWorkload);


    6. Apply and Test Configuration

    After configuring Resource Governor, apply the changes using the following command:

    ALTER RESOURCE GOVERNOR RECONFIGURE;

    You should test your configuration to ensure that resources are being allocated as expected. You can use Dynamic Management Views (DMVs) to monitor resource usage:

    Example: Check Resource Pool Usage

    SELECT *
    FROM sys.dm_resource_governor_resource_pools;


    Example: Check Workload Group Usage

    SELECT *
    FROM sys.dm_resource_governor_workload_groups;
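
    To verify that sessions are being classified as intended, join sys.dm_exec_sessions (which records each session's group_id) to the workload group DMV:

    SELECT s.session_id, s.login_name, wg.name AS workload_group
    FROM sys.dm_exec_sessions AS s
    JOIN sys.dm_resource_governor_workload_groups AS wg
        ON s.group_id = wg.group_id;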


    7. Adjust Configuration as Needed

    Based on your observations and performance requirements, you may need to adjust resource pool settings, workload group configurations, or the classification function. Use `ALTER` commands to make changes and `ALTER RESOURCE GOVERNOR RECONFIGURE` to apply them.


    Below are a few facts from experience:

    1. Enable "Resource Governor" only if necessary.

    2. Create "Resource Pools" with appropriate resource limits. Too low or too high both are not advisable. 

    3. Create "Workload Groups" and link them to resource pools.

    4. Create and implement a classification function to direct workloads to the appropriate groups.


    With these pieces in place, you can effectively manage and optimize resource allocation in SQL Server using Resource Governor.


    Have a good day!

    Welcome


     

    Welcome to the world of DBAs. Over my years of experience as a DBA, I have gained a great deal of knowledge about sound database administration. The IT world has many well-known "teachers", while the real DBAs who do the hard work and have the best ideas tend to keep silent. I want to break that silence: this blog will not only answer practical DBA questions, it will also point out the mistakes that so-called experts keep spreading.

    With all these thoughts, have a good day!