
Jun 12, 2021

Cloud Witness in Azure

Cloud Witness is a type of Failover Cluster quorum witness that uses Microsoft Azure to provide a vote on cluster quorum.

There are significant benefits with this approach:

  • No need for a third separate DC to host the cluster witness.
  • Uses standard, readily available Azure Blob Storage (no extra maintenance overhead of virtual machines hosted in a public cloud).
  • The same Azure Storage Account can be used for multiple clusters (one blob file per cluster; the cluster's unique ID is used as the blob file name).
  • Low ongoing cost to the Storage Account (very little data is written per blob file, and the blob file is updated only once when the cluster nodes' state changes).
  • Built-in Cloud Witness resource type.

Problem with traditional cluster setup

Let's consider a typical cluster setup, shown in the diagram below. Here we have a file share witness in the primary DC, and unfortunately there is a power outage there. The setup needs a minimum of 3 votes to initiate a failover, but the nodes that are up in the DR DC have only 2 votes, which is not sufficient to form a quorum and fail over.
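As a quick way to see how the votes are distributed, a small PowerShell sketch like the following (run on any cluster node; the FailoverClusters module that ships with the feature is assumed) lists each node's current vote:

# List each node with its configured and dynamic quorum vote
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight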


Solution 


Cloud Witness is a new type of Failover Cluster quorum witness that uses Microsoft Azure as the arbitration point. It uses Azure Blob Storage to read/write a blob file, which is then used as the arbitration point for split-brain resolution.

Set up a Cloud Witness for a cluster

  1. Create an Azure Storage Account to use as a Cloud Witness.
  2. Configure the Cloud Witness as a quorum witness for your cluster.

To create an Azure storage account
  1. Sign in to the Azure portal.
  2. On the Hub menu, select New -> Data + Storage -> Storage account.
  3. In the Create a storage account page, do the following:
    1. Enter a name for your storage account.
      Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. The storage account name must also be unique within Azure.

    2. For Account kind, select General purpose.
      You can't use a Blob storage account for a Cloud Witness.

    3. For Performance, select Standard.
      You can't use Azure Premium Storage for a Cloud Witness.

    4. For Replication, select Locally-redundant storage (LRS).
      Failover Clustering uses the blob file as the arbitration point, which requires some consistency guarantees when reading the data. Therefore you must select Locally-redundant storage for the Replication type.
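If you prefer scripting to the portal, a minimal Az PowerShell sketch along these lines should create an equivalent account (the resource group name, account name, and region are placeholders; the Az.Storage module is assumed):

# General-purpose, standard-performance, locally-redundant storage account
New-AzStorageAccount -ResourceGroupName "rg-cluster-witness" `
    -Name "mycloudwitness" -Location "eastus" `
    -SkuName Standard_LRS -Kind StorageV2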

View and copy storage access keys for your Azure Storage Account

When you create a Microsoft Azure Storage Account, it is associated with two automatically generated Access Keys: the Primary Access Key and the Secondary Access Key. For the first-time creation of a Cloud Witness, use the Primary Access Key; there is no restriction regarding which key to use.

To view and copy storage access keys

In the Azure portal, navigate to your storage account, click All settings and then click Access Keys to view, copy, and regenerate your account access keys. The Access Keys blade also includes pre-configured connection strings using your primary and secondary keys that you can copy to use in your applications (see figure 4).





When you create a Storage Account, the following URLs are generated using the format: https://<Storage Account Name>.<Storage Type>.<Endpoint>

Cloud Witness always uses Blob as the storage type, and Azure uses .core.windows.net as the endpoint. When configuring a Cloud Witness you can specify a different endpoint if your scenario requires it (for example, the Microsoft Azure datacenter in China has a different endpoint).

In the Azure portal, navigate to your storage account, click All settings and then click Properties to view and copy your endpoint URLs.
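For example, for a hypothetical storage account named mycloudwitness, the blob endpoint URL would be:

https://mycloudwitness.blob.core.windows.net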


Configure Cloud Witness as a Quorum Witness

  1. Launch Failover Cluster Manager.

  2. Right-click the cluster -> More Actions -> Configure Cluster Quorum Settings (see figure 6). This launches the Configure Cluster Quorum wizard.

  3. On the Select Quorum Configurations page, select Select the quorum witness.

  4. On the Select Quorum Witness page, select Configure a cloud witness.


  5. On the Configure Cloud Witness page, enter the following information:

    1. (Required parameter) Azure Storage Account Name.

    2. (Required parameter) Access Key corresponding to the Storage Account.
      • When creating for the first time, use the Primary Access Key (see figure 5).
      • When rotating the Primary Access Key, use the Secondary Access Key (see figure 5).

    3. (Optional parameter) If you intend to use a different Azure service endpoint (for example, the Microsoft Azure service in China), update the endpoint server name.




6. Upon successful configuration of the Cloud Witness, you can view the newly created witness resource in Failover Cluster Manager.




Configuring Cloud Witness using PowerShell

You can configure Cloud Witness with the cmdlet Set-ClusterQuorum using the following PowerShell command:

Set-ClusterQuorum -CloudWitness -AccountName <StorageAccountName> -AccessKey <StorageAccountAccessKey>

In case you need to use a different endpoint... 

 Set-ClusterQuorum -CloudWitness -AccountName <StorageAccountName> -AccessKey <StorageAccountAccessKey> -Endpoint <servername>
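As a concrete sketch (the account name and the Azure China endpoint below are illustrative; verify the correct endpoint for your environment), followed by a quick check of the resulting quorum configuration:

Set-ClusterQuorum -CloudWitness -AccountName "mycloudwitness" -AccessKey "<StorageAccountAccessKey>" -Endpoint "core.chinacloudapi.cn"

# Verify the configured quorum witness
Get-ClusterQuorum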




           Stay Tuned!!!!!

               
--By Abhishek 

Mar 13, 2021

Moving A Table Data Into New Location Within Database

 


 

Problem

I was often getting low disk space alerts for one of my data drives. Adding space to the disk was not possible because of hard drive size limitations and the OS barrier on maximum disk size. So I started looking at what I could do from the database end, and I found there was only one table that was huge in size. I decided to move that table's older data to another disk that had a good amount of free space.

Solution

Below are the major steps for moving a table's older data to a new disk/location...

Create a file group.

Add files to the file group.

Create a partition function based on the logic of which data you want to move.

Create a partition scheme.

Recreate the table on the partition scheme. Here I used the clustered index rebuild method to move the data.


Step by step implementation

In my case, [cz_air] was the table that had grown huge in the DB; it has a unique PK on the column air_id.

-- Step 1
/*
Get the max value of the clustered index column for the table. You can use any other
boundary value based on your requirement. For table cz_air the column name is air_id.
*/
select max(air_id) from [dbo].[cz_air]
--2579

 

--step 2
/*
Note down all the index info for the table...
*/
exec sp_helpindex 'cz_air'
/*
index_name     index_description                                      index_keys
PK_cz_air_New  clustered, unique, primary key located on PS_cz_air    air_id

--Note: the table has only 1 index in my case.
*/

  

--step 3
/* Create the new file group */
ALTER DATABASE [air_c9] ADD FILEGROUP [FG_R_SQLdata_MTD]

  

--Step 4
/*
Create a new data file in the file group, pointing to the target location.
*/
ALTER DATABASE [air_c9]
ADD FILE
(
NAME = 'BigTables',
FILENAME = 'R:\SQL_Data\MoveTableData\cz_air.ndf'
)
TO FILEGROUP [FG_R_SQLdata_MTD]

 

--Step 5
/*
Create the partition function based on the max value of the clustered index column.
*/
CREATE PARTITION FUNCTION PF_cz_air_air_id(INT)
AS RANGE LEFT FOR VALUES (2579)
GO

--Step 6
/*
Create the partition scheme. With RANGE LEFT, partition 1 (air_id <= 2579, the older
data) maps to [FG_R_SQLdata_MTD] on the new disk, while partition 2 (air_id > 2579)
stays on [PRIMARY].
*/
CREATE PARTITION SCHEME PS_cz_air
AS PARTITION PF_cz_air_air_id
TO ( [FG_R_SQLdata_MTD], [PRIMARY])
GO

 

--Step 7
/*
Recreate all the indexes on the new partition scheme...
In my case there is only one index, the clustered one. I am lucky. Haha ha.
Once you rebuild the clustered index, all your data moves to the new location. This
step can take a while depending on the size of the data and the machine's computing
power.
*/
CREATE UNIQUE CLUSTERED INDEX PK_cz_air_New
ON cz_air ( air_id ASC )
WITH (DROP_EXISTING = ON)
ON PS_cz_air (air_id)

 

--Step 8
--Now it's time to validate the data placement. Below is the script I used for the
--validation; it joins through sys.indexes and sys.partition_schemes so that only this
--table's own partition scheme is considered.
SELECT
    p.partition_number AS PartitionNumber,
    f.name AS PartitionFilegroup,
    p.rows AS NumberOfRows
FROM sys.partitions p
JOIN sys.indexes i
    ON p.object_id = i.object_id AND p.index_id = i.index_id
JOIN sys.partition_schemes ps
    ON i.data_space_id = ps.data_space_id
JOIN sys.destination_data_spaces dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
JOIN sys.filegroups f
    ON dds.data_space_id = f.data_space_id
WHERE p.object_id = OBJECT_ID('cz_air')

--Output of the SQL query in my case...
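Later on, if more data ages out, a follow-up sketch like the one below (the new boundary value 5000 is hypothetical) can move the next slice onto the new disk by splitting the partition function; NEXT USED marks the filegroup that will receive the newly created partition:

ALTER PARTITION SCHEME PS_cz_air NEXT USED [FG_R_SQLdata_MTD];
ALTER PARTITION FUNCTION PF_cz_air_air_id() SPLIT RANGE (5000);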




Feb 23, 2021

Ghost Records


When a record is deleted from a database, SQL Server does not immediately delete it physically from disk. Deleted records that still exist physically in storage are called ghost records.



SQL Server does not delete the records physically as a synchronous part of the DML operation, because doing so would drastically reduce the operation's performance.



Instead, SQL Server deletes the records physically from storage via a background thread called the Ghost Cleanup Thread (GCT), which physically removes the deleted/updated records.

Few facts about the Ghost Cleanup Thread

1. SQL OS invokes the Ghost Cleanup Thread at a 10-second interval.

2. The Ghost Cleanup Thread runs at the SQL instance level and sweeps the databases one by one, starting from master.

3. The Ghost Cleanup Thread scans the PFS (Page Free Space) pages of the database to find the pages that have ghost records.

4. The Ghost Cleanup Thread removes the records that are marked as deleted.

5. It skips a database if it cannot take a shared lock on the database or if the database is not open in read/write mode.
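If you want to see ghost records for yourself, a small sketch like this (the table name dbo.MyTable is a placeholder) reports ghost record counts per index level from sys.dm_db_index_physical_stats:

SELECT index_id, index_level, ghost_record_count, version_ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'DETAILED');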

--By Abhishek....







Feb 6, 2021

Let’s talk about In-Memory OLTP



In-Memory OLTP, frequently referred to by its code name "Hekaton", was introduced in SQL Server 2014 but became more popular with SQL Server 2016. Microsoft developed this feature to stay in the game on high-speed OLTP transactions and to take advantage of recent hardware trends in cost and throughput.


Many people use this feature where they want maximum performance and the fastest possible response from SQL Server.



SQL Server makes evaluating and migrating a normal disk-based table and workload very simple: the standard report "Transaction Performance Analysis Overview" can scan through tables and stored procedures based on usage.



The data stored in memory is kept separate from the buffer pool: the buffer pool pages data out to disk when there is memory pressure, but that is not the case with memory-optimized tables.

Memory-optimized tables use different structures to store data, not traditional data pages. All of the data is loaded into memory at SQL Server startup, and separate checkpoint files are created just to speed up the database startup process.
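To see how much memory your memory-optimized tables are actually consuming, a quick sketch like this against sys.dm_db_xtp_table_memory_stats can be run in the database:

SELECT OBJECT_NAME(object_id) AS TableName,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats;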






Supported data types:

  • bit
  • All integer types: tinyint, smallint, int, bigint
  • All money types: money, smallmoney
  • All floating types: float, real
  • Date/time types: datetime, smalldatetime, datetime2, date, time
  • Numeric and decimal types
  • String types: char(n), varchar(n), nchar(n), nvarchar(n), sysname, varchar(MAX), nvarchar(MAX)
  • Binary types: binary(n), varbinary(n), varbinary(MAX)
  • uniqueidentifier



Steps to implement

ALTER DATABASE AdventureWorks2016CTP3
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;

ALTER DATABASE AdventureWorks2016CTP3
ADD FILEGROUP AdventureWorks2016CTP3_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE AdventureWorks2016CTP3
ADD FILE (name='AdventureWorks2016CTP3_mod1',
filename='E:\data\AdventureWorks2016CTP3')
TO FILEGROUP AdventureWorks2016CTP3_mod;

USE AdventureWorks2016CTP3
GO
CREATE TABLE dbo.orderdetails
(
    OrderID    INTEGER  NOT NULL IDENTITY PRIMARY KEY NONCLUSTERED,
    ItemNumber INTEGER  NOT NULL,
    OrderDate  DATETIME NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
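To get the most out of a memory-optimized table, queries against it are typically wrapped in natively compiled stored procedures. Here is a minimal sketch against the table above (the procedure name usp_InsertOrder is hypothetical):

CREATE PROCEDURE dbo.usp_InsertOrder (@ItemNumber INT, @OrderDate DATETIME)
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- OrderID is an IDENTITY column, so only the remaining columns are inserted
    INSERT INTO dbo.orderdetails (ItemNumber, OrderDate)
    VALUES (@ItemNumber, @OrderDate);
END;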



Performance benchmark for disk-based table and memory-optimized table.



SQLQueryStress is a simple, lightweight performance testing tool designed to load test individual queries; we used this kind of tooling to generate load on the SQL Server (the runs below use ostress from the RML Utilities). We did random inserts from 100 connections with 500 iterations each against both the disk-based table and the memory-optimized table.


As a performance benchmark, the memory-optimized table load was very fast: it took just 1:15 minutes to complete, whereas the same operation on the disk-based table took around 9:30 minutes.

 




Disk based load


[RML] C:\Program Files\Microsoft Corporation\RMLUtils> ostress.exe -n100 -r500 -S. -E -dAdventureWorks2016 -q -i"C:\Temp\Disk-based Inserts.sql"






Memory optimized table load



[RML] C:\Program Files\Microsoft Corporation\RMLUtils> ostress.exe -n100 -r500 -S. -E -dAdventureWorks2016 -q -i"C:\temp\Memory-optimized Inserts.sql"








For new application development and design this feature is great and very sound from a performance perspective. On existing tables there is some overhead, plus challenges around data types and referential integrity; if those can be handled, it is a great feature in SQL Server.

 

--By Santosh Sethi 

Nov 7, 2020

Mirroring Witness Disappearing from the Mirroring Setup...

Recently we faced an issue where the DB mirroring witness kept disappearing from the mirroring setup.

This happens when the Principal goes down or the SQL services restart. In this case we found that DB failover was also not working. Once the Principal server came back online, the DB resumed serving the Principal role and got in sync with the Mirror DB without any issue.

But when we went to the DB mirroring properties, we found that the Witness Server info had been washed out of the mirroring setup.

Let me share a few screenshots that we took...

The SQL instances where we configured DB mirroring on the i3 DB with a witness, and the mirroring setup configuration…


Let's bring down the Principal server for some time and see the witness status once the Principal comes back online…

 

We can see that the DB failover did not happen and the witness disappeared from the mirroring setup.

 


 

Root cause of the issue which we found...

All the members (Principal, Mirror, and Witness) were running under different service accounts, and the required access for those accounts was not configured at the time of setting up the DB mirroring.
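The usual fix in this situation is to make sure every instance can authenticate the others: on each partner/witness instance, create a login for the other instances' service accounts and grant it CONNECT on the local mirroring endpoint. A minimal sketch (the account DOMAIN\OtherSqlSvcAcct and the endpoint name Mirroring are placeholders):

-- Run on each partner/witness instance, once per remote service account
CREATE LOGIN [DOMAIN\OtherSqlSvcAcct] FROM WINDOWS;
GRANT CONNECT ON ENDPOINT::Mirroring TO [DOMAIN\OtherSqlSvcAcct];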