Tuesday, February 23, 2016

How to minimize the performance impact of locking in a database

Database locks are a solution to the problems of concurrency, consistency and integrity, but they come with a drawback of their own: a performance hit. In this article, we will talk about steps we can take during database design to reduce the performance impact of locking.
Introduce optimum indexes - Introducing the right indexes reduces the locking impact of those update and delete statements that have filter criteria associated with them. When an update or delete query with filter criteria is fired without a supporting index, the server initially has to lock the complete table while it scans for the records getting impacted. Once the records are identified, Sybase can restrict the lock to the selected rows, depending on the locking level set on the table. An index minimizes the time required to identify the rows to be updated or deleted, which in turn reduces the time table- and page-level locks are held.
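As an illustration (the trades table and its columns here are hypothetical), an update like the one below benefits from an index on its filter column, since the server can go straight to the affected rows instead of scanning the table:

-- without an index on trade_date, identifying the affected rows
-- means scanning (and locking) far more of the table than necessary
update trades
set status = 'SETTLED'
where trade_date = '20160222'

-- an index on the filter column lets the server locate the rows
-- directly, shortening the time locks are held
create nonclustered index idx_trades_trade_date
on trades (trade_date)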
Choose the optimum locking level on the server - Sybase ASE provides an option to set the default locking scheme applied to newly created tables. It can be set to the "allpages", "datapages" or "datarows" level. At the "allpages" lock level, both the data pages and the index pages touched by a DML operation get locked. At the "datapages" level, only the data pages containing the impacted rows get locked. At the "datarows" level, only the rows impacted by the DML operation get locked.
 
The default locking scheme can be checked by running the following command -

sp_configure "lock scheme"

The default locking scheme can be changed as follows -

sp_configure "lock scheme", 0, {allpages | datapages | datarows}
Choose the optimum lock level on tables - It might not always be possible to set the least restrictive locking scheme at the database server level. For these cases, Sybase ASE provides an option to set the locking level per table. It can be set during table creation as part of the create statement, or changed later via an alter statement.
Option to set the locking level in the create table statement -
create table table_name (column_name_list) [lock {datarows | datapages | allpages}]

Option to set the locking level in the alter table statement -
alter table table_name lock {allpages | datapages | datarows}
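As a concrete sketch (the orders and trades table names are hypothetical), the following creates a new table with row-level locking and switches an existing table to the same scheme:

create table orders (order_id int, status varchar(20)) lock datarows
alter table trades lock datarows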
Keep the transaction duration to a minimum - Locks are taken as part of a transaction and released at the end of the transaction. Keeping transactions short helps minimize the lock duration and its impact on performance, as the sketch below illustrates.
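A minimal sketch, again using the hypothetical trades table: do the expensive work (lookups, computation) before opening the transaction, so that locks are held only for the duration of the DML itself.

-- preparation work runs outside the transaction, so no locks are held yet
declare @new_status varchar(20)
select @new_status = 'SETTLED'

-- the transaction covers only the DML, keeping the lock window short
begin transaction
update trades set status = @new_status where trade_id = 1001
commit transaction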
Partition the table - In a non-partitioned table with no clustered index, all inserts happen on the last page of the table. Under concurrent inserts this creates a performance hit, as every transaction locks the last page and makes the other transactions wait until it completes. In a partitioned table there are multiple last pages, so the impact of locking is reduced.
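For illustration, a hedged sketch using the ASE 15.x round-robin partitioning syntax on the same hypothetical trades table (verify the exact clause against your ASE version, as older releases used a different form):

alter table trades partition by roundrobin 4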
Create a clustered index - A clustered index provides multiple insertion points rather than a single last page, which helps reduce the impact of locks during concurrent inserts. But it also introduces the overhead of keeping rows in order, which might again have an impact on performance.
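A minimal sketch with hypothetical names; note that the clustering key should not be monotonically increasing, otherwise concurrent inserts will still converge on the same last page:

create clustered index idx_trades_account on trades (account_id)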
I hope this article has helped you understand how we can minimize the impact of locking in our applications. Let me know about your experience as well - whether this article helped you in some way, or whether you have found other ways to minimize the performance impact of locking.
