
Locking

Original post, by xxdxxdxxd, 2007-03-23 12:28:55
Microsoft® SQL Server™ 2000 uses locking to ensure transactional integrity and database consistency. Locking prevents users from reading data being changed by other users, and prevents multiple users from changing the same data at the same time. If locking is not used, data within the database may become logically incorrect, and queries executed against that data may produce unexpected results.

Although SQL Server enforces locking automatically, you can design applications that are more efficient by understanding and customizing locking in your applications.

Accessing and Changing Relational Data
Concurrency Problems

If locking is not available and several users access a database concurrently, problems may occur if their transactions use the same data at the same time. Concurrency problems include:

Lost Updates

Lost updates occur when two or more transactions select the same row and then update the row based on the value originally selected. Each transaction is unaware of other transactions. The last update overwrites updates made by the other transactions, which results in lost data.

For example, two editors make an electronic copy of the same document. Each editor changes the copy independently and then saves the changed copy, thereby overwriting the original document. The editor who saves the changed copy last overwrites changes made by the first editor. This problem could be avoided if the second editor could not make changes until the first editor had finished.

Uncommitted Dependency (Dirty Read)

Uncommitted dependency occurs when a second transaction selects a row that is being updated by another transaction. The second transaction is reading data that has not been committed yet and may be changed by the transaction updating the row.

For example, an editor is making changes to an electronic document. During the changes, a second editor takes a copy of the document that includes all the changes made so far, and distributes the document to the intended audience. The first editor then decides the changes made so far are wrong and removes the edits and saves the document. The distributed document contains edits that no longer exist, and should be treated as if they never existed. This problem could be avoided if no one could read the changed document until the first editor determined that the changes were final.

Inconsistent Analysis (Nonrepeatable Read)

Inconsistent analysis occurs when a second transaction accesses the same row several times and reads different data each time. Inconsistent analysis is similar to uncommitted dependency in that another transaction is changing the data that a second transaction is reading. However, in inconsistent analysis, the data read by the second transaction was committed by the transaction that made the change. Also, inconsistent analysis involves multiple reads (two or more) of the same row and each time the information is changed by another transaction; thus, the term nonrepeatable read.

For example, an editor reads the same document twice, but between each reading, the writer rewrites the document. When the editor reads the document for the second time, it has changed. The original read was not repeatable. This problem could be avoided if the editor could read the document only after the writer has finished writing it.

Phantom Reads

Phantom reads occur when an insert or delete action is performed against a row that belongs to a range of rows being read by a transaction. The transaction's first read of the range of rows shows a row that no longer exists in the second or succeeding read, as a result of a deletion by a different transaction. Similarly, as the result of an insert by a different transaction, the transaction's second or succeeding read shows a row that did not exist in the original read.

For example, an editor makes changes to a document submitted by a writer, but when the changes are incorporated into the master copy of the document by the production department, they find that new unedited material has been added to the document by the author. This problem could be avoided if no one could add new material to the document until the editor and production department finish working with the original document.

Optimistic and Pessimistic Concurrency

Microsoft® SQL Server™ 2000 offers both optimistic and pessimistic concurrency control. Pessimistic concurrency control is the default for SQL Server; optimistic concurrency control is available only through cursors.

Optimistic Concurrency

Optimistic concurrency control works on the assumption that resource conflicts between multiple users are unlikely (but not impossible), and allows transactions to execute without locking any resources. Only when attempting to change data are resources checked to determine if any conflicts have occurred. If a conflict occurs, the application must read the data and attempt the change again.
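A minimal sketch of the read, check-at-update, retry cycle described above, using a Transact-SQL cursor declared with the OPTIMISTIC option. The table and column names (mytable, name) are illustrative, not from the original topic.

```sql
-- Sketch: a cursor opened with the OPTIMISTIC option holds no locks on
-- the rows it reads. SQL Server checks at update time whether the row
-- changed since the fetch, and fails the update if it did.
DECLARE edit_cursor CURSOR SCROLL OPTIMISTIC FOR
    SELECT name FROM mytable

OPEN edit_cursor
FETCH NEXT FROM edit_cursor

-- If another session modified this row after the FETCH, the positioned
-- update fails; the application must re-read the row and try again.
UPDATE mytable SET name = 'Adam'
    WHERE CURRENT OF edit_cursor

CLOSE edit_cursor
DEALLOCATE edit_cursor
```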

Pessimistic Concurrency

Pessimistic concurrency control locks resources as they are required, for the duration of a transaction. Unless deadlocks occur, a transaction is assured of successful completion.

See Also

Cursor Concurrency

Deadlocking

Isolation Levels

When locking is used as the concurrency control mechanism, it solves concurrency problems. This allows all transactions to run in complete isolation of one another, although there can be more than one transaction running at any time.

Serializability is the database state achieved by running a set of concurrent transactions equivalent to the database state that would be achieved if the set of transactions were executed serially in order.

SQL-92 Isolation Levels

Although serialization is important to transactions to ensure that the data in the database is correct at all times, many transactions do not always require full isolation. For example, several writers are working on different chapters of the same book. New chapters can be submitted to the project at any time. However, after a chapter has been edited, a writer cannot make any changes to the chapter without the editor's approval. This way, the editor can be assured of the accuracy of the book project at any point in time, despite the arrival of new unedited chapters. The editor can see both previously edited chapters and recently submitted chapters.

The level at which a transaction is prepared to accept inconsistent data is termed the isolation level. The isolation level is the degree to which one transaction must be isolated from other transactions. A lower isolation level increases concurrency, but at the expense of data correctness. Conversely, a higher isolation level ensures that data is correct, but can affect concurrency negatively. The isolation level required by an application determines the locking behavior SQL Server uses.

SQL-92 defines the following isolation levels, all of which are supported by SQL Server:

  • Read uncommitted (the lowest level where transactions are isolated only enough to ensure that physically corrupt data is not read).
  • Read committed (SQL Server default level).
  • Repeatable read.
  • Serializable (the highest level, where transactions are completely isolated from one another).

If transactions are run at an isolation level of serializable, any concurrent overlapping transactions are guaranteed to be serializable.

These isolation levels allow different types of behavior.

Isolation level Dirty read Nonrepeatable read Phantom
Read uncommitted Yes Yes Yes
Read committed No Yes Yes
Repeatable read No No Yes
Serializable No No No

Transactions must be run at an isolation level of repeatable read or higher to prevent lost updates that can occur when two transactions each retrieve the same row, and then later update the row based on the originally retrieved values. If the two transactions update rows using a single UPDATE statement and do not base the update on the previously retrieved values, lost updates cannot occur at the default isolation level of read committed.
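The isolation levels above are set per session with SET TRANSACTION ISOLATION LEVEL. A hedged sketch (table name mytable is illustrative) of using REPEATABLE READ to make two reads of the same row return the same values:

```sql
-- Sketch: raise the isolation level so shared locks are held until the
-- transaction ends, making a repeated read of the same row stable.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

BEGIN TRANSACTION
    SELECT name FROM mytable WHERE name = 'Adam'
    -- Other transactions cannot update this row until we commit.
    SELECT name FROM mytable WHERE name = 'Adam'  -- same value as above
COMMIT TRANSACTION

-- Restore the SQL Server default for this session.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
```

Note that REPEATABLE READ still permits phantoms; only SERIALIZABLE prevents all four behaviors in the table above.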

Understanding Locking in SQL Server

Microsoft® SQL Server™ 2000 has multigranular locking that allows different types of resources to be locked by a transaction. To minimize the cost of locking, SQL Server locks resources automatically at a level appropriate to the task. Locking at a smaller granularity, such as rows, increases concurrency, but has a higher overhead because more locks must be held if many rows are locked. Locking at a larger granularity, such as tables, is expensive in terms of concurrency because locking an entire table restricts access to any part of the table by other transactions, but has a lower overhead because fewer locks are being maintained.

SQL Server can lock these resources (listed in order of increasing granularity).

Resource Description
RID Row identifier. Used to lock a single row within a table.
Key Row lock within an index. Used to protect key ranges in serializable transactions.
Page 8-kilobyte (KB) data page or index page.
Extent Contiguous group of eight data pages or index pages.
Table Entire table, including all data and indexes.
DB Database.

SQL Server locks resources using different lock modes that determine how the resources can be accessed by concurrent transactions.

SQL Server uses these resource lock modes.

Lock mode Description
Shared (S) Used for operations that do not change or update data (read-only operations), such as a SELECT statement.
Update (U) Used on resources that can be updated. Prevents a common form of deadlock that occurs when multiple sessions are reading, locking, and potentially updating resources later.
Exclusive (X) Used for data-modification operations, such as INSERT, UPDATE, or DELETE. Ensures that multiple updates cannot be made to the same resource at the same time.
Intent Used to establish a lock hierarchy. The types of intent locks are: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).
Schema Used when an operation dependent on the schema of a table is executing. The types of schema locks are: schema modification (Sch-M) and schema stability (Sch-S).
Bulk Update (BU) Used when bulk-copying data into a table and the TABLOCK hint is specified.

Shared Locks

Shared (S) locks allow concurrent transactions to read (SELECT) a resource. No other transactions can modify the data while shared (S) locks exist on the resource. Shared (S) locks on a resource are released as soon as the data has been read, unless the transaction isolation level is set to repeatable read or higher, or a locking hint is used to retain the shared (S) locks for the duration of the transaction.
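The "locking hint" mentioned above can be sketched with the HOLDLOCK table hint, which retains shared (S) locks until the transaction completes (table name mytable is illustrative):

```sql
-- Sketch: HOLDLOCK (equivalent to a SERIALIZABLE hint on this table)
-- keeps the shared (S) locks until COMMIT/ROLLBACK, blocking writers,
-- instead of releasing them as soon as the rows are read.
BEGIN TRANSACTION
    SELECT name FROM mytable WITH (HOLDLOCK)
    WHERE name BETWEEN 'A' AND 'C'
    -- Writers to this range now wait until the transaction ends.
COMMIT TRANSACTION
```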

Update Locks

Update (U) locks prevent a common form of deadlock. A typical update pattern consists of a transaction reading a record, acquiring a shared (S) lock on the resource (page or row), and then modifying the row, which requires lock conversion to an exclusive (X) lock. If two transactions acquire shared-mode locks on a resource and then attempt to update data concurrently, one transaction attempts the lock conversion to an exclusive (X) lock. The shared-mode-to-exclusive lock conversion must wait because the exclusive lock for one transaction is not compatible with the shared-mode lock of the other transaction; a lock wait occurs. The second transaction attempts to acquire an exclusive (X) lock for its update. Because both transactions are converting to exclusive (X) locks, and they are each waiting for the other transaction to release its shared-mode lock, a deadlock occurs.

To avoid this potential deadlock problem, update (U) locks are used. Only one transaction can obtain an update (U) lock on a resource at a time. If a transaction modifies the resource, the update (U) lock is converted to an exclusive (X) lock. Otherwise, the lock is converted to a shared-mode lock.
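The same effect can be requested explicitly. A sketch of the read-then-update pattern from the text, written with the UPDLOCK table hint so the initial read takes an update (U) lock instead of a shared (S) lock (names are illustrative):

```sql
-- Sketch: two sessions running this concurrently serialize on the U
-- lock rather than deadlocking on shared-to-exclusive conversion.
BEGIN TRANSACTION
    SELECT name FROM mytable WITH (UPDLOCK)
    WHERE name = 'Adam'

    -- The U lock converts to an exclusive (X) lock here.
    UPDATE mytable SET name = 'Adams'
    WHERE name = 'Adam'
COMMIT TRANSACTION
```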

Exclusive Locks

Exclusive (X) locks prevent access to a resource by concurrent transactions. No other transactions can read or modify data locked with an exclusive (X) lock.

Intent Locks

An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends to place shared (S) locks on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing those pages. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine if a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine if a transaction can lock the entire table.

Intent locks include intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).

Lock mode Description
Intent shared (IS) Indicates the intention of a transaction to read some (but not all) resources lower in the hierarchy by placing S locks on those individual resources.
Intent exclusive (IX) Indicates the intention of a transaction to modify some (but not all) resources lower in the hierarchy by placing X locks on those individual resources. IX is a superset of IS.
Shared with intent exclusive (SIX) Indicates the intention of the transaction to read all of the resources lower in the hierarchy and modify some (but not all) resources lower in the hierarchy by placing IX locks on those individual resources. Concurrent IS locks at the top-level resource are allowed. For example, an SIX lock on a table places an SIX lock on the table (allowing concurrent IS locks), and IX locks on the pages being modified (and X locks on the modified rows). There can be only one SIX lock per resource at one time, preventing updates to the resource made by other transactions, although other transactions can read resources lower in the hierarchy by obtaining IS locks at the table level.
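The lock hierarchy can be observed directly. A sketch (table name mytable is illustrative): a held row-level read should show an IS lock at the table and page level alongside the S lock on the row or key.

```sql
-- Sketch: inspect the intent locks taken on behalf of a row-level read.
BEGIN TRANSACTION
    SELECT name FROM mytable WITH (HOLDLOCK)
    WHERE name = 'Adam'

    -- List locks for the current session only; expect IS on TAB/PAG
    -- plus the shared lock on the key or RID.
    EXEC sp_lock @@SPID
COMMIT TRANSACTION
```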

Schema Locks

Schema modification (Sch-M) locks are used when a table data definition language (DDL) operation (such as adding a column or dropping a table) is being performed.

Schema stability (Sch-S) locks are used when compiling queries. Schema stability (Sch-S) locks do not block any transactional locks, including exclusive (X) locks. Therefore, other transactions can continue to run while a query is being compiled, including transactions with exclusive (X) locks on a table. However, DDL operations cannot be performed on the table.

Bulk Update Locks

Bulk update (BU) locks are used when bulk copying data into a table and either the TABLOCK hint is specified or the table lock on bulk load table option is set using sp_tableoption. Bulk update (BU) locks allow processes to bulk copy data concurrently into the same table while preventing other processes that are not bulk copying data from accessing the table.
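A sketch of the two ways described above to obtain bulk update (BU) locks during a bulk load (table and file names are illustrative):

```sql
-- Option 1: set the table option once; subsequent bulk loads into the
-- table take BU locks without a per-load hint.
EXEC sp_tableoption 'mytable', 'table lock on bulk load', 'true'

-- Option 2: request the table lock per load, here with BULK INSERT.
BULK INSERT mytable
    FROM 'c:\data\names.dat'
    WITH (TABLOCK)
```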

See Also

Deadlocking

Cursor Locking

Locking Hints

Lock Compatibility

Only compatible lock types can be placed on a resource that is already locked. For example, while an exclusive (X) lock is held, no other transaction can acquire a lock of any kind (shared, update, or exclusive) on that resource until the exclusive (X) lock is released at the end of the first transaction. Alternatively, if a shared (S) lock has been applied to a resource, other transactions can also acquire a shared lock or an update (U) lock on that item, even if the first transaction has not completed. However, other transactions cannot acquire an exclusive lock until the shared lock has been released.

Resource lock modes have a compatibility matrix that shows which locks are compatible with other locks obtained on the same resource (listed in increasing lock strength).

Existing granted mode
Requested mode IS S U IX SIX X
Intent shared (IS) Yes Yes Yes Yes Yes No
Shared (S) Yes Yes Yes No No No
Update (U) Yes Yes No No No No
Intent exclusive (IX) Yes No No Yes No No
Shared with intent exclusive (SIX) Yes No No No No No
Exclusive (X) No No No No No No

Note An intent exclusive (IX) lock is compatible with an IX lock mode because IX means the intention to update only some of the rows rather than all of them. Other transactions that want to read or update some of the rows are also permitted providing they are not the same rows being updated by other transactions.

The schema stability (Sch-S) lock is compatible with all lock modes except the schema modification (Sch-M) lock mode.

The schema modification (Sch-M) lock is incompatible with all lock modes.

The bulk update (BU) lock is compatible only with schema stability (Sch-S) and other bulk update (BU) locks.

Key-Range Locking

Key-range locking solves the phantom read concurrency problem and supports serializable transactions. Key-range locks cover individual records and the ranges between records, preventing phantom insertions or deletions into a set of records accessed by a transaction. Key-range locks are used only on behalf of transactions operating at the serializable isolation level.

Serializability requires that any query executed during a transaction must obtain the same set of rows if it is executed again at some later point within the same transaction. If this query attempts to fetch a row that does not exist, the row must not be inserted by other transactions until the transaction that attempts to access the row completes. If a second transaction were allowed to insert the row, it would appear as a phantom.

If a second transaction attempts to insert a row that resides on a locked data page, page-level locking prevents the phantom row from being added, and serializability is maintained. However, if the row is added to a data page not already locked by the first transaction, a locking mechanism should be in place to prevent the row from being added.

A key-range lock works by covering the index rows and the ranges between those index rows rather than locking the entire base table rows. Because any attempt to insert, update, or delete any row within the range by a second transaction requires a modification to the index, the second transaction is blocked until the first transaction completes because key-range locks cover the index entries.

Key-Range Lock Modes

Key-range locks include both a range and a row component, specified in range-row format:

  • Range represents the lock mode protecting the range between two consecutive index entries.
  • Row represents the lock mode protecting the index entry.
  • Mode represents the combined lock mode used. Key-range lock modes consist of two parts. The first represents the type of lock used to lock the index range (RangeT) and the second represents the lock type used to lock a specific key (K). The two parts are connected with an underscore (_), such as RangeT_K.

Range Row Mode Description
RangeS S RangeS_S Shared range, shared resource lock; serializable range scan.
RangeS U RangeS_U Shared range, update resource lock; serializable update scan.
RangeI Null RangeI_N Insert range, null resource lock; used to test ranges before inserting a new key into an index.
RangeX X RangeX_X Exclusive range, exclusive resource lock; used when updating a key in a range.

Note The internal Null lock mode is compatible with all other lock modes.

Key-range lock modes have a compatibility matrix that shows which locks are compatible with other locks obtained on overlapping keys and ranges.

Existing granted mode
Requested mode S U X RangeS_S RangeS_U RangeI_N RangeX_X
Shared (S) Yes Yes No Yes Yes Yes No
Update (U) Yes No No Yes No Yes No
Exclusive (X) No No No No No Yes No
RangeS_S Yes Yes No Yes Yes No No
RangeS_U Yes No No Yes No No No
RangeI_N Yes Yes Yes No No Yes No
RangeX_X No No No No No No No

Conversion Locks

Conversion locks are created when a key-range lock overlaps another lock.

Lock 1 Lock 2 Conversion Lock
S RangeI_N RangeI_S
U RangeI_N RangeI_U
X RangeI_N RangeI_X
RangeI_N RangeS_S RangeX_S
RangeI_N RangeS_U RangeX_U

Conversion locks can be observed for a short period of time under different complex circumstances, sometimes while running concurrent processes.

Serializable Range Scan, Singleton Fetch, Delete, and Insert

Key-range locking ensures that these scenarios are serializable:

  • Range scan query
  • Singleton fetch of nonexistent row
  • Delete operation
  • Insert operation

However, the following conditions must be satisfied before key-range locking can occur:

  • The transaction-isolation level must be set to SERIALIZABLE.
  • The operation performed on the data must use an index range access. Range locking is activated only when query processing (such as the optimizer) chooses an index path to access the data.

The following examples for each of the scenarios are based on a table mytable with a character column name, and an index on name whose entries include Adam, Ben, Bing, Bob, Carlos, Dale, and David.

Range Scan Query

To ensure a range scan query is serializable, the same query should return the same results each time it is executed within the same transaction. New rows must not be inserted within the range scan query by other transactions; otherwise, these become phantom inserts. For example, the following query uses the table and index described earlier:

SELECT name FROM mytable WHERE name BETWEEN 'A' AND 'C'

Key-range locks are placed on the index entries corresponding to the range of data rows where the name is between the values Adam and Dale, preventing new rows qualifying in the previous query from being added or deleted. Although the first name in this range is Adam, the RangeS_S mode key-range lock on this index entry ensures that no new names beginning with the letter A can be added before Adam, such as Abigail. Similarly, the RangeS_S key-range lock on the index entry for Dale ensures that no new names beginning with the letter C can be added after Carlos, such as Clive.

Note The number of RangeS_S locks held is n+1, where n is the number of rows that satisfy the query.

Singleton Fetch of Nonexistent Data

If a query within a transaction attempts to select a row that does not exist, issuing the query at a later point within the same transaction has to return the same result. No other transaction can be allowed to insert that nonexistent row. For example, given this query:

SELECT name FROM mytable WHERE name = 'Bill'

A key-range lock is placed on the index entry corresponding to the name range from Ben to Bing because the name Bill would be inserted between these two adjacent index entries. The RangeS_S mode key-range lock is placed on the index entry Bing. This prevents any other transaction from inserting values, such as Bill, between the index entries Ben and Bing.

Delete Operation

When deleting a value within a transaction, the range the value falls into does not have to be locked for the duration of the transaction performing the delete operation. Locking the deleted key value until the end of the transaction is sufficient to maintain serializability. For example, given this DELETE statement:

DELETE mytable WHERE name = 'Bob'

An exclusive (X) lock is placed on the index entry corresponding to the name Bob. Other transactions can insert or delete values before or after the deleted value Bob. However, any transaction attempting to read, insert, or delete the value Bob will be blocked until the deleting transaction either commits or rolls back.

A range delete can be executed using three basic lock modes: row, page, or table locks. The row, page, or table locking strategy is decided by the query optimizer, or can be specified by the user through optimizer hints such as ROWLOCK, PAGLOCK, or TABLOCK. When a page or table lock is used, SQL Server immediately releases the index page containing the deleted rows, provided all rows on that page are deleted. In contrast, when a row lock is used, deleted rows are only marked as deleted; they are removed from the index page later by a background task.

Insert Operation

When inserting a value within a transaction, the range the value falls into does not have to be locked for the duration of the transaction performing the insert operation. Locking the inserted key value until the end of the transaction is sufficient to maintain serializability. For example, given this INSERT statement:

INSERT mytable VALUES ('Dan')

The RangeI_N mode key-range lock is placed on the index entry corresponding to the name David to test the range. If the lock is granted, Dan is inserted and an exclusive (X) lock is placed on the value Dan. The RangeI_N mode key-range lock is necessary only to test the range and is not held for the duration of the transaction performing the insert operation. Other transactions can insert or delete values before or after the inserted value Dan. However, any transaction attempting to read, insert, or delete the value Dan will be blocked until the inserting transaction either commits or rolls back.

Lock Escalation

Lock escalation is the process of converting many fine-grain locks into fewer coarse-grain locks, reducing system overhead. Microsoft® SQL Server™ 2000 automatically escalates row locks and page locks into table locks when a transaction exceeds its escalation threshold.

For example, when a transaction requests rows from a table, SQL Server automatically acquires locks on those rows affected and places higher-level intent locks on the pages and table, or index, which contain those rows. When the number of locks held by the transaction exceeds its threshold, SQL Server attempts to change the intent lock on the table to a stronger lock (for example, an intent exclusive (IX) would change to an exclusive (X) lock). After acquiring the stronger lock, all page and row level locks held by the transaction on the table are released, reducing lock overhead.

SQL Server may choose to do both row and page locking for the same query, for example, placing page locks on the index (if enough contiguous keys in a nonclustered index node are selected to satisfy the query) and row locks on the data. This reduces the likelihood that lock escalation will be necessary.

Lock escalation thresholds are determined dynamically by SQL Server and do not require configuration.
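While the escalation thresholds themselves are not configurable, the total number of locks SQL Server may allocate is, via the locks option mentioned in See Also. A hedged sketch:

```sql
-- Sketch: inspect the locks option (0 = allocate locks dynamically,
-- the recommended default).
EXEC sp_configure 'locks'

-- Setting a fixed upper bound (an advanced option; the change takes
-- effect after the server is restarted). Shown for illustration only.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'locks', 10000
RECONFIGURE
```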

See Also

locks Option

Dynamic Locking

Although row-level locks increase concurrency, they do so at the cost of system overhead. Table or page locks lower overhead, but at the expense of concurrency.

Microsoft® SQL Server™ 2000 uses a dynamic locking strategy to determine the most cost-effective locks. SQL Server automatically determines what locks are most appropriate when the query is executed, based on the characteristics of the schema and query. For example, to reduce the overhead of locking, the optimizer may choose page-level locks in an index when performing an index scan.

Dynamic locking has the following advantages:

  • Simplified database administration. Database administrators no longer have to be concerned with adjusting lock escalation thresholds.
  • Increased performance. SQL Server minimizes system overhead by using locks appropriate to the task.
  • Application developers can concentrate on development. SQL Server adjusts locking automatically.
Displaying Locking Information

Microsoft® SQL Server™ 2000 provides a report of the active locks when the sp_lock system stored procedure is executed.

Here is the result set.

spid dbid ObjId      IndId Type Resource       Mode Status
1    1    0          0     DB                  S    GRANT
6    1    0          0     DB                  S    GRANT
7    1    0          0     DB                  S    GRANT
8    1    0          0     DB                  S    GRANT
8    1    1396200024 0     RID  1:1225:2       X    GRANT
8    1    1396200024 0     PAG  1:1225         IX   GRANT
8    1    1396200024 2     PAG  1:1240         IX   GRANT
8    1    21575115   0     TAB                 IS   GRANT
8    1    1396200024 2     KEY  (03000100cb04) X    GRANT
8    1    1396200024 0     TAB                 IX   GRANT

Type Column

The Type column shows the type of the resource currently locked.

Resource type Description
RID Row identifier used to lock a single row within a table.
KEY Row lock within an index. Used to protect key ranges in serializable transactions.
PAG Data or index page.
EXT Contiguous group of eight data pages or index pages.
TAB Entire table, including all data and indexes.
DB Database.

Resource Column

The Resource column provides information about the resource being locked.

Resource type Description
RID Row identifier of the locked row within the table. The row is identified by a fileid:page:rid combination, where rid is the row identifier on the page.
KEY Hexadecimal number used internally by SQL Server.
PAG Page number. The page is identified by a fileid:page combination, where fileid is the fileid in the sysfiles table, and page is the logical page number within that file.
EXT First page number in the extent being locked. The page is identified by a fileid:page combination.
TAB No information is provided because the ObjId column already contains the object ID of the table.
DB No information is provided because the dbid column already contains the database ID of the database.

In the result set from sp_lock, the RID resource type being locked has a resource description of 1:1225:2. This indicates that row identifier 2, on page number 1225, on fileid 1 has a lock applied to it. For more information, see Troubleshooting Deadlocks.

Mode Column

The Mode column describes the type of lock being applied to the resource. The types of locks include any multigranular lock.

Status Column

The Status column shows if the lock has been obtained (GRANT), is waiting on another process (WAIT), or is being converted to another lock (CNVT). A lock being converted to another lock is held in one mode, but is waiting to acquire a stronger lock mode (for example, update to exclusive). When diagnosing blocking issues, a CNVT can be considered similar to WAIT.

Other Tools for Monitoring Locking Activity

Using sp_lock to display locking information may not always be feasible when many locks are held and released faster than sp_lock can display them. In this case, SQL Profiler can be used to monitor and record locking information. Additionally, Windows Performance Monitor can be used to monitor lock activity using the SQL Server Locks Object counter.
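A sketch of narrowing sp_lock output to one session, and of querying the syslockinfo system table (which underlies sp_lock) directly when a quick snapshot is not enough:

```sql
-- Sketch: locks held by the current session only.
EXEC sp_lock @@SPID

-- syslockinfo in master exposes the raw lock records; the columns
-- below correspond roughly to the spid, ObjId, Type, Mode, and Status
-- columns of the sp_lock report.
SELECT req_spid, rsc_objid, rsc_type, req_mode, req_status
FROM master..syslockinfo
```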

To view the current locks

  • Transact-SQL
  • Enterprise Manager
  • SQL-DMO

See Also

Locks Event Category

syslockinfo

SQL Server: Locks Object

sp_lock

Customizing Locking with SQL Server

Although Microsoft® SQL Server™ 2000 implements locking automatically, it is possible to customize this in applications by:

  • Handling deadlocks and setting the deadlock priority.
  • Handling time-outs and setting the lock time-out duration.
  • Setting the transaction isolation level.
  • Using table-level locking hints with the SELECT, INSERT, UPDATE, and DELETE statements.
  • Configuring the locking granularity for an index.
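Several of these customizations can be combined in one session. A hedged sketch (table name mytable is illustrative):

```sql
-- Sketch: volunteer this session as the deadlock victim and raise the
-- session isolation level, then override it per statement with hints.
SET DEADLOCK_PRIORITY LOW
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

BEGIN TRANSACTION
    -- Table-level hints override the session isolation level.
    SELECT name FROM mytable WITH (NOLOCK)    -- dirty read permitted
    SELECT name FROM mytable WITH (ROWLOCK)   -- force row-level granularity
COMMIT TRANSACTION
```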
Deadlocking

A deadlock occurs when there is a cyclic dependency between two or more threads for some set of resources.

Deadlock is a condition that can occur on any system with multiple threads, not just on a relational database management system. A thread in a multi-threaded system may acquire one or more resources (for example, locks). If the resource being acquired is currently owned by another thread, the first thread may have to wait for the owning thread to release the target resource. The waiting thread is said to have a dependency on the owning thread for that particular resource.

If the owning thread wants to acquire another resource that is currently owned by the waiting thread, the situation becomes a deadlock: both threads cannot release the resources they own until their transactions are committed or rolled back, and their transactions cannot be committed or rolled back because they are waiting on resources the other owns. For example, thread T1 running transaction 1 has an exclusive lock on the Supplier table. Thread T2 running transaction 2 obtains an exclusive lock on the Part table, and then wants a lock on the Supplier table. Transaction 2 cannot obtain the lock because transaction 1 has it. Transaction 2 is blocked, waiting on transaction 1. Transaction 1 then wants a lock on the Part table, but cannot obtain it because transaction 2 has it locked. The transactions cannot release the locks held until the transaction is committed or rolled back. The transactions cannot commit or roll back because they require a lock held by the other transaction to continue.

Note Deadlocking is often confused with normal blocking. When one transaction has a lock on a resource that another transaction wants, the second transaction waits for the lock to be released. By default, SQL Server transactions do not time out (unless LOCK_TIMEOUT is set). The second transaction is blocked, not deadlocked. For more information, see Customizing the Lock Time-out.

Thread T1 thus has a dependency on thread T2 for the Part table lock resource. Similarly, thread T2 has a dependency on thread T1 for the Supplier table lock resource. Because these dependencies form a cycle, there is a deadlock between threads T1 and T2.

Detecting and Ending Deadlocks

In Microsoft® SQL Server™ 2000, a single user session may have one or more threads running on its behalf. Each thread may acquire or wait to acquire a variety of resources, such as:

  • Locks.
  • Parallel query execution-related resources (coordinator, producer, consumer threads associated with an exchange port).
  • Threads.
  • Memory.

All these resources, except memory, participate in the SQL Server deadlock detection scheme. For memory, SQL Server uses a time-out based mechanism, which is controlled by the query wait option in sp_configure.

In SQL Server 2000, deadlock detection is performed by a separate thread called the lock monitor thread. The lock monitor thread initiates a deadlock search for a particular thread in one of the following conditions:

  • The thread has been waiting for the same resource for a specified period of time. The lock monitor thread periodically wakes up and identifies all the threads waiting on some resource. If these threads continue to wait on the same resource when the lock monitor wakes up again, it initiates a deadlock search for the waiting thread.
  • The thread waits on a resource and initiates an eager deadlock search.

SQL Server typically performs periodic deadlock detection only; it does not use the eager mode. Because the number of deadlocks encountered in the system is usually small, periodic deadlock detection helps to reduce the overhead of deadlock detection in the system.

When the lock monitor initiates a deadlock search for a particular thread, it identifies the resource on which the thread is waiting. The lock monitor then finds the owner(s) of that resource and recursively continues the deadlock search for those threads until it finds a cycle. A cycle identified in this manner forms a deadlock.

After a deadlock is identified, SQL Server ends it by automatically choosing a thread (the deadlock victim) whose termination breaks the cycle. SQL Server rolls back the deadlock victim's transaction, notifies the thread's application (by returning error message number 1205), cancels the thread's current request, and then allows the transactions of the remaining threads to continue.

Typically, SQL Server chooses the thread running the transaction that is least expensive to undo as the deadlock victim. Alternatively, a user can set the DEADLOCK_PRIORITY of a session to LOW, using the SET statement. The DEADLOCK_PRIORITY option controls how sessions are weighed in deadlock situations. If a session's setting is set to LOW, that session becomes the preferred victim when involved in a deadlock situation.
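For example, a session running low-priority background work can volunteer itself as the preferred victim:

```sql
-- Mark this session as the preferred deadlock victim.
SET DEADLOCK_PRIORITY LOW;

-- ... run low-priority work that may conflict with other sessions ...

-- Restore the default weighting for subsequent work.
SET DEADLOCK_PRIORITY NORMAL;
```

If this session and a NORMAL-priority session deadlock, SQL Server chooses this one as the victim regardless of which transaction is cheaper to undo.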

Identifying Deadlocks

After a deadlock is identified, SQL Server chooses a particular thread as the deadlock victim and returns an error message with a list of resources involved in the deadlock. The deadlock message takes the following form:

Your transaction (process ID #52) was deadlocked on {lock | communication buffer | thread} resources with another process and has been chosen as the deadlock victim. Rerun your transaction.

The threads and resources involved in a deadlock are located in the error log. For more information about how to identify the deadlocked threads and the resources involved in a deadlock, see Troubleshooting Deadlocks.

See Also

SET DEADLOCK_PRIORITY

Handling Deadlocks

When a transaction submitted by an application is chosen as the deadlock victim, the transaction is terminated automatically and rolled back, and error message 1205 is returned to the application. Because any application submitting SQL queries can be chosen as the deadlock victim, applications should have an error handler that can trap error message 1205. If an application does not trap the error, it can proceed unaware that its transaction has been rolled back, and errors can occur.

Implementing an error handler that traps error message 1205 allows an application to handle the deadlock situation and take remedial action (for example, automatically resubmitting the query that was involved in the deadlock). Resubmitting the query automatically can mean that the user does not need to know that a deadlock occurred.

Before resubmitting a query automatically, client programs should pause to give the transaction holding the required locks a chance to complete and release those locks. This minimizes the likelihood of the transaction being deadlocked again as it attempts to obtain those locks.
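In SQL Server 2000 the retry loop must live in the client application, since error 1205 terminates the batch. On SQL Server 2005 and later (a version assumption beyond this document), the same trap-pause-resubmit pattern can be sketched directly in Transact-SQL with TRY...CATCH; the UPDATE stands in for whatever work was deadlocked:

```sql
-- Sketch: resubmit a deadlock victim's transaction up to 3 times.
-- Requires SQL Server 2005+ TRY...CATCH (not available in 2000).
DECLARE @retries int;
SET @retries = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE Supplier SET Name = Name WHERE SupplierID = 1;  -- hypothetical work
        COMMIT TRANSACTION;
        SET @retries = 0;                      -- success: leave the loop
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205               -- chosen as deadlock victim
        BEGIN
            SET @retries = @retries - 1;
            WAITFOR DELAY '00:00:01';          -- pause so the other transaction can finish
        END
        ELSE
            SET @retries = 0;                  -- some other error: give up
    END CATCH
END
```

The WAITFOR DELAY implements the pause recommended above, giving the transaction that holds the required locks a chance to complete before the query is resubmitted.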

Note A deadlock does not always cancel the batch in which the error was returned. It is important for the client program to check for errors because a deadlock does not always return a failed return code. In most cases, if a deadlock has occurred and the batch has not been canceled automatically, the application should cancel the current query. If this is not done, SQL Server may still have results pending on the connection that it expects the client to process. If any pending results are not processed, an error occurs when the application next tries to send a command to SQL Server.

See Also

Handling Errors and Messages

Minimizing Deadlocks

Although deadlocks cannot be avoided completely, the number of deadlocks can be minimized. Minimizing deadlocks can increase transaction throughput and reduce system overhead because fewer transactions are:

  • Rolled back, undoing all the work performed by the transaction.
  • Resubmitted by applications because they were rolled back when deadlocked.

To help minimize deadlocks:

  • Access objects in the same order.
  • Avoid user interaction in transactions.
  • Keep transactions short and in one batch.
  • Use a low isolation level.
  • Use bound connections.
Access Objects in the Same Order

If all concurrent transactions access objects in the same order, deadlocks are less likely to occur. For example, if two concurrent transactions obtain a lock on the Supplier table, and then on the Part table, one transaction is blocked on the Supplier table until the other transaction is completed. After the first transaction commits or rolls back, the second continues. A deadlock does not occur. Using stored procedures for all data modifications can standardize the order of accessing objects.
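A stored procedure that standardizes the access order might be sketched as follows (the procedure name and the SupplierID/PartID/Name columns are illustrative assumptions):

```sql
-- All writers go through this procedure, so Supplier is always
-- locked before Part, no matter which application calls it.
CREATE PROCEDURE UpdateSupplierPart
    @SupplierID int,
    @PartID     int
AS
BEGIN TRANSACTION;
UPDATE Supplier SET Name = Name WHERE SupplierID = @SupplierID;  -- always first
UPDATE Part     SET Name = Name WHERE PartID     = @PartID;      -- always second
COMMIT TRANSACTION;
GO
```

Because every caller acquires the two table locks in the same order, two concurrent executions can block each other but can never form a lock cycle.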

Avoid User Interaction in Transactions

Avoid writing transactions that include user interaction, because batches run much faster than a user can respond to a prompt, such as a request for a parameter from an application. For example, if a transaction is waiting for user input and the user goes to lunch, or even home for the weekend, the transaction cannot complete. This degrades system throughput because any locks held by the transaction are released only when it is committed or rolled back. Even if a deadlock does not arise, other transactions accessing the same resources are blocked while they wait for the transaction to complete.

Keep Transactions Short and in One Batch

A deadlock typically occurs when several long-running transactions execute concurrently in the same database. The longer the transaction, the longer the exclusive or update locks are held, blocking other activity and leading to possible deadlock situations.

Keeping transactions in one batch minimizes network roundtrips during a transaction, reducing possible delays in completing the transaction and releasing locks.
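For example, submitting the Supplier and Part updates as one batch keeps the transaction to a single roundtrip (column names are illustrative):

```sql
-- One batch, one roundtrip: locks are held only for the
-- duration of the batch and released at COMMIT.
BEGIN TRANSACTION;
UPDATE Supplier SET Name = Name WHERE SupplierID = 1;
UPDATE Part     SET Name = Name WHERE PartID     = 1;
COMMIT TRANSACTION;
```

Sending BEGIN TRANSACTION, each UPDATE, and COMMIT as separate batches would hold the same locks across several network roundtrips instead.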

Use a Low Isolation Level

Determine whether a transaction can run at a lower isolation level. Implementing read committed allows a transaction to read data previously read (not modified) by another transaction without waiting for the first transaction to complete. Using a lower isolation level, such as read committed, holds shared locks for a shorter duration than a higher isolation level such as serializable, thereby reducing locking contention.
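Setting the isolation level is a per-session statement; a minimal sketch (SupplierID and Name are illustrative column names):

```sql
-- Read committed is the SQL Server default: shared locks are
-- released as soon as each row is read, not held to end of transaction.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

BEGIN TRANSACTION;
SELECT Name FROM Supplier WHERE SupplierID = 1;
COMMIT TRANSACTION;
```

Under SERIALIZABLE, the same SELECT would hold its shared locks until the transaction completed, blocking writers for longer.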

Use Bound Connections

Using bound connections, two or more connections opened by the same application can cooperate. Any locks acquired by the secondary connections are held as if they were acquired by the primary connection, and vice versa, and therefore do not block each other.
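A sketch of establishing a bound connection with the system procedures sp_getbindtoken and sp_bindsession; the token value shown is a placeholder for whatever the first call returns:

```sql
-- Primary connection: obtain a bind token and hand it to the
-- cooperating connection (out of band, through the application).
DECLARE @token varchar(255);
EXEC sp_getbindtoken @token OUTPUT;
SELECT @token;   -- pass this value to the second connection

-- Secondary connection: join the primary connection's transaction space.
EXEC sp_bindsession 'token value obtained above';
```

Once bound, locks taken on either connection behave as if taken by the other, so the two connections cannot block each other.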

See Also

Using Bound Connections

Customizing the Lock Time-out

When Microsoft® SQL Server™ 2000 cannot grant a lock to a transaction on a resource because another transaction already owns a conflicting lock on that resource, the first transaction becomes blocked waiting on that resource. If this causes a deadlock, SQL Server terminates one of the participating transactions (with no time-out involved). If there is no deadlock, the transaction requesting the lock is blocked until the other transaction releases the lock. By default, there is no mandatory time-out period, and no way to test whether a resource is locked before locking it, except to attempt to access the data (and potentially get blocked indefinitely).
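The SET LOCK_TIMEOUT statement replaces this indefinite wait with a bounded one:

```sql
-- Fail any lock request that waits longer than 2 seconds,
-- instead of blocking indefinitely.
SET LOCK_TIMEOUT 2000;

-- If this SELECT cannot obtain its locks within 2000 ms,
-- SQL Server returns error 1222:
-- "Lock request time out period exceeded."
SELECT Name FROM Supplier WHERE SupplierID = 1;

SELECT @@LOCK_TIMEOUT;   -- reports the current setting in milliseconds
```

The SupplierID and Name columns are illustrative. As with a deadlock, the application should trap the time-out error (1222) and decide whether to retry or abandon the work.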

From the ITPUB blog: http://blog.itpub.net/1697933/viewspace-906546/. Please credit the source when reprinting.
