Wednesday, March 28, 2012

READ UNCOMMITTED data

1. In this topic
http://groups.google.com/group/comp...b21516252b65e7c,
someone wrote: "I've implemented SET TRANSACTION ISOLATION LEVEL READ
UNCOMMITTED at the beginning
of a number of stored procedures and, then SET TRANSACTION ISOLATION
LEVEL READ
COMMITTED at the end to minimize the disruption to the application.".
My question is: do you really need to set READ COMMITTED at the end of
the stored procedure? What scope does that command affect?

2. Could someone give a real-world example where I should never
read uncommitted data? I'm having trouble understanding when I
should and when I should not use it.
"Igor" <jerosimic@.gmail.com> wrote in message
news:1177332206.775022.264630@.b58g2000hsg.googlegroups.com...

Quote:

Originally Posted by

1. In this topic
http://groups.google.com/group/comp...b21516252b65e7c,
someone wrote: "I've implemented SET TRANSACTION ISOLATION LEVEL READ
UNCOMMITTED at the beginning
of a number of stored procedures and, then SET TRANSACTION ISOLATION
LEVEL READ
COMMITTED at the end to minimize the disruption to the application.".
My question is: do you really need to set READ COMMITTED at the end of
the stored procedure? What scope does that command affect?


I believe the READ COMMITTED at the end is pointless there.

Quote:

Originally Posted by

2. Could someone give a real-world example where I should never
read uncommitted data? I'm having trouble understanding when I
should and when I should not use it.


It depends. If you don't mind showing possibly inaccurate information
faster, then READ UNCOMMITTED may be for you.

In some cases, this is fine. HOWEVER, in many others not only is it not
fine, it's downright wrong.

For example, if you're writing, say, a banking application, the user
would NEVER want to see "wrong data" (for example, partially posted
transactions, phantom reads, etc.).

--
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html

Igor (jerosimic@.gmail.com) writes:

Quote:

Originally Posted by

someone wrote: "I've implemented SET TRANSACTION ISOLATION LEVEL READ
UNCOMMITTED at the beginning
of a number of stored procedures and, then SET TRANSACTION ISOLATION
LEVEL READ
COMMITTED at the end to minimize the disruption to the application.".
My question is: do you really need to set READ COMMITTED at the end of
the stored procedure? What scope does that command affect?


No, you don't need it. A SET command only affects the current scope, and
any scope it invokes. When the scope terminates, the SET option reverts
to its previous setting. (With one exception, SET CONTEXT_INFO.)
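A minimal sketch of this scope behaviour (the procedure and table names are made up for illustration):

```sql
-- Hypothetical procedure: the SET applies inside the procedure and in
-- anything it calls, and reverts automatically when the procedure exits.
CREATE PROCEDURE dbo.some_report AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    SELECT * FROM dbo.SomeTable   -- runs at READ UNCOMMITTED
END
GO
-- The caller's isolation level is unchanged after EXEC dbo.some_report;
-- no SET ... READ COMMITTED is needed at the end of the procedure.
```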

Quote:

Originally Posted by

2. Could someone write some real world example where i should never
read uncommitted data... i'm having trouble understanding when i
should and when i should not use it.


The basic rule is: if you don't understand the consequences of it, don't
use it.

Then there are cases where it is even more wrong than in others. Say that
you write a query which returns the balance on a number of accounts, and
the result of this query is to be used to reconcile the books. If
you read with NOLOCK, you may come in the middle of a transaction that
moves money from one account to another. You could then display the same
amount in two accounts - or it is missing from both accounts.

NOLOCK is more acceptable if you need to read large amounts of historic
data and there is no requirement for complete accuracy, while the same
table also has active insertion of current data. In this case, the reads
could take table locks which would prevent insertions from taking place;
NOLOCK avoids this.
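To make the reconciliation example concrete, here is a sketch of how a NOLOCK read can land in the middle of a transfer (the accounts table and its columns are hypothetical):

```sql
-- Session 1: a transfer in flight.
BEGIN TRANSACTION
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1
-- <-- if a NOLOCK read runs at this point...
UPDATE accounts SET balance = balance + 100 WHERE account_id = 2
COMMIT TRANSACTION

-- Session 2: ...this sum can see the 100 already gone from account 1
-- but not yet arrived in account 2, so the total is off by 100 even
-- though no committed state was ever inconsistent.
SELECT SUM(balance) FROM accounts WITH (NOLOCK)
```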

Never use NOLOCK or READ UNCOMMITTED as a matter of routine!

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Thanks for your replies.

I currently use READ UNCOMMITTED only for data that is not important,
like getting lists of referrers or today's top users. But I'm having
problems with my stored procedure for user login: in combination with
some other stored procedures I get deadlocks, and I don't really know of
a way to resolve those deadlocks other than using NOLOCK on the users
and users_online tables.

So, generally, for the kind of situation where you have to update and
calculate very important data, like money transfers, I should use
REPEATABLE READ or SERIALIZABLE.
One more question: why would one use SERIALIZABLE over REPEATABLE READ?
What are the benefits of SERIALIZABLE?

Igor (jerosimic@.gmail.com) writes:

Quote:

Originally Posted by

I currently use READ UNCOMMITTED only for data that is not important,
like getting lists of referrers or today's top users. But I'm having
problems with my stored procedure for user login: in combination with
some other stored procedures I get deadlocks, and I don't really know of
a way to resolve those deadlocks other than using NOLOCK on the users
and users_online tables.


Often, but not always, deadlocks can be avoided with proper indexes and
proper access order.

If the deadlock is between user-interface activities and background tasks,
the command SET DEADLOCK_PRIORITY LOW can be an option, to avoid users
getting deadlock errors slapped in their face.
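As a sketch, a background task could mark itself the preferred deadlock victim:

```sql
-- In the background/batch session: if it deadlocks with a session at
-- normal priority, this session is chosen as the victim, so the
-- interactive user is spared error 1205. The batch then needs retry
-- logic of its own.
SET DEADLOCK_PRIORITY LOW
```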

There are also some "classic" mistakes you can make if you use the
HOLDLOCK hint.

If you are on SQL 2005, snapshot isolation, which comes in two flavours,
may be an alternative to READ UNCOMMITTED. With snapshot isolation,
you read consistent committed data - but that data may be stale.
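The two flavours are enabled per database, roughly like this (the database name is made up):

```sql
-- Statement-level versioning: plain READ COMMITTED reads return the
-- last committed version instead of blocking on writers.
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON

-- Transaction-level versioning: the whole transaction sees one
-- consistent snapshot as of its start.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON
-- ...and then, in the session that wants it:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
```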

Quote:

Originally Posted by

So, generally, for the kind of situation where you have to update and
calculate very important data, like money transfers, I should use
REPEATABLE READ or SERIALIZABLE.
One more question: why would one use SERIALIZABLE over REPEATABLE READ?
What are the benefits of SERIALIZABLE?


I would rather ask the question the other way: in most situations where
READ COMMITTED is not enough, SERIALIZABLE is what you need. That is,
you cannot accept that the rows you have read change, and you cannot
accept that new rows creep in. But if you have read the balance on an
account, and don't want it to change before you update it, REPEATABLE
READ is enough.

But SERIALIZABLE is not good for throughput, and it increases the risk
of deadlocks.

You should also be aware of the UPDLOCK hint. An update lock does not
block other readers taking shared locks, but only one process at a time
can hold an update lock on a resource, so if two processes try to get an
UPDLOCK, one will be blocked at that point. So, going back to the update
of the account balance, UPDLOCK is what you should use, not REPEATABLE
READ. If you use REPEATABLE READ only, you can get deadlocks.
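The read-then-update on the account balance might look like this (table, columns, and variables are made up):

```sql
BEGIN TRANSACTION
-- UPDLOCK: other writers and other UPDLOCK readers of this row are
-- blocked until COMMIT, but plain readers are not. Two sessions running
-- this block serialize on the SELECT instead of deadlocking on the
-- upgrade from a shared to an exclusive lock.
DECLARE @.balance money
SELECT @.balance = balance
FROM   accounts WITH (UPDLOCK)
WHERE  account_id = @.account_id

UPDATE accounts
SET    balance = @.balance - @.amount
WHERE  account_id = @.account_id
COMMIT TRANSACTION
```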

You can combine UPDLOCK with SERIALIZABLE, but it seems that SQL Server
sometimes takes the range locks in a different order, leading to
deadlocks anyway.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

On Apr 24, 11:56 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:


Thank you, you have been a great help, as always.

Can you give me an example of "deadlocks can be avoided with proper
indexes, and also proper access order", or point me to any websites that
explain this, if it's not too much trouble.

Unfortunately, my hosting is still on MSSQL 2000.

Igor (jerosimic@.gmail.com) writes:

Quote:

Originally Posted by

Can you give me an example for this "deadlocks can be avoided with
proper indexes, and also proper access order." or if you have any websites
that explain this if it's not to much trouble.


What I had in mind when I talked about indexing is that if your indexing
is not the best, you can get table scans, and this increases the risk of
deadlock for at least two reasons: 1) more rows have to be locked (and
thus it is more likely that two processes want to lock the same resource
at the same time), and 2) transactions are longer (so locks are held for
a longer time).
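As an illustration of the indexing point, using the users_online table from earlier in the thread (the index name is made up):

```sql
-- With this index, the login procedure's lookup on user_id can seek and
-- lock a single row; without it, a scan may lock far more of the table
-- and widen the window in which two sessions can collide.
CREATE INDEX ix_users_online_user_id ON users_online (user_id)
```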

Here is a very simple example where you get a deadlock because of access
order. First run:

CREATE TABLE tbl1 (a int NOT NULL)
CREATE TABLE tbl2 (a int NOT NULL)
go
INSERT tbl1 (a) VALUES(1)
INSERT tbl2 (a) VALUES(1)
go

Then run in one window:

BEGIN TRANSACTION
UPDATE tbl1 SET a = 123
WAITFOR DELAY '00:00:05'
UPDATE tbl2 SET a = 123
COMMIT TRANSACTION

And in another:

BEGIN TRANSACTION
UPDATE tbl2 SET a = 123
WAITFOR DELAY '00:00:05'
UPDATE tbl1 SET a = 123
COMMIT TRANSACTION

If you want to read about troubleshooting deadlocks, there is a section
in Books Online. It is far from comprehensive, but I doubt that any text
on deadlocking is. Some deadlocks are very easy to avoid, whereas others
can only be remedied with a serious redesign of the application.

In any basic text, you will find the advice of always accessing tables
in the same order. But in a complex application, this is far from a
trivial matter.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

On Apr 26, 11:31 pm, Erland Sommarskog <esq...@.sommarskog.se> wrote:


I finally solved the deadlock that was occurring in my user login
stored procedure. I had something like this:

begin tran
if exists (select * from users_online where user_id=@.id)
    update users_online set data=... where user_id=@.id
else
    insert into users_online (user_id, data) values (@.id, ...)
commit

So I put an XLOCK hint on the select statement, and now I don't get any
deadlocks:

begin tran
if exists (select * from users_online with (xlock) where user_id=@.id)
    update users_online set data=... where user_id=@.id
else
    insert into users_online (user_id, data) values (@.id, ...)
commit
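For what it's worth, a commonly used alternative to XLOCK here is UPDLOCK with HOLDLOCK on the existence check, which serializes concurrent runs on the same key while still letting plain readers through (a sketch, following the same shape as the code above):

```sql
begin tran
-- updlock: only one session at a time can hold it on the row, so
-- concurrent logins for the same user queue up here instead of
-- deadlocking; holdlock keeps the "row does not exist" result stable
-- until commit, so the insert branch is safe too.
if exists (select * from users_online with (updlock, holdlock)
           where user_id=@.id)
    update users_online set data=... where user_id=@.id
else
    insert into users_online (user_id, data) values (@.id, ...)
commit
```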
