isolation levels

I gots to know…

Big web companies that use MySQL – what isolation level(s) do you use? Do you take the default? Do you think about it at all? And if you use, e.g., memcached, what do you do there? Does your code think in terms of optimistic or pessimistic locking?

I’ve been reading up on isolation levels after seeing Peter Bailis’s writing on the subject (thanks to Mark Callaghan for re-tweeting about it).

I’m learning things…  here’s one for today –

I always thought that the “repeatable” in repeatable read applied to the data in general – meaning that a query run twice in a transaction would “repeat” the same results. I guess it is actually supposed to apply to individual records: i.e., other transactions’ updates to an existing row will not affect our view of that row. According to ANSI SQL-92, apparently, repeatable read does allow the appearance of “phantom” (newly added) rows. (MySQL seems to be less permissive than that.)
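To make that distinction concrete, here is a sketch of a phantom-row scenario across two sessions (hypothetical table and values; the InnoDB behavior is as documented for its REPEATABLE READ level):

```sql
-- Hypothetical table: accounts(id INT PRIMARY KEY, balance INT), InnoDB.

-- Session A:
BEGIN;
SELECT id FROM accounts WHERE balance > 100;  -- snapshot is taken at this first read

-- Session B, meanwhile:
--   INSERT INTO accounts (id, balance) VALUES (9, 500);
--   COMMIT;

-- Session A again:
SELECT id FROM accounts WHERE balance > 100;
-- SQL-92 REPEATABLE READ would permit the new row (a "phantom") to appear here;
-- InnoDB's consistent snapshot means this plain SELECT repeats the first result.
COMMIT;
```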

But enough of my talking – I’m going back to my reading.  So, big MySQL users – let me know your thoughts on isolation levels, application-wide.  Do you care about them?  How do you enforce, or not enforce them at an application level, and why?



2 Responses to “isolation levels”

  1. markcallaghan (@markcallaghan) Says:

    People expect phantoms with lock-based implementations of repeatable read. People don’t expect them with RR for MVCC/snapshot-isolation. InnoDB chose interesting behavior that most users don’t even know about — WHERE clause for UPDATE and others sees current values of committed rows, and this explains the need for gap locking and a really, really complex behavior that still hasn’t been fully explained. As much as I love InnoDB, and it is too late to change, this isn’t my favorite part.
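    A small sketch of the split Mark describes – consistent reads vs. the reads done by UPDATE – within one transaction (hypothetical table name; behavior per the InnoDB documentation):

    ```sql
    -- Inside one REPEATABLE READ transaction on InnoDB:
    SELECT c FROM t WHERE k = 1;             -- consistent read: served from the snapshot
    SELECT c FROM t WHERE k = 1 FOR UPDATE;  -- locking read: sees the latest committed row
    UPDATE t SET c = c + 1 WHERE k = 1;      -- the WHERE here also reads the latest committed row
    ```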

    I assume all of this is possible based on reading the InnoDB docs. It isn’t fun behavior.

    Note that a SELECT run twice might not see the same rows with InnoDB repeatable read. A simple example:

    t0) BEGIN
    t1) SELECT .. WHERE $Z
    t2) SELECT .. WHERE $X
    t3) UPDATE .. WHERE $X and $Y
    t4) SELECT .. WHERE $X

    * the transaction’s InnoDB snapshot starts at t1 (the first read), not at t0
    * assume a concurrent transaction commits a change between t1 and t2 that adds a row to the set of rows matched by WHERE $X
    * at t2 this transaction does not see that change, because it runs with the snapshot from t1
    * the UPDATE at t3 then “pulls” the rows committed by the concurrent transaction into visibility for this one (courtesy of the, um, interesting InnoDB semantics for repeatable read). Let’s assume those rows are not actually updated here because of the “and $Y” clause
    * when the SELECT from t2 is repeated at t4, it gets to see the rows committed by the concurrent transaction, because the UPDATE made them visible
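    One way to render the t0–t4 timeline above as concrete SQL (hypothetical table and values; note Mark says he is inferring this behavior from the InnoDB docs):

    ```sql
    -- Table: t(x INT, y INT, pad CHAR(1)), InnoDB, REPEATABLE READ.
    -- This transaction:
    BEGIN;                                         -- t0
    SELECT * FROM t WHERE pad = 'z';               -- t1: snapshot starts at this first read
    -- Concurrent transaction commits here: INSERT INTO t VALUES (1, 0, 'a');
    SELECT * FROM t WHERE x = 1;                   -- t2: the new row is not visible
    UPDATE t SET pad = 'u' WHERE x = 1 AND y = 1;  -- t3: reads current committed rows; the
                                                   --     new row has y = 0, so it is not updated
    SELECT * FROM t WHERE x = 1;                   -- t4: per Mark, the new row may now appear
    ```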

  2. ben Says:

    Mark – thanks for your note! I am surprised to find that not only can I see different rows, but the rows’ contents can incorporate the other session’s changes from one SELECT to the next, on existing rows. (It seems against the idea of RR, but now I see it’s documented in a note under REPEATABLE READ.)

    Good to see that people “in the field” care about isolation. I see RocksDB also has some work around it (thanks to you and Yoshi).

    Now I have to read all the references in the page you gave above!
