Summary

This page describes adding support for Async Monitor Deflation to OpenJDK. The primary goal of this project is to reduce the time spent in safepoint cleanup operations.

RFE: 8153224 Monitor deflation prolong safepoints
         https://bugs.openjdk.java.net/browse/JDK-8153224

Full Webrev: http://cr.openjdk.java.net/~dcubed/8153224-webrev/17-for-jdk15+24.v2.15.full/

Inc Webrev: http://cr.openjdk.java.net/~dcubed/8153224-webrev/17-for-jdk15+24.v2.15.inc/

Background

This patch for Async Monitor Deflation is based on Carsten Varming's

...

The current idle monitor deflation mechanism executes at a safepoint during cleanup operations. Due to this execution environment, the current mechanism does not have to worry about interference from concurrently executing JavaThreads. Async Monitor Deflation uses the ServiceThread to deflate idle monitors, so the new mechanism has to detect interference and adapt as appropriate. In other words, data races are a natural part of Async Monitor Deflation and the algorithms have to detect the races and react without data loss or corruption.

Key Parts of the Algorithm

Async Monitor Deflation is performed in two stages: stage one performs the three part protocol described in "Deflation With Interference Detection" below and moves the async deflated ObjectMonitors from an in-use list to a global wait list; the ServiceThread performs a handshake (or a safepoint) with all other JavaThreads after stage one is complete and that forces any racing threads to make forward progress; stage two moves the ObjectMonitors from the global wait list to the global free list. The special values that mark an ObjectMonitor as async deflated remain in their fields until the ObjectMonitor is moved from the global free list to a per-thread free list, which is sometime after stage two has completed.

1) Deflation With Interference Detection

ObjectSynchronizer::deflate_monitor_using_JT() is the new counterpart to ObjectSynchronizer::deflate_monitor() and does the heavy lifting of asynchronously deflating a monitor using a three part protocol:

  1. Setting a NULL owner field to DEFLATER_MARKER with cmpxchg() forces any contending thread through the slow path. A racing thread would be trying to set the owner field.
  2. Making a zero contentions field a large negative value with cmpxchg() forces racing threads to retry. A racing thread would be trying to increment the contentions field.

...

If we lose any of the races, the monitor cannot be deflated at this time.
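
To make those parts concrete, here is a minimal stand-alone sketch of the protocol using std::atomic (Monitor, try_async_deflate(), and the restore-on-failure step are illustrative stand-ins, not the HotSpot code):

    #include <atomic>
    #include <climits>

    // Illustrative stand-ins for the ObjectMonitor fields (not HotSpot code).
    struct Monitor {
      std::atomic<void*> owner{nullptr};
      std::atomic<int>   contentions{0};
    };

    static int marker_storage;                      // unique address standing in
    void* const DEFLATER_MARKER = &marker_storage;  // for the real sentinel value

    bool try_async_deflate(Monitor* m) {
      // Part 1: NULL -> DEFLATER_MARKER forces contending threads into the
      // slow path; losing this race means some thread owns (or is entering)
      // the monitor.
      void* no_owner = nullptr;
      if (!m->owner.compare_exchange_strong(no_owner, DEFLATER_MARKER)) {
        return false;
      }
      // Part 2: 0 -> -max_jint forces threads racing to increment the
      // contentions field to retry their operation.
      int no_contentions = 0;
      if (!m->contentions.compare_exchange_strong(no_contentions, -INT_MAX)) {
        // Lost the race: restore the owner field if it is still DEFLATER_MARKER.
        void* marker = DEFLATER_MARKER;
        m->owner.compare_exchange_strong(marker, nullptr);
        return false;
      }
      // Both races won: safe to deflate. Field resetting and monitor list
      // management are elided here, as is the rest of the protocol.
      return true;
    }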

Once we know it is safe to deflate the monitor, the deflation itself is mostly field resetting and monitor list management. We also have to restore the object's header; that's another racy operation that is described below in "Restoring the Header With Interference Detection".

The setting of the special values that mark an ObjectMonitor as async deflated and the restoration of the object's header comprise the first stage of Async Monitor Deflation.

2) Restoring the Header With Interference Detection

ObjectMonitor::install_displaced_markword_in_object() is the new piece of code that handles all the racy situations with restoring an object's header asynchronously. The function is called from three places (deflation, ObjectMonitor::enter(), and FastHashCode). Only one of the possible racing scenarios can win and the losing scenarios all adapt to the winning scenario's object header value.

3) Using "owner" or "

...

contentions" With Interference Detection

Various code paths have been updated to recognize an owner field equal to DEFLATER_MARKER or a negative ref_count contentions field and those code paths will retry their operation. This is the shortest "Key Part" description, but don't be fooled. See "Gory Details" below.
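
As a minimal stand-alone illustration of that retry pattern (Monitor and try_use_monitor() are stand-ins for this sketch, not the HotSpot code):

    #include <atomic>

    struct Monitor { std::atomic<int> contentions{0}; };

    // A negative contentions value is the linearization point: once it is
    // negative, the monitor is committed to async deflation.
    bool is_being_async_deflated(Monitor* m) {
      return m->contentions.load() < 0;
    }

    bool try_use_monitor(Monitor* m) {
      m->contentions.fetch_add(1);       // announce our interest in m
      if (is_being_async_deflated(m)) {  // T-deflate won the race
        m->contentions.fetch_sub(1);     // back out our announcement
        return false;                    // caller re-inflates and retries
      }
      // ... safe to use the monitor here ...
      m->contentions.fetch_sub(1);       // done with m
      return true;
    }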

An Example of ObjectMonitor Interference Detection

ObjectMonitor::enter() can change an idle monitor into a busy monitor. ObjectSynchronizer::deflate_monitor_using_JT() is used to asynchronously deflate an idle monitor. enter() and deflate_monitor_using_JT() can interfere with each other. The thread calling enter() (T-enter) is potentially racing with another JavaThread (T-deflate) so both threads have to check the results of the races.

Start of the Race

    T-enter                 ObjectMonitor              T-deflate
    ----------------------  +-----------------------+  ----------------------------------------
    enter() {               | owner=NULL            |  deflate_monitor_using_JT() {
 1> add_to_contentions(1)   | contentions=0         | 1> try_set_owner_from(NULL, DEFLATER_MARKER)
                            +-----------------------+
    • The data fields are at their starting values.
    • The "1>" markers are showing where each thread is at for the ObjectMonitor box:
      • T-deflate is about to execute cmpxchg().
      • T-enter is about to increment the contentions field.

Racing Threads

    T-enter                 ObjectMonitor              T-deflate
    ----------------------  +-----------------------+  --------------------------------------------
    enter() {               | owner=DEFLATER_MARKER |  deflate_monitor_using_JT() {
 1> add_to_contentions(1)   | contentions=0         |    try_set_owner_from(NULL, DEFLATER_MARKER)
                            +-----------------------+    :
                                                      1> prev = cmpxchg(&contentions, 0, -max_jint)
    • T-deflate has executed cmpxchg() and set owner to DEFLATER_MARKER.
    • T-enter still hasn't done anything yet.
    • The "1>" markers are showing where each thread is at for the ObjectMonitor box:
      • T-enter and T-deflate are racing to update the contentions field.

T-deflate Wins

    T-enter                              ObjectMonitor                T-deflate
    -----------------------------------  +-------------------------+  --------------------------------------------
    enter() {                            | owner=DEFLATER_MARKER   |  deflate_monitor_using_JT() {
      add_to_contentions(1)              | contentions=-max_jint+1 |    try_set_owner_from(NULL, DEFLATER_MARKER)
   1> if (is_being_async_deflated()) {   +-------------------------+    :
        add_to_contentions(-1)                       ||                 prev = cmpxchg(&contentions, 0, -max_jint)
   2>   return false to force retry                  \/              1> if (prev == 0) {
      }                                  +-------------------------+      restore obj header
                                         | owner=DEFLATER_MARKER   |   2>  finish the deflation
                                         | contentions=-max_jint   |    }
                                         +-------------------------+  }
    • This diagram starts after "Racing Threads".
    • The "1>" markers are showing where each thread is at for that ObjectMonitor box:
      • T-enter and T-deflate both observe owner == DEFLATER_MARKER and a negative contentions field.
    • T-enter has lost the race: it restores the obj header (not shown) and decrements the contentions field.
    • T-deflate restores the obj header.
    • The "2>" markers are showing where each thread is at for that ObjectMonitor box.
    • T-enter returns false to cause the caller to retry.
    • T-deflate finishes the deflation.
    • Note: As of CR5/v2.05/8-for-jdk13, the owner == DEFLATER_MARKER value is allowed to linger until a deflated ObjectMonitor is reused for an enter operation. This prevents the C2 ObjectMonitor enter optimization from racing with async deflation.

...

T-enter Wins

    T-enter                              ObjectMonitor                T-deflate
    -----------------------------------  +-------------------------+  ---------------------------------------------
    enter() {                            | owner=DEFLATER_MARKER   |  deflate_monitor_using_JT() {
      add_to_contentions(1)              | contentions=1           |    try_set_owner_from(NULL, DEFLATER_MARKER)
   1> if (is_being_async_deflated()) {   +-------------------------+    :
      } else {                                       ||                 prev = cmpxchg(&contentions, 0, -max_jint)
   2>   <continue contended enter>                   \/              1> if (prev == 0) {
      }                                  +-------------------------+    } else {
                                         | owner=NULL              |      try_set_owner_from(DEFLATER_MARKER, NULL)
                                         | contentions=1           |   2>  return
                                         +-------------------------+    }
    • This diagram starts after "Racing Threads".
    • The "1>" markers are showing where each thread is at for the ObjectMonitor box:
      • T-enter and T-deflate both observe a contentions field > 0.
    • T-enter has won the race and continues with the contended enter protocol.
    • T-deflate detects that it has lost the race (prev != 0) and bails out on deflating the ObjectMonitor:
      • Before bailing out, T-deflate tries to restore the owner field to NULL if it is still DEFLATER_MARKER.
    • The "2>" markers are showing where each thread is at for that ObjectMonitor box.
    • Note: The owner == DEFLATER_MARKER and contentions < 0 values that are set by T-deflate (stage one of async deflation) remain in place until after T-deflate does a handshake (or safepoint) operation with all JavaThreads. This handshake forces T-enter to make forward progress and see that the ObjectMonitor is being async deflated before T-enter checks in for the handshake.

T-enter Wins By Cancellation Via DEFLATER_MARKER Swap

    T-enter                                          ObjectMonitor                T-deflate
    -----------------------------------------------  +-------------------------+  --------------------------------------------
    ObjectMonitor::enter() {                         | owner=DEFLATER_MARKER   |  deflate_monitor_using_JT() {
      <owner is contended>                           | contentions=1           |    try_set_owner_from(NULL, DEFLATER_MARKER)
      add_to_contentions(1)                          |                         |    :
   1> EnterI() {                                     +-------------------------+ 1> :
        if (try_set_owner_from(DEFLATER_MARKER,                  ||            2> : <thread_stalls>
            Self) == DEFLATER_MARKER) {                          \/               :
          // Add marker for cancellation             +-------------------------+  :
          add_to_contentions(1)                      | owner=Self/T-enter      |  :
          // EnterI is done                          | contentions=2           |  : <thread_resumes>
          return                                     +-------------------------+    prev = cmpxchg(&contentions, 0, -max_jint)
        }                                                        ||                 if (prev == 0) {
   2> }                                                          \/              3> } else {
      add_to_contentions(-1)                         +-------------------------+      if (try_set_owner_from(DEFLATER_MARKER,
      // enter() is done                             | owner=Self/T-enter|NULL |          NULL) != DEFLATER_MARKER) {
      : <does app work>                              | contentions=1           |        add_to_contentions(-1)
   3> :                                              +-------------------------+      }
      exit() monitor                                             ||              4>   bailout on deflation
   4> owner = NULL                                               \/                 }
                                                     +-------------------------+  }
                                                     | owner=Self/T-enter|NULL |
                                                     | contentions=0           |
                                                     +-------------------------+
    • T-deflate has executed cmpxchg() and set owner to DEFLATER_MARKER.
    • T-enter has called ObjectMonitor::enter(), noticed that the owner is contended, incremented the contentions field, and is about to call ObjectMonitor::EnterI().
    • The first ObjectMonitor box is showing the fields at this point and the "1>" markers are showing where each thread is at for that ObjectMonitor box.
    • T-deflate stalls after setting the owner field to DEFLATER_MARKER.
    • T-enter calls EnterI() to do the contended enter work:
      • EnterI() sets the owner field from DEFLATER_MARKER to Self/T-enter.
      • EnterI() increments contentions one extra time since it cancelled async deflation via a DEFLATER_MARKER swap.
      • Note: The extra increment also makes the return value from is_being_async_deflated() stable; the previous A-B-A algorithm would allow the contentions field to flicker from 0 → -max_jint and back to zero. With the current algorithm, a negative contentions field value is a linearization point so once it is negative, we are committed to performing async deflation.
      • T-enter owns the monitor and returns from EnterI() (contentions still has both increments).
    • The second ObjectMonitor box is showing the fields at this point and the "2>" markers are showing where each thread is at for that ObjectMonitor box.
    • T-enter decrements contentions and returns from enter() (contentions still has the extra increment).
    • T-enter is now ready to do work that requires the monitor to be owned.
    • T-enter is doing app work (but it also could have finished and exited the monitor and it still has the extra increment).
    • T-deflate resumes and tries to set the contentions field to -max_jint; the cmpxchg() fails because contentions == 1 (the extra increment comes into play!).
    • The third ObjectMonitor box is showing the fields at this point and the "3>" markers are showing where each thread is at for that ObjectMonitor box.
    • T-deflate tries to restore the owner field from DEFLATER_MARKER to NULL:
      • If it does not succeed, then the EnterI() call managed to cancel async deflation via a DEFLATER_MARKER swap so T-deflate decrements contentions to get rid of the extra increment that EnterI() did as a marker for this type of cancellation.
      • If it does succeed, then EnterI() did not cancel async deflation via a DEFLATER_MARKER swap and we don't have an extra increment to get rid of.
      • Note: For the previous bullet, async deflation is still cancelled because the ObjectMonitor is now busy with a contended enter.
    • T-deflate bails out on deflation.
    • T-enter finished doing app work and is about to exit the monitor (or it has already exited the monitor).
    • The fourth ObjectMonitor box is showing the fields at this point and the "4>" markers are showing where each thread is at for that ObjectMonitor box.
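
For the swap itself, here is a stand-alone sketch of the cancellation path (Monitor, try_cancel_async_deflation(), and the field types are illustrative stand-ins, not the HotSpot code):

    #include <atomic>

    struct Monitor {
      std::atomic<void*> owner{nullptr};
      std::atomic<int>   contentions{0};
    };

    static int marker_storage;
    void* const DEFLATER_MARKER = &marker_storage;

    // Swapping the owner field from DEFLATER_MARKER to Self cancels async
    // deflation; the extra contentions increment tells T-deflate's bailout
    // path to clean up after us.
    bool try_cancel_async_deflation(Monitor* m, void* Self) {
      void* expected = DEFLATER_MARKER;
      if (m->owner.compare_exchange_strong(expected, Self)) {
        m->contentions.fetch_add(1);  // marker for this type of cancellation
        return true;                  // we own the monitor; EnterI() is done
      }
      return false;  // owner was not DEFLATER_MARKER; continue contended enter
    }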

An Example of Object Header Interference

After T-deflate has won the race for deflating an ObjectMonitor it has to restore the header in the associated object. Of course, another thread can be trying to do something to the object's header at the same time. Isn't asynchronous work exciting?!?!

ObjectMonitor::install_displaced_markword_in_object() can be called from more than one place at the same time, so we can have a race between a T-enter thread and a T-deflate thread:

Start of the Race

    T-enter                                           object           T-deflate
    ------------------------------------------------  +-------------+  ------------------------------------------------
    install_displaced_markword_in_object(oop obj) {   | mark=om_ptr |  install_displaced_markword_in_object(oop obj) {
      dmw = header()                                  +-------------+    dmw = header()
      obj->cas_set_mark(dmw, this)                                       obj->cas_set_mark(dmw, this)
    • The data field (mark) is at its starting value.
    • 'dmw' is a local copy in each thread.
    • T-enter and T-deflate are both calling install_displaced_markword_in_object() at the same time.
    • Both threads are poised to call cas_set_mark() at the same time.

...

Either Thread Wins the Race

    T-enter                                           object           T-deflate
    ------------------------------------------------  +-------------+  ------------------------------------------------
    install_displaced_markword_in_object(oop obj) {   | mark=dmw    |  install_displaced_markword_in_object(oop obj) {
      dmw = header()                                  +-------------+    dmw = header()
      obj->cas_set_mark(dmw, this)                                       obj->cas_set_mark(dmw, this)
    • It does not matter whether T-enter or T-deflate won the cas_set_mark() call; in this scenario both were trying to restore the same value.
    • The object's mark field has changed from 'om_ptr' → 'dmw'.

Please notice that install_displaced_markword_in_object() does not do any retries on any code path:

    • If a thread loses the cas_set_mark() race, there is no need to retry because the object's header has been restored by the other thread.
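
A stand-alone model of that no-retry restore (Object and the uintptr_t mark encoding are illustrative stand-ins, not the HotSpot code):

    #include <atomic>
    #include <cstdint>

    struct Object { std::atomic<uintptr_t> mark{0}; };

    // Both T-enter and T-deflate call this with the same 'dmw' value: swap
    // the object's mark from the ObjectMonitor* back to the displaced mark
    // word. Exactly one thread's swap succeeds and the loser needs no retry.
    void install_displaced_markword_in_object(Object* obj, uintptr_t om_ptr,
                                              uintptr_t dmw) {
      uintptr_t expected = om_ptr;  // mark still refers to the ObjectMonitor
      obj->mark.compare_exchange_strong(expected, dmw);
      // On failure the other thread already restored the header; move on.
    }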

Hashcodes and Object Header Interference

There are a few races that can occur between a T-deflate thread and a thread trying to get/set a hashcode (T-hash) in an ObjectMonitor:

  1. If the object has an ObjectMonitor (i.e., is inflated) and if the ObjectMonitor has a hashcode, then the hashcode value can be carefully fetched from the ObjectMonitor and returned to the caller (T-hash). If there is a race with async deflation, then we have to retry.
  2. There are several reasons why we might have to inflate the ObjectMonitor in order to set the hashcode:
    1. The object is neutral, does not contain a hashcode, and we (T-hash) lost the race trying to install a hashcode in the mark word.
    2. The object is stack locked and does not contain a hashcode in the mark word.
    3. The object has an ObjectMonitor and the ObjectMonitor does not have a hashcode.
      Note: In this case, the inflate() call on the common fall thru code path is almost always a no-op since the existing ObjectMonitor is not likely to be async deflated before inflate() sees that the object already has an ObjectMonitor and bails out.

The common fall thru code path (executed by T-hash) that inflates the ObjectMonitor in order to set the hashcode can race with async deflation (T-deflate). After the hashcode has been stored in the ObjectMonitor, we (T-hash) check if the ObjectMonitor has been async deflated (by T-deflate). If it has, then we (T-hash) retry because we don't know if the hashcode was stored in the ObjectMonitor before the object's header was restored (by T-deflate). Retrying (by T-hash) will result in the hashcode being stored in either the object's header or in the re-inflated ObjectMonitor's header as appropriate.
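
The shape of that retry, modeled on FastHashCode() (all declarations below are hypothetical stand-ins, not HotSpot declarations):

    #include <cstdint>

    struct Object;
    struct Monitor {
      intptr_t set_hash_if_absent();   // store a hash in the header/dmw field
      bool is_being_async_deflated();  // contentions < 0
    };
    Monitor* inflate(Object* obj);     // returns the object's ObjectMonitor

    // After storing a hash in the ObjectMonitor, recheck for async deflation
    // and redo the whole operation if we raced with T-deflate.
    intptr_t fast_hash_code(Object* obj) {
      while (true) {
        Monitor* m = inflate(obj);
        intptr_t hash = m->set_hash_if_absent();
        if (!m->is_being_async_deflated()) {
          return hash;  // no race: the hash is safely stored
        }
        // The monitor was deflated out from under us; retry from scratch.
      }
    }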

Spin-Lock Monitor List Management In Theory

Use of specialized measurement code with the CR5/v2.05/8-for-jdk13 bits revealed that the gListLock contention is responsible for much of the performance degradation observed with SPECjbb2015. Consequently the primary focus of the next round of changes is/was on switching from coarse-grained Thread::muxAcquire(&gListLock) and Thread::muxRelease(&gListLock) pairs to spin-lock monitor list management. Of course, since the Java Monitor subsystem is full of special cases, the spin-lock list management code has to have a number of special cases which are described here.

The Spin-Lock Monitor List management code was pushed to JDK15 using the following bug id:

JDK-8235795 replace monitor list mux{Acquire,Release}(&gListLock) with spin locks

The Async Monitor Deflation project makes a few additional changes on top of what was pushed via JDK-8235795.

The Simple Case

There is one simple case of spin-lock list management with the Java Monitor subsystem so we'll start with that code as a way to introduce the spin-lock concepts:

     L1:    while (true) {
     L2:      PaddedObjectMonitor* cur = Atomic::load(&g_block_list);
     L3:      Atomic::store(&new_blk[0]._next_om, cur);
     L4:      if (Atomic::cmpxchg(&g_block_list, cur, new_blk) == cur) {
     L5:        Atomic::add(&om_list_globals.population, _BLOCKSIZE - 1);
     L6:        break;
     L7:      }
     L8:    }

What the above block of code does is:

    • prepends a 'new_blk' to the front of 'g_block_list'
    • increments the 'om_list_globals.population' counter to include the number of new elements

The above block of code can be called by multiple threads in parallel and must not lose track of any blocks. Of course, the "must not lose track of any blocks" part is where all the details come in:

    • L2 loads the current 'g_block_list' value into 'cur'.
    • L3 stores 'cur' into the 0th element's next field for 'new_blk'.
    • L4 is the critical decision point for this list update. cmpxchg will change 'g_block_list' to 'new_blk' iff 'g_block_list' == 'cur' (publish it).
      • if the cmpxchg return value is 'cur', then we succeeded with the list update and we atomically update 'om_list_globals.population' to match.
      • Otherwise we loop around and do everything again from L2. This is the "spin" part of spin-lock.

At the point that cmpxchg has published the new 'g_block_list' value, 'new_blk' is now the first block in the list and the 0th element's next field is used to find the previous first block; all of the monitor list blocks are chained together via the next field in the block's 0th element. It is the use of cmpxchg to update 'g_block_list' and the checking of the return value from cmpxchg that ensures that we don't lose track of any blocks.

This example is considered to be the "simple case" because we only prepend to the list (no deletes) and we only use:

    • one load
    • one store and
    • one cmpxchg

to achieve the safe update of the 'g_block_list' value; the atomic increment of the 'om_list_globals.population' counter is considered to be just accounting (pun intended).

The concepts introduced here are:

    • update the new thing to refer to head of existing list
    • try to update the head of the existing list to refer to the new thing
    • retry as needed

Note: The above code snippet comes from ObjectSynchronizer::prepend_block_to_lists(); see that function for more complete context (and comments).
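
As a stand-alone model of the same prepend (Block, g_block_list, and BLOCKSIZE here are stand-ins for this sketch; the real code is ObjectSynchronizer::prepend_block_to_lists()):

    #include <atomic>

    struct Block { Block* next = nullptr; };    // stand-in for PaddedObjectMonitor
    std::atomic<Block*> g_block_list{nullptr};  // list head
    std::atomic<int>    g_population{0};        // accounting counter
    const int BLOCKSIZE = 128;                  // assumed block size for the model

    void prepend_block(Block* new_blk) {
      while (true) {
        Block* cur = g_block_list.load();       // L2: load the current head
        new_blk->next = cur;                    // L3: new block refers to head
        if (g_block_list.compare_exchange_strong(cur, new_blk)) {  // L4: publish
          g_population.fetch_add(BLOCKSIZE - 1);  // L5: accounting
          break;                                  // L6: done
        }
        // cmpxchg failed: another thread changed the head, so spin and retry.
      }
    }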

The Not So Simple Case or Taking and Prepending on the Same List Leads to A-B-A Races

Note: This subsection is talking about "Simple Take" and "Simple Prepend" in abstract terms. The purpose of this code and A-B-A example is to introduce the race concepts. The code shown here is not an exact match for the project code and the specific A-B-A example is not (currently) found in the project code.

The left hand column shows "T1" taking a node "A" from the front of the list and it shows the simple code that does that operation. The right hand column shows "T2" prepending a node "B" to the front of the list and it shows the simple code that does that operation. We have a third thread, "T3", that does a take followed by a prepend, but we don't show a column for "T3". Instead we have a column in the middle that shows the results of the interleaved operations of all three threads:

    T1: Simple Take:                          |                                              | T2: Simple Prepend:
    ----------------------------------------- | -------------------------------------------- | -----------------------------------------
    // "take" a node:                         | T1 and T3 see this initial list:             | // "prepend" a node:
    while (true) {                            |          +---+    +---+    +---+             | while (true) {
      cur = head;                             | head ->  | A | -> | X | -> | Y |             |   cur = head;
      next = cur->next;                       |          +---+    +---+    +---+             |   new->next = cur;
      if (cmpxchg(next, &head, cur) == cur) { | T3 takes "A", T2 sees this list:             |   if (cmpxchg(new, &head, cur) == cur) {
        break; // success changing head       |          +---+    +---+                      |     break; // success changing head
      }                                       | head ->  | X | -> | Y |                      |   }
    }                                         |          +---+    +---+                      | }
    return cur;                               | T2 prepends "B":                             |
                                              |          +---+    +---+    +---+             |
                                              | head ->  | B | -> | X | -> | Y |             |
                                              |          +---+    +---+    +---+             |
                                              | T3 prepends "A":                             |
                                              |          +---+    +---+    +---+    +---+    |
                                              | head ->  | A | -> | B | -> | X | -> | Y |    |
                                              |          +---+    +---+    +---+    +---+    |
                                              | T1 takes "A", loses "B":                     |
                                              |          +---+                               |
                                              |          | B | ----+                         |
                                              |          +---+     |                         |
                                              |                    V                         |
                                              |          +---+    +---+                      |
                                              | head ->  | X | -> | Y |                      |
                                              |          +---+    +---+                      |
                                              |          +---+                               |
                                              | cur ->   | A |                               |
                                              |          +---+                               |

The "Simple Take" and "Simple Prepend" algorithms are just fine by themselves. The "Simple Prepend" algorithm is almost identical to the algorithm in the "The Simple Case" and just like that algorithm, it works fine if we are only doing prepend operations on the list. Similarly, the "Simple Take" algorithm works just fine if we are only doing take operations on the list; the only thing missing is an empty list check, but that would have clouded the example.

When we allow simultaneous take and prepend operations on the same list, the simple algorithms are exposed to A-B-A races. An A-B-A race is a situation where the head of the list can change from node "A" to node "B" and back to node "A" again without the simple algorithm being aware that critical state has changed. In the middle column of the above diagram, we show what happens when T3 causes the head of the list to change from node "A" to node "B" (a take operation) and back to node "A" (a prepend operation). That A-B-A race causes T1 to lose node "B" when it updates the list head to node "X" instead of node "B" because T1 was unaware that its local 'next' value was stale.

Here's the diagram again with the code in T1 and T2 lined up with the effects of the A-B-A race executed by T3:

    T1: Simple Take:                          |                                              | T2: Simple Prepend:
    ----------------------------------------- | -------------------------------------------- | -----------------------------------------
    while (true) {                            | T1 and T3 see this initial list:             | :
      cur = head;                             |          +---+    +---+    +---+             | :
      next = cur->next;                       | head ->  | A | -> | X | -> | Y |             | :
      :                                       |          +---+    +---+    +---+             | :
      :                                       | T3 takes "A", T2 sees this list:             | :
      :                                       |          +---+    +---+                      | :
      :                                       | head ->  | X | -> | Y |                      | :
      :                                       |          +---+    +---+                      | while (true) {
      :                                       | T2 prepends "B":                             |   cur = head;
      :                                       |          +---+    +---+    +---+             |   new->next = cur;
      :                                       | head ->  | B | -> | X | -> | Y |             |   if (cmpxchg(new, &head, cur) == cur) {
      :                                       |          +---+    +---+    +---+             |     break;
      :                                       | T3 prepends "A":                             |   }
      :                                       |          +---+    +---+    +---+    +---+    | }
      :                                       | head ->  | A | -> | B | -> | X | -> | Y |    |
      :                                       |          +---+    +---+    +---+    +---+    |
      :                                       | T1 takes "A", loses "B":                     |
      :                                       |          +---+                               |
      :                                       |          | B | ----+                         |
      :                                       |          +---+     |                         |
      :                                       |                    V                         |
      if (cmpxchg(next, &head, cur) == cur) { |          +---+    +---+                      |
        break;                                | head ->  | X | -> | Y |                      |
      }                                       |          +---+    +---+                      |
    }                                         |          +---+                               |
    return cur;                               | cur ->   | A |                               |
                                              |          +---+                               |

So the simple algorithms are not sufficient when we allow simultaneous take and prepend operations.

Spin-Locking to Solve the A-B-A Race

Note: This subsection is talking about "Spin-Locking" as a solution to the A-B-A race in abstract terms. The purpose of this spin-locking code and A-B-A example is to introduce the solution concepts. The code shown here is not an exact match for the project code.

One solution to the A-B-A race is to spin-lock the next field in a node to indicate that the node is busy. Only one thread can successfully spin-lock the next field in a node at a time and other threads must loop around and retry their spin-locking operation until they succeed. Each thread that spin-locks the next field in a node must unlock the next field when it is done with the node so that other threads can proceed.
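
To make the locking primitives concrete before the algorithms below, here is a stand-alone sketch, assuming the low bit of a node's next field is the lock bit (Node, LOCK_BIT, and these bodies are illustrative stand-ins, not the project's helpers):

    #include <atomic>
    #include <cstdint>

    struct Node { std::atomic<uintptr_t> next{0}; };
    const uintptr_t LOCK_BIT = 0x1;

    // Try to lock 'n' by setting the lock bit in its next field.
    bool try_om_lock(Node* n) {
      uintptr_t unlocked = n->next.load() & ~LOCK_BIT;
      return n->next.compare_exchange_strong(unlocked, unlocked | LOCK_BIT);
    }

    // Unlock 'n' by clearing the lock bit in its next field; only the lock
    // holder calls this, so a simple store is sufficient.
    void om_unlock(Node* n) {
      n->next.store(n->next.load() & ~LOCK_BIT);
    }

    // Return 'n's next pointer without the lock bit.
    Node* unmarked_next(Node* n) {
      return (Node*)(n->next.load() & ~LOCK_BIT);
    }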

Here's the take algorithm modified with spin-locking (still ignores the empty list for clarity):

    // "take" a node with locking:
while (true) {
cur = head;
if (!try_om_lock(cur)) {
// could not lock cur so try again
continue;
}
if (head != cur) {
// head changed while locking cur so try again
om_unlock(cur);
continue;
}
next = unmarked_next(cur);
// list head is now locked so switch it to next which also makes list head unlocked
Atomic::store(&head, next);
om_unlock(cur); // unlock cur and return it
return cur;
}

The modified take algorithm does not change the list head pointer until it has successfully locked the list head node. Notice that after we lock the list head node we have to verify that the list head pointer hasn't changed in the meantime. Only after we have verified that the node we locked is still the list head is it safe to modify the list head pointer. The locking of the list head prevents the take algorithm from executing in parallel with a prepend algorithm and losing a node.

Also notice that we update the list head pointer with store instead of with cmpxchg. Since we have the list head locked, we are not racing with other threads to change the list head pointer so we can use a simple store instead of the heavy cmpxchg hammer.

Here's the prepend algorithm modified with locking (ignores the empty list for clarity):

    // "prepend" a node with locking:
while (true) {
cur = head;
if (!try_om_lock(cur)) {
// could not lock cur so try again
continue;
}
if (head != cur) {
// head changed while locking cur so try again
om_unlock(cur);
continue;
}
next = unmarked_next(cur);
// list head is now locked so switch it to 'new' which also makes list head unlocked
Atomic::store(&head, new);
om_unlock(cur); // unlock the previous list head
}

The modified prepend algorithm does not change the list head pointer until it has successfully locked the list head node. Notice that after we lock the list head node we have to verify that the list head pointer hasn't changed in the meantime. Only after we have verified that the node we locked is still the list head is it safe to modify the list head pointer. The locking of the list head prevents the prepend algorithm from executing in parallel with the take algorithm and losing a node.

Also notice that we update the list head pointer with store instead of with cmpxchg for the same reasons as the previous algorithm.

Background: ObjectMonitor Movement Between the Lists

The purpose of this subsection is to provide background information about how ObjectMonitors move between the various lists. This project changes the way these movements are implemented, but does not change the movements themselves. For example, newly allocated blocks of ObjectMonitors are always prepended to the global free list; this is true in the baseline and is true in this project. One exception is the addition of the global wait list (see below).

ObjectMonitor Allocation Path

    • ObjectMonitors are allocated by ObjectSynchronizer::om_alloc().
    • Assume that the calling JavaThread has an empty free list and the global free list is also empty:
      • A block of ObjectMonitors is allocated by the calling JavaThread and prepended to the global free list.
      • ObjectMonitors are taken from the front of the global free list by the calling JavaThread and prepended to the JavaThread's free list by ObjectSynchronizer::om_release().
      • An ObjectMonitor is taken from the front of the JavaThread's free list and prepended to the JavaThread's in-use list (optimistically).

ObjectMonitor Deflation Path

    • ObjectMonitors are deflated at a safepoint by:
          ObjectSynchronizer::deflate_monitor_list() calling ObjectSynchronizer::deflate_monitor()
      And when Async Monitor Deflation is enabled, they are deflated by:
          ObjectSynchronizer::deflate_monitor_list_using_JT() calling ObjectSynchronizer::deflate_monitor_using_JT()

    • Idle ObjectMonitors are deflated by the ServiceThread when Async Monitor Deflation is enabled. They can also be deflated at a safepoint by the VMThread or by a task worker thread. Safepoint deflation is used when Async Monitor Deflation is disabled or when there is a special deflation request, e.g., System.gc().

    • An idle ObjectMonitor is deflated and extracted from its in-use list and prepended to the global wait list. The in-use list can be either the global in-use list or a per-thread in-use list. Deflated ObjectMonitors are always prepended to the global wait list.

      • The om_list_globals.wait_list allows ObjectMonitors to be safely deflated without reuse races.
      • After a handshake/safepoint with all JavaThreads, the ObjectMonitors on the om_list_globals.wait_list are prepended to the global free list.
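
As a shape sketch of that flow (every name below is a hypothetical stand-in, not a HotSpot function):

    // Hypothetical helper names for this sketch only.
    void deflate_idle_monitors_using_JT();  // stage one: in-use lists -> wait list
    void handshake_all_java_threads();      // forces racing threads to make progress
    void prepend_wait_list_to_free_list();  // stage two: wait list -> free list

    // Shape of the ServiceThread's async deflation cycle described above.
    void async_deflation_cycle() {
      deflate_idle_monitors_using_JT();
      handshake_all_java_threads();
      prepend_wait_list_to_free_list();
    }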

ObjectMonitor Flush Path

    • ObjectMonitors are flushed by ObjectSynchronizer::om_flush().
    • When a JavaThread exits, the ObjectMonitors on its in-use list are prepended on the global in-use list and the ObjectMonitors on its free list are prepended on the global free list.

ObjectMonitor Linkage Path

    • ObjectMonitors are linked with objects by ObjectSynchronizer::inflate().
    • An inflate() call by one JavaThread can race with an inflate() call by another JavaThread for the same object.
    • When inflate() realizes that it failed to link an ObjectMonitor with the target object, it calls ObjectSynchronizer::om_release() to extract the ObjectMonitor from the JavaThread's in-use list and prepends it on the JavaThread's free list.
      Note: Remember that ObjectSynchronizer::om_alloc() optimistically added the newly allocated ObjectMonitor to the JavaThread's in-use list.
    • When inflate() successfully links an ObjectMonitor with the target object, that ObjectMonitor stays on the JavaThread's in-use list.

The Lists and Which Threads Touch Them

    • global free list:
      • prepended to by JavaThreads that allocated a new block of ObjectMonitors (malloc time)
      • prepended to by JavaThreads that are exiting (and have a non-empty per-thread free list)
      • taken from the head by JavaThreads that need to allocate ObjectMonitor(s) for their per-thread free list (reprovision)
      • prepended to by deflation done by:
        • either the VMThread or a worker thread for safepoint based
        • or the ServiceThread for async monitor deflation
    • global in-use list:
      • prepended to by JavaThreads that are exiting (and have a non-empty per-thread free list)
      • extracted from by deflation done by:
        • either the VMThread or a worker thread for safepoint based
        • or the ServiceThread for async monitor deflation
    • global wait list:
      • prepended by the ServiceThread during async deflation
      • entire list detached and prepended to the global free list by the ServiceThread during async deflation
      • Note: The global wait list serves the same function as Carsten's gFreeListNextSafepoint list in his prototype.
    • per-thread free list:
      • prepended to by a JavaThread when it needs to allocate new ObjectMonitor(s) (reprovision)
      • taken from the head by a JavaThread when it needs to allocate a new ObjectMonitor (inflation)
      • prepended to by a JavaThread when it isn't able to link the object to the ObjectMonitor (failed inflation)
      • entire list detached and prepended to the global free list when the JavaThread is exiting
    • per-thread in-use list:
      • prepended to by a JavaThread when it allocates a new ObjectMonitor (inflation, optimistically in-use)
      • extracted from by deflation done by:
        • either the VMThread or a worker thread for safepoint based
        • or the ServiceThread for async monitor deflation
      • entire list detached and prepended to the global in-use list when the JavaThread is exiting
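
For orientation, an illustrative sketch of the global list heads these bullets refer to (field names modeled on om_list_globals; the exact layout is an assumption, not the HotSpot struct):

    struct ObjectMonitor;  // opaque here

    struct OmListGlobals {
      ObjectMonitor* free_list;    // global free list
      ObjectMonitor* in_use_list;  // global in-use list
      ObjectMonitor* wait_list;    // deflated monitors awaiting the handshake
      int population;              // # of monitors allocated
      int free_count;              // # of monitors on free_list
      int in_use_count;            // # of monitors on in_use_list
      int wait_count;              // # of monitors on wait_list
    };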

Spin-Lock Monitor List Management In Reality

Prepending To A List That Also Allows Deletes

It is now time to switch from algorithms to real snippets from the code.

The next case to consider for spin-lock list management with the Java Monitor subsystem is prepending to a list that also allows deletes. As you might imagine, the possibility of a prepend racing with a delete makes things more complicated. The solution is to lock the next field in the ObjectMonitor at the head of the list we're trying to prepend to. A successful lock tells other prependers or deleters that the locked ObjectMonitor is busy and they will need to retry their own lock operation.

    L01:    while (true) {
    L02:      om_lock(m);  // Lock m so we can safely update its next field.
    L03:      ObjectMonitor* cur = NULL;
    L04:      // Lock the list head to guard against A-B-A race:
    L05:      if ((cur = get_list_head_locked(list_p)) != NULL) {
    L06:        // List head is now locked so we can safely switch it.
    L07:        m->set_next_om(cur);  // m now points to cur (and unlocks m)
    L08:        Atomic::store(list_p, m);  // Switch list head to unlocked m.
    L09:        om_unlock(cur);
    L10:        break;
    L11:      }
    L12:      // The list is empty so try to set the list head.
    L13:      assert(cur == NULL, "cur must be NULL: cur=" INTPTR_FORMAT, p2i(cur));
    L14:      m->set_next_om(cur);  // m now points to NULL (and unlocks m)
    L15:      if (Atomic::cmpxchg(list_p, cur, m) == cur) {
    L16:        // List head is now unlocked m.
    L17:        break;
    L18:      }
    L19:      // Implied else: try it all again
    L20:    }
    L21:    Atomic::inc(count_p);

What the above block of code does is:

    • prepends an ObjectMonitor 'm' to the front of the list referred to by list_p
      • lock 'm'
      • lock the list head
      • update 'm' to refer to the list head
      • update 'list_p' to refer to 'm'
      • unlock the previous list head
    • increments the counter referred to by 'count_p' by one

The above block of code can be called by multiple prependers in parallel or with deleters running in parallel and must not lose track of any ObjectMonitor. Of course, the "must not lose track of any ObjectMonitor" part is where all the details come in:

    • L02 locks 'm'; internally we have to loop because another thread (T2) might have 'm' locked and we try again until we have locked it.
      You might be asking yourself: why does T2 have 'm' locked?
      • Before T1 was trying to prepend 'm' to an in-use list, T1 and T2 were racing to take an ObjectMonitor off the free list.
      • T1 won the race, locked 'm', removed 'm' from the free list and unlocked 'm'; T2 stalled before trying to lock 'm'.
      • T2 resumed and locked 'm', realized that 'm' was no longer the head of the free list, unlocked 'm' and tried it all again.
      • If our thread (T1) does not lock 'm' before it tries to prepend it to an in-use list, then T2's unlocking of 'm' could erase the next value that T1 wants to put in 'm'.
    • L05 tries to lock the list head 'list_p'; if get_list_head_locked() returns non-NULL, we have the list head locked and can safely update it:
      • L07: Update 'm's next field to point to the current list head (which unlocks 'm').
      • L08: store 'm' into 'list_p' which switches the list head to an unlocked 'm'.
      • L09: We unlock the previous list head.
    • If get_list_head_locked() returned NULL, we have an empty list:
      • L14: Update 'm's next field to NULL (which unlocks 'm').
      • L15: Try to cmpxchg 'list_p' to 'm':
        • if cmpxchg works, then we're done.
        • Otherwise, another prepender won the race to update the list head so we have to try again.
    • The counter referred to by 'count_p' is incremented by one.

ObjectMonitor 'm' is safely on the list at the point that we have updated 'list_p' to refer to 'm'. In this subsection's block of code, we also called three new functions: om_lock(), get_list_head_locked() and set_next_om(), that are explained in the next few subsections about helper functions.

Note: The above code snippet comes from prepend_to_common(); see that function for more context and a few more comments.

try_om_lock(), mark_om_ptr(), and set_next_om() Helper Functions

Managing spin-locks on ObjectMonitors has been abstracted into a few helper functions. try_om_lock() is the first interesting one:

    L1:  static bool try_om_lock(ObjectMonitor* om) {
    L2:    // Get current next field without any OM_LOCK_BIT value.
    L3:    ObjectMonitor* next = unmarked_next(om);
    L4:    if (om->try_set_next_om(next, mark_om_ptr(next)) != next) {
    L5:      return false;  // Cannot lock the ObjectMonitor.
    L6:    }
    L7:    return true;
    L8:  }

The above function tries to lock the ObjectMonitor:

    • If locking is successful, then true is returned.
    • Otherwise, false is returned.

The function can be called by multiple threads at the same time and only one thread will succeed in the locking operation (return == true) and all other threads will get return == false. Of course, the "only one thread will succeed" part is where all the details come in:

    • L3 loads the ObjectMonitor's next field and strips the locking bit:
      • The unlocked value is saved in 'next'.
      • We need the unlocked next value in order to properly detect if the next field was already locked.
    • L4 tries to cmpxchg a locked 'next' value into the ObjectMonitor's next field:
      • if cmpxchg does not work, then we return false:
        • The cmpxchg will not work if the next field changes after we loaded the value on L3.
        • The cmpxchg will not work if the next field is already locked.
      • Otherwise, we return true.

The try_om_lock() function calls another helper function, mark_om_ptr(), that needs a quick explanation:

    L1:  static ObjectMonitor* mark_om_ptr(ObjectMonitor* om) {
    L2:    return (ObjectMonitor*)((intptr_t)om | OM_LOCK_BIT);
    L3:  }

This function encapsulates the setting of the locking bit in an ObjectMonitor* for the purpose of hiding the details and making the calling code easier to read:

    • L2 casts the ObjectMonitor* into a type that will allow the '|' operator to be used.
    • We use the 0x1 (OM_LOCK_BIT) bit as our locking value because ObjectMonitors are aligned on a cache line so the low order bit is not used by the normal addressing of an ObjectMonitor*.

set_next_om() is the next interesting function and it also only needs a quick explanation:

    L1:  inline void ObjectMonitor::set_next_om(ObjectMonitor* value) {
    L2:    Atomic::store(&_next_om, value);
    L3:  }

This function encapsulates the setting of the next field in an ObjectMonitor for the purpose of hiding the details and making the calling code easier to read:

    • This function is simply a wrapper around a store of an ObjectMonitor* into the next field in an ObjectMonitor.
    • The typical "cur->set_next_om(next)" call sequence is easier to read than "Atomic::store(&cur->_next_om, next)".

om_lock() Helper Function

om_lock() is the next interesting helper function:

    L1:  static void om_lock(ObjectMonitor* om) {
    L2:    while (true) {
    L3:      if (try_om_lock(om)) {
    L4:        return;
    L5:      }
    L6:    }
    L7:  }

The above function loops until it locks the target ObjectMonitor. There is nothing particularly special about this function so we don't need any line specific annotations.

Debugging Tip: If there's a bug where an ObjectMonitor's next field is not properly unlocked, then this function will loop forever and the caller will be stuck.

get_list_head_locked() Helper Function

get_list_head_locked() is the next interesting helper function:

    L01:  static ObjectMonitor* get_list_head_locked(ObjectMonitor** list_p) {
    L02:    while (true) {
    L03:      ObjectMonitor* mid = Atomic::load(list_p);
    L04:      if (mid == NULL) {
    L05:        return NULL;  // The list is empty.
    L06:      }
    L07:      if (try_om_lock(mid)) {
    L08:        if (Atomic::load(list_p) != mid) {
    L09:          // The list head changed so we have to retry.
    L10:          om_unlock(mid);
    L11:          continue;
    L12:        }
    L13:        return mid;
    L14:      }
    L15:    }
    L16:  }

The above function tries to lock the list head's ObjectMonitor:

    • If the list is empty, NULL is returned.
    • Otherwise, the list head's ObjectMonitor* is returned.

The function can be called by more than one thread on the same 'list_p' at a time. NULL is only returned when 'list_p' refers to an empty list. Otherwise only one thread at a time will return the list head's ObjectMonitor*. Since the returned ObjectMonitor is locked, any parallel callers to get_list_head_locked() will loop until the list head's ObjectMonitor is no longer locked. That typically happens when the list head's ObjectMonitor is taken off the list and 'list_p' is advanced to the next ObjectMonitor on the list. Of course, making sure that only one thread at a time can return the locked list head is where all the details come in:

    • L03 loads the current 'list_p' value into 'mid'.
    • L0[45] is the empty list check and the only time that NULL is returned by this function.
    • L07 tries to lock 'mid':
      • If locking is not successful, we loop around to try it all again (the "spin" part of spin-lock).
      • L08 loads the current 'list_p' value to see if it still matches 'mid':
        • If the list head has changed, then we unlock mid on L10 and try it all again.
        • Otherwise, 'mid' is returned.

When this function returns a non-NULL ObjectMonitor*, the ObjectMonitor is locked and any parallel callers of get_list_head_locked() on the same list will be looping until the list head's ObjectMonitor is no longer locked. The caller that just got the ObjectMonitor* needs to finish up its work quickly.

Debugging Tip: If there's a bug where the list head ObjectMonitor is not properly unlocked, then this function will loop forever and the caller will be stuck.

Taking From The Start Of A List

The next case to consider for spin-lock list management with the Java Monitor subsystem is taking an ObjectMonitor from the start of a list. Taking an ObjectMonitor from the start of a list is a specialized form of delete that is guaranteed to interact with a thread that is prepending to the same list at the same time. Again, the core of the solution is to lock the ObjectMonitor at the head of the list we're trying to take the ObjectMonitor from, but we use slightly different code because we have fewer links to make than a prepend.

    L01:  static ObjectMonitor* take_from_start_of_common(ObjectMonitor** list_p,
    L02:                                                  int* count_p) {
    L03:    ObjectMonitor* take = NULL;
    L04:    // Lock the list head to guard against A-B-A race:
    L05:    if ((take = get_list_head_locked(list_p)) == NULL) {
    L06:      return NULL;  // None are available.
    L07:    }
    L08:    ObjectMonitor* next = unmarked_next(take);
    L09:    // Switch locked list head to next (which unlocks the list head, but
    L10:    // leaves take locked):
    L11:    Atomic::store(list_p, next);
    L12:    Atomic::dec(count_p);
    L13:    // Unlock take, but leave the next value for any lagging list
    L14:    // walkers. It will get cleaned up when take is prepended to
    L15:    // the in-use list:
    L16:    om_unlock(take);
    L17:    return take;
    L18:  }

What the above function does is:

    • Tries to lock the ObjectMonitor at the head of the list:
      • Locking will only fail if the list is empty so that NULL can be returned.
      • Otherwise get_list_head_locked() will loop until the ObjectMonitor at the list head has been locked.
    • Get the next pointer from the taken ObjectMonitor.
    • Updates the list head to refer to the next ObjectMonitor.
    • Decrements the counter referred to by 'count_p'.
    • Unlocks the taken ObjectMonitor.

The function can be called by more than one thread at a time and each thread will take a unique ObjectMonitor from the start of the list (if one is available) without losing any other ObjectMonitors on the list. Of course, the "take a unique ObjectMonitor" and "without losing any other ObjectMonitors" parts are where all the details come in:

    • L05 tries to lock the list head:
      • get_list_head_locked() returns NULL if the list is empty so we return NULL on L06.
      • Otherwise, 'take' is a pointer to the locked list head.
    • L08 gets the next pointer from take.
    • L11 stores 'next' into 'list_p'.
      You might be asking yourself: Why store instead of cmpxchg?
      • get_list_head_locked() only returns to the caller when it has locked the ObjectMonitor at the head of the list.
      • Because of that guarantee, any prepender or deleter thread that is running in parallel must loop until we have stored 'next' into 'list_p' which unlocks the list head.
    • L12 decrements the counter referred to by 'count_p'.
    • L16 unlocks 'take':
      • Keeping the 'next' value in take's next field allows any lagging list walker to get to the next ObjectMonitor on that list.
      • take's next field will get cleaned up when take is prepended to its target in-use list.
    • L17 returns 'take' to the caller.
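As an example of how this function is used, the take_from_start_of_global_free_list() call mentioned in the om_alloc() discussion below is presumably just a thin wrapper. This sketch assumes that om_list_globals has 'free_list' and 'free_count' fields; this page elsewhere shows 'population', 'free_count', 'in_use_list' and 'in_use_count' members, so the exact names are an assumption:

    // Sketch (assumed field names) of a wrapper around take_from_start_of_common():
    static ObjectMonitor* take_from_start_of_global_free_list() {
      return take_from_start_of_common(&om_list_globals.free_list,
                                       &om_list_globals.free_count);
    }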

lock_next_for_traversal() Helper Function

This last helper function exists for making life easier for list walker code. List walker code calls get_list_head_locked() to get the locked list head and then walks the list applying its particular logic to elements in the list. In order to safely walk to the 'next' ObjectMonitor in a list, the list walker code must lock the 'next' ObjectMonitor before unlocking the 'current' ObjectMonitor that it has locked. If a list walker unlocks 'current' before locking 'next', then there is a race where 'current' could be modified to refer to something other than the 'next' value that was in place when 'current' was locked. By locking 'next' first and then unlocking 'current', the list walker can safely advance to 'next'.

    L01:  static ObjectMonitor* lock_next_for_traversal(ObjectMonitor* cur) {
    L02:    assert(is_locked(cur), "cur=" INTPTR_FORMAT " must be locked", p2i(cur));
    L03:    ObjectMonitor* next = unmarked_next(cur);
    L04:    if (next == NULL) {  // Reached the end of the list.
    L05:      om_unlock(cur);
    L06:      return NULL;
    L07:    }
    L08:    om_lock(next);   // Lock next before unlocking current to keep
    L09:    om_unlock(cur);  // from being by-passed by another thread.
    L10:    return next;
    L11:  }

This function is straightforward so there are no detailed notes for it.
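To illustrate how the helper is meant to be used, here is a minimal list walker sketch (not code from the patch; the walker body is a placeholder):

    // Sketch: walk a monitor list safely using the helpers described above.
    static void walk_monitor_list(ObjectMonitor** list_p) {
      ObjectMonitor* cur = get_list_head_locked(list_p);
      while (cur != NULL) {
        // Apply the list walker's particular logic to the locked 'cur' here.
        // lock_next_for_traversal() locks 'next' before unlocking 'cur' (or
        // unlocks 'cur' and returns NULL at the end of the list):
        cur = lock_next_for_traversal(cur);
      }
    }

If the list is empty, get_list_head_locked() returns NULL and the walker does nothing; otherwise the walker always holds a lock on at least one ObjectMonitor until the end of the list is reached, which is what keeps racing prependers and deleters from slipping past it.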

Using The New Spin-Lock Monitor List Functions

ObjectSynchronizer::om_alloc(Thread* self, ...)

...

2) Try to allocate from the global free list (up to self->om_free_provision times; see the sketch after this list):

    • take_from_start_of_global_free_list() takes an ObjectMonitor from the global free list (if possible).
    • om_release(self, take, false) prepends the newly taken ObjectMonitor to self's free list.
    • Retry the allocation from step 1.
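A rough sketch of this reprovision step (a sketch under the descriptions above, not the actual om_alloc() code):

    // Sketch: grab up to om_free_provision ObjectMonitors from the global
    // free list, donate them to self's free list, then retry step 1.
    for (int i = self->om_free_provision; --i >= 0;) {
      ObjectMonitor* take = take_from_start_of_global_free_list();
      if (take == NULL) {
        break;  // Global free list is empty.
      }
      om_release(self, take, false);  // Prepend 'take' to self's free list.
    }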

...

ObjectSynchronizer::om_release() is responsible for putting an ObjectMonitor on self's free list. If 'from_per_thread_alloc' is true, then om_release() is also responsible for extracting the ObjectMonitor from self's in-use list. The extraction from self's in-use list must happen first:

    L01:    if (from_per_thread_alloc) {
    L02:      if ((mid = get_list_head_locked(&self->om_in_use_list)) == NULL) {
    L03:        fatal("thread=" INTPTR_FORMAT " in-use list must not be empty.", p2i(self));
    L04:      }
    L05:      next = unmarked_next(mid);
    L06:      if (m == mid) {
    L07:        Atomic::store(&self->om_in_use_list, next);
    L08:      } else if (m == next) {
    L09:        mid = next;
    L10:        om_lock(mid);
    L11:        next = unmarked_next(mid);
    L12:        self->om_in_use_list->set_next_om(next);
    L13:      } else {
    L14:        ObjectMonitor* anchor = next;
    L15:        om_lock(anchor);
    L16:        om_unlock(mid);
    L17:        while ((mid = unmarked_next(anchor)) != NULL) {
    L18:          if (m == mid) {
    L19:            next = unmarked_next(mid);
    L20:            anchor->set_next_om(next);
    L21:            break;
    L22:          } else {
    L23:            om_lock(mid);
    L24:            om_unlock(anchor);
    L25:            anchor = mid;
    L26:          }
    L27:        }
    L28:      }
    L29:      Atomic::dec(&self->om_in_use_count);
    L30:      om_unlock(mid);
    L31:    }
    L32:    prepend_to_om_free_list(self, m);

Most of the above code block extracts 'm' from self's in-use list; it is not an exact quote from om_release(), but it is the highlights:

    • L02 is used to lock self's in-use list head:
      • 'mid' is self's in-use list head and it is locked.
    • L05 'next' is the unmarked next field from 'mid'.
    • L06 → L07: handle first special case where the target ObjectMonitor 'm' matches the list head.
    • L08 → L12: handle second special case where the target ObjectMonitor 'm' matches next after the list head.
    • L14 → L30: self's in-use list is traversed looking for the target ObjectMonitor 'm':
      • L18: if the current 'mid' matches 'm':
        • L19: get the next after 'm'
        • L20: update the anchor to refer to the next after 'm'
        • L21: break out since we found a match
      • else
        • L23: lock the current 'mid'
        • L2[45]: unlock the current anchor and advance to the new anchor
        • loop around and try again
    • L29 → L30: we've successfully extracted 'm' from self's in-use list so we decrement self's in-use counter, unlock 'mid' and we're done.

The last line of the code block (L32) prepends 'm' to self's free list.

ObjectSynchronizer::om_flush(Thread* self)

ObjectSynchronizer::om_flush() is responsible for flushing self's in-use list to the global in-use list and self's free list to the global free list during self's thread exit processing. om_flush() starts with self's in-use list:

    L01:    if ((in_use_list = get_list_head_locked(&self->om_in_use_list)) != NULL) {
    L02:      in_use_tail = in_use_list;
    L03:      in_use_count++;
    L04:      for (ObjectMonitor* cur_om = unmarked_next(in_use_list); cur_om != NULL;) {
    L05:        if (is_locked(cur_om)) {
    L06:          while (is_locked(cur_om)) {
    L07:            os::naked_short_sleep(1);
    L08:          }
    L09:          cur_om = unmarked_next(in_use_tail);
    L10:          continue;
    L11:        }
    L12:        if (cur_om->is_free()) {
    L13:          cur_om = unmarked_next(in_use_tail);
    L14:          continue;
    L15:        }
    L16:        in_use_tail = cur_om;
    L17:        in_use_count++;
    L18:        cur_om = unmarked_next(cur_om);
    L19:      }
    L20:      guarantee(in_use_tail != NULL, "invariant");
    L21:      int l_om_in_use_count = Atomic::load(&self->om_in_use_count);
    L22:      ADIM_guarantee(l_om_in_use_count == in_use_count, "in-use counts don't match: "
    L23:                     "l_om_in_use_count=%d, in_use_count=%d", l_om_in_use_count, in_use_count);
    L24:      Atomic::release_store(&self->om_in_use_count, 0);
    L25:      Atomic::release_store(&self->om_in_use_list, (ObjectMonitor*)NULL);
    L26:      om_unlock(in_use_list);
    L27:    }

The above is not an exact copy of the code block from om_flush(), but it is the highlights. What the above code block needs to do is pretty simple:

...

However, in this case, there are a lot of details:

    • L01 locks the in-use list head (if it is not empty):
      • 'in_use_list' is self's in-use list head and it is locked.
      • The in-use list head is kept locked to prevent an async deflation thread from entering the list behind this thread.
        Note: An async deflation thread does check to see if the target thread is exiting, but if it has made it past that check before this thread started exiting, then we're racing.
    • L04-L19: loops over the in-use list counting and advancing 'in_use_tail'.
      • L05-L10: 'cur_om' is locked so there must be an async deflater thread or a list walker thread ahead of us, so we delay to give it a chance to finish, then refetch 'in_use_tail's (possibly changed) next field and try again.
      • L12-L14: 'cur_om' was deflated and its allocation state was changed to Free while it was locked. We just happened to be lucky enough to see it just after it was unlocked (and added to the free list). We refetch 'in_use_tail's (possibly changed) next field and try again.
      • L1[67]: finally 'cur_om' has been completely vetted so we can update 'in_use_tail' and increment 'in_use_count'.
      • L18: advance 'cur_om' to the next ObjectMonitor and do it all again.
    • L24: release-store self's in-use count to zero.
      Note: We clear self's in-use count before unlocking self's in-use list head to avoid races.
    • L25: release-store self's in-use list head to NULL.
    • L26: unlock the disconnected list head.
      Note: Yes, self's in-use list head was kept locked for the whole loop to keep any racing async deflater thread or list walker thread out of the in-use list. After L26, the racing async deflater thread will loop around and see self's in-use list is empty and bail out. Similarly, a racing list walker thread will retry and see self's in-use list is empty and bail out.

The code to process self's free list is much, much simpler because we don't have any races with an async deflater thread like self's in-use list. The only interesting bits:

    • load-acquire self's free list head.
    • release-store self's free list count to zero.
    • release-store self's free list head to NULL.

The last interesting bits for this function are prepending the local lists to the right global places:

    • prepend_list_to_global_free_list(free_list, free_tail, free_count);
    • prepend_list_to_global_om_in_use_list(in_use_list, in_use_tail, in_use_count);

...

ObjectSynchronizer::deflate_monitor_list() is responsible for deflating idle ObjectMonitors at a safepoint. This function can use the simpler lock-mid-as-we-go protocol since there can be no parallel list deletions due to the safepoint:

    L01:  int ObjectSynchronizer::deflate_monitor_list(ObjectMonitor** list_p,
    L02:                                               int* count_p,
    L03:                                               ObjectMonitor** free_head_p,
    L04:                                               ObjectMonitor** free_tail_p) {
    L05:    ObjectMonitor* cur_mid_in_use = NULL;
    L06:    ObjectMonitor* mid = NULL;
    L07:    ObjectMonitor* next = NULL;
    L08:    int deflated_count = 0;
    L09:    if ((mid = get_list_head_locked(list_p)) == NULL) {
    L10:      return 0;  // The list is empty so nothing to deflate.
    L11:    }
    L12:    next = unmarked_next(mid);
    L13:    while (true) {
    L14:      oop obj = (oop) mid->object();
    L15:      if (obj != NULL && deflate_monitor(mid, obj, free_head_p, free_tail_p)) {
    L16:        if (cur_mid_in_use == NULL) {
    L17:          Atomic::store(list_p, next);
    L18:        } else {
    L19:          cur_mid_in_use->set_next_om(next);
    L20:        }
    L21:        deflated_count++;
    L22:        Atomic::dec(count_p);
    L23:        mid->set_next_om(NULL);
    L24:      } else {
    L25:        om_unlock(mid);
    L26:        cur_mid_in_use = mid;
    L27:      }
    L28:      mid = next;
    L29:      if (mid == NULL) {
    L30:        break;  // Reached end of the list so nothing more to deflate.
    L31:      }
    L32:      om_lock(mid);
    L33:      next = unmarked_next(mid);
    L34:    }
    L35:    return deflated_count;
    L36:  }

Note: The above version of deflate_monitor_list() uses locking, but those changes were dropped during the code review cycle for JDK-8235795. The locking is only needed when additional calls to audit_and_print_stats() are used during debugging so it was decided that the pushed version would be simpler.

The above is not an exact copy of the code block from deflate_monitor_list(), but it is the highlights. What the above code block needs to do is pretty simple:

...

Since we're using the simpler lock-mid-as-we-go protocol, there are not too many details:

    • L09: locks the 'list_p' head (if it is not empty); 'mid' is the locked list head.
    • L12: 'next' is the unmarked next field from 'mid'.
    • L13-L33: We walk each 'mid' in the list and determine if it can be deflated:
      • L15: if 'mid' is associated with an object and can be deflated:
        • L16: if cur_mid_in_use is NULL, we're still processing the head of the in-use list so...
          • L17: we store 'next' into 'list_p' which switches the list head to 'next'.
        • else
          • L19: we set cur_mid_in_use's next field to 'next'.
        • L21 → L23: we've successfully extracted 'mid' from 'list_p's list so we increment 'deflated_count', decrement the counter referred to by 'count_p', set 'mid's next field to NULL and we're done.
          Note: 'mid' is the current tail in the 'free_head_p' list so we have to NULL terminate it (which also unlocks it).
      • L2[4-6]: else 'mid' can't be deflated so unlock 'mid' and advance 'cur_mid_in_use'.
      • L28: advance 'mid'.
      • L29 → L30: we reached the end of the list so break out of the loop.
      • L32: lock the new 'mid'
      • L33: and update 'next'; loop around and do it all again.
    • L35: all done so return 'deflated_count'.

...

ObjectSynchronizer::deflate_monitor_list_using_JT() is responsible for asynchronously deflating idle ObjectMonitors using a JavaThread. This function uses the more complicated lock-cur_mid_in_use-and-mid-as-we-go protocol because om_release() can do list deletions in parallel. We also lock-next-next-as-we-go to prevent an om_flush() that is behind this thread from passing us. Because this function can asynchronously interact with so many other functions, this is the largest clip of code:

    L01:  int ObjectSynchronizer::deflate_monitor_list_using_JT(ObjectMonitor** list_p,
    L02:                                                        int* count_p,
    L03:                                                        ObjectMonitor** free_head_p,
    L04:                                                        ObjectMonitor** free_tail_p,
    L05:                                                        ObjectMonitor** saved_mid_in_use_p) {
    L06:    JavaThread* self = JavaThread::current();
    L07:    ObjectMonitor* cur_mid_in_use = NULL;
    L08:    ObjectMonitor* mid = NULL;
    L09:    ObjectMonitor* next = NULL;
    L10:    ObjectMonitor* next_next = NULL;
    L11:    int deflated_count = 0;
    L12:    NoSafepointVerifier nsv;
    L13:    if (*saved_mid_in_use_p == NULL) {
    L14:      if ((mid = get_list_head_locked(list_p)) == NULL) {
    L15:        return 0;  // The list is empty so nothing to deflate.
    L16:      }
    L17:      next = unmarked_next(mid);
    L18:    } else {
    L19:      cur_mid_in_use = *saved_mid_in_use_p;
    L20:      om_lock(cur_mid_in_use);
    L21:      mid = unmarked_next(cur_mid_in_use);
    L22:      if (mid == NULL) {
    L23:        om_unlock(cur_mid_in_use);
    L24:        *saved_mid_in_use_p = NULL;
    L25:        return 0;  // The remainder is empty so nothing more to deflate.
    L26:      }
    L27:      om_lock(mid);
    L28:      next = unmarked_next(mid);
    L29:    }
    L30:    while (true) {
    L31:      if (next != NULL) {
    L32:        om_lock(next);
    L33:        next_next = unmarked_next(next);
    L34:      }
    L35:      if (mid->object() != NULL && mid->is_old() &&
    L36:          deflate_monitor_using_JT(mid, free_head_p, free_tail_p)) {
    L37:        if (cur_mid_in_use == NULL) {
    L38:          Atomic::store(list_p, next);
    L39:        } else {
    L40:          ObjectMonitor* locked_next = mark_om_ptr(next);
    L41:          cur_mid_in_use->set_next_om(locked_next);
    L42:        }
    L43:        deflated_count++;
    L44:        Atomic::dec(count_p);
    L45:        mid->set_next_om(NULL);
    L46:        mid = next;  // mid keeps non-NULL next's locked state
    L47:        next = next_next;
    L48:      } else {
    L49:        if (cur_mid_in_use != NULL) {
    L50:          om_unlock(cur_mid_in_use);
    L51:        }
    L52:        cur_mid_in_use = mid;
    L53:        mid = next;  // mid keeps non-NULL next's locked state
    L54:        next = next_next;
    L55:        if (SafepointMechanism::should_block(self) &&
    L56:            cur_mid_in_use != Atomic::load_acquire(list_p) && cur_mid_in_use->is_old()) {
    L57:          *saved_mid_in_use_p = cur_mid_in_use;
    L58:          om_unlock(cur_mid_in_use);
    L59:          if (mid != NULL) {
    L60:            om_unlock(mid);
    L61:          }
    L62:          return deflated_count;
    L63:        }
    L64:      }
    L65:      if (mid == NULL) {
    L66:        if (cur_mid_in_use != NULL) {
    L67:          om_unlock(cur_mid_in_use);
    L68:        }
    L69:        break;  // Reached end of the list so nothing more to deflate.
    L70:      }
    L71:    }
    L72:    *saved_mid_in_use_p = NULL;
    L73:    return deflated_count;
    L74:  }

The above is not an exact copy of the code block from deflate_monitor_list_using_JT(), but it is the highlights. What the above code block needs to do is pretty simple:

...

Since we're using the more complicated lock-cur_mid_in_use-and-mid-as-we-go protocol and also the lock-next-next-as-we-go protocol, there is a mind numbing amount of detail:

    • L1[3-7]: Handle the initial setup if we are not resuming after a safepoint or a handshake:
      • L14: locks the 'list_p' head (if it is not empty); 'mid' is the locked list head.
      • L17: 'next' is the unmarked next field from 'mid'.
    • L18-L28: Handle the initial setup if we are resuming after a safepoint or a handshake:
      • L20: lock 'cur_mid_in_use' and
      • L21: update 'mid'
      • L22-L25: If 'mid' == NULL, then we've resumed context at the end of the list so we're done.
      • L27: lock 'mid' and
      • L28: update 'next'
    • L30-L71: We walk each 'mid' in the list and determine if it can be deflated:
      • L3[1-3]: if next != NULL, then lock 'next' and update 'next_next'
      • L35 → L47: if 'mid' is associated with an object, 'mid' is old, and can be deflated:
        • L37: if cur_mid_in_use is NULL, we're still processing the head of the in-use list so...
          • L38: we store 'next' into 'list_p' which switches the list head to 'next'.
        • else
          • L40: make a locked copy of 'next'
          • L41: we set cur_mid_in_use's next field to 'locked_next'.
        • L43 → L45: we've successfully extracted 'mid' from 'list_p's list so we increment 'deflated_count', decrement the counter referred to by 'count_p', set 'mid's next field to NULL and we're done.
          Note: 'mid' is the current tail in the 'free_head_p' list so we have to NULL terminate it (which also unlocks it).
        • L46: advance 'mid' to 'next'.
          Note: 'mid' keeps a non-NULL 'next's locked state.
        • L47: advance 'next' to 'next_next'.
      • L48-L63: 'mid' can't be deflated so we have to carefully advance the list pointers:
        • L49-L50: if cur_mid_in_use != NULL, then unlock 'cur_mid_in_use'.
        • L52: advance 'cur_mid_in_use' to 'mid'.
          Note: 'mid' is still locked and 'cur_mid_in_use' keeps that state.
        • L53: advance 'mid' to 'next'.
          Note: A non-NULL 'next' is still locked and 'mid' keeps that state.
        • L54: advance 'next' to 'next_next'.
        • L55-L62: Handle a safepoint or a handshake if one has started and it is safe to do so.
      • L65-L69: we reached the end of the list:
        • L6[67]: if cur_mid_in_use != NULL, then unlock 'cur_mid_in_use'.
        • L69: break out of the loop because we are done
    • L72: not pausing for a safepoint or handshake so clear saved state.
    • L73: all done so return 'deflated_count'.

...

ObjectSynchronizer::deflate_idle_monitors() handles deflating idle monitors at a safepoint from the global in-use list using ObjectSynchronizer::deflate_monitor_list(). There are only a few things that are worth mentioning:

    • Atomic::load_acquire(&om_list_globals.in_use_list) is used to get the latest global in-use list.
    • Atomic::load_acquire(&om_list_globals.in_use_count) is used to get the latest global in-use count.
    • prepend_list_to_global_free_list(free_head_p, free_tail_p, deflated_count) is used to prepend the deflated ObjectMonitors on the global free list.

...

ObjectSynchronizer::deflate_common_idle_monitors_using_JT() handles asynchronously deflating idle monitors from either the global in-use list or a per-thread in-use list using ObjectSynchronizer::deflate_monitor_list_using_JT(). There are only a few things that are worth mentioning:

    • Atomic::load_acquire(&om_list_globals.in_use_count) is used to get the latest global in-use count.
    • Atomic::load_acquire(&target->om_in_use_count) is used to get the latest per-thread in-use count.
    • prepend_list_to_global_free_list(free_head_p, free_tail_p, local_deflated_count) is used to prepend the deflated ObjectMonitors on the global free list.

...

  • New diagnostic option '-XX:AsyncDeflateIdleMonitors' that is default 'true' so that the new mechanism is used by default, but it can be disabled for potential failure diagnosis.
  • ObjectMonitor deflation is still initiated or signaled as needed at a safepoint. When Async Monitor Deflation is in use, flags are set so that the work is done by the ServiceThread which offloads the safepoint cleanup mechanism.
    • Having the ServiceThread deflate a potentially long list of in-use monitors could potentially delay the start of a safepoint. This is detected in ObjectSynchronizer::deflate_monitor_list_using_JT() which will save the current state when it is safe to do so and return to its caller to drop locks as needed before honoring the safepoint request.
  • New diagnostic option '-XX:AsyncDeflationInterval' that defaults to 250 millis; this option controls how frequently we async deflate idle monitors when MonitorUsedDeflationThreshold is exceeded.
  • Everything else is just monitor list management, infrastructure, logging, debugging and the like. :-)
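For example (illustrative command lines, not from the patch; diagnostic options must first be unlocked with -XX:+UnlockDiagnosticVMOptions):

    java -XX:+UnlockDiagnosticVMOptions -XX:-AsyncDeflateIdleMonitors ...   # fall back to safepoint deflation
    java -XX:+UnlockDiagnosticVMOptions -XX:AsyncDeflationInterval=500 ...  # async deflate at most every 500 millis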

...

  • The existing safepoint deflation mechanism is still invoked at safepoint "cleanup" time when '-XX:AsyncDeflateIdleMonitors' is false or when a special cleanup request is made.
  • SafepointSynchronize::do_cleanup_tasks() calls:
    • ObjectSynchronizer::prepare_deflate_idle_monitors()
    • A ParallelSPCleanupTask is used to perform the tasks (possibly using parallel tasks):
      • A ParallelSPCleanupThreadClosure is used to perform the per-thread tasks:
        • ObjectSynchronizer::deflate_thread_local_monitors() to deflate per-thread idle monitors
      • ObjectSynchronizer::deflate_idle_monitors() to deflate global idle monitors
    •  ObjectSynchronizer::finish_deflate_idle_monitors()
  • If MonitorUsedDeflationThreshold is exceeded (default is 90%, 0 means off), then the ServiceThread will invoke a cleanup safepoint when '-XX:AsyncDeflateIdleMonitors' is false. When '-XX:AsyncDeflateIdleMonitors' is true, the ServiceThread will call ObjectSynchronizer::deflate_idle_monitors_using_JT().
    • This experimental flag was added in JDK10 via:

...

    • For this option, exceeded means:

   ((om_list_globals.population - om_list_globals.free_count) / om_list_globals.population) > NN%
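  For example, with om_list_globals.population == 10000 and om_list_globals.free_count == 500, usage is (10000 - 500) / 10000 = 95%, which exceeds the default MonitorUsedDeflationThreshold of 90%, so a deflation request would be made.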

  • If MonitorBound is exceeded (default is 0 which means off), cleanup safepoint will be induced.
  • For this option, exceeded means:

(om_list_globals.population - om_list_globals.free_count) > MonitorBound

  • This is a very difficult option to use correctly as it does not scale.


  • Changes to the safepoint deflation mechanism by the Async Monitor Deflation project (when async deflation is enabled):
    • If System.gc() is called, then a special deflation request is made which invokes the safepoint deflation mechanism.
    • Added the AsyncDeflationInterval diagnostic option (default 250 millis, 0 means off) to prevent MonitorUsedDeflationThreshold requests from swamping the ServiceThread.
      • Description: Async deflate idle monitors every so many milliseconds when MonitorUsedDeflationThreshold is exceeded (0 is off).
      • A special deflation request can cause an async deflation to happen sooner than AsyncDeflationInterval.
    • SafepointSynchronize::is_cleanup_needed() now calls:
      • ObjectSynchronizer::is_safepoint_deflation_needed() instead of ObjectSynchronizer::is_cleanup_needed().
      • is_safepoint_deflation_needed() returns true only if a special deflation request is made (see above).
    • SafepointSynchronize::do_cleanup_tasks() now (indirectly) calls:
      • ObjectSynchronizer::do_safepoint_work() instead of ObjectSynchronizer::deflate_idle_monitors().
      • do_cleanup_tasks() can be called for non deflation related cleanup reasons and that will still result in a call to do_safepoint_work().
    • ObjectSynchronizer::do_safepoint_work() only does the safepoint cleanup tasks if there is a special deflation request. Otherwise it just sets the is_async_deflation_requested flag and notifies the ServiceThread.
    • ObjectSynchronizer::deflate_idle_monitors() and ObjectSynchronizer::deflate_thread_local_monitors() do nothing unless there is a special deflation request.
  • Changes to the ServiceThread mechanism by the Async Monitor Deflation project (when async deflation is enabled):

    • The ServiceThread will wake up every GuaranteedSafepointInterval to check for cleanup tasks.

      • This allows is_async_deflation_needed() to be checked at the same interval.

    • The ServiceThread handles deflating global idle monitors and deflating the per-thread idle monitors by calling ObjectSynchronizer::deflate_idle_monitors_using_JT().

  • Other invocation changes by the Async Monitor Deflation project (when async deflation is enabled):

    • VM_Exit::doit_prologue() will request a special cleanup to reduce the noise in 'monitorinflation' logging at VM exit time.

    • Before the final safepoint in a non-System.exit() end to the VM, we will request a special cleanup to reduce the noise in 'monitorinflation' logging at VM exit time.

    • The following whitebox test functions will request a special cleanup:
      • WB_G1StartMarkCycle()

      • WB_FullGC()
      • WB_ForceSafepoint()

Gory Details

  • Counterpart function mapping for those that know the existing code:
    • ObjectSynchronizer class:
      • deflate_idle_monitors() has deflate_idle_monitors_using_JT(), deflate_global_idle_monitors_using_JT(), deflate_per_thread_idle_monitors_using_JT(), and deflate_common_idle_monitors_using_JT().
      • deflate_monitor_list() has deflate_monitor_list_using_JT()
      • deflate_monitor() has deflate_monitor_using_JT()
    • ObjectMonitor class:
      • clear() has clear_using_JT()
  • These functions recognize the Async Monitor Deflation protocol and adapt their operations:
    • ObjectMonitor::enter()
    • ObjectMonitor::EnterI()
    • ObjectMonitor::ReenterI()
    • ObjectSynchronizer::quick_enter()
    • ObjectSynchronizer::deflate_monitor()
    • Note: These changes include handling the lingering owner == DEFLATER_MARKER value.
  • Also these functions had to adapt and retry their operations:
    • ObjectSynchronizer::FastHashCode()
    • ObjectSynchronizer::current_thread_holds_lock()
    • ObjectSynchronizer::query_lock_ownership()
    • ObjectSynchronizer::get_lock_owner()
    • ObjectSynchronizer::monitors_iterate()
    • ObjectSynchronizer::inflate_helper()
    • ObjectSynchronizer::inflate()
  • Various assertions had to be modified to pass without their real check when AsyncDeflateIdleMonitors is true; this is due to the change in semantics for the ObjectMonitor owner field.
  • ObjectMonitor has a new allocation_state field that supports three states: 'Free', 'New', 'Old'. Async Monitor Deflation is only applied to ObjectMonitors that have reached the 'Old' state.
    • Note: Prior to CR1/v2.01/4-for-jdk13, the allocation state was transitioned from 'New' to 'Old' in deflate_monitor_using_JT(). This meant that deflate_monitor_using_JT() had to see an ObjectMonitor twice before deflating it. This policy was intended to prevent oscillation from 'New' → 'Old' and back again.
    • In CR1/v2.01/4-for-jdk13, the allocation state is transitioned from 'New' → 'Old' in inflate(). This makes ObjectMonitors available for deflation earlier. So far there have been no signs of oscillation from 'New' → 'Old' and back again.
  • ObjectMonitor has a new ref_count field that is used as part of the async deflation protocol and to indicate that an ObjectMonitor* is in use so the ObjectMonitor should not be deflated; this is needed for operations on non-busy monitors so that ObjectMonitor values don't change while they are being queried. There is a new ObjectMonitorHandle helper to manage the ref_count; see the sketch at the end of this list.
  • The ObjectMonitor::owner() accessor detects DEFLATER_MARKER and returns NULL in that case to minimize the places that need to understand the new DEFLATER_MARKER value.
  • System.gc()/JVM_GC() causes a special monitor list cleanup request which uses the safepoint based monitor list mechanism. So even if AsyncDeflateIdleMonitors is enabled, the safepoint based mechanism is still used by this special case.
    • This is necessary for those tests that do something to cause an object's monitor to be inflated, clear the only reference to the object and then expect that enough System.gc() calls will eventually cause the object to be GC'ed even when the thread never inflates another object's monitor. Yes, we have several tests like that. :-)
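The ObjectMonitorHandle usage referenced in the list above can be pictured with this minimal sketch; the method names (save_om_ptr(), om_ptr()) and the RAII-style release are illustrative assumptions, not necessarily the actual API:

    // Sketch (assumed API) of guarding an ObjectMonitor* with a ref_count:
    void query_example(oop obj, markWord mark) {
      ObjectMonitorHandle omh;
      if (omh.save_om_ptr(obj, mark)) {  // assumed: increments ref_count if
                                         // the ObjectMonitor still matches
        ObjectMonitor* monitor = omh.om_ptr();
        // Query monitor fields here; a positive ref_count tells an async
        // deflater thread to leave this ObjectMonitor alone.
      }
      // assumed: omh's destructor decrements the ref_count
    }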