Note: Updating the wiki for the upcoming CR7/v2.07/10-for-jdk14 review cycle. Changes have been made, but not yet sanity checked.
Table of Contents:
...
RFE: 8153224 Monitor deflation prolong safepoints
https://bugs.openjdk.java.net/browse/JDK-8153224
Full Webrev: http://cr.openjdk.java.net/~dcubed/8153224-webrev/910-for-jdk14.v2.0607.full/
Inc Webrev: http://cr.openjdk.java.net/~dcubed/8153224-webrev/910-for-jdk14.v2.0607.inc/
This patch for Async Monitor Deflation is based on Carsten Varming's
...
...
...
Sorry in advance for the sudden deep dive into really gory C2 details, but this is related to a major part of save_om_ptr() so this is the right place to talk about the complication.
As of CR7/v2.07/10-for-jdk14, we have added C2 inc_om_ref_count() on X64 to implement the ref_count management parts of save_om_ptr():
T-enter                                        ObjectMonitor                T-deflate
---------------------------------------------  +-------------------------+  ------------------------------------------
ObjectMonitor::enter() {                       | owner=DEFLATER_MARKER   |  deflate_monitor_using_JT() {
    <owner is contended>                       | ref_count=1             |    cmpxchg(DEFLATER_MARKER, &owner, NULL)
1>  EnterI() {                                 +-------------------------+  1> :
      if (owner == DEFLATER_MARKER &&                      ||               2> : <thread_stalls>
          cmpxchg(Self, &owner,                            \/                  :
                  DEFLATER_MARKER)             +-------------------------+     :
              == DEFLATER_MARKER) {            | owner=Self/T-enter      |     :
        // EnterI is done                      | ref_count=1             |     :
        return                                 +-------------------------+     :
      }                                                    ||                  :
    } // enter() is done                                   \/                  :
    ~OMH: atomic dec ref_count                 +-------------------------+     :
2>  : <does app work>                          | owner=Self/T-enter      |     :
3>  exit() monitor                             | ref_count=0             |  3> : <thread_resumes>
4>  owner = NULL                               +-------------------------+    prev = cmpxchg(-max_jint, &ref_count, 0)
                                                           ||                 if (prev == 0 &&
                                                           \/                      owner == DEFLATER_MARKER) {
                                               +-------------------------+      <deflate the monitor>
                                               | owner=Self/T-enter|NULL |    } else {
                                               | ref_count=0             |      cmpxchg(NULL, &owner, DEFLATER_MARKER)
                                               +-------------------------+      atomic add max_jint to ref_count
                                                                            4>   bailout on deflation
                                                                               }
                                                                             }
...
...
If the T-enter thread has managed to enter the monitor during the T-deflate stall, then our owner field A-B-A transition is:
NULL → DEFLATER_MARKER → Self/T-enter
so we really have A1-B-A2, but the A-B-A principle still holds.
If the T-enter thread has managed to enter and exit the monitor during the T-deflate stall, then our owner field A-B-A transition is:
NULL → DEFLATER_MARKER → Self/T-enter → NULL
so we really have A-B1-B2-A, but the A-B-A principle still holds.
T-enter finished doing app work and is about to exit the monitor (or it has already exited the monitor).
The fourth ObjectMonitor box shows the fields at this point and the "4>" markers show where each thread is at for that ObjectMonitor box.
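To make the protocol in the diagram concrete, here is a minimal standalone sketch of the T-deflate side using std::atomic. The Monitor type, the function name and the exact bailout ordering are illustrative assumptions, not the HotSpot code; it only shows the claim/force/check/bailout shape described above:

#include <atomic>
#include <climits>

struct Thread;
static Thread* const DEFLATER_MARKER = reinterpret_cast<Thread*>(-1);
static const int max_jint = INT_MAX;

struct Monitor {  // illustrative stand-in for ObjectMonitor
  std::atomic<Thread*> owner{nullptr};
  std::atomic<int> ref_count{0};
};

// Sketch of T-deflate: returns true if the monitor was deflated,
// false on a bailout like the "4>" box above.
bool deflate_monitor_using_JT_sketch(Monitor* m) {
  // Step 1: claim an unowned monitor.
  Thread* no_owner = nullptr;
  if (!m->owner.compare_exchange_strong(no_owner, DEFLATER_MARKER)) {
    return false;  // monitor is owned; cannot deflate
  }
  // Step 2: force ref_count from 0 to -max_jint so that a racing
  // increment still leaves ref_count negative.
  int zero = 0;
  bool forced = m->ref_count.compare_exchange_strong(zero, -max_jint);
  if (forced) {
    // Step 3: only deflate if no entering thread stole the owner field.
    if (m->owner.load() == DEFLATER_MARKER) {
      return true;  // success: finish deflation (elided)
    }
    m->ref_count.fetch_add(max_jint);  // undo the ref_count force
  }
  // Bailout: a T-enter thread won one of the races; undo the owner claim
  // (this cmpxchg fails harmlessly if T-enter already took ownership).
  Thread* marker = DEFLATER_MARKER;
  m->owner.compare_exchange_strong(marker, nullptr);
  return false;
}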
After T-deflate has won the race for deflating an ObjectMonitor it has to restore the header in the associated object. Of course another thread can be trying to do something to the object's header at the same time. Isn't asynchronous work exciting?!?!
ObjectMonitor::install_displaced_markword_in_object() is called from two places so we can have a race between a T-save thread and a T-deflate thread:
T-save object T-deflate
------------------------------------------- +-------------+ --------------------------------------------
install_displaced_markword_in_object() { | mark=om_ptr | install_displaced_markword_in_object() {
dmw = header() +-------------+ dmw = header()
if (!dmw->is_marked() && if (!dmw->is_marked() &&
dmw->hash() == 0) { dmw->hash() == 0) {
create marked_dmw create marked_dmw
dmw = cmpxchg(marked_dmw, &header, dmw) dmw = cmpxchg(marked_dmw, &header, dmw)
} }
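The reason this race is benign: both threads read the same dmw and build the same marked_dmw, so whichever cmpxchg wins installs the identical value and the loser simply observes it. A minimal standalone sketch of that idea (std::atomic, a hypothetical MonitorSketch type and low-bit mark encoding; not the HotSpot code, and the hash check is elided to a comment):

#include <atomic>
#include <cstdint>

struct MonitorSketch {
  std::atomic<uintptr_t> header{0};  // stand-in for the dmw header field
};

static const uintptr_t MARK_BIT = 1;  // assumed "marked" encoding

void install_displaced_markword_sketch(MonitorSketch* m) {
  uintptr_t dmw = m->header.load();
  if ((dmw & MARK_BIT) == 0 /* && dmw->hash() == 0, elided */) {
    uintptr_t marked_dmw = dmw | MARK_BIT;  // create marked_dmw
    // Both racers pass the same expected 'dmw' and the same 'marked_dmw',
    // so the header holds marked_dmw no matter which thread wins:
    m->header.compare_exchange_strong(dmw, marked_dmw);
  }
  // ... restore the (unmarked) dmw into the object's mark word (elided).
}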
...
Note: The above code snippet comes from ObjectSynchronizer::prepend_block_to_lists(); see that function for more complete context (and comments).
Note: In v2.06, L2 uses OrderAccess::load_acquire() and L3 uses OrderAccess::release_store(). David H. pointed out that a regular load and regular store can be used. I've made the change for the upcoming v2.07 and will remove this note when the project rolls forward to v2.07.
...
T1: Simple Take:                           |                                            | T2: Simple Prepend:
----------------                           | T1 and T3 see this initial list:           | -------------------
while (true) {                             |          +---+    +---+    +---+           | :
  cur = head;                              |  head -> | A | -> | X | -> | Y |           | :
  next = cur->next;                        |          +---+    +---+    +---+           | :
  :                                        | T3 takes "A", T2 sees this list:           | :
  :                                        |          +---+    +---+                    | :
  :                                        |  head -> | X | -> | Y |                    | :
  :                                        |          +---+    +---+                    | while (true) {
  :                                        | T2 prepends "B":                           |   cur = head;
  :                                        |          +---+    +---+    +---+           |   new->next = cur;
  :                                        |  head -> | B | -> | X | -> | Y |           |   if (cmpxchg(new, &head, cur) == cur) {
  :                                        |          +---+    +---+    +---+           |     break;
  :                                        | T3 prepends "A":                           |   }
  :                                        |          +---+    +---+    +---+    +---+  | }
  :                                        |  head -> | A | -> | B | -> | X | -> | Y |  |
  :                                        |          +---+    +---+    +---+    +---+  |
  :                                        | T1 takes "A", loses "B":                   |
  :                                        |          +---+                             |
  :                                        |          | B | ----+                       |
  :                                        |          +---+     |                       |
  :                                        |                    V                       |
  :                                        |          +---+    +---+                    |
  if (cmpxchg(next, &head, cur) == cur) {  |  head -> | X | -> | Y |                    |
    break;                                 |          +---+    +---+                    |
  }                                        |          +---+                             |
}                                          |  cur  -> | A |                             |
return cur;                                |          +---+                             |
So the simple algorithms are not sufficient when we allow simultaneous take and prepend operations.
Note: This subsection is talking about "Marking" as a solution to the A-B-A race in abstract terms. The purpose of this marking code and A-B-A example is to introduce the solution concepts. The code shown here is not an exact match for the project code.
One solution to the A-B-A race is to mark the next field in a node to indicate that the node is busy. Only one thread can successfully mark the next field in a node at a time and other threads must loop around and retry their marking operation until they succeed. Each thread that marks the next field in a node must unmark the next field when it is done with the node so that other threads can proceed.
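The take and prepend algorithms below call mark_next() and unmark_next(), whose definitions are not included in this excerpt. As a placeholder, here is a minimal sketch of one plausible encoding, a "marked" low bit in the node's next pointer, written with std::atomic; the project's real helpers may differ in detail:

#include <atomic>
#include <cstdint>

struct Node {
  std::atomic<uintptr_t> next{0};  // pointer bits | mark bit
};

static const uintptr_t MARK = 1;

// Try once to mark 'cur's next field; on success return true and pass
// back the unmarked next pointer via 'next_out'.
bool mark_next(Node* cur, Node** next_out) {
  uintptr_t old_next = cur->next.load();
  if (old_next & MARK) {
    return false;  // already marked by another thread
  }
  if (!cur->next.compare_exchange_strong(old_next, old_next | MARK)) {
    return false;  // lost the race to mark it
  }
  *next_out = reinterpret_cast<Node*>(old_next);
  return true;
}

// Clear the mark bit so other threads can use the node again.
void unmark_next(Node* cur) {
  cur->next.fetch_and(~MARK);
}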
Here's the take algorithm modified with marking (still ignores the empty list for clarity):
// "take" a node with marking:
while (true) {
cur = head;
if (!mark_next(cur, &next)) {
// could not mark cur so try again
continue;
}
if (head != cur) {
// head changed while marking cur so try again
unmark_next(cur);
continue;
}
// list head is now marked so switch it to next which also makes list head unmarked
OrderAccess::release_store(&head, next);
unmark_next(cur); // unmark cur and return it
return cur;
}
The modified take algorithm does not change the list head pointer until it has successfully marked the list head node. Notice that after we mark the list head node we have to verify that the list head pointer hasn't changed in the mean time. Only after we have verified that the node we marked is still the list head is it safe to modify the list head pointer. The marking of the list head prevents the take algorithm from executing in parallel with a prepend algorithm and losing a node.
Also notice that we update the list head pointer with release-store instead of with cmpxchg. Since we have the list head marked, we are not racing with other threads to change the list head pointer so we can use the smaller release-store hammer instead of the heavier cmpxchg hammer.
Here's the prepend algorithm modified with marking (ignores the empty list for clarity):
// "prepend" a node with marking:
while (true) {
  cur = head;
  if (!mark_next(cur, &next)) {
    // could not mark cur so try again
    continue;
  }
  if (head != cur) {
    // head changed while marking cur so try again
    unmark_next(cur);
    continue;
  }
  new->next = cur; // link 'new' to the current list head
  // list head is now marked so switch it to 'new' which also makes list head unmarked
  OrderAccess::release_store(&head, new);
  unmark_next(cur); // unmark the previous list head
  return;
}
The modified prepend algorithm does not change the list head pointer until it has successfully marked the list head node. Notice that after we mark the list head node we have to verify that the list head pointer hasn't changed in the mean time. Only after we have verified that the node we marked is still the list head is it safe to modify the list head pointer. The marking of the list head prevents the prepend algorithm from executing in parallel with the take algorithm and losing a node.
Also notice that we update the list head pointer with release-store instead of with cmpxchg for the same reasons as the previous algorithm.
The purpose of this subsection is to provide background information about how ObjectMonitors move between the various lists. This project changes the way these movements are implemented, but does not change the movements themselves. For example, newly allocated blocks of ObjectMonitors are always prepended to the global free list; this is true in the baseline and is true in this project.
ObjectMonitors are deflated at a safepoint by:
ObjectSynchronizer::deflate_monitor_list() calling ObjectSynchronizer::deflate_monitor()
And when Async Monitor Deflation is enabled, they are deflated by:
ObjectSynchronizer::deflate_monitor_list_using_JT() calling ObjectSynchronizer::deflate_monitor_using_JT()
Idle ObjectMonitors are deflated by the ServiceThread when Async Monitor Deflation is enabled. They can also be deflated at a safepoint by the VMThread or by a task worker thread. Safepoint deflation is used when Async Monitor Deflation is disabled or when there is a special deflation request made, e.g., System.gc().
An idle ObjectMonitor is deflated and extracted from its in-use list and prepended to the global free list. The in-use list can be either the global in-use list or a per-thread in-use list. Deflated ObjectMonitors are always prepended to the global free list.
It is now time to switch from algorithms to real snippets from the code.
The next case to consider for lock-free list management with the Java Monitor subsystem is prepending to a list that also allows deletes. As you might imagine, the possibility of a prepend racing with a delete makes things more complicated. The solution is to "mark" the next field in the ObjectMonitor at the head of the list we're trying to prepend to. A successful mark tells other prependers or deleters that the marked ObjectMonitor is busy and they will need to retry their own mark operation.
Note: This is the v2.06 version of code and associated notes:
L01: while (true) {
L02:   ObjectMonitor* cur = OrderAccess::load_acquire(list_p);
L03:   ObjectMonitor* next = NULL;
L04:   if (!mark_next(m, &next)) {
L05:     continue; // failed to mark next field so try it all again
L06:   }
L07:   set_next(m, cur); // m now points to cur (and unmarks m)
L08:   if (cur == NULL) {
L09:     // No potential race with other prependers since *list_p is empty.
L10:     if (Atomic::cmpxchg(m, list_p, cur) == cur) {
L11:       // Successfully switched *list_p to 'm'.
L12:       Atomic::inc(count_p);
L13:       break;
L14:     }
L15:     // Implied else: try it all again
L16:   } else {
L17:     // Try to mark next field to guard against races:
L18:     if (!mark_next(cur, &next)) {
L19:       continue; // failed to mark next field so try it all again
L20:     }
L21:     // We marked the next field so try to switch *list_p to 'm'.
L22:     if (Atomic::cmpxchg(m, list_p, cur) != cur) {
L23:       // The list head has changed so unmark the next field and try again:
L24:       set_next(cur, next);
L25:       continue;
L26:     }
L27:     Atomic::inc(count_p);
L28:     set_next(cur, next); // unmark next field
L29:     break;
L30:   }
L31: }
The above block of code can be called by multiple prependers in parallel or with deleters running in parallel and does not lose track of any ObjectMonitor. Of course, the "does not lose track of any ObjectMonitor" part is where all the details come in:
ObjectMonitor 'm' is safely on the list at the point that we have updated 'list_p' to refer to 'm'. In this subsection's block of code, we also called two new functions, mark_next() and set_next(), that are explained in the next subsection.
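For readers who want the shape of those helpers before that subsection, here is a hedged sketch of the two ObjectMonitor-flavored pieces the block above relies on, mark_om_ptr() and set_next(), again assuming a low-bit mark encoding with std::atomic stand-ins; the real definitions may differ:

#include <atomic>
#include <cstdint>

struct OM {  // illustrative stand-in for ObjectMonitor
  std::atomic<uintptr_t> _next_om{0};
};

static const uintptr_t OM_MARK = 1;

// Return 'om' with the mark bit set (a marked next-field value).
OM* mark_om_ptr(OM* om) {
  return reinterpret_cast<OM*>(reinterpret_cast<uintptr_t>(om) | OM_MARK);
}

// Store an unmarked next value; because the stored value carries no mark
// bit, this simultaneously "unmarks" m's next field.
void set_next(OM* m, OM* new_next) {
  m->_next_om.store(reinterpret_cast<uintptr_t>(new_next),
                    std::memory_order_release);
}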
Note: This is the v2.07 version of code and associated notes:
L01: while (true) {
L02:   (void)mark_next_loop(m); // mark m so we can safely update its next field
L03:   ObjectMonitor* cur = NULL;
L04:   ObjectMonitor* next = NULL;
L05:   // Mark the list head to guard against A-B-A race:
L06:   if (mark_list_head(list_p, &cur, &next)) {
L07:     // List head is now marked so we can safely switch it.
L08:     set_next(m, cur); // m now points to cur (and unmarks m)
L09:     OrderAccess::release_store(list_p, m); // Switch list head to unmarked m.
L10:     set_next(cur, next); // Unmark the previous list head.
L11:     break;
L12:   }
L13:   // The list is empty so try to set the list head.
L14:   assert(cur == NULL, "cur must be NULL: cur=" INTPTR_FORMAT, p2i(cur));
L15:   set_next(m, cur); // m now points to NULL (and unmarks m)
L16:   if (Atomic::cmpxchg(m, list_p, cur) == cur) {
L17:     // List head is now unmarked m.
L18:     break;
L19:   }
L20:   // Implied else: try it all again
L21: }
L22: Atomic::inc(count_p);
...
L01: if (from_per_thread_alloc) {
L02:   mark_list_head(&self->om_in_use_list, &mid, &next);
L03:   while (true) {
L04:     if (m == mid) {
L05:       if (Atomic::cmpxchg(next, &self->om_in_use_list, mid) != mid) {
L06:         ObjectMonitor* marked_mid = mark_om_ptr(mid);
L07:         Atomic::cmpxchg(next, &cur_mid_in_use->_next_om, marked_mid);
L08:       }
L09:       extracted = true;
L10:       Atomic::dec(&self->om_in_use_count);
L11:       set_next(mid, next);
L12:       break;
L13:     }
L14:     if (cur_mid_in_use != NULL) {
L15:       set_next(cur_mid_in_use, mid); // unmark cur_mid_in_use
L16:     }
L17:     cur_mid_in_use = mid;
L18:     mid = next;
L19:     next = mark_next_loop(mid);
L20:   }
L21: }
L22: prepend_to_om_free_list(self, m);
Note: In v2.07, I figured out a simpler way to do L05-L08:
L05:       if (cur_mid_in_use == NULL) {
L06:         OrderAccess::release_store(&self->om_in_use_list, next);
L07:       } else {
L08:         OrderAccess::release_store(&cur_mid_in_use->_next_om, next);
L09:       }
...
Most of the above code block extracts 'm' from self's in-use list; it is not an exact quote from om_release(), but it is the highlights:
The last line of the code block (L22) prepends 'm' to self's free list.
...
L01: int ObjectSynchronizer::deflate_monitor_list(ObjectMonitor* volatile * list_p,
L02:                                              int volatile * count_p,
L03:                                              ObjectMonitor** free_head_p,
L04:                                              ObjectMonitor** free_tail_p) {
L05:   ObjectMonitor* cur_mid_in_use = NULL;
L06:   ObjectMonitor* mid = NULL;
L07:   ObjectMonitor* next = NULL;
L08:   int deflated_count = 0;
L09:   if (!mark_list_head(list_p, &mid, &next)) {
L10:     return 0; // The list is empty so nothing to deflate.
L11:   }
L12:   while (true) {
L13:     oop obj = (oop) mid->object();
L14:     if (obj != NULL && deflate_monitor(mid, obj, free_head_p, free_tail_p)) {
L15:       if (Atomic::cmpxchg(next, list_p, mid) != mid) {
L16:         Atomic::cmpxchg(next, &cur_mid_in_use->_next_om, mid);
L17:       }
L18:       deflated_count++;
L19:       Atomic::dec(count_p);
L20:       set_next(mid, NULL);
L21:       mid = next;
L22:     } else {
L23:       set_next(mid, next); // unmark next field
L24:       cur_mid_in_use = mid;
L25:       mid = next;
L26:     }
L27:     if (mid == NULL) {
L28:       break; // Reached end of the list so nothing more to deflate.
L29:     }
L30:     next = mark_next_loop(mid);
L31:   }
L32:   return deflated_count;
L33: }
Note: In v2.07, I figured out a simpler way to do L15-L16:
L15:       if (cur_mid_in_use == NULL) {
L16:         OrderAccess::release_store(list_p, next);
L17:       } else {
L18:         OrderAccess::release_store(&cur_mid_in_use->_next_om, next);
L19:       }
...
The above is not an exact copy of the code block from deflate_monitor_list(), but it is the highlights. What the above code block needs to do is pretty simple:
...
...
L01: int ObjectSynchronizer::deflate_monitor_list_using_JT(ObjectMonitor* volatile * list_p,
L02:                                                       int volatile * count_p,
L03:                                                       ObjectMonitor** free_head_p,
L04:                                                       ObjectMonitor** free_tail_p,
L05:                                                       ObjectMonitor** saved_mid_in_use_p) {
L06:   ObjectMonitor* cur_mid_in_use = NULL;
L07:   ObjectMonitor* mid = NULL;
L08:   ObjectMonitor* next = NULL;
L09:   ObjectMonitor* next_next = NULL;
L10:   int deflated_count = 0;
L11:   if (*saved_mid_in_use_p == NULL) {
L12:     if (!mark_list_head(list_p, &mid, &next)) {
L13:       return 0; // The list is empty so nothing to deflate.
L14:     }
L15:   } else {
L16:     cur_mid_in_use = *saved_mid_in_use_p;
L17:     mid = mark_next_loop(cur_mid_in_use);
L18:     if (mid == NULL) {
L19:       set_next(cur_mid_in_use, NULL); // unmark next field
L20:       *saved_mid_in_use_p = NULL;
L21:       return 0; // The remainder is empty so nothing more to deflate.
L22:     }
L23:     next = mark_next_loop(mid);
L24:   }
L25:   while (true) {
L26:     if (next != NULL) {
L27:       next_next = mark_next_loop(next);
L28:     }
L29:     if (mid->object() != NULL && mid->is_old() &&
L30:         deflate_monitor_using_JT(mid, free_head_p, free_tail_p)) {
L31:       if (Atomic::cmpxchg(next, list_p, mid) != mid) {
L32:         ObjectMonitor* marked_mid = mark_om_ptr(mid);
L33:         ObjectMonitor* marked_next = mark_om_ptr(next);
L34:         Atomic::cmpxchg(marked_next, &cur_mid_in_use->_next_om, marked_mid);
L35:       }
L36:       deflated_count++;
L37:       Atomic::dec(count_p);
L38:       set_next(mid, NULL);
L39:       mid = next; // mid keeps non-NULL next's marked next field
L40:       next = next_next;
L41:     } else {
L42:       if (cur_mid_in_use != NULL) {
L43:         set_next(cur_mid_in_use, mid); // unmark cur_mid_in_use
L44:       }
L45:       cur_mid_in_use = mid;
L46:       mid = next; // mid keeps non-NULL next's marked next field
L47:       next = next_next;
L48:       if (SafepointSynchronize::is_synchronizing() &&
L49:           cur_mid_in_use != OrderAccess::load_acquire(list_p) &&
L50:           cur_mid_in_use->is_old()) {
L51:         *saved_mid_in_use_p = cur_mid_in_use;
L52:         set_next(cur_mid_in_use, mid); // unmark cur_mid_in_use
L53:         if (mid != NULL) {
L54:           set_next(mid, next); // unmark mid
L55:         }
L56:         return deflated_count;
L57:       }
L58:     }
L59:     if (mid == NULL) {
L60:       if (cur_mid_in_use != NULL) {
L61:         set_next(cur_mid_in_use, mid); // unmark cur_mid_in_use
L62:       }
L63:       break; // Reached end of the list so nothing more to deflate.
L64:     }
L65:   }
L66:   *saved_mid_in_use_p = NULL;
L67:   return deflated_count;
L68: }
Note: In v2.07, I figured out a simpler way to do L31-L34:
L31:       if (cur_mid_in_use == NULL) {
L32:         OrderAccess::release_store(list_p, next);
L33:       } else {
L34:         ObjectMonitor* marked_next = mark_om_ptr(next);
L35:         OrderAccess::release_store(&cur_mid_in_use->_next_om, marked_next);
L36:       }
The line numbers in the analysis below are still for the v2.06 version and will be updated when we roll the project forward to v2.07.
...
...
...
...
((g_om_population - g_om_free_count) / g_om_population) > NN%
(g_om_population - g_om_free_count) > MonitorBound
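As a concrete illustration of these two triggers, here is a hedged sketch of the check in standalone C++; NN's value and the counter plumbing are simplified assumptions, not the project's exact code:

// Illustrative globals mirroring the names in the formulas above:
static int g_om_population = 0;  // total allocated ObjectMonitors
static int g_om_free_count = 0;  // ObjectMonitors on the global free list
static int MonitorBound = 0;     // 0 means "no bound configured"
static const int NN = 90;        // percentage threshold (assumed value)

bool is_async_deflation_needed_sketch() {
  int monitor_usage = g_om_population - g_om_free_count;  // in-use count
  // Trigger 1: in-use percentage of the population exceeds NN%.
  if (g_om_population > 0 && monitor_usage * 100 > g_om_population * NN) {
    return true;
  }
  // Trigger 2: in-use count exceeds the configured MonitorBound.
  if (MonitorBound > 0 && monitor_usage > MonitorBound) {
    return true;
  }
  return false;
}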
Changes to the ServiceThread mechanism by the Async Monitor Deflation project (when async deflation is enabled):
The ServiceThread will wake up every GuaranteedSafepointInterval to check for cleanup tasks.
This allows is_async_deflation_needed() to be checked at the same interval.
The ServiceThread handles deflating global idle monitors and deflating the per-thread idle monitors by calling ObjectSynchronizer::deflate_idle_monitors_using_JT().
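A hedged sketch of that polling behavior (standard C++ stand-ins; the real ServiceThread waits on Service_lock for work, so this only illustrates the periodic wakeup/check cycle described above):

#include <chrono>
#include <condition_variable>
#include <mutex>

extern bool is_async_deflation_needed_sketch();          // from the sketch above
void deflate_idle_monitors_using_JT_sketch() { /* deflation elided */ }

void service_thread_loop_sketch(int GuaranteedSafepointInterval_ms) {
  std::mutex m;
  std::condition_variable cv;
  while (true) {
    {
      std::unique_lock<std::mutex> lock(m);
      // The timeout gives the periodic wakeup used to re-check the
      // deflation heuristic even when no other work arrives.
      cv.wait_for(lock,
                  std::chrono::milliseconds(GuaranteedSafepointInterval_ms));
    }
    if (is_async_deflation_needed_sketch()) {
      deflate_idle_monitors_using_JT_sketch();
    }
  }
}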
Other invocation changes by the Async Monitor Deflation project (when async deflation is enabled):
VM_Exit::doit_prologue() will request a special cleanup to reduce the noise in 'monitorinflation' logging at VM exit time.
Before the final safepoint in a non-System.exit() end to the VM, we will request a special cleanup to reduce the noise in 'monitorinflation' logging at VM exit time.
...