This page is under construction...
For Sumatra targets, we wanted to experiment with infrastructure for supporting deoptimization, that is, transferring execution from the compiled code running on the GPU back to the equivalent bytecodes being run through the interpreter on the CPU. Some reasons for this:
- as a way of handling certain, hopefully rare, events, such as throwing exceptions, by transferring them back to the CPU, since they might be difficult to implement in the GPU language. (This relies on the fact that the interpreter can handle anything.) If profiling shows that such events are not actually rare, the particular lambda is probably not a good candidate for offload. See the first sketch after this list.
- compiled code running on the GPU might reach a point where it needs the CPU to do something before the GPU can make further progress. For example, if we support heap allocation on the GPU, we could reach a point where no new object can be allocated until a GC happens. If the target does not have an easy way to spin and wait for the CPU to do the GC, one way to handle this is to deoptimize: the interpreter lets the GC happen, then finishes the allocation and continues from that point. See the second sketch after this list.
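As a concrete illustration of the first case, here is a minimal sketch in plain Java 8 Streams. It assumes a Sumatra-enabled JVM would offload the parallel forEach to the GPU; the class and variable names are purely illustrative, and the offload/deoptimization machinery itself is not shown.

```java
import java.util.stream.IntStream;

public class ExceptionDeoptExample {
    public static void main(String[] args) {
        int[] in  = new int[1024];
        int[] out = new int[1024];
        int[] indexMap = new int[1024];   // indices are valid...
        indexMap[1000] = in.length;       // ...except one, for illustration

        // Under a Sumatra-enabled JVM this parallel forEach is a candidate
        // for GPU offload. The indexed load through indexMap can (rarely)
        // go out of bounds. Rather than implementing Java exception
        // semantics in the GPU kernel, the work-item that hits the bad
        // index would deoptimize back to the interpreter on the CPU, which
        // raises the ArrayIndexOutOfBoundsException through the normal path.
        IntStream.range(0, in.length).parallel()
                 .forEach(i -> out[i] = in[indexMap[i]] * 2);
    }
}
```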
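The second case can be illustrated the same way: a lambda that allocates an object per element. Again this is only a sketch of the kind of code involved; whether and how allocation happens on the GPU, and when a work-item deoptimizes to wait for GC, is up to the Sumatra runtime.

```java
import java.util.stream.IntStream;

public class AllocDeoptExample {
    // A simple result holder; allocating these per element is the
    // interesting part.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        Point[] points = new Point[1 << 20];

        // Each work-item allocates a new Point. Under a runtime that
        // supports heap allocation from the GPU, a work-item may find no
        // space is left until a GC runs. Rather than spinning on the GPU
        // waiting for the CPU, that work-item could deoptimize: the
        // interpreter lets the GC happen, finishes the allocation, and
        // continues from that point in the bytecodes.
        IntStream.range(0, points.length).parallel()
                 .forEach(i -> points[i] = new Point(i, i * 2));
    }
}
```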