by the nature of the application to be supervised, as shown in the following.
2. The number of bytecode instructions until the next conditional C is executed that checks whether the consumption variable has reached or exceeded zero. This value is bounded by the number of bytecode instructions on the longest execution path between two such conditionals C. The worst case is a method M that has a series of MAXPATH invocations of a leaf method L. We assume that L has MAXPATH − 1 bytecode instructions, no JVM subroutines, and no loops. M will have the conditional C in the beginning and after each segment of MAXPATH instructions, while C does not occur in L. During the execution of M, C is reached every MAXPATH ∗ (MAXPATH − 1) instructions, i.e., within fewer than MAXPATH² instructions.
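This worst case can be made concrete with a small simulation (a hypothetical Java sketch; the method names and the small value of MAXPATH are ours, chosen only to keep the run tractable — the text uses MAXPATH = 2¹⁵):

```java
// Hypothetical simulation of the worst case described above: M performs
// MAXPATH invocations of a leaf method L (MAXPATH - 1 instructions, no
// conditional C); C is reached only at M's segment boundaries.
public class WorstCaseDelay {
    static final int MAXPATH = 8;   // illustrative value, not the real 2^15
    static int sinceCheck = 0;      // instructions executed since the last C

    // Leaf method L: MAXPATH - 1 accounted instructions, no conditional C.
    static void leaf() {
        sinceCheck += MAXPATH - 1;
    }

    // One segment of M: MAXPATH invocations of L, then the conditional C.
    static int runSegment() {
        for (int i = 0; i < MAXPATH; i++) {
            sinceCheck++;           // the invocation instruction in M itself
            leaf();
        }
        int betweenChecks = sinceCheck;  // conditional C is reached here
        sinceCheck = 0;
        return betweenChecks;
    }

    public static void main(String[] args) {
        // The gap between two executions of C is bounded by MAXPATH^2.
        System.out.println(runSegment() + " <= " + (MAXPATH * MAXPATH));
    }
}
```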
Considering these two factors, in the worst case the triggerConsume() method of ThreadCPUAccount (which in turn invokes the consume(long) method of CPUManager) will be invoked after each MAXDELAY = (2³¹ − 1) + MAXPATH² executed bytecode instructions. If MAXPATH = 2¹⁵, the int counter consumption in ThreadCPUAccount will not overflow, because the initial counter value is -granularity (a negative value) and it will not exceed 2³⁰ (i.e., MAXPATH²), well below Integer.MAX_VALUE. Using current hardware and a state-of-the-art JVM, the execution of 2³² bytecode instructions may take only a fraction of a second,7 of course depending on the complexity of the executed instructions.
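The counter logic described here can be sketched as follows (hypothetical Java: ThreadCPUAccount, triggerConsume(), and CPUManager.consume(long) are named in the text, but the field names, the example granularity, and the method bodies are our assumptions):

```java
// Hypothetical sketch of the accounting counter. Only the class and method
// names come from the text; everything else is assumed for illustration.
public class ThreadCPUAccountSketch {
    static class CPUManager {
        long total;                          // aggregated consumption
        void consume(long c) { total += c; } // called by triggerConsume()
    }

    static final int GRANULARITY = 1 << 20;  // example accounting granularity
    final CPUManager manager = new CPUManager();
    int consumption = -GRANULARITY;          // counter starts at -granularity

    // Invoked by the inserted conditional C once consumption >= 0. The
    // overshoot above zero is bounded by MAXPATH^2 = 2^30, so the int
    // counter stays well below Integer.MAX_VALUE.
    void triggerConsume() {
        manager.consume((long) GRANULARITY + consumption);
        consumption = -GRANULARITY;          // reset for the next period
    }
}
```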
For a component with n concurrent threads, in total at most n ∗ MAXDELAY bytecode instructions are executed before a thread triggers the consume(long c) method. If the number of threads in a component can be high, the accounting granularity may be reduced in order to achieve finer-grained management. However, as this delay is not influenced by the accounting granularity alone, it may also be necessary to use a smaller value for MAXPATH during the rewriting.
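A quick back-of-the-envelope computation of this bound, with the constants from the text (the helper method name is ours):

```java
// Restates the bound above: with n concurrent threads, at most
// n * MAXDELAY bytecode instructions are executed before some thread
// invokes consume(long). Constants follow the text; worstCaseDelay is
// merely an illustrative helper.
public class DelayBound {
    static final long MAXPATH  = 1L << 15;                       // 2^15
    static final long MAXDELAY = ((1L << 31) - 1) + MAXPATH * MAXPATH;

    static long worstCaseDelay(int nThreads) {
        return nThreads * MAXDELAY;
    }

    public static void main(String[] args) {
        System.out.println(MAXDELAY);           // (2^31 - 1) + 2^30
        System.out.println(worstCaseDelay(10)); // bound for 10 threads
    }
}
```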
An interesting measurement we made was to determine the impact of the choice of granularity. We used the ‘compress’ program of the SPEC JVM98 benchmark suite to this end. As shown in Figure 6, the lower the granularity, the higher the overhead and the more frequent the management actions. In our current implementation, and on the given computer, this interval is not
7 On our test machine (see Section 4.1) this is made possible on some favorable code segments by the combination of an efficient just-in-time Java compiler and a CPU architecture taking advantage of implicit instruction-level parallelism.