





ter storing the message arguments and prior to setting the message tag.

4.2 Channel teardown

As is apparent from Figure 3, channel endpoints are partially shared, since the sender needs to access the peer endpoint to store message tags and message arguments, as well as to signal the waithandle. In fact, the cross-linked endpoint structure forming a channel is the sole mechanism for transferring information from one process to another.

In order to implement the desired failure semantics, where sends can occur as long as the sending endpoint is not closed, we need to keep the channel data structure alive as long as at least one of the two parties is still using the channel. We thus reference-count the endpoint structures. The ref-count starts at 2 for each endpoint, since there are exactly two references to it: one from the user process that owns it, and one from its peer endpoint. The reference count never goes beyond two. When one side deletes an endpoint, we decrement its reference count and also that of its peer. When the reference count of an endpoint drops to 0, its memory and the associated waithandle are reclaimed. This scheme has the advantage that beyond the atomic decrement, no thread synchronization is needed during sends/receives or tear-down.
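A minimal sketch of this teardown scheme, using hypothetical names (Endpoint, link, close, release) rather than the actual Singularity implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the reference-counting teardown scheme.
final class Endpoint {
    // Two references per endpoint: one from the owning process,
    // one back-reference from the peer endpoint.
    final AtomicInteger refCount = new AtomicInteger(2);
    Endpoint peer;        // set when the channel is created
    boolean reclaimed;    // stands in for freeing memory + waithandle

    static void link(Endpoint a, Endpoint b) { a.peer = b; b.peer = a; }

    // Called when a process deletes its endpoint: drop our process
    // reference and the peer's back-reference to us.
    void close() {
        release(this);
        release(peer);
    }

    private static void release(Endpoint e) {
        if (e.refCount.decrementAndGet() == 0) {
            e.reclaimed = true; // reclaim memory and associated waithandle
        }
    }
}
```

After one side closes, both counts sit at 1 and the structure stays alive; only when the second side closes do both counts reach 0 and both endpoints get reclaimed, with no lock taken on any path.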

4.3 Block accounting

Normally terminating processes are guaranteed to free all exchange heap memory owned by the process by virtue of the static verification that accounts for every block manipulated and thus prevents such blocks from leaking. However, a system needs to be able to shut down a process abruptly without letting the process clean up. Thus, in order to reclaim the blocks in the exchange heap that the process owned at the moment of its abrupt demise, the system must be able to find these blocks.

Our implementation uses a short header per block in the exchange heap containing its size and the owning process id. The size is used for vector operations in determining how many elements are in the vector. The owning process id is maintained by channel send operations as follows: when sending on an endpoint e, we determine the owning process pid of the peer f of e. Note that there is no race in determining the receiver's pid, since the invariant discussed in Section 2.6 guarantees that an endpoint that is about to receive cannot be sent itself and thus its owning process cannot change. Once the receiver's identity is known, the sending thread changes the owning process id of all message arguments recursively. This traversal code is generated automatically by the compiler.

Additionally, the memory manager for exchange heap blocks keeps a list of all used blocks. The list is traversed by a background clean-up thread to reclaim blocks owned by nonexistent processes.
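The block header, the ownership transfer on send, and the background sweep can be sketched roughly as follows; the names (ExHeap, Block, transferOwnership, sweep) are illustrative assumptions, not the actual Singularity interfaces:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

// Illustrative sketch of exchange-heap block accounting. Only the two
// header fields (size, owner pid) come from the text.
final class ExHeap {
    static final class Block {
        int size;      // element count, used for vector operations
        int ownerPid;  // process that currently owns the block
        Block(int size, int ownerPid) { this.size = size; this.ownerPid = ownerPid; }
    }

    // The memory manager's list of all used blocks.
    final List<Block> usedBlocks = new ArrayList<>();

    // On send, the sending thread retags every message argument with the
    // receiver's pid (the compiler generates the recursive traversal;
    // this sketch shows a single block only).
    static void transferOwnership(Block b, int receiverPid) {
        b.ownerPid = receiverPid;
    }

    // Background clean-up: reclaim blocks owned by nonexistent processes.
    // Returns the number of blocks reclaimed.
    int sweep(Set<Integer> livePids) {
        int reclaimed = 0;
        for (Iterator<Block> it = usedBlocks.iterator(); it.hasNext(); ) {
            if (!livePids.contains(it.next().ownerPid)) {
                it.remove();
                reclaimed++;
            }
        }
        return reclaimed;
    }
}
```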

4.4 Switch receive

For each switch receive statement, the compiler generates a static 2-dimensional integer array representing the message patterns that make up the cases. The matrix has k rows and n columns, where k is the number of cases in the switch, and n is the number of distinct endpoints used over all the cases. The matrix contains a non-zero entry in position (i, j) if the ith case contains a pattern for message m on the jth endpoint. The entry's value is the tag for message m. Each row thus indicates which messages must be received on which endpoints for that case to be enabled. Given this pattern matrix, the compiler compiles the switch receive statement into the following schematic code:

    Endpoint[] endpoints = new Endpoint[] { ep1, ..., epn };
    int enabled_case = Endpoint.Select(pattern, endpoints);
    switch (enabled_case) {
    case 0:
        // declare locals for bound message arguments
        // receive messages for case 0 on endpoints
        ...
        break;
    case 1:
        // declare locals for bound message arguments
        // receive messages for case 1 on endpoints
        ...
        break;
    ...
    }



To make this more concrete, let's see how the following example is compiled:

    switch receive {
    case a.M(int x) && b.Q(char[] in ExHeap vec):
        block0

    case b.R(int y):
        block1

    case a.ChannelClosed():
        block2

    case b.ChannelClosed():
        block3
    }




The patterns mention 2 endpoints a and b. Assuming the compiler numbers them in that order, the pattern matrix for the switch is:

                a        b
    case 0    M tag    Q tag
    case 1      0      R tag
    case 2     -1        0
    case 3      0       -1

The special ChannelClosed patterns are given tag -1. The code generated is shown in Listing 2. On line 1, we construct an array containing the endpoints involved in the pattern.² Then we call a static method Select, passing the pattern matrix and the endpoint array. This call blocks until either a case is satisfied, returning the case number, or until it determines that none of the cases can be satisfied, returning -1. Note that the RecvX calls on lines 7, 8, and 13 will not block, since the Select call only returns that case if the necessary messages are present at the head of the corresponding endpoints.

The implementation of Select is relatively simple. For each row of the pattern matrix, it queries each endpoint for which there is a non-zero entry in the row for the status of the endpoint. The endpoint returns either the tag of the first

² Our implementation avoids the allocation by reusing a thread-local array after the first switch receive of a given size.
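The matching step inside Select can be sketched as below. This is a deliberate simplification under stated assumptions: headTags[j] stands in for querying endpoint j's status (with 0 meaning "no pending message"), 0 in the matrix means "no pattern", and the real Select additionally blocks on the endpoints' waithandles when no case is currently ready:

```java
// Illustrative sketch of Select's case matching. A case is enabled when,
// for every non-zero entry in its row, the corresponding endpoint's
// first pending message carries exactly that tag.
final class Matcher {
    // pattern[i][j]: required tag for case i on endpoint j (0 = no pattern).
    // headTags[j]: tag at the head of endpoint j's queue (0 = queue empty).
    static int selectCase(int[][] pattern, int[] headTags) {
        for (int i = 0; i < pattern.length; i++) {
            boolean enabled = true;
            for (int j = 0; j < pattern[i].length; j++) {
                if (pattern[i][j] != 0 && pattern[i][j] != headTags[j]) {
                    enabled = false;
                    break;
                }
            }
            if (enabled) return i; // first satisfied case wins
        }
        return -1; // no case satisfiable; the real Select would block here
    }
}
```

Running this against the example's matrix (with hypothetical tags M=1, Q=2, R=3, and -1 for ChannelClosed) picks case 0 when both an M and a Q are pending, case 1 when only an R is pending on b, and so on.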
