[seL4] Capability unwrapping

Norman Feske norman.feske at genode-labs.com
Fri Feb 13 23:36:13 EST 2015


>> In principle the reasonable limit is 1. If a sender can transfer 1 cap
>> per message, it can transfer any number of caps by sending a lot of
>> messages.
> that is true. But I’m not sure that sending 3 caps in 3 messages
> (plus appropriate protocols) is in all circumstances semantically
> equivalent to sending 3 caps in one message.
> An example of 1+1<2 is round-trip IPC. One might argue that providing
> two IPC operations (send+receive) in a single system call is
> unnecessary, as you could simply make two system calls. Turns out,
> the functionality of the combined IPC cannot be implemented with two
> individual IPCs, for two reasons:
> 1) a call-type IPC creates the one-shot reply cap, which allows the
> client to provide the server with a reply channel without trusting
> it. This could probably be sort-of modelled without the combined
> IPC, but would require many system calls (creating an endpoint,
> send with EP transfer, wait, destroy EP) and at best would be
> really expensive
> 2) the combined IPC is atomic, guaranteeing that the sender is ready
> to receive as soon as the send part is done. With two calls, the
> sender could be preempted between send and receive phase. With the
> forthcoming new real-time scheduling model, this is even more
> important, as it allows the server to run exclusively on
> client-provided time, which it couldn’t do if it had to do an
> explicit wait after the reply (it has no budget to run on after the
> reply completes).
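The reply-cap emulation described in point 1 above can be sketched as a toy model. This is not the seL4 API; `Endpoint`, `emulated_call`, and `server_handle_one` are hypothetical names used only to illustrate that a call-type IPC decomposes into four separate steps (each a syscall on a real kernel) instead of one atomic operation:

```python
import threading
from queue import Queue

class Endpoint:
    """Toy model of an IPC endpoint: a message queue plus a validity flag."""
    def __init__(self):
        self.queue = Queue()
        self.valid = True

def emulated_call(server_ep, request):
    """Emulate a call-type IPC without kernel support: four separate steps
    instead of one atomic call operation."""
    reply_ep = Endpoint()                      # 1. create a fresh one-shot endpoint
    server_ep.queue.put((request, reply_ep))   # 2. send request, delegating the reply EP
    reply = reply_ep.queue.get()               # 3. wait for the reply
    reply_ep.valid = False                     # 4. destroy the endpoint (one-shot)
    return reply

def server_handle_one(server_ep):
    """Server side: receive one request and answer via the delegated reply
    channel. It needs no capability to the client itself, only the reply EP."""
    request, reply_ep = server_ep.queue.get()
    reply_ep.queue.put(request * 2)            # some computation on the request

server_ep = Endpoint()
t = threading.Thread(target=server_handle_one, args=(server_ep,))
t.start()
result = emulated_call(server_ep, 21)
t.join()
```

The real combined IPC collapses steps 1-4 into a single atomic kernel operation, which is both cheaper and closes the preemption window between send and receive.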

There is another fundamental problem with issuing RPC calls in a
non-atomic way: Servers do not want to trust their clients. Let me
elaborate on this by looking merely at the send phase of an RPC call
(which we call an "RPC request" in Genode).

If a client were to split an RPC request into multiple seL4 IPCs, the
first IPC would presumably carry the number of subsequent IPCs that
belong to the same RPC request. The server would need to wait for the
arrival of all parts before processing the RPC function. While the
server is waiting for one of the subsequent IPCs, another client could
issue an RPC request. What would the server do? There are two principal
options: (1) blocking RPC requests of other clients until all parts of
the current RPC request have arrived, or (2) keeping track of the state
of each RPC request per client. Both options are futile.

(1) The server could stall RPC requests by other clients by performing a
closed wait for IPCs coming from the initiator of the current RPC
request. (Side note: as far as I know, such a closed wait is not
possible on seL4.) The kernel would block all other clients that try to
issue an IPC to the endpoint until the server performs an open wait the
next time.

Unfortunately, this approach puts the availability of the server at the
whim of every single client. A misbehaving client could issue an RPC
request that normally consists of two parts but deliver only the first
part. The server would stay in the closed wait indefinitely, and all
other clients would block forever. This is unacceptable.
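This failure mode can be illustrated with a toy model (hypothetical `Server` class, not seL4 code): once the server enters a closed wait for the sender of an incomplete request, a request from any other client is blocked, and a malicious client that never delivers the second part blocks everyone forever.

```python
class Server:
    """Toy server performing a closed wait: once part 1 of a two-part
    request arrives, it accepts further IPCs only from that same sender."""
    def __init__(self):
        self.closed_to = None      # the sender we are exclusively waiting for
        self.pending_part = {}     # sender -> first part of its request
        self.served = []           # completed requests

    def deliver(self, sender, part, payload):
        """Return True if the IPC is accepted, False if the kernel would
        block this sender because the server is in a closed wait."""
        if self.closed_to is not None and sender != self.closed_to:
            return False
        if part == 1:
            self.pending_part[sender] = payload
            self.closed_to = sender            # closed wait for part 2
        else:
            self.served.append(self.pending_part.pop(sender) + payload)
            self.closed_to = None              # back to an open wait
        return True

server = Server()
# A malicious client delivers only the first part of a two-part request ...
server.deliver("mallory", 1, "evil-")
# ... so an honest client's request is blocked indefinitely:
alice_ok = server.deliver("alice", 1, "hello-")
```

In the model, `alice_ok` is `False` and `server.served` stays empty for as long as "mallory" withholds the second part.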

(2) The server could accept incoming IPCs from multiple clients but
would keep a state machine for each client. The state machine would
track the completion of an individual RPC request. E.g., after receiving
the initial part of a three-part RPC request, it would record that two
parts are still missing before the server-side RPC function can be
invoked. The state machine would also need to keep the accumulated
message content.

Unfortunately, there is no way for the server to pre-allocate the memory
needed for those state machines. The number of state machines needed
ultimately depends on the number of concurrent RPC requests. E.g., if a
capability for a server-side object has been delegated to a number of
different components, each component could issue RPC requests at any
time. For each request, the server would require an individual state
machine, allocating the backing store for it on the arrival of the
initial part of an RPC request. What would happen if a misbehaving
client issued the first part of a two-part RPC request in an infinite
loop? Right: The server would try to allocate an unbounded number of
state machines. Hence, any client could mount such a simple
denial-of-service attack against the server.
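The attack is easy to see in a toy model (again a hypothetical `Server` class, not real server code): a state record is allocated when the first part of a request arrives and freed only on completion, so an attacker who delivers only first parts makes the allocation count grow linearly with its effort.

```python
class Server:
    """Toy server tracking each in-progress multi-part RPC request: a state
    record is allocated on the first part and freed when all parts arrived."""
    def __init__(self):
        self.in_progress = {}   # request id -> accumulated message parts
        self.completed = 0

    def deliver(self, request_id, total_parts, payload):
        # Allocation happens here, on the arrival of the first part.
        state = self.in_progress.setdefault(request_id, [])
        state.append(payload)
        if len(state) == total_parts:
            del self.in_progress[request_id]   # request complete: free state
            self.completed += 1

server = Server()
# A misbehaving client sends the first part of a two-part request in a
# loop, never delivering part 2 -- every iteration allocates a new record.
for i in range(10_000):
    server.deliver(("mallory", i), total_parts=2, payload=b"x")

leaked = len(server.in_progress)   # grows without bound, attacker pays nothing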

Even without going into detail about the performance overhead of
server-side dynamic memory allocations per RPC request, or the added
complexity of the options described above, I hope that it becomes clear
that splitting RPCs into multiple parts is not a sensible approach.


Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
