[seL4] Capability unwrapping

Norman Feske norman.feske at genode-labs.com
Sat Feb 14 00:38:16 EST 2015


Hi Mark,

On 02/13/2015 07:25 AM, Mark Jones wrote:
> There may well be some benefit in allowing the transfer of multiple
> capabilities in a single IPC.  But let’s remember that this discussion
> actually started with Norman's request for a /different feature/, which
> he characterized as “capability re-identification” [1]. We only began a
> transition to the current conversation when Norman proposed a cunning
> encoding to represent a Genode capability by a triple of seL4
> capabilities [2]. Are there alternative ways to solve Norman’s original
> problem, perhaps even without requiring a change to the current seL4 design?

that is a very good point!

I stated earlier that no Genode RPC interface requires more than two
capability arguments. In fact, in the very few instances where two
capability arguments are used, the passed capabilities actually
originate from the server. So those capabilities could be unwrapped
instead of delegated.
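
To make the unwrapping idea concrete, here is a server-side sketch.
All names (Session_capability, Session_component, _lookup) are made
up for illustration and do not refer to the actual Genode or seL4 API:

  /* hypothetical server-side RPC function: the incoming capability
   * originated from this very server, so it can be resolved to the
   * local object instead of being delegated through the kernel */
  void Server::close(Session_capability cap)
  {
      /* map the capability back to the session object we created */
      Session_component *session = _lookup(cap);
      if (session)
          _destroy(session);
  }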

So you are spot on: My immediate requirement for delegating multiple
caps would not exist without my approach to solving the
re-identification problem.

> I proposed one possible approach during the original conversation (the
> second half of [3]).  Like Harry’s suggestion, it involves the use of a
> CNode object (although I wasn’t thinking about using it for the purpose
> of transferring multiple capabilities). In addition, with my proposal,
> that CNode is created as a side effect of a one-time registration
> operation and is shared between multiple CSpaces. Among other things,
> this should eliminate the overhead that Gerwin mentioned of having to
> allocate a new CNode for every transfer. Back in November, it wasn’t
> entirely clear whether my proposal could be adapted to Genode to solve
> the problem that Norman had described, and we didn’t explore it further.

I liked the idea of the shared CNodes but could not see how to bring it
together with Genode without introducing difficult new problems. In
particular, the approach would ultimately need a shared CNode for each
communication relationship. A CNode is a physical resource, which
raises the question of who should allocate the CNodes and how to
organize the namespaces of all the CNodes a component shares with
others. In my view, this comes down to the same problems I described
in my other posting [1] from today (the second option): the server
would need to keep state per client, but the number of clients is
unbounded.

In Genode, we have the notion of "sessions", which enable a server to
maintain client-specific state using a memory budget provided by the
client. But this session concept is built on top of the basic RPC
mechanism. The mechanism you proposed also requires a notion of
sessions (in the sense of state kept by both communication partners),
but at a lower level. It thereby raises the same problems that we have
already solved at the higher level.
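
As a rough sketch of the session idea (with illustrative names only,
not the actual Genode interfaces), the per-client state lives in an
object that is paid for by the client's donated memory budget:

  /* client-specific server-side state, funded by the client */
  struct Session_component
  {
      size_t ram_quota;   /* memory budget donated by the client */
      /* ... further client-specific state ... */
  };

  /* allocate the session from an allocator backed by the donated
   * budget, so an unbounded number of clients cannot exhaust the
   * server's own memory */
  Session_component *create_session(Allocator &client_funded_alloc,
                                    size_t ram_quota)
  {
      return new (client_funded_alloc) Session_component { ram_quota };
  }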

Btw, if you are interested in learning more about Genode's session
concept, let me recommend Chapter 3 of the forthcoming Genode manual
[2].

> However, even if that specific approach won’t work, perhaps there is a
> variation, or a different strategy altogether, that might do the trick
> instead? In short, I still think it might be worth exploring other
> options to make sure that we’re treating the problem rather than the
> symptom …

Your posting has actually provoked me to reconsider the problem from
another angle:

In Genode, each component has a relationship with the root task (called
"core"). Instead of letting each component manage its CSpace locally,
core could manage the CSpaces of all components. At the creation time
of a component, core would allocate a CNode to serve as the component's
CSpace and would keep this CNode shared between core and the component
(quite similar to your idea). Now, if a component wants to perform an
RPC call with capabilities as arguments, it would not issue an seL4 IPC
call directly to the server, but an seL4 IPC call to core, supplying
the local names of the capability arguments along with the local name
of the invoked capability. Because core has a global view of all
CSpaces, it can copy the capabilities from the sender's CSpace to the
server's CSpace and forward the IPC call to the server, translating the
local names of the capability arguments into the server's CSpace.
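
In pseudocode, the core-side part of this proxy could look as follows.
This is just a sketch of the idea; all names are hypothetical:

  /* core knows the CSpaces of all components, so it can translate
   * capability arguments between any pair of them */
  void Core::proxy_call(Component &client, Rpc_request &req)
  {
      /* resolve the invoked capability within the client's CSpace */
      Component &server = _lookup_server(client, req.dst_cap());

      /* copy each capability argument from the client's CSpace into
       * the server's CSpace and patch the message so that it carries
       * the server-local names */
      for (unsigned i = 0; i < req.num_caps(); i++)
          req.rewrite_cap(i, _copy_cap(client.cspace(), req.cap(i),
                                       server.cspace()));

      /* forward the translated IPC call to the server */
      _forward_call(server, req);
  }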

I have not fully wrapped my head around all the minor details of the
forwarding mechanism, but the approach would in principle use core as a
proxy for "heavy weight" RPC calls. The semantics I need for Genode
(like the re-identification of capabilities) could be provided by core.
Still, RPCs without capability arguments (as is the case for all
performance-critical RPCs anyway) would go straight to the server.
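
The client-side stub could then pick the path depending on whether the
call carries capability arguments (again, purely illustrative names):

  /* plain RPCs go straight to the server, capability-carrying RPCs
   * take the detour through core */
  Rpc_result call(Capability dst, Rpc_message &msg)
  {
      if (msg.num_caps() == 0)
          return sel4_call(dst, msg);   /* fast path: direct IPC */

      /* slow path: let core translate and forward the call */
      return sel4_call(_core_cap, Proxy_request { dst, msg });
  }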

In contrast to your original idea, we would not need one CNode per
communication relationship but only one per component. The memory
resources of this CNode can be trivially accounted to the respective
component.

I have to think it through, but this seems like a promising
alternative approach to the original re-identification problem. It
actually lowers the requirements on the kernel, as the delegation of
capabilities via IPC would remain unused.

Thanks Mark, for pushing me in this direction! :-)

[1] http://sel4.systems/pipermail/devel/2015-February/000222.html
[2] http://genode.org/files/e01096b9ffe3f416157f6ec46c467725/manual-2015-01-23.pdf

Cheers
Norman

-- 
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth


