Discussion:
Capability Concepts
Norman Hardy
2000-07-13 18:39:54 UTC
I talked briefly with Markm yesterday about his conversations with others about our capability concepts.

Reflecting on this later it occurred to me that we may not say, or say often enough, that capabilities must be the only naming scheme on the system level!

I went back to my glossary <http://www.mediacity.com/~norm/CapTheory/Glossary.html#What.html> and found that my brief intro could be read as allowing capabilities to be but one way of designating objects.

I changed it but I have not figured out how to say what I want without sounding overly dogmatic.

I think that we agree that all successful capability systems "bottom out" in capabilities as names.

"Bottoming out" is itself a slippery term for we spend much text describing how yet lower levels work to support capability function. In my brief intro I overuse the term "system" to denote those hardware and software pieces that collectively provide the capability foundation.

I could not review Jonathan's intro at <http://www.eros-os.org/essays/capintro.html> as the server was indisposed.
Gernot Heiser
2000-07-13 23:11:28 UTC
>>>>> "NH" == Norman Hardy <***@agorics.com> writes:

NH> I talked briefly with Markm yesterday about his conversations with
NH> others about our capability concepts.

NH> Reflecting on this later it occurred to me that we may not say, or say
NH> often enough, that capabilities must be the only naming scheme on the
NH> system level!

NH> I went back to my glossary
NH> <http://www.mediacity.com/~norm/CapTheory/Glossary.html#What.html> and
NH> found that my brief intro could be read as allowing capabilities to be
NH> but one way of designating objects.

When I checked that page it ended with the entry "Virtual".

NH> I changed it but I have not figured out how to say what I want without
NH> sounding overly dogmatic.

NH> I think that we agree that all successful capability systems "bottom
NH> out" in capabilities as names.


Hmm, how does that go together with, say, password capabilities? They
consist of a UID and a password. The UID taken by itself names an object.

Or am I missing something?

Gernot
--
Gernot Heiser ,--_|\ School of Computer Sci. & Engin.
Phone: +61 2 9385 5156 / \ The University of NSW
Fax: +61 2 9385 5533 \_,--._* UNSW SYDNEY NSW 2052, Australia
E-mail: ***@unsw.edu.au v http://www.cse.unsw.edu.au/~gernot
PGP fingerprint: 94 1E B8 28 25 FD 7C 94 20 10 92 E5 0B FF 39 8F
Jonathan S. Shapiro
2000-07-16 14:26:20 UTC
> NH> I think that we agree that all successful capability systems "bottom
> NH> out" in capabilities as names.
>
> Hmm, how does that go together with, say, password capabilities? They
> consist of a UID and a password. The UID taken by itself names an object.
>
> Or am I missing something?

A password capability system doesn't bottom out in the way that Norm
describes, because the operating system itself cannot be bootstrapped
without either (a) some set of available bootstrap password(s) or (b) some
other means to load code.

Jonathan
Al Gilman
2000-07-16 16:20:37 UTC
At 10:26 AM 2000-07-16 -0400, Jonathan S. Shapiro wrote:
>> NH> I think that we agree that all successful capability systems "bottom
>> NH> out" in capabilities as names.
>>
>> Hmm, how does that go together with, say, password capabilities? They
>> consist of a UID and a password. The UID taken by itself names an object.
>>
>> Or am I missing something?
>
>A password capability system doesn't bottom out in the way that Norm
>describes, because the operating system itself cannot be bootstrapped
>without either (a) some set of available bootstrap password(s) or (b) some
>other means to load code.
>
>Jonathan
>

This relates to a perplexity occurring in the Grid Forum at present. The
open issue there could be described as comparing and contrasting what
information about an individual person is served from a persistent
information service [likely LDAP] with what information is involved in the
process where that individual demonstrates that their requests for service
from allied administrative domains should be honored.

As I see their scenario, the "system" that operates Grid Jobs doesn't
bottom out into one space of known or trusted identities. The sharing
trust-space doesn't go to the bottom. It is layered over multiple bottoms
that are not shared. In this situation, I see no way to unify the identity
that gets you trusted in a foreign domain with the name that unifies your
identities in all domains. That would require qualifying lots of people
into a universally trusted administrative domain and that probably won't
happen.

Actually, my naive view of how systems get bootstrapped is similar. Access
protection is not functioning until after some trusted kernel of code is
loaded. Then root defines a password and installs the control that makes
loading check with the authorizing function, and further loading requires
authorization. You can build access control into a boot ROM but the secret
it is built on is static. Once the access key is compromised, that
system is no longer trustable.

In the present scenario the system is not trustable until locked. In the
hardwired situation it is not trustable after compromise.

Let me get to my bottom line with capabilities and Grid Forum. I suspect
that the Grid Forum would resolve its perplexities better if it had looked
at the situation with the fresh eyes of a "capabilities perspective" first.
Not that they wouldn't use the existing security infrastructure in the
several administrative domains as the foundation in building a more unified
metaprogramming capability. But I can't explain the new perspective, and
the likelihood is that they would blow it off as "too theoretical and
exploratory" and hence off-topic. They are looking at minimal incremental
tweaks to the best current practice that will unify operations across the
allied domains. Rough consensus and running code is the religion. To
explain why they should care about this theory, it will take a proponent
better versed than I. The Grid Forum can be found starting at
www.gridforum.org. The issue I described above is actually up for grabs in
this community. Is there anyone versed in the 'capabilities' perspective
on security that would care to get engaged in this process?

Al
Norman Hardy
2000-07-16 15:54:49 UTC
At 9:11 +1000 00/07/14, Gernot Heiser wrote:
>>>>>> "NH" == Norman Hardy <***@agorics.com> writes:
....

>NH> I think that we agree that all successful capability systems "bottom
>NH> out" in capabilities as names.
>
>
>Hmm, how does that go together with, say, password capabilities? They
>consist of a UID and a password. The UID taken by itself names an object.
>
>Or am I missing something?
>

I can't think of anything to say beyond the note at
<http://www.mediacity.com/~norm/CapTheory/CapBits.html> where I point out
mainly that confinement may be impossible in password systems.
Norman Hardy <http://www.mediacity.com/~norm>
Jonathan S. Shapiro
2000-07-16 17:58:57 UTC
I have just read the notes at
http://www.mediacity.com/~norm/CapTheory/CapBits.html, and I have a few
comments on them.

I find the use of the terms "protected" and "unprotected" in the first
paragraph very confusing. I think that what is probably meant is
"partitioned" or "unpartitioned". I dislike the term "protected" because it
already has many other uses in the context of the capability discussion, and
I believe that the "partition" term captures the distinction that Norm is
trying to make. If not, I'd be very interested to be corrected, and perhaps
the term I suggest below may be useful.

A password capability system need not keep the object identity bits in
cleartext. The password can apply to the entire capability. The fact that
the object bits were left in the clear in the Pose systems is, I think, a
flaw. It was probably motivated by a desire to simplify the memory
translation hardware, but in the context of modern processor design it's an
unnecessary simplification. Indeed, I recently had occasion to review that
paper in connection with examining a patent for prior art, and was very very
disappointed that the object bits were not protected.

There is a hybrid design that is possible, which is a cryptographic or
sparse capability system (bits exposed) in which the kernel must perform
some protection-domain-specific transformation in order to obtain the true
capability representation, such as applying a process-specific XOR late in
the game. I believe that we may consider designs of this form to be
partitioned, because the process never has access to the true representation
of the capability. This presumes that the XOR value is unknown to the
process, obviously.
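The hybrid design just described can be sketched as follows. This is illustrative only (interface names invented): a sparse capability whose bits are exposed to the process, but which the kernel XORs with a per-process secret "late in the game", so the true representation never appears in application memory.

```python
# Sketch of a kernel applying a process-specific XOR transformation
# before capability lookup. The process holds only the transformed bits.
import secrets

KERNEL_TABLE = {}                        # true capability bits -> object

def kernel_grant(obj, process_secret):
    true_bits = secrets.randbits(64)
    KERNEL_TABLE[true_bits] = obj
    return true_bits ^ process_secret    # what the process actually holds

def kernel_invoke(held_bits, process_secret):
    true_bits = held_bits ^ process_secret   # transform applied in the kernel
    return KERNEL_TABLE[true_bits]

s = secrets.randbits(64)                 # per-process secret, unknown to it
c = kernel_grant("the file", s)
assert kernel_invoke(c, s) == "the file"
```

The partition holds only so long as the process cannot learn the secret; the confinement checker can then treat the held bits as opaque.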

Non-partitioned systems suffer from a total failure of confineability,
because it is impossible to determine whether the binary image has embedded
within it one or more capabilities that are unknown to the confinement
checker. A conservative approximation check can be done, but the cost is
probably prohibitive and the need to run this check has unfortunate
consequences for sharing of connected structures in transitive read only
form (i.e. you can no longer do so).

For the purpose of this objection, the E system does not suffer, because the
type system allows the checker to make the necessary determination.

For the purpose of this objection, a tagged memory system may or may not be
partitioned. The issue is once again the shared transitive read only
structure problem.

Indeed, as I wrote the above, it occurred to me that we have a failure of
specification. In EROS/KeyKOS, there are two simultaneous problems being
solved, and each has implications for choice of capability protection
strategy. The first might perhaps be called the "referential encapsulation
property", by which I mean that the process cannot generate bits that will
later be interpreted as a capability [I ignore number capabilities, and we
must someday think them through]. The second is transitive read-only
shareability, or what KeyKOS calls "sensory access". This is somewhat
generalized in EROS, and is further complicated by metadata indirection. A
reasonable treatment is given in the Diminish Take chapter of my
dissertation (archival paper real soon now, I promise!).

Norm has been focused on the referential encapsulation property, but there
is enormous leverage in the sensory access idea as well. Sensory access is
certainly not necessary, but if you remove it, you must enforce deep copy at
process instantiation. Deep copy is expensive, but worse than that it
requires a widely held means by which to compare two capabilities for
identity of their underlying representation object in order to preserve
object identity during the deep copy. That is *fraught* with unfortunate
implications.

Ultimately, the most basic difficulty with password and cryptographic
capabilities is that there is no way to determine when all capabilities to
an object are gone. In EROS/KeyKOS we have not implemented GC, but I am
prepared to believe that it will prove desirable to do so in time. I even
think I can see how to avoid the covert channel issues implied by the
presence of GC. The reason to want GC is mostly for lost space recovery.
Programs do err in explicit storage management, and it is desirable to be
able to address this problem. In a "user bits are capabilities" style of
system, collection is possible but only if done conservatively, and only at
the cost of a much more complete scan of the disk (all sectors must be
examined). This strikes me as problematic.
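The contrast can be shown with a toy model (object graph and names invented): when capability locations are known exactly, as in a partitioned system, ordinary reachability-based collection suffices, whereas a "user bits are capabilities" system must instead scan everything conservatively.

```python
# Exact mark-and-sweep over a kernel-held capability table: each object
# maps to the capabilities (references) it holds, all of them known.
objects = {"root": ["a", "b"], "a": ["b"], "b": [], "lost": []}

def reachable(root):
    seen, stack = set(), [root]
    while stack:
        o = stack.pop()
        if o not in seen:
            seen.add(o)
            stack.extend(objects[o])     # every held capability is visible
    return seen

garbage = set(objects) - reachable("root")
assert garbage == {"lost"}               # safe to reclaim
```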
Norman Hardy
2000-07-16 23:56:20 UTC
At 13:58 -0400 00/07/16, Jonathan S. Shapiro wrote:
>I have just read the notes at
>http://www.mediacity.com/~norm/CapTheory/CapBits.html, and I have a few
>comments on them.
>
>I find the use of the terms "protected" and "unprotected" in the first
>paragraph very confusing. I think that what is probably meant is
>"partitioned" or "unpartitioned". I dislike the term "protected" because it
>already has many other uses in the context of the capability discussion, and
>I believe the that "partition" term captures the distinction that Norm is
>trying to make. If not, I'ld be very interested to be corrected, and perhaps
>the term I suggest below may be useful.
>

I have used "segregated" much as you use "partitioned". I was confused by
"partitioned" in recent mail. I shall try to speak of hiding or protecting
the bits of the capability instead of the capability itself. Yet I need an
adjective to modify "capability" that means its bits are hidden! There is
too much good text devoted to that plan.

There is yet another difference lurking here. IBM's System 38 devoted a
hardware bit in memory for each 16 memory bits to mark a location as
holding part of a capability. Capabilities were 64 bits long and allocated
amidst other user data but "hidden" by the extra bit so as to make the
bits therein unreadable and unmodifiable by "user code". The System 38 was
not the first to protect capabilities amidst user data but it may have been
the last. AS/400 is in some sense a descendant of the System 38. The 38
provided a language called "MI" (Machine Interface) that, like Java byte
codes, was translated before execution. The translator was trusted but not
enough to dispense with the hardware to protect the capability bits.

Perhaps the bits of the System 38 capability are protected but neither
segregated nor partitioned. I agree that the page you cite is confusing. I
have spent the afternoon rewriting the page above. I am not done.
Norman Hardy <http://www.mediacity.com/~norm>
Jonathan S. Shapiro
2000-07-17 02:39:32 UTC
> I have used "segregated" much as you use "partitioned". ...I need an
> adjective to modify "capability" that means its bits are hidden! There is
> too much good text devoted to that plan.

In most of the capability literature, the term for this has been
"partitioned". Gehringer has a good discussion of this, and I think that
Levy also uses this term.

> There is yet another difference lurking here. IBM's System 38 devoted a
> hardware bit in memory for each 16 memory bits to mark a location as
> holding part of a capability. Capabilities were 64 bits long and allocated
> amidst other user data but "hidden" by the extra bit so as to make the
> bits therein unreadable and unmodifiable by "user code". The System 38 was
> not the first to protect capabilities amidst user data but it may have been
> the last. AS/400 is in some sense a descendant of the System 38. The 38
> provided a language called "MI" (Machine Interface) that, like Java byte
> codes, was translated before execution. The translator was trusted but not
> enough to dispense with the hardware to protect the capability bits.

This example is why I was distinguishing between detectability and the
transitive read only problem. In the S/38 you can find all of the
capabilities -- it is effectively a type system. However, there is no
provision in the architecture that is equivalent to the sense capability.
Therefore, deep copy is still required where confinement must be enforced.

Deep copy with identity preservation (i.e. there is a one to one
correspondence between objects in the original and objects in the copy in
spite of multiple references) requires a widely available "keybits"
equivalent in order to perform the necessary identity tests.

My point here is that in addition to having a partitioned mechanism, we also
wish to consider whether the mechanism provides efficient state sharing
across a confinement boundary.


shap
Norman Hardy
2000-07-17 19:42:08 UTC
At 22:39 -0400 00/07/16, Jonathan S. Shapiro wrote:
>> I have used "segregated" much as you use "partitioned". ...I need an
>> adjective to modify "capability" that means its bits are hidden! There is
>> too much good text devoted to that plan.
>
>In most of the capability literature, the term for this has been
>"partitioned". Gehringer has a good discussion of this, and I think that
>Levy also uses this term.
>
>> There is yet another difference lurking here. IBM's System 38 devoted a
>> hardware bit in memory for each 16 memory bits to mark a location as
>> holding part of a capability. Capabilities were 64 bits long and allocated
>> amidst other user data but "hidden" by the extra bit so as to make the
>> bits therein unreadable and unmodifiable by "user code". The System 38 was
>> not the first to protect capabilities amidst user data but it may have been
>> the last. AS/400 is in some sense a descendant of the System 38. The 38
>> provided a language called "MI" (Machine Interface) that, like Java byte
>> codes, was translated before execution. The translator was trusted but not
>> enough to dispense with the hardware to protect the capability bits.
>
>This example is why I was distinguishing between detectability and the
>transitive read only problem. In the S/38 you can find all of the
>capabilities -- it is effectively a type system. However, there is no
>provision in the architecture that is equivalent to the sense capability.
>Therefore, deep copy is still required where confinement must be enforced.

Yes. I was addressing those capability systems where the holder of a
capability could not see the bits. The System 38 is like this. Seeing as
how the capabilities are among the data, it hardly seems either
partitioned or segregated. In
<http://www.mediacity.com/~norm/CapTheory/CapBits.html> I have adopted the
term "abstracted" as referring to capabilities whose bits you cannot see,
partitioned or otherwise.

Just wrestling with words.

>Deep copy with identity preservation (i.e. there is a one to one
>correspondence between objects in the original and objects in the copy in
>spite of multiple references) requires a widely available "keybits"
>equivalent in order to perform the necessary identity tests.

Yes, and even here there remains the question of discreet sub-contractors,
which the factory allows in the form of requestor's keys as factory
components.

...
Norman Hardy <http://www.mediacity.com/~norm>
Jonathan S. Shapiro
2000-07-17 19:52:46 UTC
> >Deep copy with identity preservation (i.e. there is a one to one
> >correspondence between objects in the original and objects in the copy in
> >spite of multiple references) requires a widely available "keybits"
> >equivalent in order to perform the necessary identity tests.
>
> Yes, and even here there remains the question of discreet sub-contractors
> which the factory allows in the form of allowing requestor's keys as factory
> components.

I'm not sure what you mean here. Can you expand on this?
Mark S. Miller
2000-07-20 21:14:11 UTC
At 07:39 PM 7/16/00 , Jonathan S. Shapiro wrote:
>Deep copy with identity preservation (i.e. there is a one to one
>correspondence between objects in the original and objects in the copy in
>spite of multiple references) requires a widely available "keybits"
>equivalent in order to perform the necessary identity tests.
>
>My point here is that in addition to having a partitioned mechanism, we also
>wish to consider whether the mechanism provides efficient state sharing
>across a confinement boundary.

Yes, but a keybits-equivalent that's equivalent enough for the above
purposes still need not reveal the bits in the keys, and can therefore be
used safely by those that must not have such access -- such as computation
constrained to be deterministically replayable. E primitively provides an
equality testing operation (the moral equivalent of the equality-testing
aspect of DISCRIM), and a primitive hash table.

The hash table uses but does not reveal the bits of the keys. Its internal
representation is therefore non-deterministic on overt properties of the
computation (depending on the bits of a key, which we define as a covert
property). However, it provides deterministic service to its clients,
thereby not providing its clients a way to escape deterministic replay. We
pay a price for this: in order to be deterministic, the table's enumeration
order depends on order of entry and removal, not hash order.
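As a stand-in illustration of this property, Python's insertion-ordered dict behaves the same way: internal placement may depend on (hidden) key hash bits, but the client-visible enumeration order depends only on the order of entry and removal, never on hash order.

```python
# Enumeration order tracks entry/removal history, not hash placement.
t = {}
k1, k2, k3 = object(), object(), object()   # keys with opaque identity hashes
t[k1] = "a"
t[k2] = "b"
t[k3] = "c"
del t[k2]
t[k2] = "b"          # re-entry moves the key to the end of the order
assert list(t.values()) == ["a", "c", "b"]  # deterministic for clients
```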

These two primitives are sufficiently keybits-like for the above purposes,
but the services they provide avoid Norm's criticisms.


Cheers,
--MarkM
Jonathan S. Shapiro
2000-07-23 17:08:18 UTC
> >My point here is that in addition to having a partitioned mechanism, we also
> >wish to consider whether the mechanism provides efficient state sharing
> >across a confinement boundary.
>
> Yes, but a keybits-equivalent that's equivalent enough for the above
> purposes still need not reveal the bits in the keys.... E primitively provides an
> equality testing operation (the moral equivalent of the equality-testing
> aspect of DISCRIM), and a primitive hash table

Hash table mechanisms that provide automatic rebalancing of the table are
almost certainly good enough, and algorithms for this are now known. For E,
making this a primitive mechanism is certainly good enough, because the
runtime almost certainly needs hash tables internally in any case.

For EROS, I'd be reluctant to do this, but it's worth a ponder at some
other time.


shap
Mark S. Miller
2000-07-23 19:13:37 UTC
At 10:08 AM 7/23/00 , Jonathan S. Shapiro wrote:
>Hash table mechanisms that provide automatic rebalancing of the table are
>almost certainly good enough, and algorithms for this are now known. For E,
>making this a primitive mechanism is certainly good enough, because the
>runtime almost certainly needs hash tables internally in any case.
>
>For EROS, I'd be reluctant to do this, but it's worth a ponder at some
>other time.

By "primitive", note that it need not be provided by the kernel. Rather,
for EROS, all I'd advocate is that such a hashtable implementation be a
widely exported service provided by the TCB, and that the keybits service it
uses be closely held within the TCB. The hash table service provided by the
TCB would be on its honor to provide deterministic service to its clients
despite its non-deterministic (keybits-based) implementation.
Jonathan S. Shapiro
2000-07-25 02:50:58 UTC
> By "primitive", note that it need not be provided by the kernel. Rather,
> for EROS, all I'd advocate is that such a hashtable implementation be a
> widely exported service provided by the TCB...

I agree with the comment about the primitiveness of hash tables. This note
is a comment about TCBs.

After a lot of discussion back and forth with Paul Karger and Leendert van
Doorn, I have concluded that speaking of *the* TCB is nonsensical.

In the context of a component-based system, different applications may well
rely on different components in establishing trust contracts. In defining
the TCB, we must divide an application into the code written by the author
and the (often opaque) middleware used by the application. The TCB of the
application is all of the stuff that is *not* written by the application
author.

For a given set of applications, each having a TCB, we can imagine taking
the intersection of the application TCB's. In practice, in some given
system, there is some intersection that is common to *all* application
TCB's. I refer to this as the "universal TCB". Usually, when people speak of
"the TCB" they are speaking of this universal TCB.
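The distinction can be made concrete with a toy illustration (component names invented): the universal TCB is just the intersection of the per-application TCBs.

```python
# Each application's TCB is everything it relies on that its author
# did not write; the universal TCB is what all of them share.
from functools import reduce

app_tcbs = {
    "editor": {"kernel", "space_bank", "window_system"},
    "mailer": {"kernel", "space_bank", "net_stack"},
    "backup": {"kernel", "space_bank", "disk_driver"},
}
universal_tcb = reduce(set.intersection, app_tcbs.values())
assert universal_tcb == {"kernel", "space_bank"}
```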

I find this distinction useful when thinking about families of related
applications. I'm not sure if it makes a useful distinction for this list.

shap
Jonathan S. Shapiro
2000-07-17 19:51:44 UTC
> I am unclear about the meaning of "The password can apply to the entire
> capability.". I presume that by "password capability system" you mean one
> where all the bits are visible to the program, be they UIDs or "secret pass
> phrase".

I meant that the object identity need not be kept in the clear. The password
(or secret key) can protect both the object identity and the permissions
bits.
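One hedged way to realize "the password can apply to the entire capability" is to seal the whole (identity, permissions) body under a kernel secret. The sketch below uses an HMAC-derived keystream purely for illustration; it is not any cited system's scheme.

```python
# The holder sees only opaque bytes: neither the object identity nor
# the permission bits are in the clear, and forgeries are detected.
import hmac, hashlib, os, struct

KERNEL_KEY = os.urandom(32)              # known only to the kernel

def seal(uid, perms):
    nonce = os.urandom(16)
    body = struct.pack(">QQ", uid, perms)
    stream = hmac.new(KERNEL_KEY, nonce, hashlib.sha256).digest()[:16]
    sealed = bytes(a ^ b for a, b in zip(body, stream))
    tag = hmac.new(KERNEL_KEY, nonce + sealed, hashlib.sha256).digest()[:16]
    return nonce + sealed + tag          # opaque to the holder

def unseal(cap):
    nonce, sealed, tag = cap[:16], cap[16:32], cap[32:]
    good = hmac.new(KERNEL_KEY, nonce + sealed, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, good):
        raise PermissionError("forged capability")
    stream = hmac.new(KERNEL_KEY, nonce, hashlib.sha256).digest()[:16]
    body = bytes(a ^ b for a, b in zip(sealed, stream))
    return struct.unpack(">QQ", body)

cap = seal(42, 0b101)
assert unseal(cap) == (42, 0b101)
```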

> Trying to reconstruct the Monash magic years later it occurred to me that a
> program confined by the XOR trick would be likely to have the same
> capability C in the form C and also C xor S. This would reveal likely
> candidates for S, the confining secret.
> Is the Monash magic online somewhere that you know of?

Only the Pose paper. However, note that the value C xor S is never present
in application memory in that design. Therefore, I believe this is not a
problem.

shap
Norman Hardy
2000-07-18 00:42:10 UTC
At 15:51 -0400 00/07/17, Jonathan S. Shapiro wrote:
>> I am unclear about the meaning of "The password can apply to the entire
>> capability.". I presume that by "password capability system" you mean one
>> where all the bits are visible to the program, be they UIDs or "secret pass
>> phrase".
>
>I meant that the object identity need not be kept in the clear. The password
>(or secret key) can protect both the object identity and the permissions
>bits.
>
>> Trying to reconstruct the Monash magic years later it occurred to me that a
>> program confined by the XOR trick would be likely to have the same
>> capability C in the form C and also C xor S. This would reveal likely
>> candidates for S, the confining secret.
>> Is the Monash magic online somewhere that you know of?
>
>Only the Pose paper. However, note that the value C xor S is never present
>in application memory in that design. Therefore, I believe this is not a
>problem.
>
>shap

I suspect that you are more recent on that technology than I. I had thought
that all bits of capabilities were kept in application memory and that the
application's ignorance of S was the sole method of confinement. One
possible rule might be that for any particular capability C, it was not the
case that both C and C xor S were accessible to the program.
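The leak Norm suspects is easy to check arithmetically: holding both forms of the same capability reveals the confining secret.

```python
# If a confined program holds both C and C xor S, XORing the two
# forms recovers S, the confining secret.
import secrets

S = secrets.randbits(64)    # the confining secret
C = secrets.randbits(64)    # some capability's bits
assert (C ^ (C ^ S)) == S   # the holder can reconstruct S
```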
Norman Hardy <http://www.mediacity.com/~norm>
Gernot Heiser
2000-07-17 02:27:22 UTC
>>>>> "NH" == Norman Hardy <***@netcom.com> writes:

NH> At 9:11 +1000 00/07/14, Gernot Heiser wrote:
>>>>>>> "NH" == Norman Hardy <***@agorics.com> writes:
NH> ....

NH> I think that we agree that all successful capability systems "bottom
NH> out" in capabilities as names.
>>
>>
>> Hmm, how does that go together with, say, password capabilities? They
>> consist of a UID and a password. The UID taken by itself names an object.
>>
>> Or am I missing something?
>>

NH> I can't think of anything to say beyond the note at
NH> <http://www.mediacity.com/~norm/CapTheory/CapBits.html> where I point out
NH> mainly that confinement may be impossible in password systems.

"May be impossible" doesn't strike me as a particularly strong claim ;-)

We believe that Mungi (http://mungi.org/) can do confinement (although
we haven't done a formal proof -- I'm waiting for a student to do the
work ;-).

Gernot
Jonathan S. Shapiro
2000-07-17 02:55:57 UTC
> We believe that Mungi (http://mungi.org/) can do confinement (although
> we haven't done a formal proof -- I'm waiting for a student to do the
> work ;-).

Gernot, all:

My recollection from talks with Jerry Vochteloo is that mungi uses a
per-process or per-system XOR value (I don't recall which, but it doesn't
matter).

If this is correct, or a similar unforgeable transformation is used, then
Mungi can definitely do confinement subject to the assumption that the
inverse transform is unguessable. If so, then you don't actually need to do
a proof of confinement. You merely need to do an equivalence proof between
XOR application and capability/data partitioning. If you can do that then
you fall under the SW model proof.

The proof sketch I have in mind is:

1. Observe that the true capabilities are the post-XOR capabilities, which
are all partitioned. The pre-XOR capabilities are completely irrelevant.

2. Show that the XOR test operation has the effect of enforcing the
partition between capabilities and data, again subject to the unguessability
constraint.

3. Observe that if this partition exists then there is a direct equivalence
between the Mungi protection model and the SW model published in the latest
IEEE security symposium (http://www.eros-os.org/papers/oakland2000.ps).
In point of fact, the SW proof only requires that users cannot invoke
capabilities as data. It is not compromised if the user can observe the
capability bits.

As far as Sam and I know, the SW model covers all capability systems
(including Mungi) that *can* enforce confinement, and excludes all the ones
that cannot.

Mungi definitely can do confinement.

I'll expect a citation of the SW paper in your publication :-)

While I'm thinking about it, would you consider adding an EROS link to your
related systems list? EROS has some of the feel of a single address space
system, and we've all certainly exchanged a bunch of ideas back and forth.


shap
Mark S. Miller
2000-07-17 03:24:24 UTC
At 07:55 PM 7/16/00 , Jonathan S. Shapiro wrote:
>As far as Sam and I know, the SW model covers all capability systems
>(including Mungi) that *can* enforce confinement, and excludes all the ones
>that cannot.

E can do confinement without relying on dynamic kernel-weakening of
primitive capabilities. Rather, E uses observable pure immutability,
corresponding to stage #2 of
http://www.erights.org/elib/capability/factory.html . As a result, E's
confinement has the costs documented at that URL.

The mechanisms of the SW model seem to apply starting at stage #3. As a
result, EROS's and KeyKOS's confinement is more flexible -- more kinds of
useful services can be observably confined. But would you say the SW model
covers E? Would you claim that E's confinement isn't really "confinement"
as you define it? If no and yes, then E is a counterexample to your above
claim.


Cheers,
--MarkM
Jonathan S. Shapiro
2000-07-17 04:37:30 UTC
> At 07:55 PM 7/16/00 , Jonathan S. Shapiro wrote:
> >As far as Sam and I know, the SW model covers all capability systems
> >(including Mungi) that *can* enforce confinement, and excludes all the ones
> >that cannot.
>
> E can do confinement without relying on dynamic kernel-weakening of
> primitive capabilities. Rather, E uses observable pure immutability,
> corresponding to stage #2 of
> http://www.erights.org/elib/capability/factory.html . As a result, E's
> confinement has the costs documented at that URL.
>
> The mechanisms of the SW model seem to apply starting at stage #3. As a
> result, EROS's and KeyKOS's confinement is more flexible -- more kinds of
> useful services can be observably confined. But would you say the SW model
> covers E? Would you claim that E's confinement isn't really "confinement"
> as you define it? If no and yes, then E is a counterexample to your above
> claim.

I don't have time at the moment to delve into the E documentation in detail,
but I believe the answer is that E satisfies the SW model, and that the SW
proof applies but is incomplete.

The SW factory test permits sensory capabilities and purely read-only data
capabilities. E has no sensory capabilities, which means that the SW model
is strictly more powerful than E and E is therefore covered. Strictly
speaking, the definition in the paper needs to be expanded, as capabilities
to structures that are exclusively held *by the yield* are also good enough,
and our proof does not address the case in which the instantiation might
entail a deep copy performed by trusted code.

In practice, this extension is pretty straightforward, as the deep copy is
not held by anyone outside the yield at the instant the first instruction is
run. It therefore cannot possibly be a member of readable(outside), which is
sufficient. The proof strategy would work, but the final step would need to
be revised from claiming that mutable(inside) was empty to showing the
less-trivial non-intersection. I suspect that the easiest solution would be
to extend the definition of authorized holes and then do some term rotation.
It will be interesting to see if/how Sam responds.
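A toy set model of the confinement check sketched above (the names are illustrative, not the SW paper's formal definitions): confinement holds when nothing the outside can read is mutable from inside, apart from the authorized holes.

```python
# Toy model: confinement up to authorized holes means the intersection of
# what the outside can read and what the inside can mutate contains only
# the authorized holes. All set contents are made-up examples.

readable_outside = {"req_log", "shared_page"}
mutable_inside   = {"scratch", "shared_page"}
authorized_holes = {"shared_page"}

leaks = (readable_outside & mutable_inside) - authorized_holes
assert leaks == set()   # confined, up to the authorized holes
```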

So I think that the SW model covers the E case, but I think that our
verification might need to be enhanced. This may be an example of a place
where extending our model to handle intransitive non-interference (Rushby)
would be helpful, though I don't think that is necessary in this case.

There is, however, another issue: the deep copy may not terminate if the
developer retains access to the structure and mutates it maliciously and
continuously. This problem can be
resolved by performing an additional deep copy at factory creation time to
isolate the factory's copy of the data structure from subsequent actions of
the developer.
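A toy Python sketch of that defensive copy at factory creation time (all names here are hypothetical, not KeyKOS or EROS interfaces):

```python
import copy

# Toy model of the fix described above: the factory deep-copies its
# components once at creation time, so the developer's retained reference
# cannot affect what later yields receive. All names are hypothetical.

class Factory:
    def __init__(self, components):
        # Defensive deep copy at factory creation isolates the factory's
        # copy from subsequent actions of the developer.
        self._components = copy.deepcopy(components)

    def instantiate(self):
        # Each yield gets a fresh copy, aliased to nothing outside.
        return copy.deepcopy(self._components)

developer_data = {"code": [1, 2, 3]}
f = Factory(developer_data)
developer_data["code"].append(4)           # developer keeps mutating
yield_copy = f.instantiate()
assert yield_copy == {"code": [1, 2, 3]}   # snapshot unaffected
```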

shap
Gernot Heiser
2000-07-17 09:36:00 UTC
Permalink
>>>>> "JSS" == Jonathan S Shapiro <***@eros-os.org> writes:

>> We believe that Mungi (http://mungi.org/) can do confinement (although
>> we haven't done a formal proof -- I'm waiting for a student to do the
>> work ;-).

JSS> Gernot, all:

JSS> My recollection from talks with Jerry Vochteloo is that mungi uses a
JSS> per-process or per-system XOR value (I don't recall which, but it doesn't
JSS> matter).

JSS> If this is correct, or a similar unforgeable transformation is used, then
JSS> Mungi can definitely do confinement subject to the assumption that the
JSS> inverse transform is unguessable. If so, then you don't actually need to do
JSS> a proof of confinement. You merely need to do an equivalence proof between
JSS> XOR application and capability/data partitioning. If you can do that then
JSS> you fall under the SW model proof.

The XOR scheme was used (or proposed?) for the Monash Password
Capability System.

Mungi's approach doesn't use the XOR scheme, but is based on the
following:

- capabilities are never presented explicitly, but only implicitly (the
kernel searches the PD data structure, which is actually mostly user data).

- a PD can be set up such that it doesn't contain its own PD data
structure, so a thread executing in that PD cannot add caps to the
PD; capabilities embedded in its code therefore don't help.

It's really the complete opposite of a segregated scheme.
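A toy Python sketch of implicit capability presentation, loosely modelled on the description above (the data layout and function names are made up, not Mungi's actual structures): the caller names only the object, and the kernel searches the protection domain for a matching capability rather than accepting capability bits from user code.

```python
# Toy model: the caller never presents a capability explicitly; the kernel
# searches the caller's protection-domain (PD) capability list for one that
# matches the named object. All names and layouts are hypothetical.

class Cap:
    def __init__(self, obj_id, rights):
        self.obj_id = obj_id
        self.rights = rights

def kernel_access(pd_caps, obj_id, needed_right):
    # The kernel consults the PD; user code only supplies the object name.
    for cap in pd_caps:
        if cap.obj_id == obj_id and needed_right in cap.rights:
            return True
    return False

pd = [Cap("segment-7", {"read"})]
assert kernel_access(pd, "segment-7", "read") is True
assert kernel_access(pd, "segment-7", "write") is False
```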

JSS> Mungi definitely can do confinement.

So I hope, but no proof yet ;-)

JSS> I'll expect a citation of the SW paper in your publication :-)

If we ever get it done. Priority is on cutting code at the moment.

JSS> While I'm thinking about it, would you consider adding an EROS link
JSS> to your related systems list? EROS has some of the feel of a single
JSS> address space system,

... like any real capability system ;-)

JSS> and we've all certainly exchanged a bunch of
JSS> ideas back and forth.

Securo.

The Mungi pages need some updating.... But the link is there now.

Gernot
--
Gernot Heiser ,--_|\ School of Computer Sci. & Engin.
Phone: +61 2 9385 5156 / \ The University of NSW
Fax: +61 2 9385 5533 \_,--._* UNSW SYDNEY NSW 2052, Australia
E-mail: ***@unsw.edu.au v http://www.cse.unsw.edu.au/~gernot
PGP fingerprint: 94 1E B8 28 25 FD 7C 94 20 10 92 E5 0B FF 39 8F
Norman Hardy
2000-07-17 13:33:24 UTC
Permalink
At 12:27 +1000 00/07/17, Gernot Heiser wrote:
>>>>>> "NH" == Norman Hardy <***@netcom.com> writes:
>
>NH> At 9:11 +1000 00/07/14, Gernot Heiser wrote:
>>>>>>>> "NH" == Norman Hardy <***@agorics.com> writes:
>NH> ....
>
>NH> I think that we agree that all successful capability systems "bottom
>NH> out" in capabilities as names.
>>>
>>>
>>> Hmm, how does that go together with, say, password capabilities? They
>>> consist of a UID and a password. The UID taken by itself names an object.
>>>
>>> Or am I missing something?
>>>
>
>NH> I can't think of anything to say beyond the note at
>NH> <http://www.mediacity.com/~norm/CapTheory/CapBits.html> where I point out
>NH> mainly that confinement may be impossible in password systems.
>
>"May be impossible" doesn't strike me as a particularly strong claim ;-)

I made it wishy-washy on purpose. I recall that the Monash system proposed
schemes that cast doubt on the simple claim that confinement was
impossible.

>We believe that Mungi (http://mungi.org/) can do confinement (although
>we haven't done a formal proof -- I'm waiting for a student to do the
>work ;-).
>

I shall look at it.
Norman Hardy <http://www.mediacity.com/~norm>
Gernot Heiser
2000-07-17 22:40:36 UTC
Permalink
>>>>> "NH" == Norman Hardy <***@netcom.com> writes:

NH> Yes. I was addressing those capability systems where the holder of a
NH> capability could not see the bits. The System 38 is like this. Seeing as
NH> how the capabilities are among the data, it hardly seems either
NH> partitioned or segregated.

Are you sure? I don't have the S/38 papers handy (working from home
today) but I seem to remember that it was actually possible for
unprivileged code to copy a cap, but the hardware would turn off the
capability bit then, converting it to simple data.
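A toy Python model of that tagged-memory behavior (hypothetical names; not the actual S/38 instruction set): each word carries a tag bit, and an ordinary unprivileged store writes the value but clears the tag, turning a capability into plain data.

```python
# Toy model of tagged memory as described above: an unprivileged store
# copies the value but clears the capability tag. Names are made up.

class TaggedWord:
    def __init__(self, value, tag=False):
        self.value = value
        self.tag = tag

def user_store(mem, addr, word):
    # Unprivileged copy: the value survives, the capability tag does not.
    mem[addr] = TaggedWord(word.value, tag=False)

mem = {}
cap = TaggedWord(0xCAFE, tag=True)   # a capability word
user_store(mem, 0x10, cap)
assert mem[0x10].value == 0xCAFE
assert mem[0x10].tag is False        # now just data
```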

Gernot
--
Gernot Heiser ,--_|\ School of Computer Sci. & Engin.
Phone: +61 2 9385 5156 / \ The University of NSW
Fax: +61 2 9385 5533 \_,--._* UNSW SYDNEY NSW 2052, Australia
E-mail: ***@unsw.edu.au v http://www.cse.unsw.edu.au/~gernot
PGP fingerprint: 94 1E B8 28 25 FD 7C 94 20 10 92 E5 0B FF 39 8F
Jonathan S. Shapiro
2000-07-17 22:46:27 UTC
Permalink
> Are you sure? I don't have the S/38 papers handy (working from home
> today) but I seem to remember that it was actually possible for
> unprivileged code to copy a cap, but the hardware would turn off the
> capability bit then, converting it to simple data.
>
> Gernot

I believe this is so also. I have the AS/400 Principles of Operations, but
regrettably my copy is in a box somewhere. If someone can remind me in two
weeks about this I'll gladly check on it.


shap
Norman Hardy
2000-07-18 00:21:00 UTC
Permalink
At 18:46 -0400 00/07/17, Jonathan S. Shapiro wrote:
>> Are you sure? I don't have the S/38 papers handy (working from home
>> today) but I seem to remember that it was actually possible for
>> unprivileged code to copy a cap, but the hardware would turn off the
>> capability bit then, converting it to simple data.
>>
>> Gernot
>
>I believe this is so also. I have the AS/400 Principles of Operations, but
>regrettably my copy is in a box somewhere. If someone can remind me in two
>weeks about this I'll gladly check on it.
>
>
>shap

Me too. I think that there were memory-to-memory operations, but it was
somehow enforced that capabilities among the data were copied all or none.
It would be impossible to program the copy with a load-store loop, as that
would require knowing which kind of word you were about to move.
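A toy Python sketch of that all-or-none property (hypothetical names): a trusted block move copies words together with their tags, whereas a user-level load/store loop cannot, since it cannot tell capability words from data words.

```python
# Toy model of the all-or-none copy described above: only a trusted
# memory-to-memory move carries tags along with the words. Made-up names.

class TaggedWord:
    def __init__(self, value, tag=False):
        self.value = value
        self.tag = tag

def trusted_block_move(src, dst):
    # Privileged memory-to-memory copy: tags travel with the words,
    # so capabilities and data are copied uniformly.
    for i, w in enumerate(src):
        dst[i] = TaggedWord(w.value, w.tag)

src = [TaggedWord(1, tag=True), TaggedWord(2, tag=False)]
dst = [None, None]
trusted_block_move(src, dst)
assert [w.tag for w in dst] == [True, False]
assert [w.value for w in dst] == [1, 2]
```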

It is hassles like this that sour me on interspersed abstracted
capabilities. Doing I/O was a bitch. I heard rumors of a modified 64 bit
PowerPC that was cognizant of the capability tags. I have never seen
details.

The Plessey 250 segregated data and capabilities into different memory
segments and had segregated data and capability registers.
Norman Hardy <http://www.mediacity.com/~norm>
Jonathan S. Shapiro
2000-07-18 00:36:28 UTC
Permalink
> I heard rumors of a modified 64 bit
> PowerPC that was cognizant of the capability tags. I have never seen
> details.

Here are the details:

The stock PowerPC architecture was altered to include variant load/store
instructions that set or do not set the tag bit in memory. Recall that the
application does not generate these instructions directly. I don't know if
the "store with tag set" operation is privileged in the architecture or not.


shap
Norman Hardy
2000-07-18 00:36:40 UTC
Permalink
At 15:52 -0400 00/07/17, Jonathan S. Shapiro wrote:
>> >Deep copy with identity preservation (i.e. there is a one to one
>> >correspondance between objects in the original and objects in the copy in
>> >spite of multiple references) requires a widely available "keybits"
>> >equivalent in order to perform the necessary identity tests.
>>
>> Yes, and even here there remains the question of discreet sub-contractors
>> which the factory allows in the form of allowing requestor's keys as
>> factory components.
>
>I'm not sure what you mean here. Can you expand on this?

The factory yield is permitted to access the requestor's key to another
factory, in general. This is because the factory logic will, in general,
accept such a key as a factory component. When a naïve deep copy comes to
an abstracted service such as that provided by a requestor's key it must
provide a copy in some sense. The Factory recognizes such a key because,
well, it *is* the factory. The same may work for the deep copier.
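A toy Python sketch of a deep copy with identity preservation, as discussed in the quoted exchange (all names are hypothetical): a memo table plays the role of the "keybits" identity test, and keys the copier itself recognizes, like a requestor's key, are passed through rather than copied.

```python
# Toy model: deep copy with a one-to-one correspondence between original
# and copied objects. The memo table is the "keybits" equivalent (an
# identity test), and recognized opaque service keys pass through intact.
# All names are made up for illustration.

class OpaqueKey:
    """Stands in for an abstracted service key the copier recognizes."""

issued_keys = set()

def deep_copy(node, memo=None):
    memo = {} if memo is None else memo
    if id(node) in memo:                      # identity test ("keybits")
        return memo[id(node)]
    if isinstance(node, OpaqueKey) and node in issued_keys:
        return node                           # recognized key: pass through
    if isinstance(node, list):
        clone = []
        memo[id(node)] = clone                # record before recursing
        clone.extend(deep_copy(x, memo) for x in node)
        return clone
    return node                               # plain immutable data

k = OpaqueKey()
issued_keys.add(k)
shared = [1, 2]
original = [shared, shared, k]
clone = deep_copy(original)
assert clone[0] is clone[1]                   # identity preserved
assert clone[0] is not shared                 # but it is a copy
assert clone[2] is k                          # key passed through, not copied
```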

I try to use present tense for Keykos, for I discovered just a week ago that
Keykos still runs as part of some EDS application. I must compliment IBM
for providing a remarkably stable platform upon which to run.
Norman Hardy <http://www.mediacity.com/~norm>