Discussion: Implementing the y-property using HTTPS
Tyler Close
2005-11-01 19:25:06 UTC
Over two years ago, a bunch of us collaborated on the design of the
httpsy URL scheme. One of the wonderful features of the original
design was the https *-subset. The scheme was a subset of the https
URL grammar:

https_URL = "https://*" key_id "@" host [ ":" port ] [ abs_path [ "?" query ]]

This URL scheme had the wonderful feature that if someone without
HTTPSY software received the URL, they would interpret it as a normal
https URL and thus still be able to connect to the server. If they did
have the HTTPSY software, the new start-up protocol and y-property
security checks would be used. In a world where it is impossible to
get clients to download a browser extension just to enable a
not-yet-widely-used security enhancement, this backwards compatibility
feature was crucial for deployment.

Unfortunately, Microsoft then decided that they could solve the
phishing problem by making it illegal to put a userinfo field in an
https URL. Goodbye backwards compatibility for httpsy.

Fast-forward to today, and there is motion among the major browser
manufacturers to support the TLS/1.0 Server Name Indication extension,
which enables sending the domain name to the server before the crypto
protocol starts. This suggests a
new design for implementing the y-property on top of HTTPS. Instead
of putting the public key hash in the userinfo segment, we could use
it as the bottom level subdomain label. For example,

https://****@yurl.net/

would become:

https://cl7h3f7jwyj3fvmw7jpnjfvf2xlcmayi.yurl.net/

Once again, not-yet-upgraded browsers could connect to the target
site. Unfortunately, there are some disadvantages to this design:

- You can't just use an IP address, and so you must own a domain name
in order to put up a server.
- It's no longer possible to be independent of the DNS, so if the DNS
goes down, so do all HTTPSY sites.
- There's not a clear way to generalize this https URL scheme into an
httpsy URL scheme that provides a list of IP addresses.
- Implementation work will be a lot harder, since there is more
mucking around with TLS/1.0 and X.509 than was required in the
original HTTPSY design.
- Although the length of the public-key subdomain label fits within
the length limits of the DNS specification, some online DNS management
software does not permit a subdomain label long enough to encode a
SHA-1 hash of the public key. We'll likely have to implement custom
redirectory software for DNS, which will be harder than implementing a
custom HTTP redirectory was. Either that, or start gambling with the
length of the hash.
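
For concreteness, here is a minimal sketch of the label derivation,
assuming OpenSSL for SHA-1 and a lowercase RFC 4648 base32 alphabet
(the exact encoding is my assumption, though the 32-character labels in
the examples above are consistent with it):

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* RFC 4648 base32 alphabet, lowercased for use in a host name. */
static const char B32[] = "abcdefghijklmnopqrstuvwxyz234567";

/* 160 hash bits / 5 bits per character = exactly 32 characters,
 * within the DNS limit of 63 per label but longer than some DNS
 * management front-ends allow. */
static void base32_label(const unsigned char hash[20], char label[33]) {
    int bits = 0, acc = 0, out = 0;
    for (int i = 0; i < 20; i++) {
        acc = (acc << 8) | hash[i];
        bits += 8;
        while (bits >= 5) {
            bits -= 5;
            label[out++] = B32[(acc >> bits) & 31];
        }
    }
    label[out] = '\0';
}

int main(void) {
    /* Stand-in for the DER-encoded public key. */
    const unsigned char key[] = "example public key bytes";
    unsigned char hash[SHA_DIGEST_LENGTH];
    char label[33];
    SHA1(key, sizeof key - 1, hash);
    base32_label(hash, label);
    printf("https://%s.yurl.net/\n", label);
    return 0;
}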

On the plus side, I think it's feasible to produce a Firefox extension
which implements the y-property checks for this new design.
Unfortunately, it'll require native code, so it will be harder to build
and won't be cross-platform. Being a Firefox extension means that it
would be something you could download and install from
addons.mozilla.org, instead of requiring a completely new Firefox
download.

Any thoughts, or volunteers?

Tyler

PS, for more about HTTPSY, see:

http://www.waterken.com/dev/YURL/httpsy/

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
David Mercer
2005-11-06 00:28:24 UTC
On 11/1/05, Tyler Close <***@gmail.com> wrote:
**snip some preamble**
Post by Tyler Close
Fast-forward to today, and there is motion among the major browser
manufacturers to support TLS/1.0, which enables sending the domain
name to the server before starting the crypto protocol. This suggests a
new design for implementing the y-property on top of HTTPS. Instead
of putting the public key hash in the userinfo segment, we could use
it as the bottom level subdomain label. For example,
https://cl7h3f7jwyj3fvmw7jpnjfvf2xlcmayi.yurl.net/
Once again, not-yet-upgraded browsers could connect to the target
I first posted this idea publicly to cap-talk in 2002:
http://www.eros-os.org/pipermail/cap-talk/2002-July/000955.html
Post by Tyler Close
- You can't just use an IP address, and so you must own a domain name
in order to put up a server.
Naked IPs have been viewed by 'normal users' as suspect for years anyway.
Post by Tyler Close
- It's no longer possible to be independent of the DNS, so if the DNS
goes down, so do all HTTPSY sites.
And how many people would have used http redirectories with naked IPs
in the first place? I just don't think one can escape the DNS fully in
today's net.

**snip**
Post by Tyler Close
We'll likely have to implement custom
redirectory software for DNS, which will be harder than implementing a
custom HTTP redirectory was. Either that, or start gambling with the
length of the hash.
I had proposed doing a custom combined DNS client/server to
dynamically update the list of nameservers for the subdomain for an
object, or at least to tell the one closest to you that you have
a copy of the object.
Post by Tyler Close
Any thoughts, or volunteers?
Tyler
I think that if we abuse the DNS in this way to locate objects, it'll
make a lot of things easier, and require less machinery in browser
extensions and similar on the client.

All of this https stuff is of course only really relevant to the web
front end; how the object capability system behind it keeps the custom
dns servers updated as to object locations is another matter. Wrap
those update operations themselves in httpsy calls, perhaps?

Cheers,

David Mercer
Tucson, AZ
Jed at Webstart
2005-11-08 23:23:44 UTC
...Fast-forward to today, and there is motion among the major browser
manufacturers to support TLS/1.0, which enables sending the domain
name to the server before starting the crypto protocol. This suggests a
new design for implementing the y-property on top of HTTPS. Instead
of putting the public key hash in the userinfo segment, we could use
it as the bottom level subdomain label. For example,
https://cl7h3f7jwyj3fvmw7jpnjfvf2xlcmayi.yurl.net/
Once again, not-yet-upgraded browsers could connect to the target
...
Would such an approach mean that what amounts to the
Swiss number would go across the network in clear text to
the DNS server? Wouldn't that be a problem? Perhaps I'm
not understanding and need some help clarifying the model.

--Jed http://www.webstart.com/jed/
David Hopwood
2005-11-08 23:31:29 UTC
Post by Jed at Webstart
...Fast-forward to today, and there is motion among the major browser
manufacturers to support TLS/1.0, which enables sending the domain
name to the server before starting the crypto protocol. This suggests a
new design for implementing the y-property on top of HTTPS. Instead
of putting the public key hash in the userinfo segment, we could use
it as the bottom level subdomain label. For example,
https://cl7h3f7jwyj3fvmw7jpnjfvf2xlcmayi.yurl.net/
Once again, not-yet-upgraded browsers could connect to the target
...
Would such an approach mean that what amounts to the
Swiss number would go across the network in clear text to
the DNS server?
This approach depends on public key crypto; the public key hash need not
be secret.
--
David Hopwood <***@blueyonder.co.uk>
Jed at Webstart
2005-11-09 01:06:22 UTC
Post by David Hopwood
Post by Jed at Webstart
...Fast-forward to today, and there is motion among the major browser
manufacturers to support TLS/1.0, which enables sending the domain
name to the server before starting the crypto protocol. This suggests a
new design for implementing the y-property on top of HTTPS. Instead
of putting the public key hash in the userinfo segment, we could use
it as the bottom level subdomain label. For example,
https://cl7h3f7jwyj3fvmw7jpnjfvf2xlcmayi.yurl.net/
Once again, not-yet-upgraded browsers could connect to the target
...
Would such an approach mean that what amounts to the
Swiss number would go across the network in clear text to
the DNS server?
This approach depends on public key crypto; the public key hash need not
be secret.
I considered that before responding and probably should have mentioned that
possibility (hinted at when I said, "Perhaps I'm not understanding and need
some help clarifying the model.").

However, there's still something I don't understand. When and how is
an authority (permission?) communicated with a YURL? I've been looking
through the documentation on Tyler's Web site and I can't seem to find
the essence of any such communication. If as you suggest the authority
is represented by a private key, then how is that private key communicated?

Hmmm. I read a bit more about the management of private keys for the
YURL scheme, e.g. on:

http://www.waterken.com/dev/YURL/FAQ/

(keep it offline, etc.).

Might the answer be that one can't communicate an authority/permission
as a YURL? I don't think that can be, as I clearly remember Tyler sending
me a "capability" to what amounted to a directory on his Web site at
one point. That YURL (as I understood it to be) clearly by itself was
sufficient authority to access the object that it pointed to. That means
that there was something secret in it that communicated the permission.

I wonder if the problem is that I'm focusing on the wrong end of the YURL.
Is it that in some sense the left end of the YURL is dealing with safe
naming/identification of the server and there is something more like
a Swiss number (not shown in Tyler's recent question to the list as
above) that deals with the permission component - when needed?

Can somebody help me out here?

--Jed http://www.webstart.com/jed/
David Hopwood
2005-11-09 14:51:03 UTC
Post by Jed at Webstart
Post by David Hopwood
Post by Jed at Webstart
...Fast-forward to today, and there is motion among the major browser
manufacturers to support TLS/1.0, which enables sending the domain
name to the server before starting the crypto protocol. This
suggests a new design for implementing the y-property on top of HTTPS.
Instead of putting the public key hash in the userinfo segment, we
could use it as the bottom level subdomain label. For example,
https://cl7h3f7jwyj3fvmw7jpnjfvf2xlcmayi.yurl.net/
Once again, not-yet-upgraded browsers could connect to the target
...
Would such an approach mean that what amounts to the
Swiss number would go across the network in clear text to
the DNS server?
This approach depends on public key crypto; the public key hash need not
be secret.
I considered that before responding and probably should have mentioned that
possibility (hinted at when I said, "Perhaps I'm not understanding and need
some help clarifying the model.").
However, there's still something I don't understand. When and how is
an authority (permission?) communicated with a YURL?
It is communicated in the URL "scheme-specific-part" (after the ":"). This
part must be treated as a shared secret. If https: is used, then both the
request URL and any other YURLs sent over the connection will be encrypted.
Post by Jed at Webstart
I've been looking
through the documentation on Tyler's Web site and I can't seem to find
the essence of any such communication. If as you suggest the authority
is represented by a private key, then how is that private key communicated?
Private keys are never communicated.
Post by Jed at Webstart
I wonder if the problem is that I'm focusing on the wrong end of the YURL.
Is it that in some sense the left end of the YURL is dealing with safe
naming/identification of the server and there is something more like
a Swiss number (not shown in Tyler's recent question to the list as
above) that deals with the permission component - when needed?
Exactly. The specific example above only works as an openly published YURL
(which doesn't prevent it from being a YURL, but it's a degenerate case).

The format of the scheme-specific-part doesn't need to be standardized at
the YURL/Web-AMP protocol level.
--
David Hopwood <***@blueyonder.co.uk>
David Hopwood
2005-11-09 14:57:02 UTC
Post by David Hopwood
Post by Jed at Webstart
However, there's still something I don't understand. When and how is
an authority (permission?) communicated with a YURL?
It is communicated in the URL "scheme-specific-part" (after the ":").
Sorry, I meant the "hierarchical part" (after the first "/").
Post by David Hopwood
This
part must be treated as a shared secret. If https: is used, then both the
request URL and any other YURLs sent over the connection will be encrypted.
--
David Hopwood <***@blueyonder.co.uk>
Tyler Close
2005-11-09 00:39:56 UTC
Post by Jed at Webstart
...Fast-forward to today, and there is motion among the major browser
manufacturers to support TLS/1.0, which enables sending the domain
name to the server before starting the crypto protocol. This suggests a
new design for implementing the y-property on top of HTTPS. Instead
of putting the public key hash in the userinfo segment, we could use
it as the bottom level subdomain label. For example,
https://cl7h3f7jwyj3fvmw7jpnjfvf2xlcmayi.yurl.net/
Once again, not-yet-upgraded browsers could connect to the target
...
Would such an approach mean that what amounts to the
Swiss number would go across the network in clear text to
the DNS server? Wouldn't that be a problem? Perhaps I'm
not understanding and need some help clarifying the model.
As David pointed out, the random-looking string in the above URLs is
the hash of the site's public key, not the SwissNumber for an object.
The above URLs are not capability URLs, just YURLs. A YURL has the
authentication properties of a capability, but not necessarily the
authorization properties. Once you've got the authentication
properties of a capability, it's easy to add the authorization
properties. For example, we could have capability URLs that look like:

https://cl7h3f7jwyj3fvmw7jpnjfvf2xlcmayi.yurl.net/id/vjqtxjo6m42fid7hnjrzsfovha7ggmuu

In this case, the "cl7h3f7jwyj3fvmw7jpnjfvf2xlcmayi.yurl.net"
component would go out in cleartext and the
"/id/vjqtxjo6m42fid7hnjrzsfovha7ggmuu" component would go out over the
authenticated and encrypted communication channel. The tricky part is
bootstrapping this comm channel in a backwards compatible way. We've
succeeded once, but were undone by Microsoft. Now we're looking for
another way.

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
Jed at Webstart
2005-11-09 01:17:32 UTC
Post by Tyler Close
...
As David pointed out, the random looking string in the above URLs is
the hash of the site's public key, not the SwissNumber for an object.
The above URLs are not capability URLs, just YURLs. A YURL has the
authentication properties of a capability, but not necessarily the
authorization properties. Once you've got the authentication
properties of a capability, it's easy to add the authorization
https://cl7h3f7jwyj3fvmw7jpnjfvf2xlcmayi.yurl.net/id/vjqtxjo6m42fid7hnjrzsfovha7ggmuu
In this case, the "cl7h3f7jwyj3fvmw7jpnjfvf2xlcmayi.yurl.net"
component would go out in cleartext and the
"/id/vjqtxjo6m42fid7hnjrzsfovha7ggmuu" component would go out over the
authenticated and encrypted communication channel. The tricky part is
bootstrapping this comm channel in a backwards compatible way. We've
succeeded once, but were undone by Microsoft. Now we're looking for
another way.
Tyler
I think I understand now. As I suggested in my last message, it appears
I was looking at the wrong end of the capability for the authorization
component.

There's still something about what you refer to as the "authentication"
properties above that I don't think I'm fully understanding. I'll think/read
about that a bit more when I have some more time and see if I can come
up to speed enough to contribute. Thanks for taking time to clarify.

--Jed http://www.webstart.com/jed/
Tyler Close
2005-11-09 01:42:52 UTC
Post by Tyler Close
- Although the length of the public-key subdomain label fits within
the length limits of the DNS specification, some online DNS management
software does not permit a subdomain label long enough to encode a
SHA-1 hash of the public key. We'll likely have to implement custom
redirectory software for DNS, which will be harder than implementing a
custom HTTP redirectory was. Either that, or start gambling with the
length of the hash.
So the DNS provider I'm thinking about is dyndns.com. The software to
update their DNS server is included in the firmware of popular home
networking products from NetGear and Linksys. Being compatible with
this software would make it a lot easier to host a server at people's
homes. Unfortunately, dyndns.com only allows a maximum of 20
characters in the subdomain label. There's also a threatening message
in the source of the HTML that tells you not to mess around with this
interface.

So with 20 characters, we could encode 100 bits using base32, or 104
using base37 (using all the case-insensitive characters available in
DNS), or 119 using base63 (using all the characters available in DNS).
Using base63 might actually be feasible, since case would only need to
be preserved in the URL, not in the domain lookup, since it's
improbable that two public key hashes would collide in a
case-insensitive comparison. But it still seems a little dodgy; some
clients might canonicalize the domain name. So, is 100 bits enough? If
so, what's the best way to generate these 100 bits from the public
key? First 100 bits of a SHA-1 hash?
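
A quick sanity check of those capacities, using floor(20 * log2(k))
bits for a k-symbol alphabet:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* 32 = base32; 37 = letters + digits + hyphen, case-insensitive;
     * 63 = the same alphabet, case-sensitive */
    int bases[] = { 32, 37, 63 };
    for (int i = 0; i < 3; i++)
        printf("base%d: %d bits in 20 characters\n",
               bases[i], (int)floor(20 * log2(bases[i])));
    return 0;   /* prints 100, 104, 119 */
}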

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
Trevor Perrin
2005-11-10 17:37:42 UTC
Post by Tyler Close
So with 20 characters, we could encode 100 bits using base32
[...]
Post by Tyler Close
So, is 100 bits enough? If
so, what's the best way to generate these 100 bits from the public
key? First 100 bits of a SHA-1 hash?
A useful technique here may be "hash extension", e.g.:

1) Calculate HASH(public key, counter)
2) Increment the counter until a hash value is discovered which begins
with some number of zero bytes (1, 2, 3, etc.)
3) Construct a 100-bit fingerprint as:
- a 4-bit encoding of the number of initial zero bytes in the hash value
- 96 bits from the hash value after the zero bytes
4) Stick the counter into a self-signed certificate somewhere, so that a
verifier can quickly verify that the public key matches the fingerprint,
without an expensive search

Depending on how much computation time you spend in generating the
fingerprint (steps 1 and 2), you can get a higher security level (since
a brute-force attacker has to search for a hash value that matches the
zero bytes *plus* the 96 bits). On a 1.7GHz P4-M:
- 112-bit security level at ~1/20th of a second (2 zero bytes)
- 120-bit security level at ~13 seconds (3 zero bytes)
- 128-bit security level at ~1 hour (4 zero bytes)

As computers get faster, it will become feasible to generate
fingerprints at higher security levels, and thus potentially keep pace
with increases in brute strength by attackers.

Anyways, it's a minor but cute technique for compressing fingerprints,
see references below.
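
A rough C sketch of steps 1-3, assuming OpenSSL's SHA1 and a byte-string
public key (the counter encoding and fingerprint layout here are
illustrative choices, not anything standardized):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Count the leading zero bytes of a digest. */
static int zero_bytes(const unsigned char *h) {
    int n = 0;
    while (n < SHA_DIGEST_LENGTH && h[n] == 0) n++;
    return n;
}

/* Search for a counter such that SHA1(key || counter) starts with
 * `target` zero bytes; fill in the 100-bit fingerprint (a 4-bit
 * zero-byte count plus the next 96 hash bits, padded into 13 bytes)
 * and return the counter for embedding in the certificate. */
static uint64_t make_fingerprint(const unsigned char *key, size_t keylen,
                                 int target, unsigned char fp[13]) {
    unsigned char buf[1024], h[SHA_DIGEST_LENGTH];
    uint64_t counter;
    memcpy(buf, key, keylen);               /* assumes keylen <= 1016 */
    for (counter = 0;; counter++) {
        memcpy(buf + keylen, &counter, 8);  /* byte order fixed by convention */
        SHA1(buf, keylen + 8, h);
        if (zero_bytes(h) >= target) break;
    }
    fp[0] = (unsigned char)target;          /* the 4-bit length field */
    memcpy(fp + 1, h + target, 12);         /* 96 bits after the zeros */
    return counter;
}

int main(void) {
    const unsigned char key[] = "stand-in for the DER-encoded public key";
    unsigned char fp[13];
    /* Two zero bytes ~ 2^16 hashes: the ~1/20th-second case above. */
    uint64_t c = make_fingerprint(key, sizeof key - 1, 2, fp);
    printf("counter = %llu, zero bytes = %d\n", (unsigned long long)c, fp[0]);
    return 0;
}

The verifier recomputes the single hash SHA1(key || counter) using the
counter from the certificate and compares it against the fingerprint,
so only the generator pays for the search.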


Trevor


http://research.microsoft.com/users/tuomaura/CGA/
http://trevp.net/cryptoID/
Tyler Close
2005-11-10 19:15:21 UTC
Post by Trevor Perrin
1) Calculate the hash value of HASH(public key, counter)
2) Increment the counter until a hash value is discovered which begins
with some number of zero bytes (1, 2, 3, etc.)
- 4-bit encoding of number of initial zero bytes in hash value
- 96-bits from the hash value after the zero bytes
4) Stick the counter into a self-signed certificate somewhere, so that a
verifier can quickly verify that the public key matches the fingerprint,
without an expensive search
This is an intriguing technique. Unfortunately, step 4 becomes
impossible if we want to use certificates that are signed by a CA. The
goal is to produce a certificate chain that can be validated by either
existing clients, using the existing method, or new clients, using
the hash check method. When buying a domain-validated certificate, you
typically only have the ability to control the CN field and the public
key.
Post by Trevor Perrin
Depending on how much computation time you spend in generating the
fingerprint (steps 1 and 2), you can get a higher security level (since
a brute-force attacker has to search for a hash value that matches the
- 112-bit security level at ~1/20th of a second (2 zero bytes)
- 120-bit security level at ~13 seconds (3 zero bytes)
- 128-bit security level at ~1 hour (4 zero bytes)
So what happens to these numbers if, instead of using a counter, we
just keep generating new public/private key pairs until we get a
public key that hashes to a value with the desired number of zero
bytes?

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
Lorens Kockum
2005-11-10 23:06:08 UTC
When buying a domain validated certificate, you typically only
have the ability to control the CN field and the public key
You meant private, I'm sure :-)
Tyler Close
2005-11-11 00:14:32 UTC
Post by Lorens Kockum
When buying a domain validated certificate, you typically only
have the ability to control the CN field and the public key
You meant private, I'm sure :-)
No, I meant public. The certificate the CA sells you contains your
public key. The private key is not in the certificate. It's kept
private.

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
Tyler Close
2005-11-11 01:34:47 UTC
Post by Tyler Close
Post by Trevor Perrin
Depending on how much computation time you spend in generating the
fingerprint (steps 1 and 2), you can get a higher security level (since
a brute-force attacker has to search for a hash value that matches the
- 112-bit security level at ~1/20th of a second (2 zero bytes)
- 120-bit security level at ~13 seconds (3 zero bytes)
- 128-bit security level at ~1 hour (4 zero bytes)
So what happens to these numbers if, instead of using a counter, we
just keep generating new public/private key pairs until we get a
public key that hashes to a value with the desired number of zero
bytes?
Google was less forthcoming here than usual, but I think I've got some
back-of-the-envelope numbers.

NIST puts a 1024 bit RSA key at the 80 bit security level, so no
extension of our 96 bit hash is required.

NIST puts a 2048 bit RSA key at the 112 bit security level, so we'd
need 16 extension bits. Assuming 5 seconds to generate the key pair,
it would take about 2 days to find a key pair.

NIST puts a 3072 bit RSA key at the 128 bit security level, so we'd
need 32 extension bits. Assuming 20 seconds to generate the key pair,
it would take over a millennium to find a key pair. Nuts.

So with this design, the hash wouldn't be able to keep up with the
extra security provided by RSA keys larger than 2048. RSA Inc. says a
2048 bit key should be good until 2030. I don't expect to use this
design for resources meant to live beyond that, so perhaps this design
is sufficient. If it turns out there's demand for longer lived keys,
we can certainly produce a design that uses a longer domain label, but
is incompatible with common dynamic DNS services.
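
For what it's worth, numbers in this ballpark fall out of counting the
median number of key generations before a hit, ln 2 * 2^k for k
extension bits; a quick check under those assumptions:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* (extension bits, seconds per keypair) for 2048/3072-bit RSA */
    double k[2] = { 16, 32 }, per[2] = { 5, 20 };
    for (int i = 0; i < 2; i++) {
        /* median trials before the first success at probability 2^-k */
        double secs = log(2) * pow(2, k[i]) * per[i];
        printf("%.0f bits: ~%.1f days (~%.0f years)\n",
               k[i], secs / 86400, secs / 86400 / 365.25);
    }
    return 0;   /* ~2.6 days and ~1900 years on these assumptions */
}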

Thoughts?

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
Trevor Perrin
2005-11-11 03:01:14 UTC
Post by Tyler Close
Post by Tyler Close
Post by Trevor Perrin
Depending on how much computation time you spend in generating the
fingerprint (steps 1 and 2), you can get a higher security level (since
a brute-force attacker has to search for a hash value that matches the
- 112-bit security level at ~1/20th of a second (2 zero bytes)
- 120-bit security level at ~13 seconds (3 zero bytes)
- 128-bit security level at ~1 hour (4 zero bytes)
So what happens to these numbers if, instead of using a counter, we
just keep generating new public/private key pairs until we get a
public key that hashes to a value with the desired number of zero
bytes?
Google was less forthcoming here than usual, but I think I've got some
back of the envelope numbers.
[...]
Post by Tyler Close
NIST puts a 2048 bit RSA key at the 112 bit security level, so we'd
need 16 extension bits. Assuming 5 seconds to generate the key pair,
it would take about 2 days to find a key pair.
I don't think you want to generate totally different RSA key pairs just
to see whether HASH(N || e) starts with zero bytes.

I think you'd rather choose an N, then try values for e. When a
suitable e is discovered (i.e. an e for which HASH(N || e) has a prefix
of zero bytes), then you do the inversion mod phi(N) to get d. If you
set things up right, you should be able to try different e values for a
cost of about one SHA-1 compression function for each trial fingerprint,
which is what the numbers above were based on.

The e value will have around the same number of bits as the number of
zero bits that were searched for, e.g. 16, 24, 32, etc. That's not as
small as the common choices of e (17 bits or less; small e's are more
efficient), but it's not bad.

One issue is that suitable e values have to be coprime to phi(N). If
you choose the RSA primes p and q such that p-1 and q-1 have no prime
factors less than 64 bits except 2 (which I think is easy and efficient:
just choose a bunch of 64+ bit primes and try primality testing
different multiples of them plus 1), then I *think* you can just try
e=3,5,7,9,11,... with essentially no chance of e failing to be coprime
to (p-1)(q-1).

I'm not sure the math is right... I'd certainly appreciate comments or
corrections!
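
In code, the search loop might look something like this (a sketch using
OpenSSL's bignum API; error handling is elided, and search_e and the
big-endian counter encoding are my own illustrative choices):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <openssl/bn.h>
#include <openssl/sha.h>

/* Try e = 3, 5, 7, ... until SHA1(N || e) has `target` leading zero
 * bytes and e is invertible mod phi = (p-1)(q-1), which the caller
 * computes; writes d and returns e, or 0 on failure. */
static uint64_t search_e(const BIGNUM *n, const BIGNUM *phi,
                         int target, BIGNUM *d, BN_CTX *ctx) {
    unsigned char buf[1024], h[SHA_DIGEST_LENGTH];
    int nlen = BN_num_bytes(n);
    if (nlen + 8 > (int)sizeof buf) return 0;
    BN_bn2bin(n, buf);
    for (uint64_t e = 3; e < (1ULL << 40); e += 2) {
        for (int i = 0; i < 8; i++)   /* big-endian encoding of e */
            buf[nlen + i] = (unsigned char)(e >> (8 * (7 - i)));
        SHA1(buf, nlen + 8, h);
        int z = 0;
        while (z < SHA_DIGEST_LENGTH && h[z] == 0) z++;
        if (z < target) continue;
        BIGNUM *e_bn = BN_new();
        BN_set_word(e_bn, (BN_ULONG)e);
        /* BN_mod_inverse returns NULL when gcd(e, phi) != 1; with p-1
         * and q-1 free of small odd factors this should not happen. */
        BIGNUM *ok = BN_mod_inverse(d, e_bn, phi, ctx);
        BN_free(e_bn);
        if (ok) return e;
    }
    return 0;
}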


Trevor
Jed at Webstart
2005-11-11 19:55:30 UTC
All cap,

I read this recent "10 Things I Hate About (U)NIX" article:

http://www.informit.com/articles/article.asp?p=424451

with interest. Of course I recognize that this is just one person's
view. However, I think it's pretty reflective of the sorts of things
that people think about when considering the potential failings of Unix.

Naturally (as evidenced on this cap-talk list) what struck me most was
the fact that Unix uses an ambient authority model for access control
that is incapable of effectively supporting the Principle Of Least
Authority (Plash notwithstanding) and is thus nearly as wide open as
possible to exploits by Trojan horses.

It's true that many of the "things to hate" aren't even really very
much about the Unix operating system - e.g. X, even C to some extent,
etc. However, there are enough "things to hate" that fall into the
base OS interface area such as "1. Everything Is a File (Unless It
Isn't)", "6. Synchronous System Calls", and "7. One-Way System
Calls" (some where microkernels are suggested as a solution) that
this sort of discussion to me cries out for mention of the need for
support of a POLA program execution environment to combat Trojan horses.

I suspect (Hi David, feel free to defend yourself) that this author
isn't even aware of models beyond the ambient authority (user) model
and doesn't understand how such POLA environments (running programs
with their own selectively granted permissions rather than running as
'user' surrogates) can effectively combat Trojan horses. If he is
among the few who are aware of such POLA environments and systems
(which is it, David?) I expect he believes, with most in our profession,
that a change to a POLA program execution environment would be so
expensive in terms of overhead and strained interfaces that the cost
would outweigh the benefit.

Regardless of where he's coming from, to me the lack of a mention of
the POLA in the context of things lacking in Unix is a shockingly
clear indication of just how poorly we (the cap/POLA community?) are
doing in informing others about this problem and about potential solutions.

Maybe it's time to introspect a bit here. What are we doing
wrong? What are we missing that might bring the need for POLA
execution environments enough into the main stream to combat the
current scourge of Trojan horses? Certainly there are plenty of
papers that have been written and published. There are plenty of
systems that have been designed and implemented with demonstrable
results (except of course that none are widespread enough
commercially to demonstrate significant benefit from combating Trojan horses).

I'm looking for strategic thoughts. Perhaps we can draw David into
the discussion to get an idea why the lack of a POLA execution
environment didn't occur to him as something to hate about Unix (Windows).

--Jed http://www.webstart.com/jed/
John Carlson
2005-11-12 03:21:08 UTC
Isn't the POLA argument merely an argument for an object-oriented
operating system?
Where if you want to protect an object (say a file), you don't let an
untrusted piece of code at the
handle or reference to the object. I understand that
object-orientedness is not a panacea, that
good design (POLA) is required as well.

What do we need to do? Work with Web browser and Web server vendors to
enable a POLA web.
I tend to think that browsers and servers can be fairly operating system
agnostic, and the next
breakthrough OS will be built on top of the Web. The underlying
operating system becomes very
simple and unimportant when all it is doing is storing, retrieving,
serving and receiving web pages.
I'm not really sure what the next GUI will be, so I'll leave that out of
this discussion. I'm not sure
what language the next OS will be written in, but I'm fairly sure that
it will not be Intel architecture;
in fact, that should probably be disallowed as an option.

Thus, we need to get on mailing lists that aren't specific to POLA and
spread the word. Maybe
start chatting up POLA on moznet, etc.

John
Post by Jed at Webstart
All cap,
http://www.informit.com/articles/article.asp?p=424451
with interest. Of course I recognize that this is just one person's
view. However, I think it's pretty reflective of the sorts of things
that people think about when considering the potential failings of Unix.
Naturally (as evidence on this cap-talk list) what struck me most was
the fact that Unix uses an ambient authority model for access control
that is incapable of effectively supporting the Principle Of Least
Authority (Plash notwithstanding) and is thus nearly as wide open as
possible to exploits by Trojan horses.
It's true that many of the "things to hate" aren't even really very
much about the Unix operating system - e.g. X, even C to some extent,
etc. However, there are enough "things to hate" that fall into the
base OS interface area such as "1. Everything Is a File (Unless It
Isn't)", "6. Synchronous System Calls", and "7. One-Way System
Calls" (some where microkernels are suggested as a solution) that this
sort of discussion to me cries out for mention of the need for support
of a POLA program execution environment to combat Trojan horses.
I suspect (Hi David, feel free to defend yourself) that this author
isn't even aware of models beyond the ambient authority (user) model
and doesn't understand how such POLA environments (running programs
with their own selectively granted permissions rather than running as
'user' surrogates) can effectively combat Trojan horses. If he is
among the few who are aware of such POLA environments and systems
(which is it David?) I expect he believes with most in our profession
that a change to a POLA program execution environment would be so
expensive in terms of overhead and strained interfaces that the cost
would outweigh the benefit.
Regardless of where he's coming from, to me the lack of a mention of
the POLA in the context of things lacking in Unix is a shockingly
clear indication of just how poorly we (the cap/POLA community?) are
doing in informing others about this problem and about potential solutions.
Maybe it's time to introspect a bit here. What are we doing wrong?
What are we missing that might bring the need for POLA execution
environments enough into the main stream to combat the current scourge
of Trojan horses? Certainly there are plenty of papers that have been
written and published. There are plenty of systems that have been
designed and implemented with demonstrable results (except of course
that none are widespread enough commercially to demonstrate
significant benefit from combating Trojan horses).
I'm looking for strategic thoughts. Perhaps we can draw David into
the discussion to get an idea why the lack of a POLA execution
environment didn't occur to him as something to hate about Unix (Windows).
--Jed http://www.webstart.com/jed/
Jed at Webstart
2005-11-12 04:20:10 UTC
Post by John Carlson
Isn't the POLA argument merely an argument for an object-oriented
operating system?
Where if you want to protect an object (say a file), you don't let
an untrusted piece of code at the
handle or reference to the object. I understand that
object-orientedness is not a panacea, that
good design (POLA) is required as well.
Well, yes. Object oriented systems are necessary but not sufficient
for POLA. If one is to
limit running programs to only the authority to access the resources
they need, then there
must be a means to provide them with just such access on a resource
by resource (objects)
basis. In addition there must be mechanisms in place to grant
running programs the
authority they need and no more. Granting authority via 'user'
surrogates will not suffice
for practical POLA.
Post by John Carlson
What do we need to do? Work with Web browser and Web server vendors
to enable a POLA web.
I tend to think that browsers and servers can be fairly operating
system agnostic, and the next
breakthrough OS will be built on top of the Web. The underlying
operating system becomes very
simple and unimportant when all it is doing is storing, retrieving,
serving and receiving web pages.
Heh. I love your optimism, John. Re: POLA for the Web: since I
assume you've been following
this list you know about Tyler's efforts with YURLs. It seems to me
that something like YURLs
can ultimately get us to POLA for Web accesses. However, there are
many steps and barriers
along the way. Rather than presume, perhaps Tyler can comment on
what he thinks it will
take to get to a POLA Web - e.g. over YURLs. I think there is
certainly more short term potential
in this area than in the area of execution environments, but even on the
Web POLA doesn't seem
to be making substantive progress (perhaps Alan Karp would care to
comment?). On the Web
at least the active entities (subjects) are people, so basing
authority on 'user' authority on
the Web seems to me to make sense. On the Web just getting authority
bundled with
designation for something like YURLs is not yet there to serve as a
basis for a more POLA
Web.
Post by John Carlson
I'm not really sure what the next GUI will be, so I'll leave that
out of this discussion.
That seems reasonable as I don't believe any adaptations needed for
POLA in a GUI will
be significant (others?).
Post by John Carlson
I'm not sure
what language the next OS will be written in, but I'm fairly sure
that it will not be intel architecture,
in fact, that should probably be disallowed as an option.
Do you believe the language that the 'next' OS will be written is a
substantial barrier - never mind
whether we could have any say about the manufacturer of future processors?

I believe the language issues (e.g. POLA at the object level in O-O
languages) will take care
of themselves if we can get to POLA at the OS interface. Perhaps I'm
focusing too much on the wrong
area, but until 'user'-mode processes running under an OS (even that
'user' word seems to suggest
ambient authority) can be limited to access to just the objects they
need, I see little hope
for languages getting POLA. As far as the "next" OS goes, well, just
let me suggest that when
David Chisnall says, "Time for U(NIX) 2 Retire", I think the 'retire'
metaphor is stretched beyond
its effective usability. The thing that keeps OSs around and in fact
dominant in the market
is the amount of software written to their interfaces. In my opinion
that is why Windows is
dominant on the desktop and why Unix is dominant in the area of
scientific computing.

Many of the interfaces that those dominant systems (Windows and Unix)
present to running
programs are not, as you suggest, appropriately object
oriented. They haven't been since Unix
or Windows have been around. Even if there was a huge ground swell
of support for POLA
execution environments, and even if market issues were put aside, I
believe there is so much
momentum behind the existing interfaces that it will be quite
difficult to move them.

Perhaps there is hope in the little-bit-at-a-time approaches (e.g.
Plash and Polaris). However, at this
time there isn't a ground swell of support for POLA execution
environments. In fact almost nobody
(e.g. David Chisnall?) even knows about them and that they can be
effective against the problem
of Trojan horses.
Post by John Carlson
Thus, we need to get on mailing lists that aren't specific to POLA
and spread the word. Maybe
start chatting up POLA on moznet, etc.
Heh. Well, that certainly can't hurt. However, do you believe such
'chatting' will ultimately lead
to POLA execution environments? Like to speculate on how many years
that might take? I'd
say we're sitting at about 30. Of course it's only really in the
Internet era (since, say, 1993?)
that the issue of Trojan horses has really started to bite. It's
only since the late nineties that
it's become a serious problem. Now there are many companies whose
business is in areas
like virus filtering that would likely be adversely impacted if the
dominant program execution
environments were POLA.

Still, the lack of such POLA execution environments now seems to me a
serious practical problem.
However, I don't see any movement (beyond this list which I believe
has low "market penetration")
toward correcting that problem.

Are we going to wallow in this situation for another 30 years?

--Jed http://www.webstart.com/jed/
Ian G
2005-11-12 21:00:14 UTC
Permalink
Post by Jed at Webstart
Maybe it's time to introspect a bit here. What are we doing wrong?
What are we missing that might bring the need for POLA execution
environments enough into the main stream to combat the current scourge
of Trojan horses? Certainly there are plenty of papers that have been
written and published. There are plenty of systems that have been
designed and implemented with demonstrable results (except of course
that none are widespread enough commercially to demonstrate significant
benefit from combating Trojan horses).
I'm looking for strategic thoughts. Perhaps we can draw David into the
discussion to get an idea why the lack of a POLA execution environment
didn't occur to him as something to hate about Unix (Windows).
Sounds like you need "10 things I hate about POLA/caps!"

Here goes! Feel free to rip into them - or me :-)

1. All the info is at a very academic level. I don't
think anyone less than doctoral level in comp sci would
get anywhere with it. But the world is full (and I do
mean full) of self-taught, framework, plug-in programmers.
Proper comp sci degrees are not as common as one would
think; most places do not have even one person like the
people on this list, and many places don't even have
comp sci graduates.

2. Far too much of the language is in obscure inside
code that takes a while to learn. For example, "ambient
authority" and "confused deputy." The Granovetter
diagram is the easiest turn-off: everyone starts by
talking about Granovetter introductions and cute little
diagrams, but those are meaningless to most people, and
the explanations don't convey anywhere near as much as
the explainers think, probably because the listener
hasn't read the book on Granovetter. (Oddly enough,
I did actually research Granovetter one day and
discovered he was much more interested in weak
relationships!)

3. There are few or no low level easy to read intros.
No executive introductions. I think there is a good
low level intro which was reported on this group sometime
ago - but I haven't got it now (by Shap?). No Dr Dobbs
or Nutshells, no cross linking like "caps for java
programmers" and "caps explained for LISP people..."

4. There is no linking to easy areas. Everything and
everybody is focussed on the hard stuff, the elegant
stuff, the unsolved problems. But if you want it to
spread, you have to reach out to people who will never
understand those hard issues, they just want to read the
basics: which means the URL to the 10 best links should
be down at the bottom of this email, and on everyone's
page. There should be a blog. There should be podcasts,
there should be dancing girls... well, maybe not that
far, but if you want a movement you have to preach to
the masses.

5. No frameworks, no tools. Someone else pointed this
out. We live in a framework world; practically all
big and small systems are now done within a set of
frameworks which get plugged together in standard
ways and then some glue code is hacked in.

6. No killer apps - it is unfortunate that most tech
introductions succeed not because they are good, but
because they are introduced by a killer app. It's the
app people want, not the philosophy.

7. No "compatible drop in replacement" systems. The
second way things get done is to replace some other
system, but with improvements. That doesn't denigrate
the prior system, it improves it. So from a Unix pov,
that means re-writing parts or all of Unix, but
retaining all the capability of the host. (I've
recently started using subversion instead of cvs,
and it works with exactly the same command language.
It's great, I simply don't have time to learn yet
another tool...)

8. A sort of unreality pervades many caps people's
interaction with the rest of the world. I find this
very off-putting, and I consider myself one of the more
patient people around. This unreality manifests itself
in several ways. The notion seems almost that it has
to be all POLA all caps or its not good enough. Like
Unix - it's not good enough because it's not POLA.
Well, that's nonsense - it stands alone and proud
and runs many computers safely and securely.

9. Little integration with other disciplines. There
seems to be this notion that you are providing the
philosophy and we will receive it and be grateful.
It's only now that people like Ping are working to
integrate security into UIs and the guys at HP are
working to create actual usable demos that this might
be changing.

10. Isolated ideas are not science. My impression
on reading lots of articles and papers and so forth
is that I find relatively little attempt to link and
cross-reference to other concepts and ideas. This
could be considered elitist, insular, rude, arrogant,
or whatever. I prefer to think of it as a signal that
the work that is done may be crap, because if you
haven't taken the time to integrate and understand
(and criticise!) other people's work, then you haven't
done the scientific thing. And other scientists who
read the stuff then can often sense that, although
they may not vocalise it (it took me a while to
figure out what I felt and why).



There! Fire away :)

Having said all that, I don't necessarily think there
are easy solutions to any of this. I also think many
of the same things apply to my work - which is how I
figured some of them out.

iang
Ka-Ping Yee
2005-11-12 21:24:26 UTC
Post by Ian G
1. All the info is at a very academic level.
2. Far too much of the language is obscure.
3. There are few or no low level easy to read intros.
4. There is no linking to easy areas.
5. No frameworks, no tools.
6. No killer apps.
7. No "compatible drop in replacement" systems.
8. A sort of unreality... [which i take to mean an idealist attitude].
9. Little integration with other disciplines.
10. Isolated ideas are not science.
Wow -- i think that is an excellent list of challenges for us to
think hard about. I don't necessarily think all of these are as
absolute as Ian has stated them (i.e. some of the "No"s could be
replaced by "Few" or "Minimally known/deployed" etc.) -- but
nonetheless each one is a serious concern worth addressing.

I think i've seen some of these come up at various points in earlier
discussions, and it's great to have a summary of them here.

We have the early beginnings of answers to some of these items
(e.g. tools, apps), which is cause for hope.


-- ?!ng
Ian G
2005-11-14 10:37:33 UTC
Post by Ka-Ping Yee
Post by Ian G
1. All the info is at a very academic level.
2. Far too much of the language is obscure.
3. There are few or no low level easy to read intros.
4. There is no linking to easy areas.
5. No frameworks, no tools.
6. No killer apps.
7. No "compatible drop in replacement" systems.
8. A sort of unreality... [which i take to mean an idealist attitude].
9. Little integration with other disciplines.
10. Isolated ideas are not science.
Wow -- i think that is an excellent list of challenges for us to
think hard about. I don't necessarily think all of these are as
absolute as Ian has stated them (i.e. some of the "No"s could be
replaced by "Few" or "Minimally known/deployed" etc.) -- but
nonetheless each one is a serious concern worth addressing.
Of course! But sometimes it is easier to be more
singular minded ("aggressive") as that helps to make
the point clearer where one is challenging long term
habits, albeit at some risk of offence.

( OTOH, there is no reason why things on that list
can't simply be wrong. They are the perception of
an outsider, and it could be that I've simply
misunderstood some things of subtlety. Which is
still an issue, but it's a different problem to
solve. )

I was struck on the weekend by the Amazon Mechanical
Turk. This idea directly relates to the development
of a thread of things going back to the late 80s, and
which I partially covered in a paper back in 97. Now,
the power of the story and analogue is outstanding:
having read their presentation, it is immediately clear
what they are talking about. Or, it is immediately
clear enough and also exciting enough to ignore any
other difficulties.

https://www.financialcryptography.com/mt/archives/000596.html

What caps is lacking is those sorts of easy stories.
(Someone else posted on this.) We need the easy
analogue to some familiar story, and we need this
to be inculcated into the headline and front page
of every presentation.
Post by Ka-Ping Yee
I think i've seen some of these come up at various points in earlier
discussions, and it's great to have a summary of them here.
We have the early beginnings of answers to some of these items
(e.g. tools, apps), which is cause for hope.
Right. It was these sorts of thoughts that motivated
me to push (help?) Marc's recent paper on ZT & petnames.

iang
John C. McCabe-Dansted
2005-11-13 03:54:56 UTC
At the moment there is little popular incentive to use caps because nobody
will bother learning how to request a capability to a file when the standard
execution environment allows you to just fopen it. If we can sell POLA, we
then have a way of selling caps.

For this reason, I think Mark Seaborn's Plash is almost an answer to the
"10 things I hate about caps". Plash plus something like klik or zero-install
could provide the killer app for POLA. Together they provide a safer, cheaper
and more convenient way of providing software solutions while protecting the
network from malware than e.g. "only the IT department is allowed to install
software".

Then, if Plash becomes popular and there is a famous confused deputy exploit,
I understand he could fix all such exploits in typical C code by requiring the
application to pass the same pointer to fopen as was returned by
powerbox_req_filename.
i.e.
fopen(fname)
succeeds but
fopen(strdup(fname))
fails, since the copy is a different pointer.

This solves the confused deputy problem as (remotely sane) C programs never
accept pointers from untrusted sources, and two valid non-null pointers are
never equal unless they come from the same source.

Then fname becomes a capability ID that just so happens to also be a pointer
to a text string. Once the average programmer's favourite programming framework
offers capabilities and uses them under-the-hood, I imagine they would want
to exploit them and learn how to use them effectively.

--
John C. McCabe-Dansted
Masters Student
Darius Bacon
2005-11-13 07:09:27 UTC
Post by John C. McCabe-Dansted
Then, if Plash becomes popular and there is a famous confused deputy exploit,
I understand he could fix all such exploits in typical C code by requiring the
application to pass the same pointer to fopen as was returned by
powerbox_req_filename.
i.e.
fopen(fname)
succeeds but
fopen(strdup(fname))
fails.
This solves the confused deputy problem as (remotely sane) C programs never
accept pointers from untrusted sources, and two valid non-null pointers are
never equal unless they come from the same source.
Can you explain how this works for the usual example of the confused
deputy? -- where there's no copying or building of strings, unless you
count a string literal. The problem is having two fopens for two
different roles where what's OK in one role isn't in the other:

#include <stdio.h>
int main(int argc, char **argv) {
    if (argc < 2) return 1;
    /* Role 1: debugging output, named by the (possibly hostile) caller. */
    FILE *debugging = fopen(argv[1], "w");
    /* Role 2: the deputy's own private accounting file. */
    FILE *accounting = fopen("/home/compiler/accounting", "a");
    if (!accounting || !debugging) return 1;
    fprintf(accounting, "Starting another run\n");
    fprintf(debugging, "Hooray\n");
    return 0;
}

(where the executable is suid and the attack looks like
$ runcompiler /home/compiler/accounting
truncating the private accounting file. I haven't tried actually
running this, being at a Windows laptop at the moment.)

Darius
Richard Uhtenwoldt
2005-11-13 22:12:38 UTC
Post by Darius Bacon
FILE *debugging = fopen(argv[1], "w");
FILE *accounting = fopen("/home/compiler/accounting", "a");
I think Plash redefines or hooks fopen. I believe it's not the
normal libc fopen, Darius.

the Plash fopen might for example make a habit of ensuring that
the char pointer you give it is pointer-equal to one of the
pointers that powerbox_req_filename has already returned.
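
In code, that habit might look something like this toy sketch (the real
Plash mechanism may work entirely differently, and the real signature of
powerbox_req_filename is unknown to me; everything here is illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_GRANTS 64
static const char *granted[MAX_GRANTS];
static int ngrants;

/* Hypothetical powerbox: ask the operator for a file name and remember
 * the exact pointer we hand back to the application. */
const char *powerbox_req_filename(const char *prompt) {
    char buf[4096];
    printf("%s: ", prompt);
    if (!fgets(buf, sizeof buf, stdin) || ngrants >= MAX_GRANTS)
        return NULL;
    buf[strcspn(buf, "\n")] = '\0';
    return granted[ngrants++] = strdup(buf);
}

/* Checked replacement for fopen: only pointers previously returned by
 * the powerbox are honoured. */
FILE *checked_fopen(const char *name, const char *mode) {
    for (int i = 0; i < ngrants; i++)
        if (granted[i] == name)   /* pointer identity, not strcmp */
            return fopen(name, mode);
    return NULL;                  /* no grant for this pointer */
}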

I hypothesize that a central goal of Plash is to make it
impossible for the app simply to fopen(argv[1] ...) or to
fopen("/some/string/literal").

I.e., the whole idea is to give the computer operator ("sysadm"?)
running the app more control over the app by making the app ask
the computer operator for permission to open the file (or talk to
the modem, or ...). the app _can't_ just do it itself.

I hope Mark Seaborn will enlighten us.

If what I just said is actually the case, then there is some
vague isomorphism between Plash and per-process namespaces (PPNs)
(a Plan 9 thing). If the computer operator has PPNs, he can
start the app in an environment in which, e.g.,
/home/compiler/accounting simply does not exists. If instead the
app has been written atop Plash, then the string
/home/compiler/accounting must, e.g., have been obtained via the
powerbox or the fopen will not work.

Both cases differ from the normal Unix case in that it is the
operator and not the app writer that gets to specify whether the
app gets to write to /home/compiler/accounting. In the simplest
case, the operator is presented with the name of the file and
grants or refuses to give the app access a la a file open dialog
box. Ideally with Plash it is possible for the operator to do
more advanced things, like write a script that grants the
accesses when a simple dialog box is not expressive enough.

Text by me above is hereby placed in the public domain
John C. McCabe-Dansted
2005-11-14 01:23:39 UTC
Post by Richard Uhtenwoldt
I hypothesize that a central goal of Plash is to make it
impossible for the app simply to fopen(argv[1] ...) or to
fopen("/some/string/literal").
As I understand it, Plash allows the user to associate rights with argv[1].
The user indicates that a command line application should have write access
to a file by prefixing it with '=>', e.g. the user indicates that
runcompiler should have write access to myfile by typing
runcompiler => myfile
In this case fopen(argv[1]...) should succeed.
The user should not be able to pass rights in for /home/compiler/accounting.

Perhaps similarly we could consider string literals to also be grants of
authority from the suid bit. This would result in a language that looks a lot
like Posix/C.

Also, if we included a configuration file consisting of rules like
alias vi='vi =>' (vi has write access to argv)
then the Plash shell would feel like a standard bash shell. Perhaps we could
generate many of these rules from man pages. Otherwise, manually writing
rules for the 1000 most common command line apps would be the easiest part of
the Plash project, and a bug here would be easier to detect and less
serious than a bug in the Plash C code.

It seems that we could introduce caps to existing programmers as a short list
of rules like
strcat(fname,".bak") is deprecated, use fnamecat(fname,".bak")
or backupfname(fname),
gets is deprecated (but you knew that already),
which would be much less scary than "write everything in E".

--
John C. McCabe-Dansted
Masters Student.
Richard Uhtenwoldt
2005-11-14 02:19:50 UTC
Post by John C. McCabe-Dansted
Post by Richard Uhtenwoldt
I hypothesize that a central goal of Plash is to make it
impossible for the app simply to fopen(argv[1] ...) or to
fopen("/some/string/literal").
As I understand, the Plash allows the user to associate rights with argv[1].
Thank you for the correction.

I hope it was clear that my reply was simply a wild guess based
on no knowledge of Plash. (Moreover, it does not even attempt to
answer Darius's question.)

Maybe wild guesses like mine are not useful in highly technical
discussions like this one??

BTW _learning_ something about Plash has been on my list of
things to do since Mark Seaborn first announced it here _years_
ago (I'm a linux user), but alas, "The lyf so short, the craft so
long to lerne."
--
Text by me above is hereby placed in the public domain
Richard Uhtenwoldt (pronounced "You, Ten-Walled")
Darius Bacon
2005-11-14 08:11:46 UTC
Post by John C. McCabe-Dansted
Post by Richard Uhtenwoldt
I hypothesize that a central goal of Plash is to make it
impossible for the app simply to fopen(argv[1] ...) or to
fopen("/some/string/literal").
As I understand, the Plash allows the user to associate rights with argv[1].
The user indicates that a command line application should have write access
to a file by prefixing it with '=>', e.g.so the user indicates that the
runcompiler should have write access to myfile by typing
runcompiler => myfile
In this case fopen(argv[1]...) should succeed.
The user should not be able to pass rights in for /home/compiler/accounting.
Passing in that right was never the question in the confused-deputy
example, because the program was already endowed with it. (With Plash
you'd make an 'executable object' including that filename in its
namespace, as I understand the docs.) The attacker just passes in the
filename as a string, with no "=>". fopen() proxies to Plash which
checks if the file is accessible to this process, which it is.

There are ways to deal with this, like your suggestion below -- I
asked because checking for char-pointers from powerboxes just didn't
address even the very first example of the confused deputy, and it's
not clear to me what's the best solution that works with legacy code.
Post by John C. McCabe-Dansted
Perhaps similarly we could consider string literals to also be grants of
authority from the suid bit. This would result in a language that looks a lot
like Posix/C.
Interesting idea. I wonder how many programs would work unmodified, by
that rule.

Darius
Lorens Kockum
2005-11-14 09:17:03 UTC
Permalink
Post by Darius Bacon
Post by John C. McCabe-Dansted
Perhaps similarly we could consider string literals to also be grants of
authority from the suid bit. This would result in a language that looks a lot
like Posix/C.
Interesting idea. I wonder how many programs would work unmodified, by
that rule.
I think the right way is to define an interface for every
application. That way, installing untrusted third-party code
means accepting that interface. For instance, caps/POLA can't
help you if your malicious editor scrambles your document or if
your malicious browser uploads your browsing history somewhere,
but if say mplayer has an interface that says "read-only access
to files named on command line, right to use graphics and sound
card", then that eliminates a whole class of problems.

Plash forces/permits users to define those rights on the command
line, which has the advantage of making the user aware of
it, but the disadvantage of not looking like the standard
command line (think adoption) and making invocation seem more
complicated. I think one could combine both for the best of both
worlds (refuse to run programs with unapproved interfaces unless
plash-style syntax is used, override approved interfaces with
plash-style syntax).

See:

http://www.eros-os.org/pipermail/cap-talk/2004-April/001585.html

http://www.eros-os.org/pipermail/cap-talk/2004-September/001935.html

for previous discussions on this topic.

(expounding on my POV specifically,

http://www.eros-os.org/pipermail/cap-talk/2004-September/001937.html
)

If people agreed that this is the way to go, I'd be happy to
try to define a syntax for these interfaces (thinking XML) and
describe existing programs.
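
Purely as an illustration of what such an interface description might look
like (the element and attribute names below are invented for this sketch,
not an existing schema), the mplayer interface mentioned above could be
written along these lines:

  <interface program="mplayer">
    <grant resource="argv-files"    access="read-only"/>
    <grant resource="graphics-card" access="use"/>
    <grant resource="sound-card"    access="use"/>
  </interface>

A shell could then refuse to run mplayer with any authority beyond what the
approved interface grants, unless the user overrides it with explicit
plash-style syntax.
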
--
Lorens
John C. McCabe-Dansted
2005-11-14 10:53:13 UTC
Permalink
Post by Darius Bacon
...
Post by John C. McCabe-Dansted
As I understand it, Plash allows the user to associate rights with
argv[1]. The user indicates that a command line application should have
write access to a file by prefixing it with '=>', e.g. so the user
indicates that runcompiler should have write access to myfile by
typing
runcompiler => myfile
In this case fopen(argv[1]...) should succeed.
The user should not be able to pass rights in for
/home/compiler/accounting.
Passing in that right was never the question in the confused-deputy
example, because the program was already endowed with it. (With Plash
you'd make an 'executable object' including that filename in its
namespace, as I understand the docs.) The attacker just passes in the
filename as a string, with no "=>". fopen() proxies to Plash which
checks if the file is accessible to this process, which it is.
There are ways to deal with this, like your suggestion below -- I
asked because checking for char-pointers from powerboxes just didn't
address even the very first example of the confused deputy, and it's
not clear to me what's the best solution that works with legacy code.
I think we might be talking past each other. In my proposed modification to
Plash, fopen(fname) will only succeed if (int)fname==(int)cap_ptrs[i] for some
i. If the user did not pass rights into "runcompiler" via argv[1], then
(int)argv[1] would not have been added to cap_ptrs, and fopen(argv[1])
will thus return an error.
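
A minimal sketch of that check in C (checked_fopen is an invented name, and
cap_ptrs/cap_count stand for the registry described above; this illustrates
the proposal, it is not actual Plash code):

    #include <stdio.h>

    extern const char *cap_ptrs[];
    extern int cap_count;

    /* Succeeds only if the caller presents the very pointer that was
       granted (e.g. argv[1] when the user typed "runcompiler => myfile"),
       not merely a string with equal contents. */
    FILE *checked_fopen(const char *fname, const char *mode)
    {
        for (int i = 0; i < cap_count; i++)
            if (fname == cap_ptrs[i])
                return fopen(fname, mode);
        return NULL;  /* no right was passed in under this name */
    }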

I am not sure if it would be worth getting this to work with the majority of
legacy code. You would probably have to modify all the string functions to
add extra entries to cap_ptrs where appropriate. This would probably have
less overhead than SafeC, but the overhead of SafeC can be quite large. You
would also have to hope that programs did not modify filenames with their own
string functions or read filenames in with their own input routines.

However I think that this does allow us to present a type of caps in a form
that won't scare existing POSIX programmers.
Post by Darius Bacon
Post by John C. McCabe-Dansted
Perhaps similarly we could consider string literals to also be grants of
authority from the suid bit. This would result in a language that looks a
lot like Posix/C.
Interesting idea.  I wonder how many programs would work unmodified, by
that rule.
Darius
--
John C. McCabe-Dansted
Darius Bacon
2005-11-14 21:13:10 UTC
Permalink
Post by John C. McCabe-Dansted
I think we might be talking past each other. In my proposed modification to
Plash, fopen(fname) will only succeed if (int)fname==(int)cap_ptrs[i] for some
i. If the user did not pass rights into "runcompiler" via argv[1], then
(int)argv[1] would not have been added to cap_ptrs, and fopen(argv[1])
will thus return an error.
Yes, I understand this. When you proposed it, you said "this solves
the confused deputy problem." The confused deputy problem deals in
rights from both the program's creator and its caller; allowing only
rights from the caller means you can't even express the pattern where
the problem comes up:
http://www.cis.upenn.edu/~KeyKOS/ConfusedDeputy.html

So I was asking how you would change Plash to deal with this concrete
example.
Post by John C. McCabe-Dansted
I am not sure if it would be worth getting this to work with the majority of
legacy code. You would probably have to modify all the string functions to
add extra entries to cap_ptrs where appropriate. This would probably have
less overhead than SafeC, but the overhead of SafeC can be quite large. You
would also have to hope that programs did not modify filenames with their own
string functions or read filenames in with their own input routines.
I don't know. When I was thinking about this before Plash came out, I
figured you'd do a 'setup run' of the program where all its open()
calls turn into powerbox requests, with the rights granted that way
getting bundled into that installation of the program for subsequent
runs. To stop the confused deputy, it'd associate rights with
particular call sites instead of the whole program. This struck me as
pretty kludgey and I'm just as glad it's Mark Seaborn working on the
Unix problem instead of me.

Darius
Mark Seaborn
2005-11-16 21:13:16 UTC
Permalink
Plash plus something like klik or zero-install could provide the
killer app for POLA. Together they provide a safer, cheaper and more
convenient way of providing software solutions while protecting the
network from malware than e.g. "only the IT department is allowed to
install software".
Thanks for mentioning klik and zero-install. I hadn't come across
them before. I've been interested in better ways of installing
software for a long time, and it's part of the motivation for Plash.
Plash could simplify klik and zero-install, which both perform
filesystem hacks.

Mark
Trevor Perrin
2005-11-11 03:04:10 UTC
Permalink
err, typo fixed -
Post by Tyler Close
Post by Tyler Close
Post by Trevor Perrin
Depending on how much computation time you spend in generating the
fingerprint (steps 1 and 2), you can get a higher security level (since
a brute-force attacker has to search for a hash value that matches the
- 112-bit security level at ~1/20th of a second (2 zero bytes)
- 120-bit security level at ~13 seconds (3 zero bytes)
- 128-bit security level at ~1 hour (4 zero bytes)
So what happens to these numbers if, instead of using a counter, we
just keep generating new public/private key pairs until we get a
public key that hashes to a value with the desired number of zero
bytes?
Google was less forthcoming here than usual, but I think I've got some
back of the envelope numbers.
[...]
Post by Tyler Close
NIST puts a 2048 bit RSA key at the 112 bit security level, so we'd
need 16 extension bits. Assuming 5 seconds to generate the key pair,
it would take about 2 days to find a key pair.
I don't think you want to generate totally different RSA key pairs just
to see whether HASH(N || e) starts with zero bits.

I think you'd rather choose an N, then try values for e. When a
suitable e is discovered (i.e. an e for which HASH(N || e) has a prefix
of zero bytes), then you do the inversion mod phi(N) to get d. If you
set things up right, you should be able to try different e values for a
cost of about one SHA-1 compression function for each trial fingerprint,
which is what the numbers above were based on.
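
A back-of-the-envelope sketch of that search loop in C, using OpenSSL's
SHA1 (the encoding of N || e is simplified for illustration; a real
implementation would hash a canonical encoding of the key, and this sketch
assumes n_len + 8 fits in the buffer):

    #include <stdint.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* Search odd e until SHA1(N || e) begins with `zeros` zero bytes.
       N is given as a big-endian byte string of n_len bytes. */
    uint64_t find_e(const unsigned char *n, size_t n_len, int zeros)
    {
        unsigned char buf[4096], md[SHA_DIGEST_LENGTH];
        memcpy(buf, n, n_len);
        for (uint64_t e = 3; ; e += 2) {    /* try e = 3, 5, 7, 9, ... */
            for (int i = 0; i < 8; i++)     /* append e, big-endian */
                buf[n_len + i] = (unsigned char)(e >> (8 * (7 - i)));
            SHA1(buf, n_len + 8, md);
            int ok = 1;
            for (int i = 0; i < zeros; i++)
                if (md[i] != 0) { ok = 0; break; }
            if (ok)
                return e;  /* still subject to gcd(e, phi(N)) = 1, below */
        }
    }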

The e value will have around the same number of bits as the number of zero
bits that were searched for, e.g. 16, 24, 32, etc. That's not as small as
the common choices of e, which are 17 bits or less (small e's are more
efficient), but it's not bad.

One issue is that suitable e values have to be coprime to phi(N). If
you choose the RSA primes p and q such that p-1 and q-1 have no prime
factors less than 64 bits except 2 (which I think is easy and efficient,
just choose a bunch of 64+ bit primes and try primality testing
different multiples of them plus 1), then I *think* you can just try
e=3,5,7,9,11,... with little chance of e getting big enough to have a
common factor with (p-1)(q-1).
I'm not sure the math is right... I'd certainly appreciate comments or
corrections!


Trevor
Rob J Meijer
2005-11-11 07:57:58 UTC
Permalink
Post by Tyler Close
we can certainly produce a design that uses a longer domain label, but
is incompatible with common dynamic DNS services.
Thoughts?
This may or may not be an option, but you could provide a static CNAME
DNS service. Register a short domain and set up a service that allows
people to register a wildcard-prefixed subdomain of this domain and have
the static DNS server return a CNAME record containing the dynamic DNS name.

But maybe it would be smarter to abandon trying to use common existing
dynamic DNS services, as they don't provide DNSSEC, and first see if it
would not be possible to create a new dynamic DNS design using DNSSEC at
its foundation.
Such a DNS service may provide a more solid foundation for your
implementation, taking away some of the design issues that could already be
solved partially at the dynamic DNSSEC layer.
David Mercer
2005-11-11 09:10:31 UTC
Permalink
Post by Rob J Meijer
Post by Tyler Close
we can certainly produce a design that uses a longer domain label, but
is incompatible with common dynamic DNS services.
Thoughts?
This may or may not be an option, but you could provide a static CNAME
DNS service. Register a short domain and set up a service that allows
people to register a wildcard-prefixed subdomain of this domain and have
the static DNS server return a CNAME record containing the dynamic DNS name.
But maybe it would be smarter to abandon trying to use common existing
dynamic DNS services, as they don't provide DNSSEC, and first see if it
would not be possible to create a new dynamic DNS design using DNSSEC at
its foundation.
Such a DNS service may provide a more solid foundation for your
implementation, taking away some of the design issues that could already be
solved partially at the dynamic DNSSEC layer.
I'm not personally very attached to the idea of using current dynamic
dns servers for this myself. I've been holding off on writing a
custom dns server since my post a couple of years ago that I
referenced earlier in this thread. A non-bind, non-MS, non-djb DNS
server codebase would be a good thing though, imho.

IPv4 dnssec is an ugly hack with key distribution issues, so if I were
to write a custom dns server for this from scratch, I don't think I'd
use dnssec features with IPv4. Now a nice, shiny new IPv6 dns server
with the object location features we're talking about, that would be
neat. And perhaps easier to do.

-David Mercer
Tucson, Az
David Hopwood
2005-11-11 13:18:31 UTC
Permalink
Post by Tyler Close
Post by Trevor Perrin
1) Calculate the hash value of HASH(public key, counter)
2) Increment the counter until a hash value is discovered which begins
with some number of zero bytes (1, 2, 3, etc.)
3) Form the fingerprint from:
- 4-bit encoding of number of initial zero bytes in hash value
- 96-bits from the hash value after the zero bytes
4) Stick the counter into a self-signed certificate somewhere, so that a
verifier can quickly verify that the public key matches the fingerprint,
without an expensive search
This is an intriguing technique. Unfortunately, step 4 becomes
impossible if we want to use certificates that are signed by a CA.
The counter value doesn't have to be in the certificate. The verifier just
has to know what it is. For example, it could be sent in a TLS extension.
--
David Hopwood <***@blueyonder.co.uk>
Tyler Close
2005-11-11 17:33:11 UTC
Permalink
Post by David Hopwood
Post by Tyler Close
Post by Trevor Perrin
1) Calculate the hash value of HASH(public key, counter)
2) Increment the counter until a hash value is discovered which begins
with some number of zero bytes (1, 2, 3, etc.)
3) Form the fingerprint from:
- 4-bit encoding of number of initial zero bytes in hash value
- 96-bits from the hash value after the zero bytes
4) Stick the counter into a self-signed certificate somewhere, so that a
verifier can quickly verify that the public key matches the fingerprint,
without an expensive search
This is an intriguing technique. Unfortunately, step 4 becomes
impossible if we want to use certificates that are signed by a CA.
The counter value doesn't have to be in the certificate. The verifier just
has to know what it is. For example, it could be sent in a TLS extension.
Ideally, I think we should try for a design that also does not require
any server side changes. We just format our domain name and public key
such that we can support the y-property. Apache, or whatever server
software is being used, is then configured with the SSL certificate in
the normal way. If that's possible, existing shared hosting services
could be used to deploy applications.

I want to give us the best possible chance of seeing widespread
adoption of YURLs. I think maximizing the amount of software and the
number of configurations that are compatible with the design is the
best way to meet this goal. I am paying particular attention to
low cost deployment scenarios, since they are our likely early
adopters. So, the ability to use an existing shared hosting service is
important. Similarly, the ability to host at home and use dyndns.com
is important.

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
David Wagner
2005-11-12 06:20:16 UTC
Permalink
Object oriented systems are necessary but not sufficient for POLA.
OO isn't necessary, if your notion of OO includes things like inheritance,
polymorphism, dynamic message dispatch, interfaces, or type hierarchies.
None of the latter are necessary for building capability-oriented systems.

All that you really need is a protected domain: an isolated subsystem
that can hold private (secret, untamperable) code and state, with some
well-defined invocation interfaces (such that these are the only ways to
enter the subsystem). This is like the lambda-calculus where functions
can have private, mutable state, or like Saltzer & Schroeder's notion of
protected domains, or like the programming language community's notion of
an Abstract Data Type, or somewhat like the architecture's notion of call
gates, or (even more loosely) somewhat like the notion of setuid programs.
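
For instance, a counter that can only be read and bumped through its two
entry points is already a protected domain in this sense, with no OO
machinery involved (a minimal C sketch):

    /* counter.h -- the only ways into the subsystem */
    typedef struct counter counter;  /* opaque: state is untamperable */
    counter *counter_new(void);
    void counter_bump(counter *c);
    int counter_value(const counter *c);

    /* counter.c -- private code and state */
    #include <stdlib.h>
    struct counter { int n; };
    counter *counter_new(void) { return calloc(1, sizeof(counter)); }
    void counter_bump(counter *c) { c->n++; }
    int counter_value(const counter *c) { return c->n; }
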
I feel sure I've seen Mark Miller, and probably Jonathan Rees, make this
point before.
Rob J Meijer
2005-11-12 07:13:44 UTC
Permalink
Post by Jed at Webstart
Regardless of where he's coming from, to me the lack of a mention of
the POLA in the context of things lacking in Unix is a shockingly
clear indication of just how poorly we (the cap/POLA community?) are
doing in informing others about this problem and about potential solutions.
Maybe it's time to introspect a bit here. What are we doing
wrong? What are we missing that might bring the need for POLA
execution environments enough into the main stream to combat the
current scourge of Trojan horses? Certainly there are plenty of
papers that have been written and published. There are plenty of
systems that have been designed and implemented with demonstrable
results (except of course that none are widespread enough
commercially to demonstrate significant benefit from combating Trojan horses).
This 'plenty of' may actually be part of the problem. If it would be
possible for all these papers and implementations to go in the direction
of some form of POLA POSIX interface definition, then I think the
mainstream system developers may be found willing and able to implement it.
But the problem is that I don't believe POLA to be specific enough to allow
a one-size-fits-all POSIX standard API. I would for example probably not
be able to fit my transaction/full-digraph-authority based incident
response system that I am currently building to an API that is based
exclusively on capability-based systems, while both implement POLA.
You could probably come up with some sort of POSIX API for minimal
capability-based systems though, and while not suitable for IR, this may
get you something of a foundation (I may be shooting myself in the foot
with this comment as IR is my personal main concern with respect to POLA).

Rob
David Wagner
2005-11-12 07:38:26 UTC
Permalink
Post by Rob J Meijer
This 'plenty of' may actually be part of the problem. If it would be
possible for all these papers and implementations to go in the direction
of some form of POLA POSIX interface definition, then I think the
mainstream system developers may be found willing and able to implement it.
But the problem is that I don't believe POLA to be specific enough to allow
a one-size-fits-all POSIX standard API.
POLA is a principle, not an API. It is a guideline/heuristic for system
design that is intended to be useful for many different kinds of systems.
Asking for a common API for POLA is like asking for a POSIX-blessed API for
"be conservative in what you send, liberal in what you accept".
David Mercer
2005-11-14 07:50:03 UTC
Permalink
Post by David Wagner
Post by Rob J Meijer
This 'plenty of' may actually be part of the problem. If it would be
possible for all these papers and implementations to go in the direction
of some form of POLA POSIX interface definition, then I think the
mainstream system developers may be found willing and able to implement it.
But the problem is that I don't believe POLA to be specific enough to allow
a one-size-fits-all POSIX standard API.
POLA is a principle, not an API. It is a guideline/heuristic for system
design that is intended to be useful for many different kinds of systems.
Asking for a common API for POLA is like asking for a POSIX-blessed API for
"be conservative in what you send, liberal in what you accept".
You are of course entirely correct. However, what I think they meant
(and somebody jump in if I'm off base) is a version of POSIX with (as
Jed would say) a secure rights communication method. At least that's
what I make of it.

Ciao,

-David Mercer
Jed at Webstart
2005-11-14 23:50:32 UTC
Permalink
Post by David Wagner
Post by Rob J Meijer
This 'plenty of' may actually be part of the problem. If it would be
possible for all these papers and implementations to go in the direction
of some form of POLA POSIX interface definition, then I think the
mainstream system developers may be found willing and able to implement it.
But the problem is that I don't believe POLA to be specific enough to allow
a one-size-fits-all POSIX standard API.
POLA is a principle, not an API. It is a guideline/heuristic for system
design that is intended to be useful for many different kinds of systems.
Asking for a common API for POLA is like asking for a POSIX-blessed API for
"be conservative in what you send, liberal in what you accept".
Hmmm. I think there's a bit of meat related to the above that should
be discussed. For many of you this will be preaching to the choir,
but I expect some of these observations aren't universally shared -
even on this list. Also I think perhaps it's worthwhile to hash out
some of these big picture issues to come to a closer common understanding.

Firstly, POLA (and certainly POLP before it) is indeed a principle
and not itself an API.

However (and I believe this to be an important however), APIs can
lend themselves to POLA or can make POLA difficult and even make
using POLA nearly impossible.

The 'IX (POSIX, Unix, ...) and Windows APIs fall into the category of
making POLA nearly impossible to apply. The difficulty is that all
access control mechanisms in 'IX and Windows systems are based on
'user's, uids. It's uids that are in groups, that are owners, that
processes run as, etc. All access control mechanisms are based on
the notion of a 'user'. When a program executes a system call, for
example trying to write to a file, the system decides whether to
allow the access on the basis of the user's authority - which is what
I believe the notion of "ambient authority" is that has often been
discussed on this list.

If I'm about to start a program running (e.g. perhaps I find myself
downloading a browser plugin to display some new multimedia format or
maybe I received a program from a friend providing a humorous game or
display), how do I apply POLA to the authority that program has while
running under 'IX or Windows? In the case of the plugin I might want
it to have access to some configuration files and to its input and
output, but nothing more. In the case of the game/display program I
would want it to have access to muxed keyboard input and permission
to write to a display window, but nothing more. How do I do this on
'IX or Windows systems where all the access control APIs are based on
the notion of "user"?

Since I am personally the 'user' with the authority of at least my
command interpreter software, some of my programs run as
me. However, in general to apply POLA I don't want programs I
initiate to run as me as a 'user' in the 'IX/Windows sense. So what
do I do? I believe that the answer is that under 'IX/Windows the
APIs are so 'user' oriented (forks fork to a user, files are
protected to users/groups of users, etc.) that there is no practical
way to apply POLA.

Sure, one can work to build something like Plash for 'IX or Polaris
for Windows where you try to use the minimal facilities that are
there and then turn the access control paradigm on its head to get
something more like POLA. As you see in both Plash and Polaris they
create what amounts to an artificial 'user' with no permissions and
then grant a program running as that user limited permissions based
on POLA criteria. However, this amounts to ignoring access control
mechanisms provided by the native APIs and putting something else in
their place using more primitive facilities. Even once such a new
mechanism is in place it finds it constantly fighting the native
access control mechanisms.

So ... I argue that something like a standard for a POLA API is
indeed needed. In such an API one would see the equivalent of fork
where the process would run simply as itself, not as any 'user'. It
could be initialized with some permissions when started and it would
have the ability to communicate permissions to and from other
processes during the course of its execution in accord with POLA principles.

In 'IX/Windows systems there aren't even any ways (APIs) for
processes to communicate permissions in accord with POLA principles.

I view this as a problem. Perhaps before addressing what to do about
such a problem I should check to see if others on the list agree that
the 'IX/Windows APIs are in fact a problem for POLA computing.

--Jed http://www.webstart.com/jed/
Toby Murray
2005-11-15 00:39:09 UTC
Permalink
<snip>
So ... I argue that something like a standard for a POLA API is indeed
needed. In such an API one would see the equivalent of fork where the
process would run simply as itself, not as any 'user'. It could be
initialized with some permissions when started and it would have the
ability to communicate permissions to and from other processes during
the course of its execution in accord with POLA principles.
In 'IX/Windows systems there aren't even any ways (APIs) for processes
to communicate permissions in accord with POLA principles.
This is untrue. Otherwise, implementing Plash would be a hell of a lot
harder.

(Apologies to Mark Seaborn if I misrepresent Plash here...)

Plash works so well *because* someone had the foresight to allow rights
to be transferred via IPC on Unix. Unix domain sockets allow file
descriptors to be passed, which equate to rights and roughly emulate
capabilities.

(Previous discussions on this list have dealt with this; there was one a
few months back, from memory, that talked about file descriptors as caps
and file names mapping to caps via open() calls, such that file names
act as "lambda names";
see http://www.eros-os.org/pipermail/cap-talk/2005-August/003883.html
but note that I was wrong in my initial posting. Filenames do not act as
petnames but as lambda names).

The socket option, from memory, is called "SCM_RIGHTS". Someone was
thinking about fine-grained rights communication when they put this in
POSIX.
It is precisely this (little) bit of support that keeps POLA on POSIX
from being completely impractical and provides a migration path
to implementing POLA.
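
For the record, the core of the mechanism looks roughly like this (a
minimal sketch with error handling omitted; the receiving side uses
recvmsg() symmetrically and gets its own copy of the descriptor):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send the open file descriptor fd (i.e. the capability) to the
       process at the other end of the Unix domain socket sock. */
    int send_fd(int sock, int fd)
    {
        char byte = 0;
        char ctl[CMSG_SPACE(sizeof fd)];
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        struct msghdr msg = { 0 };
        struct cmsghdr *cm;

        memset(ctl, 0, sizeof ctl);
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctl;
        msg.msg_controllen = sizeof ctl;
        cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SCM_RIGHTS;
        cm->cmsg_len = CMSG_LEN(sizeof fd);
        memcpy(CMSG_DATA(cm), &fd, sizeof fd);
        return sendmsg(sock, &msg, 0);  /* receiver gets a copy of fd */
    }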

I think Adam Langley put it quite well in the last sentences of his
Master's thesis which not-so-coincidentally dealt with implementing POLA
(via capabilities) on top of UNIX using file descriptors.

(begin quote)
"If this project can introduce people to the ideas of capabilities I feel
that it has done its work. More than anything a change of mind set is
required to produce reliable software. As good as ‘best practices’ are in
reducing security problems (and the introduction of safe languages also
helps in this respect) the principle of least authority is still vital,
and still ignored.
... Mistakes have been made in the past and should be corrected given
that we now have the hindsight to do so. But, although there are many
willing to correct these problems their work will be in vain so long as
they fail to plan for the change. Work which starts from a clean slate
will always be limited to mapping out the territory far away. Real
security benefits will come from ugly, impure work which moves in small
steps."
(end quote)

We can do POLA on POSIX. It is also a pretty viable option for helping
to strengthen Unix, which (as noted in another msg on this list recently)
is increasingly being deployed as a dependable systems platform.
Finally, it's one of a very small number of viable options for getting
POLA to ordinary users in the near future. We also have some work
already a good way along the path, in Plash, and other groups are thinking
along the same lines (e.g. Asnix). We could wait another 20 years for
something like CapROS to gain adoption (which if it happens will be
totally great, I'm sure we all agree) or we could get some real benefit
right now, even if it's ugly, impure and breaks the ideology.

Ian Grigg's point about too much "non-realistic" thinking pervading the
cap community is probably on the money. The fact that so many
architecturally pure solutions have been (re)created over the years is
some sort of argument for this point of view. It's probably time to
break this trend and put the proverbial money where the proverbial mouth
is and give people the "killer app" (virus immunity), rather than more
philosophy on the virtues of capabilities. POLA on POSIX via file
descriptors emulating caps (i.e. Plash) looks like the best go at the moment.

My congrats and admiration for Mark Seaborn.
--
Toby Murray
Advanced Computer Capabilities Group
Information Networks Division
DSTO, Australia

Jed at Webstart
2005-11-15 23:26:20 UTC
Permalink
<snip>
So ... I argue that something like a standard for a POLA API is
indeed needed. In such an API one would see the equivalent of fork
where the process would run simply as itself, not as any 'user'. It
could be initialized with some permissions when started and it
would have the ability to communicate permissions to and from other
processes during the course of its execution in accord with POLA principles.
In 'IX/Windows systems there aren't even any ways (APIs) for
processes to communicate permissions in accord with POLA principles.
This is untrue. Otherwise, implementing Plash would be a hell of a lot harder.
(Apologies to Mark Seaborn if I misrepresent Plash here...)
Plash works so well *because* someone had the foresight to allow
rights to be transferred via IPC on Unix. Unix domain sockets allow
file descriptors to be passed, which equate to rights and roughly
emulate capabilities.
Yes, I understand how this works. I think the question is whether
this amounts to communication of a "permission". Certainly some
permission is granted when an open file descriptor is
communicated. However, this sort of permission is essentially
orthogonal to the nominal permissions in the Unix file system (UGO,
RWXS, etc.).

I think perhaps the best way to consider this question is to ask
whether one could essentially turn Unix in to a capability system by
using open file descriptors as capabilities the way Plash is using
them. For example, where would one store such permissions - e.g. on
a long term basis for a user/person? With the way Unix is
implemented the only way to store them is essentially in a
process. One could, I suppose, turn Unix processes into something
like a directory.

When a Unix system restarts all its open file descriptors are
invalidated. It seems to me it might take some substantive work to
deal with that issue.

Am I alone in starting to have my stomach do a flip flop at the
thought of such a hack?
(Previous discussions on this list have dealt with this, there was
one a few months back from memory that talked about file descriptors
as caps and file names mapping to caps via open() calls, such that
file names act as "lambda names"
see
http://www.eros-os.org/pipermail/cap-talk/2005-August/003883.html
but note that I was wrong in my initial posting. Filenames do not
act as petnames but as lambda names).
Well, I would like to hear from Mark on this topic. If people are
really serious about truly turning Unix into a capability
communication system via this route, then I'd like to get a bit more
serious about it and help push it. I don't care how we get there, I
just want to get there. While I realize it's possible to do about
any kind of transformation with software (the expression "You can't
make a silk purse from a sow's ear." comes to mind), I'm just not
sure that making such a transformation and hoping to somehow pull
POSIX compliant applications along for the ride will prove
effective. Perhaps others who have worked with capability systems
designed for such permission communication (besides me) would like to
comment - unless there is a thread/list on that topic already that I
missed. I looked back at the "filesystem as petnames as usable cap
abstraction in POSIX" thread. I don't believe that thread was
focused on this topic.
The socket option, from memory, is called "SCM_RIGHTS". Someone was
thinking about fine-grained rights communication when they put this in POSIX.
It is precisely this (little) bit of support that keeps POLA on POSIX
from being completely impractical and provides a
migration path to implementing POLA.
Do you see this as a path for POLA in POSIX that will support POSIX
compliant applications? That's the part that I'm not getting. If so
then I'd like to understand how it works. I can see some
applications working inside of one of Mark's Plash contained
processes, with what amounts to a POSIX library allowing file access
by name, but limited to permissions passed in via open file
descriptors. Again I'm starting to get a bit queasy seeing an open
call open an already open file descriptor (I guess Mark must have
something like this now?), but I can imagine how it would work. The
step between something like that and a fully POSIX compliant POLA
system with long term permissions of a uniform type being
communicable between process and user by people is stretching my
imagination to the limits.

Hey, maybe I should implement the Distributed Capability
Communication System ( http://www.webstart.com/jed/papers/DCCS/ )
with Unix open file descriptors being the capabilities that are to be
remoted. That would be an amusing exercise. Of course it would all
disappear when the system is rebooted, but we mustn't be hasty about
such things...
I think Adam Langley put it quite well in the last sentences of his
Master's thesis which not-so-coincidentally dealt with implementing
POLA (via capabilities) on top of UNIX using file descriptors.
Ah, perhaps I need to read that thesis. I did some googling with no
luck. Can you point me to it?
(begin quote)
"If this project can introduce people to the ideas of capabilities I
feel that it has done its work. More than
anything a change of mind set is required to produce reliable
software. As good as 'best practices' are in
reducing security problems (and the introduction of safe languages
also helps in this respect) the principle
of least authority is still vital, and still ignored.
I agree. However, from my perspective we've been "introducing"
people to the ideas of capabilities for some 35 years and we haven't
made much practical progress in getting the value/safety of POLA
execution environments into the main stream.
... Mistakes have been made in the past and should be corrected
given that we now have the hindsight to do so.
But, although there are many willing to correct these problems their
work will be in vain so long as they
fail to plan for the change. Work which starts from a clean slate
will always be limited to mapping out the
territory far away. Real security benefits will come from ugly,
impure work which moves in small steps."
(end quote)
I can accept ugly and impure small steps. What I can't accept is
such steps that continue to be absorbed meaninglessly into the
history of our IT profession. Until some such effort picks up on the
sort of exponential growth that was seen, for example, in Unix at one
time, with DOS/Windows at one time, on the Internet at one time, with
Linux at one time, etc. then these "small steps" will just be more of the same.
We can do POLA on POSIX. It is also a pretty viable option for
helping to strengthen Unix, which (as noted in another msg on this
list recently) is increasingly being deployed as a dependable
systems platform. Finally, it's one of a very small number of viable
options for getting POLA to ordinary users in the near future. We
also have some work already a good way along the path, in Plash, and
other groups are thinking along the same lines (e.g. Asnix). We could
wait another 20 years for something like CapROS to gain adoption
(which if it happens will be totally great, I'm sure we all agree)
or we could get some real benefit right now, even if it's ugly,
impure and breaks the ideology.
Do you see a way to transition Plash onto a growth curve to
relevancy? Perhaps you believe it is already there and might mention
some people involved? Is there a Plash mailing list I can get on?
Ian Grigg's point about too much "non-realistic" thinking pervading
the cap community is probably on the money. The fact that so many
architecturally pure solutions have been (re)created over the years
is some sort of argument for this point of view. It's probably time
to break this trend and put the proverbial money where the
proverbial mouth is and give people the "killer app" (virus
immunity), rather than more philosophy on the virtues of capabilities.
One minor issue I would point out with the "killer app" (virus
immunity) for Unix is that the greatest need for virus immunity (not
really what POLA provides, but I'll go with your term) is on the
desktop. As we know Windows owns the desktop. Maybe Plash for MacOS
X might get some traction with its BSD roots? I wonder if anybody
has approached Apple with that thought?
POLA on POSIX via file descriptors emulating caps (i.e. Plash) looks
like the best go at the moment.
My congrats and admiration for Mark Seaborn.
I'll believe it when I see it in the numbers (people involved in the
work). In the meantime I'll help any way I can and try to keep my
stomach under control.

--Jed http://www.webstart.com/jed/
Toby Murray
2005-11-16 03:56:19 UTC
Permalink
Post by Jed at Webstart
When a Unix system restarts all its open file descriptors are
invalidated. It seems to me it might take some substantive work to
deal with that issue.
Consider Plash to be the union of a user's authority. At the discretion
of the user, they can invoke other applications and hand them a subset
of their authority with fine-grained controls allowing the user to
achieve almost least authority (except in the area of network access
which could be supported if Plash reified this in the process's file
namespace, so it's not impossible).
The user can define "executable objects" that comprise
1. an executable (eg. /usr/bin/mozilla-firefox)
2. a file namespace for the application (eg. /lib/ld.so, /lib/libc.so,
$HOME/.firefox/<various config files>, etc.)

These executable objects can survive a reboot. The user invokes the
executable object, which invokes the executable, handing it this subset
of the user's authority that is the namespace the user has defined.
While the file descriptors don't survive a reboot, consider the
filenames to be lambda names that map to capabilities in the
application's c-list. The application can access a capability in their
c-list by open() ing it's lambda name (the filename). Since the
namespace is persistent (because the executable objects are persistent),
the capabilities are too. It's just that the binding from lambda name to
capability re-occurs each time at runtime.
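
A toy C sketch of that rebinding step (ns_open and the grant table are
invented for this illustration; the real Plash proxies open() to a server
process rather than consulting an in-process table):

    #include <fcntl.h>
    #include <string.h>

    /* The persistent part: lambda name -> how to obtain the object. */
    struct grant { const char *name; int flags; };
    static const struct grant ns_table[] = {
        { "/lib/libc.so", O_RDONLY },
        { "/home/user/doc.txt", O_RDWR },
    };
    #define NS_LEN (sizeof ns_table / sizeof ns_table[0])

    /* The per-run part: open() rebinds a lambda name to a fresh fd. */
    int ns_open(const char *name)
    {
        for (size_t i = 0; i < NS_LEN; i++)
            if (strcmp(name, ns_table[i].name) == 0)
                return open(name, ns_table[i].flags);
        return -1;  /* name not in this namespace: no authority */
    }
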
To me, this goes a hell of a long way toward the virus-immune desktop. If
I'm being over-optimistic, or am over-simplifying here, feel free to
step in and add some realism.

Adam Langley's masters thesis is available here
http://www.imperialviolet.org/binary/pucs.pdf
His system is similar to DCCS in that it supports distributed
capabilities (via file descriptors), where the capability is proxied at
each end of a TCP connection. The file descriptor for the TCP connection
to the remote server acts as a proxy for the actual file descriptor at
the other end of the connection on the server to the actual object being
addressed. I believe that you did something similar for DCCS to achieve
distributed caps, although correct me if I'm wrong?
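
The proxying idea itself is simple enough to sketch in a few lines of C
(this is only an illustration of the general technique, not Langley's
actual protocol): the server holds the real descriptor and answers read
requests arriving over the TCP connection, so the client's socket stands
in for the remote file.

    #include <stdint.h>
    #include <unistd.h>

    /* Serve "read up to n bytes" requests for real_fd over conn. */
    void proxy_reads(int conn, int real_fd)
    {
        uint32_t n;
        char buf[4096];
        while (read(conn, &n, sizeof n) == sizeof n) {  /* request */
            if (n > sizeof buf)
                n = sizeof buf;
            ssize_t got = read(real_fd, buf, n);
            if (got <= 0)
                break;
            write(conn, buf, got);                      /* response */
        }
    }
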
--
Toby Murray
Advanced Computer Capabilities Group
Information Networks Division
DSTO, Australia

John Carlson
2005-11-16 05:55:27 UTC
Permalink
Post by Toby Murray
The user can define "executable objects" that comprise
1. an executable (eg. /usr/bin/mozilla-firefox)
2. a file namespace for the application (eg. /lib/ld.so, /lib/libc.so,
$HOME/.firefox/<various config files>, etc.)
Does someone have examples of the minimal pola-run/plash setup to run
mozilla which will allow websurfing, mail access, and saving to a
download area? Maybe this has been posted before, so someone just email
me directly. Thanks,

John Carlson
Jed at Webstart
2005-11-17 03:07:49 UTC
Permalink
Post by Toby Murray
Post by Jed at Webstart
When a Unix system restarts all its open file descriptors are
invalidated. It seems to me it might take some substantive work to
deal with that issue.
Consider Plash to be the union of a user's authority. At the
discretion of the user, they can invoke other applications and hand
them a subset of their authority with fine-grained controls allowing
the user to achieve almost least authority (except in the area of
network access which could be supported if Plash reified this in the
process's file namespace, so it's not impossible).
The user can define "executable objects" that comprise
1. an executable (eg. /usr/bin/mozilla-firefox)
2. a file namespace for the application (eg. /lib/ld.so,
/lib/libc.so, $HOME/.firefox/<various config files>, etc.)
What I take you to be saying is that Plash can manage permissions for
a Unix user by granting just POLA permissions to any program that the
user executes. That's fine (beyond the practical problems with
setting up such POLA restrictions - which I'll deal with
later). Since the executing program disappears when the system is
restarted anyway on current Unix systems, a reboot isn't a new issue
for any execution environment.

However, with regard to management of permissions for users (people)
on Unix, this seems to suggest that such management will continue to
be done with the traditional (standard) Unix user ids and permission
bits. This is somewhat problematic for POLA. Let's say you and I
both have accounts on some Unix system and we wish to work on a
shared project where we need some shared access to some resources -
let's say files. The traditional Unix means for such sharing is
through a shared group. Unfortunately, ordinary users can't create
such shared groups. I won't belabor these problems in Unix as I
believe they are well known.

However, now note that through the Plash mechanism I can set up a
process, you can set up a process and my process can send just POLA
access to resources to your process. I can dynamically start a
collaboration with you through such processes. However, when the
system restarts all such shared access goes away. I have to set it
up again after a system restart if desired.

Also, consider the issue of setting up commonly used permissions for
application execution. On capability systems there typically exists
something like a "directory" where one can store named
capabilities. If Plash would somehow allow the creation of such
directories then one could imagine putting the resources needed for
any application to start up in a directory for the
application. Start it up and pass it the directory through Plash and
off it goes. However, how does one create such long lived containers
for capabilities on Unix with Plash? I don't see how to do it. If I
could, then I could create such a directory for the resources that we
need to share for our shared project and pass access to it to you through IPC.
Post by Toby Murray
These executable objects can survive a reboot. The user invokes the
executable object, which invokes the executable, handing it this
subset of the user's authority that is the namespace the user has
defined. While the file descriptors don't survive a reboot, consider
the filenames to be lambda names that map to capabilities in the
application's c-list. The application can access a capability in
their c-list by open() ing it's lambda name (the filename). Since
the namespace is persistent (because the executable objects are
persistent), the capabilities are too. It's just that the binding
from lambda name to capability re-occurs each time at runtime.
To me, this goes a hell of a long way toward the virus-immune desktop.
If I'm being over-optimistic, or am over-simplifying here, feel
free to step in and add some realism.
I agree with you that Plash can really help get closer to POLA for
Unix. I'm a big fan of Plash - please don't misunderstand that. I'm
only pointing out that Plash by itself doesn't provide solutions
for many of the POLA management problems that traditional capability
systems do. Namely, the ephemeral nature of the
Plash-supported "capabilities" (being Unix open file descriptors)
means that they are limited in what they can do. Useful to be sure,
but still limited in important ways.
Post by Toby Murray
Adam Langley's masters thesis is available here
http://www.imperialviolet.org/binary/pucs.pdf
His system is similar to DCCS in that it supports distributed
capabilities (via file descriptors), where the capability is proxied
at each end of a TCP connection. The file descriptor for the TCP
connection to the remote server acts as a proxy for the actual file
descriptor at the other end of the connection on the server to the
actual object being addressed. I believe that you did something
similar for DCCS to achieve distributed caps, although correct me if I'm wrong?
Thanks! I'm reading that paper with interest. I'll note some
comments while reading here:

Pg. 5 "Under our definition of capabilities the *-Property is indeed
enforced since no capability carrying channel exists between Alice and Bob."

I have to admit that this defense of the '*-Property' for a specific
form of capabilities seems a bit strained to me. With both ACLs and
with any form of capabilities we know that proxied 'serialization' is
possible (as Langley later does over TCP). To me any sort of strict
interpretation of the *-Property is a bit silly.

Pg. 8 'The requirements of the principle of least authority are stated
in [MK] as: "(1) split applications into smaller protection domains, or
'compartments'; (2) assign exactly the right privileges to each
compartment; (3) engineer communication channels between the
compartments; (4) ensure that, save for intended communication, the
compartments remain isolated from one another; and (5) make it easy for
themselves, and others, to perform a security audit."'

At least POLP before it, and I have always assumed POLA as well, applies
not just to execution environments, but to any sort of authority
domain - in particular, also and especially to people. The above
definition of POLA would seem to apply only to running computer
applications. As Langley says, "Further to software being flawed,
people are flawed too." - which makes the argument for POLA for
people as well as for software.

Pg. 11 Re: the Burroughs 5000: There was nothing remotely
capability-like about the B5000, B5500, B6700 systems. In fact they
were so insecure that they depended on protection from their
compilers to generate "safe" code. If one could generate arbitrary
machine code on those systems and get it executed (as I in fact did)
then you could completely take over those systems.

Pg. 12 Re: Dennis and Van Horn: "Although they were defining a
design, not describing a working computer..." Of course the PDP-1
system at MIT in the late 1960s and early 1970s was based on the
Dennis and Van Horn model and did in fact implement a capability system.

To skip then to "Contemporary Designs" without mention of systems
like RATS, GNOSIS/KeyKOS, NLTSS, Demos, Mach, Amoeba, and others
seems a bit of an oversight to me.


The key to this paper seems to be the requirements:

1. "We are developing this project because previous systems have had
too high a barrier to entry. All but one
of the previous contemporary designs have been operating systems and
it is our belief that common PC
hardware is now too diverse for a fresh-start operating system to
ever gain widespread acceptance. The
complexity in writing hardware drivers, which are vital if people are
going to be able to even boot an
operating system, is now prohibitive for all but the most well funded efforts.

2. Possibly more importantly, it's vitally important that current
applications continue to run without modifi-
cation and any new work can interact with this existing corpus of programs."

It's the last that is the real problem I believe. For example, in
our case with NLTSS we were able to overcome #1 above. We got a
system implemented that was at least as capable as the system it was
replacing (Drivers, etc.), but we found ourselves forced to emulate
exactly the API of the previous system we were replacing to, as
Langley says, "continue to run without modification ... with this
existing corpus of programs." What we found is that once we did so
we ended up with a system that looked in fact just like the previous
system but was somewhat slower due to the library level emulation of
the APIs. For example, we had a directed graph process structure and
we had to make it look like a tree structure. We had to provide a
server to support that view.

I look forward to seeing how Mr. Langley hopes to achieve these
objectives, but if I comment it will have to be in another message...

--Jed http://www.webstart.com/jed/
Toby Murray
2005-11-17 04:18:39 UTC
Permalink
Post by Jed at Webstart
Post by Toby Murray
Post by Jed at Webstart
When a Unix system restarts all its open file descriptors are
invalidated. It seems to me it might take some substantive work to
deal with that issue.
Consider Plash to be the union of a user's authority. At the
discretion of the user, they can invoke other applications and hand
them a subset of their authority with fine-grained controls allowing
the user to achieve almost least authority (except in the area of
network access which could be supported if Plash reified this in the
process's file namespace, so it's not impossible).
The user can define "executable objects" that comprise
1. an executable (eg. /usr/bin/mozilla-firefox)
2. a file namespace for the application (eg. /lib/ld.so,
/lib/libc.so, $HOME/.firefox/<various config files>, etc.)
What I take you to be saying is that Plash can manage permissions for
a Unix user by granting just POLA permissions to any program that the
user executes. That's fine (beyond the practical problems with
setting up such POLA restrictions - which I'll deal with later).
Since the executing program disappears when the system is restarted
anyway on current Unix systems, a reboot isn't a new issue for any
execution environment.
However, with regard to management of permissions for users (people)
on Unix, this seems to suggest that such management will continue to
be done with the traditional (standard) Unix user ids and permission
bits. This is somewhat problematic for POLA. Let's say you and I
both have accounts on some Unix system and we wish to work on a shared
project where we need some shared access to some resources - let's say
files. The traditional Unix means for such sharing is through a
shared group. Unfortunately, ordinary users can't create such shared
groups. I won't belabor these problems in Unix as I believe they are
well known.
Agreed.
Post by Jed at Webstart
However, now note that through the Plash mechanism I can set up a
process, you can set up a process and my process can send just POLA
access to resources to your process. I can dynamically start a
collaboration with you through such processes. However, when the
system restarts all such shared access goes away. I have to set it up
again after a system restart if desired.
Also, consider the issue of setting up commonly used permissions for
application execution. On capability systems there typically exists
something like a "directory" where one can store named capabilities.
Plash *is* this (as far as I see it anyway). The directory of named
capabilities is a directory within your filesystem namespace. The named
capabilities are names within that directory. These appear to your
application as files within a directory. For example, by convention, you
could consider that there are a few well known directories where
capabilities live with well known names, such as /etc/passwd (the name
of a capability to the user password file, which your program may or may
not have access to, depending on how much authority it has, controlled
by Plash), /lib/libc.so (the name of a capability to the standard C
library). I don't see how this is any different from what you describe
above.
Each time your app runs, presumably it runs with the same namespace.
Jed at Webstart
2005-11-17 19:27:49 UTC
Permalink
Post by Toby Murray
...However, now note that through the Plash mechanism I can set up
a process, you can set up a process and my process can send just
POLA access to resources to your process. I can dynamically start
a collaboration with you through such processes. However, when the
system restarts all such shared access goes away. I have to set it
up again after a system restart if desired.
Also, consider the issue of setting up commonly used permissions
for application execution. On capability systems there typically
exists something like a "directory" where one can store named capabilities.
Plash *is* this (as far as I see it anyway). The directory of named
capabilities is a directory within your filesystem namespace. The
named capabilities are names within that directory. These appear to
your application as files within a directory. For example, by
convention, you could consider that there are a few well known
directories where capabilities live with well known names, such as
/etc/passwd (the name of a capability to the user password file,
which your program may or may not have access to, depending on how
much authority it has, controlled by Plash), /lib/libc.so (the name
of a capability to the standard C library). I don't see how this is
any different from what you describe above.
Each time your app runs, presumably it runs with the same namespace.
Toby Murray
2005-11-18 00:19:41 UTC
Permalink
Jed at Webstart wrote:

<snip>
Jed at Webstart
2005-11-18 01:03:25 UTC
Permalink
<snip>
...When another user looks into the directory, that user sees C, D,
and E. How is this done with the Plash mechanism?
If people are replaced by processes I see how this is done with Plash....
How do I perform the analogous sorts of manipulations for people
under Plash?
I agree that here we are starting to reach the limits of what can
be achieved. I think the problem stems largely from the fact that
Plash holds all of my authority as a user and we are really now
talking about wanting to share authority from one instance of Plash
(that "represents" me) with another instance of Plash (that
represents another user). This is definitely reaching the limits of
my understanding of Plash as well ;).
I think this problem mostly results from the fact that people/users
last across system restarts while processes (under typical Unix) do
not - otherwise one could use file descriptors more like 'real' capabilities.
Either some awareness between different Plash instances and a means
by which authority can be shared between them, or some kernel support,
is required, I think, to achieve the sort of thing you're talking about.
However, while this is quite cool from a tech perspective,
I believe it is much more than that. I believe communicating
permissions between people is at least as important as communicating
them between processes. However, perhaps something like Tyler's
YURLs can more appropriately be focused on the communication of
permissions between people. They are permanent (certainly no reboot
issue in any case) and even have the added value of being network global.
I'm sure we agree that this is less necessary for the more doable
goal of virus-resistance.
I do agree with that. Virus resistance is important in and of itself
and a worthy goal.
I also think that this sort of thing would require a really high
quality user interface in order to allow this sort of power to be
usefully put in the hands of ordinary users. Interfaces for
delegating authority from one user to another are something that I'm
not too aware of. Is anyone out there aware of any work in this area?
Once you get to the point of being able to store capabilities in
directories, there is a very simple and effective user interface that
essentially automatically arises. In fact I think one of Tyler's
demos had such a directory structure. All you need to be able to do
is to be able to create directories, put things (capabilities) in
them, take things out, and be able to give them to people. All of
this is quite easy once you have a basic mechanism for communicating
permanent permission tokens (capabilities). I hope to see something
like that demo of Tyler's again on a more permanent basis.
On the other hand, perhaps Tyler's YURLs and the Web can serve for
the permanent management of resource access? Then if Plash could
communicate "capabilities" to processes as open file descriptors
that represent pipes to a local server that accesses the
corresponding resources through YURLs ... Well, that is getting
ahead of the game, but I could imagine it. Quite a hack and
probably unworkable, but close to satisfying the basic needs.
This is quite interesting. I'm imagining (from your description)
YURLs as a sort of "universal" capability representation that is
machine-independent (like E's sturdyrefs, I suppose), where on the
Unix implementation we use file descriptors to represent them
locally. YURLs are "resolved" to local file descriptors, like E
sturdyrefs are "resolved" to object references. (please excuse me if
I've got the terminology wrong here, I'm no E hacker).
I like it. I don't see why you couldn't implement a new system call
to resolve YURLs in the same way that Plash reimplements open() etc.
Then applications could use YURLs. One could then create static
objects (eg. a file) on a remote server that are collections of
authority (YURLs). If Plash also implemented a YURL server, I also
don't see why you couldn't implement a system call that would give
you a YURL from an open file descriptor. Although we are definitely
turning Plash into something it probably wasn't designed to be now.
Essentially, if I take your meaning, we're talking about a
caps-as-data format for Plash capabilities that the server can map
to and from file descriptors.
Pretty much. YURLs already exist. What it amounts to is having a
mechanism to 'map' YURLs into the sorts of open file descriptor
capabilities that Plash uses. Unfortunately the mapping would be
rather strained by the semantics of file access. At a high level one
can do reads and writes of files and one can do gets and puts on
YURLs, but digging even a tiny bit deeper, I think, starts to result in
interface strain - though perhaps not an insurmountable one.
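To illustrate where that strain lives, here is a hedged Java sketch (illustrative only; YurlFile is an invented name) mapping whole-file read() and write() onto GET and PUT against a capability URL. The whole-file case maps cleanly; lseek() and partial writes have no obvious counterpart, which is exactly the strain.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class YurlFile {
    private final URL yurl;

    public YurlFile(String yurl) throws IOException {
        this.yurl = new URL(yurl);
    }

    // read() maps to GET: fetch the entire current contents.
    public byte[] read() throws IOException {
        HttpURLConnection c = (HttpURLConnection) yurl.openConnection();
        c.setRequestMethod("GET");
        try (InputStream in = c.getInputStream()) {
            return in.readAllBytes();
        }
    }

    // write() maps to PUT: replace the entire contents.
    public void write(byte[] contents) throws IOException {
        HttpURLConnection c = (HttpURLConnection) yurl.openConnection();
        c.setRequestMethod("PUT");
        c.setDoOutput(true);
        try (OutputStream out = c.getOutputStream()) {
            out.write(contents);
        }
        if (c.getResponseCode() / 100 != 2)
            throw new IOException("PUT failed: " + c.getResponseCode());
    }
}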
I'm curious to know what Mark Seaborn thinks of the above.
And Tyler Close perhaps.
(The remainder of your points I agree with so I've omitted them for brevity)
I'm glad to hear there is some level of common understanding, even if
at this point it seems to be just you and me Toby.

--Jed http://www.webstart.com/jed/
Tyler Close
2005-11-19 17:19:59 UTC
Permalink
On the other hand, perhaps Tyler's YURLs and the Web can serve for
the permanent management of resource access? Then if Plash could
communicate "capabilities" to processes as open file descriptors
that represent pipes to a local server that accesses the
corresponding resources through YURLs ... Well, that is getting
ahead of the game, but I could imagine it. Quite a hack and
probably unworkable, but close to satisfying the basic needs.
This is quite interesting. I'm imagining (from your description)
YURLs as a sort of "universal" capability representation that is
machine-independent (like E's sturdyrefs, I suppose), where on the
Unix implementation we use file descriptors to represent them
locally. YURLs are "resolved" to local file descriptors, like E
sturdyrefs are "resolved" to object references. (please excuse me if
I've got the terminology wrong here, I'm no E hacker).
As Jed wrote, I once implemented something very similar to this.
Instead of binding a YURL to a file descriptor, I bound a YURL to an
operation on a file descriptor. A single file therefore had a set of
YURLs bound to it, one for each exported operation. For example, there
was a YURL to fetch the current file contents, another to overwrite
the file and another to delete the file. There was a similar schema
for directory operations. The resulting application was quite useful
for remote file management through a web browser; however, people
mostly used it as a crude form of wiki. This led me to build a less
crude wiki that you can find at <https://yurl.net/>. Reviving the file
management application might be fun and useful.

I never built the reverse mapping of turning operations on file
descriptors into HTTP requests on YURLs, though I lusted for it. Such
a mapping would enable use of vim for remote file management, which
may be preferable to the web browser for many tasks.
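For the shape of the thing, here is a minimal Java sketch of that per-operation design (the Op interface, token format, and example.net host are all invented; this is not the original implementation): each exported operation on a file gets its own unguessable token, so holding the fetch URL conveys read authority and nothing more.

import java.security.SecureRandom;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

public class OperationYurls {
    interface Op { String invoke(String arg); }

    private final Map<String, Op> exports = new HashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Mint an unguessable token for one operation.
    private String export(Op op) {
        byte[] b = new byte[20];   // 160 bits of entropy, for illustration
        random.nextBytes(b);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(b);
        exports.put(token, op);
        return "https://example.net/id/" + token;   // hypothetical host
    }

    public static void main(String[] args) {
        StringBuilder file = new StringBuilder("hello");
        OperationYurls server = new OperationYurls();
        String fetch = server.export(ignored -> file.toString());
        String overwrite = server.export(text -> {
            file.setLength(0);
            file.append(text);
            return "ok";
        });
        String delete = server.export(ignored -> { file.setLength(0); return "deleted"; });
        System.out.println("fetch:     " + fetch);
        System.out.println("overwrite: " + overwrite);
        System.out.println("delete:    " + delete);
    }
}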

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
David Mercer
2005-11-21 09:09:24 UTC
Permalink
Post by Tyler Close
On the other hand, perhaps Tyler's YURLs and the Web can serve for
the permanent management of resource access? Then if Plash could
communicate "capabilities" to processes as open file descriptors
that represent pipes to a local server that accesses the
corresponding resources through YURLs ... Well, that is getting
ahead of the game, but I could imagine it. Quite a hack and
probably unworkable, but close to satisfying the basic needs.
This is quite interesting. I'm imagining (from your description)
YURLs as a sort of "universal" capability representation that is
machine-independent (like E's sturdyrefs, I suppose), where on the
Unix implementation we use file descriptors to represent them
locally. YURLs are "resolved" to local file descriptors, like E
sturdyrefs are "resolved" to object references. (please excuse me if
I've got the terminology wrong here, I'm no E hacker).
As Jed wrote, I once implemented something very similar to this.
Instead of binding a YURL to a file descriptor, I bound a YURL to an
operation on a file descriptor. A single file therefore had a set of
YURLs bound to it, one for each exported operation. For example, there
was a YURL to fetch the current file contents, another to overwrite
the file and another to delete the file. There was a similar schema
for directory operations. The resulting application was quite useful
for remote file management through a web browser; however, people
mostly used it as a crude form of wiki. This led me to build a less
crude wiki that you can find at <https://yurl.net/>. Reviving the file
management application might be fun and useful.
I never built the reverse mapping of turning operations on file
descriptors into HTTP requests on YURLs, though I lusted for it. Such
a mapping would enable use of vim for remote file management, which
may be preferable to the web browser for many tasks.
Such a thing, lashed together properly with Plash, could indeed be a good start
toward bringing networked POLA capabilities to POSIX (or at least
Linux!), I would think.
Object/vat locator services hacked onto the DNS for host location, as
we've batted around on this list, could start bootstrapping the whole
thing partially off of the DNS and PKI infrastructure that's bound up
with IPv4. A VLS/ODNS server shouldn't care what IP version the host
that serves locations for an object lives at (note that I'm implying a
layer of indirection beyond a VLS telling clients what host has an
object).
And only the hash of the server that provides names for an object is
ever potentially sent in the clear, and that need only ever happen in
the IPv4 case.

Polaris or a similar open source tool on Windows could take the place
of Plash/its X Windows embodiment on that platform. Getting
closer....

-David Mercer
Tucson, AZ
Jed at Webstart
2005-11-22 03:26:52 UTC
Permalink
Post by Tyler Close
On the other hand, perhaps Tyler's YURLs and the Web can serve for
the permanent management of resource access? Then if Plash could
communicate "capabilities" to processes as open file descriptors
that represent pipes to a local server that accesses the
corresponding resources through YURLs ... Well, that is getting
ahead of the game, but I could imagine it. Quite a hack and
probably unworkable, but close to satisfying the basic needs.
This is quite interesting. I'm imagining (from your description)
YURLs as a sort of "universal" capability representation that is
machine-independent (like E's sturdyrefs, I suppose), where on the
Unix implementation we use file descriptors to represent them
locally. YURLs are "resolved" to local file descriptors, like E
sturdyrefs are "resolved" to object references. (please excuse me if
I've got the terminology wrong here, I'm no E hacker).
As Jed wrote, I once implemented something very similar to this.
Instead of binding a YURL to a file descriptor, I bound a YURL to an
operation on a file descriptor. A single file therefore had a set of
YURLs bound to it, one for each exported operation. For example, there
was a YURL to fetch the current file contents, another to overwrite
the file and another to delete the file.
If I'm understanding this properly it sounds like you essentially implemented
permissions (e.g. access bits like rw) with separate YURLs. Does that
mean I would have to have several YURLs if I essentially 'owned' the file,
e.g. one for each of the operations I was able to do on the file? I'm
sure you're familiar with the typical capability approach where permissions
come along with the authority encoded in the capability. E.g. in some
of the capabilities-as-data mechanisms (e.g. Amoeba and NLTSS),
schemes were developed to allow one capability to suffice in that
situation. To generate a reduced-authority capability for the same
resource, one would perform an operation on the more powerful
capability to generate one of less power. Sorry for the tome - just
curious if you've touched on this area.
Post by Tyler Close
There was a similar schema
for directory operations. The resulting application was quite useful
for remote file management through a web browser; however, people
mostly used it as a crude form of wiki. This lead me to build a less
crude wiki that you can find at <https://yurl.net/>.
In what sense does the above constitute a "wiki"? If it is a wiki, how do
I modify content?
Post by Tyler Close
Reviving the file management application might be fun and useful.
I never built the reverse mapping of turning operations on file
descriptors into HTTP requests on YURLs, though I lusted for it. Such
a mapping would enable use of vim for remote file management, which
may be preferable to the web browser for many tasks.
That would indeed be an interesting facility. Worthy of a paper, I think,
if you were interested in such things.

I feel that to effectively be able to demo/sell YURLs I need what amounts
to a file server and directory server behind YURL 'capabilities'. You had
that at one time in a demo and I still feel I'm missing it for selling YURLs.
Maybe I should just try to block out some time over the holidays and
implement such a thing ...

--Jed http://www.webstart.com/jed/
Tyler Close
2005-11-22 04:08:11 UTC
Permalink
Post by Jed at Webstart
Post by Tyler Close
As Jed wrote, I once implemented something very similar to this.
Instead of binding a YURL to a file descriptor, I bound a YURL to an
operation on a file descriptor. A single file therefore had a set of
YURLs bound to it, one for each exported operation. For example, there
was a YURL to fetch the current file contents, another to overwrite
the file and another to delete the file.
If I'm understanding this properly it sounds like you essentially implemented
permissions (e.g. access bits like rw) with separate YURLs. Does that
mean I would have to have several YURLs if I essentially 'owned' the file,
e.g. one for each of the operations I was able to do on the file? I'm
sure you're familiar with the typical capability approach where permissions
come along with the authority encoded in the capability. E.g. in some
of the capabilities-as-data mechanisms (e.g. Amoeba and NLTSS),
schemes were developed to allow one capability to suffice in that
situation. To generate a reduced-authority capability for the same
resource, one would perform an operation on the more powerful
capability to generate one of less power. Sorry for the tome - just
curious if you've touched on this area.
You can do something very similar with this design. For example, the
following capability URL identifies an editable wiki page (or file) on
my server:

https://yurl.net/id/poeviudk2socgnc53wdbzrz6rlneydxx

If you do a GET operation on this URL, you will receive:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/xsl/http/yurl.org/Author.xsl"?>

<list>
<doc schema="http://yurl.org/Author">
<self schema="http://web-calculus.org/pointer/Embed">
<target>poeviudk2socgnc53wdbzrz6rlneydxx</target>
</self>
<topic>editme</topic>
<value schema="http://web-calculus.org/pointer/Link">
<target>dsuepx4nqke3u5z7tdg534nrrvvnka4l</target>
</value>
<assign schema="http://web-calculus.org/pointer/Link">
<target>wt5zqybevzrf2rxcm5pezjj6izhw4fdu</target>
</assign>
<proxy schema="http://web-calculus.org/pointer/Link">
<target>ehign2wpk5qpp6vhfolj3ojsks7xu3qc</target>
</proxy>
<revoke schema="http://web-calculus.org/pointer/Link">
<target>zhgeftzdoyusqmyqeoxte5z3ibyjz7qm</target>
</revoke>
</doc>
</list>
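Read as a capability pattern, the document above is a bundle of facets: the holder of the author capability can pass along the value link alone, and the recipient gets read access and nothing else. A toy Java model of that attenuation (the types are invented; this is not the waterken implementation):

public class Facets {
    interface Value { String get(); }
    interface Assign { void set(String v); }

    static final class Cell {
        private String content = "editme";
        // The full-power holder can reach every facet...
        Value valueFacet() { return () -> content; }
        Assign assignFacet() { return v -> content = v; }
        // ...but a holder given only valueFacet() can read, nothing more.
    }

    public static void main(String[] args) {
        Cell page = new Cell();
        Value readOnly = page.valueFacet();   // the attenuated capability
        page.assignFacet().set("new text");
        System.out.println(readOnly.get());   // prints "new text"
    }
}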
John Carlson
2005-11-22 06:53:53 UTC
Permalink
Of course, I also wouldn't mind if anyone tried to beat me to the punch.
I think writing this email is getting me closer to doing something,
so bear with me.

I'm reading

http://www.waterken.com/dev/Web/Methodology/

I took a quick look at your code. I think I still need the model of
development you are using. Does a Java class file serve a particular
web page? How do the XSL and Java interact? How do I follow the thread
of control from a web request through the system? Can you post
a sequence diagram? Should we be focusing on client development
or server-side development (I would prefer doing the client side)?
How do I determine which parameters your system takes? When
I do a view source, I just get XML. Does the schema attribute
define the attributes the server takes? What are .jos files? I understand
that the user interface is in the XSLT--I will probably use Swing
and ignore the stylesheet. I assume that I will need to seed the
Swing client with the initial YURL retrieved from a web page,
email, or some other means.

You seem to use some syntax coloring. What does each color mean?
What is your meta-model? Is there a reason why you don't use
something like XML Schema for your schema definitions?

JSPs are hard to debug. Do your services offer the potential of
better server side debugging, or is most of the debugging now
in XSL?

Overviews are good, but a page that lists the files required
for a particular application and where they go would be good
too. Does the lack of these mean that your server is somewhat
limited?

Say I wanted to create revocable YURLs that would 1) allow people
to send messages to me, 2) allow me to send messages to the server,
3) allow me to access and update my buddy list, 4) allow others to
post requests to my buddy list, 5) allow me to copy parts of my
buddy list into a new buddy list, 6) allow me access to chat rooms
on the server, and 7) allow the creation of massive worldwide peer-to-peer
networks. What do I need to manage all these YURLs,
ignoring the communication aspects?

John
John Carlson
2005-11-22 09:06:53 UTC
Permalink
Post by John Carlson
Of course, I also wouldn't mind if anyone tried to beat me to the punch.
I think writing this email is getting me closer to doing something,
so bear with me.
Okay, I figured out all I need is a .java file to create web applications
with Waterken server. However, this style of interaction requires
quite a few mouse clicks, and I would like to reduce that. Any hints?
Ideally, I would like to enter a Java class, and get back the
capabilities as
XML right away.

So far, I have been using the command line to give me the capability
that I can use to instantiate a class. Is this the preferred way to
instantiate objects,
or can we do it through a web browser somehow (maybe not a good idea)?

John
Tyler Close
2005-11-22 18:10:14 UTC
Permalink
Hi John,
Post by John Carlson
Post by John Carlson
Of course, I also wouldn't mind if anyone tried to beat me to the punch.
I think writing this email is getting me closer to doing something,
so bear with me.
Thanks for plunging in. The current release of the waterken server is
less well documented than previous releases (as all things _current_
typically are), but is much simpler to use and so hopefully will
require less documentation. That said, please keep notes, or just post
to the cap-talk list, as you progress so that we can seed some new
documentation.
Post by John Carlson
Okay, I figured out all I need is a .java file to create web applications
with Waterken server.
Correct. You provide and compile a .java file. Using the capability URL
for a class loader, you can then load your Java class and start
instantiating objects. The server will automatically assign capability
URLs to these objects and persist them. These objects are persisted in
the .jos files you were asking about in your previous email.
Post by John Carlson
However, this style of interaction requires
quite a few mouse clicks, and I would like to reduce that. Any hints?
Hmmmm...

So the current sequence is:

1. Create an empty database with the command:

java -cp jar org.waterken.host.Main label

where "label" is the name of your database.

2. Paste the generated capability URL into your browser's address bar.

3. Click on the link to the class loader.

4. Type in the URI for your Java class, for example:
"java:org.waterken.name.Expert"

5. Use your loaded Java class.

You typically only do this for the root class of your application.
Other Java classes are typically loaded implicitly by your Java code.
For example, the above class URI is for the root class of the wiki
application. The other classes that represent individual pages, and so
on, are loaded implicitly by the org.waterken.name.Expert class. You
get pointers to objects of these other classes by invoking methods on
the Expert class.

So typically, the above five steps are performed only rarely. If you
are explicitly loading multiple classes, you could bookmark the class
loader for your database and just proceed from step 4 for each class
to be loaded.
Post by John Carlson
Ideally, I would like to enter a Java class, and get back the
capabilities as
XML right away.
You can achieve this by bookmarking your database's class loader,
returned in step 3. You then just need to type in the URI for the
class to load and you're off.
Post by John Carlson
So far, I have been using the command line to give me the capability
that I can use to instantiate a class. Is this the prefered way to
instantiate objects,
or can we do it through a web browser somehow (maybe not a good idea).
The command line step is only done to create a new empty database. You
can load multiple classes within a single database. It sounds like you
just need to bookmark the class loader to get what you want.

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
John Carlson
2005-11-23 05:51:20 UTC
Permalink
Post by Tyler Close
java -cp jar org.waterken.host.Main label
where "label" is the name of your database.
2. Paste the generated capability URL into your browser's address bar.
3. Click on the link to the class loader.
"java:org.waterken.name.Expert"
4.B Click on make link if you have a make method, or click new link

4.C Click submit
Post by Tyler Close
5. Use your loaded Java class
What do I need to do to skip 4.B and 4.C?

If you click on a method link with no parameters, there are also a couple
of steps. Are all these steps required? Seems like there will be quite
a bit of network traffic.

Here's a sample class I used. Once when I was trying it, it looked
like the send method wasn't working. I tried several times. I recreated
the object and it worked fine.

This isn't my final implementation, obviously.

package net.schizophrenics;

public class ChatBuddy implements java.io.Serializable {
    static final long serialVersionUID = -6132461331753857219L;

    // Accumulates messages until the next receive() drains them.
    private StringBuffer sb = new StringBuffer();

    // Queue one "<sender>message" line for later delivery.
    public synchronized void send(String sender, String message) {
        sb.append("<");
        sb.append(sender);
        sb.append(">");
        sb.append(message);
        sb.append("\n");
    }

    // Return and clear everything sent since the last receive().
    public synchronized String receive() {
        String s = sb.toString();
        sb.delete(0, sb.length());
        return s;
    }

    public ChatBuddy() {}
}

John
Tyler Close
2005-11-23 18:04:55 UTC
Permalink
Post by John Carlson
Post by Tyler Close
java -cp jar org.waterken.host.Main label
where "label" is the name of your database.
2. Paste the generated capability URL into your browser's address bar.
3. Click on the link to the class loader.
"java:org.waterken.name.Expert"
4.B Click on make link if you have a make method, or click new link
4.C Click submit
Post by Tyler Close
5. Use your loaded Java class
What do I need to do to skip 4.B and 4.C?
4.B and 4.C are actually 5.B and 5.C. Once you've got a capability URL
for your class, you can get capability URLs for its static methods and
contructors, which is what you're doing when you click on the make
method, or new, link.

I am not sure how to further optimize this UI, since the class loader
doesn't know whether you want to access the class itself, or one of
its methods and if so, which one. I need the mouse click from the
programmer to tell me what method he wants to invoke.
Post by John Carlson
If you click on a method link with no parameters, there are also a couple
of steps. Are all these steps required? Seems like there will be quite
a bit of network traffic.
So this UI is just the default generated UI to help you navigate your
object graph before you have created any XSLT. It's not meant to be
part of the finished application. If you would like to suggest a
better default UI, you can create your own default XSLT stylesheet to
replace the one at <www/xslt/http/web-calculus.org/Object.xsl> in your
installation.
Post by John Carlson
Here's a sample class I used. Once when I was trying it, it looked
like the send method wasn't working. I tried several times. I recreated
the object and it worked fine.
You must conform to certain coding conventions for the waterken server
to know how to persist your Java objects. For an example, take a look
at the code at <java/org/waterken/name/AuthorScope.java> in your
installation. This is the code that implements a wiki page. It should
be straightforward to modify this code to produce a ChatBuddy like
class. I can post a sample implementation if you like.

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
Tyler Close
2005-11-22 19:36:19 UTC
Permalink
Hi John,
Post by John Carlson
Of course, I also wouldn't mind if anyone tried to beat me to the punch.
I think writing this email is getting me closer to doing something,
so bear with me.
Ok, but you'll have to bear with me too. ;) You ask a number of
questions in this email, so I'm just going to peck at it in separate
emails, rather than take on the whole thing at once.
Post by John Carlson
I'm reading
http://www.waterken.com/dev/Web/Methodology/
That document is a high level overview. Over the years, I have
implemented a number of different tools for developing
capability-based web applications. All of these tools fit within the
overview of the Methodology document. Only one of these tools is in
the current waterken server release. The current release supports
binding capability URLs to dynamically created, persistent Java
objects.
Post by John Carlson
I took a quick look at your code. I think I still need the model of
development you are using. Does a Java class file serve particular
web page?
Think of your application as a persistent graph of Java objects. The
waterken server assigns capability URLs to the references in this
graph, enabling traversal of the graph via GET operations, or mutation
of the graph via a POST operation to a particular object method. As
the graph is changed, or grown through the addition of new objects,
these changes are automatically persisted to disk by the waterken
server.
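A minimal sketch of that mental model (illustrative only, not waterken's internals; the example.net URLs are invented): the application is plain Java objects, and the server, not the application, mints one unguessable URL per exported reference.

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class GraphServer {
    private final Map<String, Object> byUrl = new HashMap<>();

    // The server mints capability URLs for references in the graph.
    String export(Object ref) {
        String url = "https://example.net/id/" + UUID.randomUUID();
        byUrl.put(url, ref);
        return url;
    }

    // GET: traverse the graph by dereferencing a URL.
    Object get(String url) { return byUrl.get(url); }

    static final class Page { String text = "hello"; }
    static final class Site { final Page front = new Page(); }

    public static void main(String[] args) {
        GraphServer server = new GraphServer();
        Site site = new Site();
        // Each reference gets its own URL; a GET on the page's URL would
        // render it as XML, and a POST to one of its methods would mutate it.
        System.out.println(server.export(site));
        System.out.println(server.export(site.front));
    }
}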
Post by John Carlson
How does the XSL and Java interact?
They don't.

The server side state of your application is expressed entirely in
Java code. In response to a GET operation, the waterken server will
render the identified part of this graph in XML format. The waterken
server will also turn XML rendered POST arguments into Java objects
and deliver them to your object methods. Your Java code is oblivious to
the existence of this XML marshalling.

After rendering a sub-graph as XML, the server will then search for an
XSLT file that knows how to transform the XML into HTML. This search
starts with the base schema of the XML and continues with a depth-first
traversal of the "super" branches in the XML. A link to the
found XSLT file is placed in the XML output.
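Judging from the Author document quoted earlier, where schema http://yurl.org/Author links to /xsl/http/yurl.org/Author.xsl, the naming convention appears to be mechanical. A guessed sketch of that mapping (the fallback walk over the "super" branches is omitted, and the convention itself is inferred, not documented):

public class StylesheetLookup {
    // Guessed convention: the schema URI's scheme and host become
    // path segments under the stylesheet directory.
    static String stylesheetFor(String schemaUri) {
        return "/xsl/" + schemaUri.replaceFirst("://", "/") + ".xsl";
    }

    public static void main(String[] args) {
        // Prints /xsl/http/yurl.org/Author.xsl
        System.out.println(stylesheetFor("http://yurl.org/Author"));
    }
}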

If the client software is a browser, the linked-to XSLT file will be
used to transform the XML into HTML. This XSLT code only ever
interacts with the XML file and is oblivious to the existence of any
Java code.
Post by John Carlson
How do I follow the thread
of control from a web request through the system?
Since you can get a capability URL for any part of your application
graph, you can get an XML rendering of any part of your application
graph. Using this feature you can check the internal state of any part of
your application code. Think of this as having your server-side
code always running inside a debugger, where you can inspect the state
of any object on demand.
Post by John Carlson
Can you post a sequence diagram?
So, based on the above description, what would this sequence diagram look like?

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
Sandro Magi
2005-11-23 00:24:17 UTC
Permalink
(cross-posting to waterken-server list since we're getting slightly OT
for cap-talk)
Post by Tyler Close
Think of your application as a persistent graph of Java objects. The
waterken server assigns capability URLs to the references in this
graph, enabling traversal of the graph via GET operations, or mutation
of the graph via a POST operation to a particular object method. As
the graph is changed, or grown through the addition of new objects,
these changes are automatically persisted to disk by the waterken
server.
I've actually been wondering how this would interact with typical n-tier
web-app design in which objects/data is persisted in a relational
database (either directly or through an O/R layer). Unless I'm mistaken,
the waterken db doesn't have an RDBMS's query capabilities, so an RDBMS
still seems necessary in many circumstances. How does this "persistent
object graph" approach interact with objects loaded from an RDBMS? Is
there perhaps a useful pattern that covers this scenario?
Post by Tyler Close
In response to a GET operation, the waterken server will
render the identified part of this graph in XML format. The waterken
server will also turn XML rendered POST arguments into Java objects
and deliver them to your object methods. Your Java code is oblivious to
the existence of this XML marshalling.
Just out of curiosity, is the object-to-XML transformer easily
replaceable? While XML is fairly ubiquitous and flexible, I can imagine
some applications might warrant some alternate output format; YAML is
gaining popularity for instance.
Post by Tyler Close
If the client software is a browser, the linked to XSLT file will be
used to transform the XML into HTML.
I imagine client-side XSLT is preferable to reduce server load because
of this. Have you ever found the ubiquitous XSLT transformations to be a
performance problem?

Sandro
Tyler Close
2005-11-23 17:33:25 UTC
Permalink
Post by Sandro Magi
(cross-posting to waterken-server list since we're getting slightly OT
for cap-talk)
Post by Tyler Close
Think of your application as a persistent graph of Java objects. The
waterken server assigns capability URLs to the references in this
graph, enabling traversal of the graph via GET operations, or mutation
of the graph via a POST operation to a particular object method. As
the graph is changed, or grown through the addition of new objects,
these changes are automatically persisted to disk by the waterken
server.
I've actually been wondering how this would interact with typical n-tier
web-app design in which objects/data is persisted in a relational
database (either directly or through an O/R layer). Unless I'm mistaken,
the waterken db doesn't have an RDBMS's query capabilities, so an RDBMS
still seems necessary in many circumstances. How does this "persistent
object graph" approach interact with objects loaded from an RDBMS? Is
there perhaps a useful pattern that covers this scenario?
The approach I've used in the past is to separate the object graph
part of the application from the relational search part of the
application and use separate app servers for each. I implement the
object graph part of the application in Java using a server like the
one in the current waterken server download. For the relational part,
I had a separate app server. If you search for "RDB Webizer", you'll
find the documentation for it. Using the RDB Webizer, I could
implement the server-side logic entirely in SQL, with no Java code. I
found this app server incredibly useful, but I never really got any
uptake from others. If the need arises I may dust off the code and add
it to the waterken server download.
Post by Sandro Magi
Post by Tyler Close
In response to a GET operation, the waterken server will
render the identified part of this graph in XML format. The waterken
server will also turn XML rendered POST arguments into Java objects
and deliver them to your object methods. Your Java code is oblivious to
the existence of this XML marshalling.
Just out of curiosity, is the object-to-XML transformer easily
replaceable?
Yes. The waterken server can and has spoken many languages.
Post by Sandro Magi
While XML is fairly ubiquitous and flexible, I can imagine
some applications might warrant some alternate output format; YAML is
gaining popularity for instance.
Making a JSON rendering might also have appeal to the AJAX crowd,
since AFAICT, none of the AJAX programmers actually do AJAX, but
instead do AJAJ, where the server sends Javascript to execute, not XML
to parse.
Post by Sandro Magi
Post by Tyler Close
If the client software is a browser, the linked to XSLT file will be
used to transform the XML into HTML.
I imagine client-side XSLT is preferable to reduce server load because
of this. Have you ever found the ubiquitous XSLT transformations to be a
performance problem?
No. The dominant factor in my page load times is typically network
latency. The XSLT used in the wiki application is probably the most
extreme use of XSLT, since it implements a lexer for the wiki syntax
in XSLT, and yet it performs just fine unless you create a very large
wiki page. For more typical use, the XSLT runtime is just not
noticeable.

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
Sandro Magi
2005-11-23 19:05:51 UTC
Permalink
Post by Tyler Close
The approach I've used in the past is to separate the object graph
part of the application from the relational search part of the
application and use separate app servers for each. I implement the
object graph part of the application in Java using a server like the
one in the current waterken server download. For the relational part,
I had a separate app server. If you search for "RDB Webizer", you'll
find the documentation for it. Using the RDB Webizer, I could
implement the server-side logic entirely in SQL, with no Java code. I
found this app server incredibly useful, but I never really got any
uptake from others. If the need arises I may dust off the code and add
it to the waterken server download.
I had read up on the RDB Webizer before. It seemed very interesting at
the time. I'll have to review it again. Just to clarify: the waterken
server persists the object graph at the end of every HTTP request?
Post by Tyler Close
Making a JSON rendering might also have appeal to the AJAX crowd,
since AFAICT, none of the AJAX programmers actually do AJAX, but
instead do AJAJ, where the server sends Javascript to execute, not XML
to parse.
Indeed this is one popular method. Many people do actually return and
process XML though, since they sometimes wish to transparently interact
with web services.

Another technique is to render a change to the HTML on the server, and
just return that snippet. The client javascript then places the snippet
in an empty targeted element's innerHTML property. I think those 3 cover
the techniques I've read about.

Re: JSON vs. YAML, JSON is effectively a subset of YAML; depends whether
JSON is expressive enough for your purposes.

Sandro
Tyler Close
2005-11-23 19:45:10 UTC
Permalink
Post by Sandro Magi
I had read up on the RDB Webizer before. It seemed very interesting at
the time. I'll have to review it again.
Thanks. It was pretty neat. Maybe its time will come yet.
Post by Sandro Magi
Just to clarify: the waterken
server persists the object graph at the end of every HTTP request?
The dirty part of the object graph is persisted at the end of the HTTP
request. The application marks mutable objects as dirty by calling the
org.waterken.db.Key.update() method.
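The thread names the method but not its signature or how a Key is obtained, so both are assumed in this stand-in sketch; Key below is a local interface standing in for org.waterken.db.Key, purely to show the shape of the contract.

public class DirtyTracking {
    interface Key { void update(); }   // stand-in, not the real waterken class

    static final class Page implements java.io.Serializable {
        static final long serialVersionUID = 1L;
        private transient Key key;     // assumed to be supplied by the server
        private String text = "";

        Page(Key key) { this.key = key; }

        void edit(String newText) {
            text = newText;
            key.update();   // mark dirty; persisted at end of the HTTP request
        }
    }

    public static void main(String[] args) {
        Page p = new Page(() -> System.out.println("marked dirty"));
        p.edit("hello");   // prints "marked dirty"
    }
}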
Post by Sandro Magi
Post by Tyler Close
Making a JSON rendering might also have appeal to the AJAX crowd,
since AFAICT, none of the AJAX programmers actually do AJAX, but
instead do AJAJ, where the server sends Javascript to execute, not XML
to parse.
Indeed this is one popular method. Many people do actually return and
process XML though, since they sometimes wish to transparently interact
with web services.
Interesting. I think there must be clans within AJAX. Chip for one
seems to have an immune reaction to XML. ;)

Tyler

--
The web-calculus is the union of REST and capability-based security:
http://www.waterken.com/dev/Web/

Name your trusted sites to distinguish them from phishing sites.
https://addons.mozilla.org/extensions/moreinfo.php?id=957
Sandro Magi
2005-11-25 18:47:38 UTC
Permalink
Post by Tyler Close
Post by Sandro Magi
Just to clarify: the waterken
server persists the object graph at the end of every HTTP request?
The dirty part of the object graph is persisted at the end of the HTTP
request. The application marks mutable objects as dirty by calling the
org.waterken.db.Key.update() method.
Thanks, this is all starting to fill in the gaps I've had regarding the
waterken server.

Is it possible to interject some sort of authentication step before a
capability request is satisfied? In explaining the web-calculus to
others, they've often expressed dismay that the unguessable URL is the
only authentication required to access a resource. Their main concern is
that a user might inadvertently leak a URL to a third party who
shouldn't have access to the resource.

I can understand this position since passwords have been firmly
entrenched in a user's mind as something you do not share with others
(even though users still readily share passwords); URLs do not have this
"do-no-share" reputation.

This authentication could certainly be simulated in the application
object's logic, but it's largely orthogonal to it, so I was wondering if
there was some sort of well known hook in the waterken request/response
cycle.

Sandro
David Chizmadia (JHU)
2005-11-25 21:06:44 UTC
Permalink
Post by Sandro Magi
Is it possible to interject some sort of authentication step before a
capability request is satisfied? In explaining the web-calculus to
others, they've often expressed dismay that the unguessable URL is the
only authentication required to access a resource. Their main concern is
that a user might inadvertently leak a URL to a third party who
shouldn't have access to the resource.
I can understand this position since passwords have been firmly
entrenched in a user's mind as something you do not share with others
(even though users still readily share passwords); URLs do not have this
"do-no-share" reputation.
This authentication could certainly be simulated in the application
object's logic, but it's largely orthogonal to it, so I was wondering if
there was some sort of well known hook in the waterken request/response
cycle.
HTTPSY runs over SSL: force mutual authentication and the server
will know who it is talking to.

Of course, the underlying problem is a complete failure on
the part of those looking for the extra authentication to
understand the concept of capability discipline, but if this
provides comfort, it is worth it.

-DMC
Sandro Magi
2005-11-25 21:41:29 UTC
Permalink
Post by David Chizmadia (JHU)
HTTPSY runs over SSL: force mutual authentication and the server
will know who it is talkin to.
Wouldn't this be authenticating the machine, not the user? For example:
sales reps (who are authorized) on their client's computer (which is not
authorized) using a web application to design and quote a custom product
lineup for said customer.

Browsers retain history (so straight capability URLs don't seem
secure), and authenticating each potential client's machine is
infeasible. A standard login/password combo seems to provide some
measure of protection in this scenario.

Of course, the client machine could have all sorts of keyloggers and
screen scrapers installed, so one could make the argument that all bets
are off in this example. The barrier to entry with login/password combo
is certainly higher than just checking the browsing history though.
Post by David Chizmadia (JHU)
Of course, the underlying problem is a complete failure on
the part of those looking for the extra authentication to
understand the concept of capability discipline, but if this
provides comfort, it is worth it.
I'd like to hear a solution to the above if anyone could provide one.
The best one I can imagine is portable Firefox on a USB key:

http://johnhaller.com/jh/mozilla/portable_firefox/

It's still somewhat vulnerable since it's running on the client machine,
but the bar is raised yet again.

Sandro
coderman
2005-11-25 22:06:41 UTC
Permalink
Post by Sandro Magi
...
Is it possible to interject some sort of authentication step before a
capability request is satisfied? In explaining the web-calculus to
others, they've often expressed dismay that the unguessable URL is the
only authentication required to access a resource. Their main concern is
that a user might inadvertently leak a URL to a third party who
shouldn't have access to the resource.
two possible enhancements:
- make YURL's distinct to each user; invalidate a user and all the
YURL's associated with them no longer convey any authority.
- make YURL's leased / ephemeral; after a lease period or timeout is
reached the YURL's no longer convey any authority. [tradeoff here is
bandwidth]

designing the distinct, leased YURL distribution mechanism is a little
tricky to do efficiently but tractable.
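A sketch of both enhancements together (all names invented): every issued token records an owner and an expiry, so one table lookup refuses expired tokens, and invalidating a user drops everything that user holds.

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class LeasedYurls {
    record Grant(String user, Object target, long expiresAtMillis) {}

    private final Map<String, Grant> issued = new HashMap<>();

    String issue(String user, Object target, long leaseMillis) {
        String token = UUID.randomUUID().toString();
        issued.put(token, new Grant(user, target, System.currentTimeMillis() + leaseMillis));
        return token;
    }

    // A token conveys authority only while unexpired and still issued.
    Object redeem(String token) {
        Grant g = issued.get(token);
        if (g == null || System.currentTimeMillis() > g.expiresAtMillis()) return null;
        return g.target();
    }

    // Invalidate a user: all their outstanding tokens stop conveying authority.
    void invalidateUser(String user) {
        issued.values().removeIf(g -> g.user().equals(user));
    }
}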

if the real concern is insecure design (allowing private YURL's to be
leaked accidentally) no amount of code or process will restore trust
where it cannot be placed.
Sandro Magi
2005-11-25 22:54:17 UTC
Permalink
Post by coderman
Post by Sandro Magi
...
Is it possible to interject some sort of authentication step before a
capability request is satisfied? In explaining the web-calculus to
others, they've often expressed dismay that the unguessable URL is the
only authentication required to access a resource. Their main concern is
that a user might inadvertently leak a URL to a third party who
shouldn't have access to the resource.
- make YURL's distinct to each user; invalidate a user and all the
YURL's associated with them no longer convey any authority.
This doesn't address the problem I outlined in my follow-up e-mail.
Post by coderman
- make YURL's leased / ephemeral; after a lease period or timeout is
reached the YURL's no longer convey any authority. [tradeoff here is
bandwidth]
There is still a window of opportunity during which the unauthorized
customer can access the URLs. If you were trying to sell this solution,
this would probably be unacceptable.

Sandro
coderman
2005-11-25 23:25:02 UTC
Permalink
Post by Sandro Magi
...
This doesn't address the problem I outlined in my follow-up e-mail.
i misunderstood the nature of your question then.
Post by Sandro Magi
There is still a window of opportunity during which the unauthorized
customer can access the URLs. If you were trying to sell this solution,
this would probably be unacceptable.
session granularity / life time is an orthogonal topic. is your user
authenticated for the duration of an HTTP session with AUTH? only
while connected via the VPN used to access the web application? only
while a vascular hand print biometric authentication is provided?

i mentioned user distinct YURL's because they can be tied to an
authenticated session. once that session is destroyed / timed out all
associated YURL's no longer convey any authority.

exactly what authentication mechanism you use for session management
is up to you. (HTTP AUTH, passphrase, biometric, hardware token, etc)
i apologize for explaining this poorly with reference to leases and
timeouts.
Sandro Magi
2005-11-26 00:13:39 UTC
Permalink
Post by coderman
Post by Sandro Magi
...
This doesn't address the problem I outlined in my follow-up e-mail.
i misunderstood the nature of your question then.
Or I misunderstood your solution. By time-limited, did you mean limited
past the point the user "logs out"? That's what I thought you meant.
Post by coderman
session granularity / life time is an orthogonal topic. is your user
authenticated for the duration of an HTTP session with AUTH? only
while connected via the VPN used to access the web application? only
while a vascular hand print biometric authentication is provided?
I'm envisioning typical web applications at the moment in which I'd like
to make a convincing argument that the web-calculus can meet their
needs. Nothing fancy.

Open a browser and access a capability URL, have to authenticate to
create a session first. Close the browser, you have to re-authenticate. Log
out, need to re-authenticate. These are ingrained patterns for users,
and most people expect them by now. Typically, authentication is not
required for any of the above in the web-calculus, but they are all
motivated by real concerns in my problem scenario.

Note that I'm only concerned with user-centric applications here. The
waterken implementation is great because it is agnostic about who/what's
making the request, but it'd be great if a generic resource could be
aliased at another URL with an added authentication guard for users. I
agree that unmitigated capability passing is right for software agents,
but it should be possible to insert additional authentication mechanisms
to impede users (if deemed necessary). I think web applications
sometimes make this necessary (like the scenario I described).
Post by coderman
i mentioned user distinct YURL's because they can be tied to an
authenticated session. once that session is destroyed / timed out all
associated YURL's no longer convey any authority.
That's certainly viable depending on how it's implemented. As I said, if
the session is still alive when starting up a new browser instance, then
it won't be acceptable. An authentication cookie which expires on
browser shutdown would be sufficient I believe.

Now it's essentially a comparison of waterken as a development framework
against whatever else might be in the running. Capabilities don't seem
to enter into the equation. Waterken might suffer since authentication
and session management aren't automatically handled however.
Post by coderman
exactly what authentication mechanism you use for session management
is up to you.
Right, I'm not interested in the authentication mechanism so much as
whether it's possible in a web-calculus style application, and how
difficult it is compared with other frameworks where it comes pre-packaged.

Sandro
Eric Jacobs
2005-11-26 01:26:51 UTC
Permalink
On Fri, 25 Nov 2005 19:13:39 -0500
Post by Sandro Magi
Note that I'm only concerned with user-centric applications here. The
waterken implementation is great because it is agnostic who/what's
making the request, but it'd be great if a generic resource could be
aliased at another URL with an added authentication guard for users. I
agree that unmitigated capabiliy passing is right for software agents,
but it should be possible to insert additional authentication mechanisms
to impede users (if deemed necessary). I think web applications
sometimes make this necessary (like the scenario I described).
If by "authentication guard" it is meant an additional layer of
validation to prevent the user from _unintentionally_ mishandling the
capabilities, then I agree 100%.

Of course if the meaning is to prevent them from intentionally
circumventing the security policy, then it is useless, and I believe
that is already known to all of those who are reading this list.

But the usability consideration is significant. This is, I think, the
essential barrier that prevents the pure "capability discipline" from
ever yielding a drop-in solution or enhancement to existing systems.
There is still that one non-technological component that we all must
interface to, that being the user; and the way that users have learned
to use technology is entrenched with certain ideas, one being the idea
that security is accomplished by combining an insecure reference with one
or more forms of ambient authority.

Various other impedance mismatches crop up as a result of this
assumption; a good example is the browser history database mentioned in
this thread. Pure capabilities in any form should not be stored in the
browser history not for any technical reason but simply because _it's
not what the user expects_ -- the user (and associates such as the
browser itself and the OS) are not operating under the idea that
disclosing a list of URL's could result in the ability for another to
impersonate any of that user's securable activities.

(This might be a good argument for when YURL's are used in a pure
capability [non-ambient] fashion, that they not be called URL's at
all, to prevent confusion.)

Of course, it is possible to solve the user-expectations problems with
YURL's by requiring that they be used in conjunction with some form of
ambient authority (either session- or user-level, or perhaps both), as
some have mentioned. But I would be curious in such a case what
improvement the use of YURL's or capability ideas at all could result
in relative to the traditional Web application environment centered
around session security.

-Eric
Sandro Magi
2005-11-26 08:41:37 UTC
Permalink
Post by Eric Jacobs
If by "authentication guard"
Sorry, I'm sometimes a little too loose with my terminology.
Post by Eric Jacobs
it is meant an additional layer of
validation to prevent the user from _unintentionally_ mishandling the
capabilities, then I agree 100%.
Yes, to a certain extent. I think the concept of "secret/private URL" is
pretty straightforward to users. I'm more concerned with issues where
the user is unintentionally and/or unknowingly leaking his authority,
such as browsing a secure capability web application on a third-party
computer.

In this scenario, the session-limited pattern commonly used in web
applications has a higher security barrier to overcome than a straight
web-calculus design.
Post by Eric Jacobs
Of course if the meaning is to prevent them from intentionally
circumventing the security policy, then it is useless, and I believe
that is already known to all of those who are reading this list.
Right, I'd prefer to allow them to intentionally pass around
capabilities. Unfortunately, the browser design is what it is.
Post by Eric Jacobs
Various other impedance mismatches crop up as a result of this
assumption; a good example is the browser history database mentioned in
this thread. Pure capabilities in any form should not be stored in the
browser history not for any technical reason but simply because _it's
not what the user expects_ -- the user (and associates such as the
browser itself and the OS) are not operating under the idea that
disclosing a list of URL's could result in the ability for another to
impersonate any of that user's securable activities.
As I said, I think the capability URL is straightforward for users. The
problem is that most users aren't aware of all the ways browsers can
leak their capability URLs. Since we can't realistically expect users to
know this, as designers we have to provide a solution.

Since the "session-limited facade on the underlying object graph" design
*prevents even intentional* capability-passing, I had considered the
notion that each session-limited URL would also provide a reference to
the underlying real capability URL (sort of like Google Map's "Link to
this page" which constructs a "persistent link" to the user's current
page). This underlying URL could be stored in the browser cache however,
so it's still unintentionally leaking authority.
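For what it's worth, here is a hedged sketch of that facade (names invented, password check elided): short-lived session URLs stand in front of the long-lived capability URL, and logging out destroys the mapping, so the URL the browser remembers stops conveying authority.

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class SessionFacade {
    private final Map<String, String> sessionToCapability = new HashMap<>();

    // After a successful login (check elided), mint a session URL that
    // stands for the real capability for this browser session only.
    String login(String capabilityUrl) {
        String session = UUID.randomUUID().toString();
        sessionToCapability.put(session, capabilityUrl);
        return "https://example.net/session/" + session;   // illustrative
    }

    // Requests on the session URL are forwarded to the real capability.
    String dereference(String sessionUrl) {
        String cap = sessionToCapability.get(tail(sessionUrl));
        if (cap == null) throw new SecurityException("session expired or logged out");
        return cap;
    }

    void logout(String sessionUrl) {
        sessionToCapability.remove(tail(sessionUrl));
    }

    private static String tail(String url) {
        return url.substring(url.lastIndexOf('/') + 1);
    }
}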
Post by Eric Jacobs
Of course, it is possible to solve the user-expectations problems with
YURL's by requiring that they be used in conjunction with some form of
ambient authority (either session- or user-level, or perhaps both), as
some have mentioned.
I don't see how the proposed design is ambient. Why do you think it is?

Sandro

John Carlson
2005-11-24 07:59:57 UTC
Permalink
Okay,

Working with Waterken again.

I need to take a form and figure out how to reply with a POST or GET
from my Java Swing client. I assume that the parameters to the form are
doc.0, doc.1, doc.2, etc. I assume that I POST back to the method link
that I came from (the same YURL).

An interesting project would be to write the client side that directly calls
the server-side Java, then create an RMI/EJB/JAX-RPC interface that interacts
with a standardized Waterken web services application, hiding the YURL
details.

Am I barking up the wrong tree? How do we make YURLs easier
to use from the client side?

John
Rob J Meijer
2005-11-25 17:48:06 UTC
Permalink
I'm all rather excited about the possibilities of using
Unix domain socket filehandles as capabilities, and would like
to try and use the concept in some of my private projects.
But now, looking at using the concept in a design, I'm running into a few
issues that I hope someone on the list would have some useful
thoughts about.

1) Using suid and chrootuid in process bootstrap makes it possible to
give a single process part of the user's authority. Using uid-based
firewalling makes it possible to take away networking from a user id, so
far so good, but how about the two together: giving a process part of a
user's disk access, but disabling its networking without breaking the
user's networking? Is there any way in Unix to have a process drop
its ability to create new sockets?

2) When bootstrapping the basic interconnection process and socket
infrastructure, initial authentication is essential. With normal
networking we have Kerberos to take care of that, but over a TCP/IP socket
we cannot communicate filehandles. Does anyone have any notion of how
one could integrate Kerberos and the bootstrapping of a Unix domain
socket based capability system design?

T.I.A.

Rob J Meijer
John C. McCabe-Dansted
2005-11-26 06:44:59 UTC
Permalink
Post by Rob J Meijer
I'm all rather excited about the possibilities of using
Unix domain socket filehandles as capabilities, and would like
to try and use the concept in some of my private projects.
But now, looking at using the concept in a design, I'm running into a few
issues that I hope someone on the list would have some useful
thoughts about.
1) Using suid and chrootuid in process bootstrap makes it possible to
give a single process part of the user's authority. Using uid-based
firewalling makes it possible to take away networking from a user id, so
far so good, but how about the two together: giving a process part of a
user's disk access, but disabling its networking without breaking the
user's networking? Is there any way in Unix to have a process drop
its ability to create new sockets?
The way systrace does it is to run the process in debug mode and trap
syscalls, allowing only certain syscalls with certain parameters to
succeed.
Post by Rob J Meijer
2) When bootstrapping the basic interconnection process and socket
infrastructure, initial authentication is essential. With normal
networking we have Kerberos to take care of that, but over a TCP/IP socket
we cannot communicate filehandles. Does anyone have any notion of how
one could integrate Kerberos and the bootstrapping of a Unix domain
socket based capability system design?
As I understand it, capabilities typically remove the need for other access
control mechanisms. If a capability is passed into an object then it has
rights to it, with no need to double-check against a rights list or
authentication protocol.
Post by Rob J Meijer
T.I.A.
Rob J Meijer
--
John C. McCabe-Dansted
Masters Student
John C. McCabe-Dansted
2005-11-18 04:06:27 UTC
Permalink
Post by Jed at Webstart
However, with regard to management of permissions for users (people)
on Unix, this seems to suggest that such management will continue to
be done with the traditional (standard) Unix user ids and permission
bits. This is somewhat problematic for POLA. Let's say you and I
both have accounts on some Unix system and we wish to work on a
shared project where we need some shared access to some resources -
let's say files. The traditional Unix means for such sharing is
through a shared group. Unfortunately, ordinary users can't create
such shared groups. I won't belabor these problems in Unix as I
believe they are well known.
As I understand it, a well-designed ACL system would solve this problem.

Alternatively, a simple solution would be to have a cap-manager that grants
Joe's powerbox access to Jane's file if that file is listed under
~jane/.caps/joe. Allowing recursive caps would allow Joe to pass the cap on to
other people. We could do something similar for granting suid-like
permissions to binaries.
Post by Jed at Webstart
off it goes. However, how does one create such long lived containers
for capabilities on Unix with Plash? I don't see how to do it. If I
If we needed proper persistent unforgeable capabilities it seems that it would
be relatively easy to store them as normal files (owned by cap-manager to
prevent forgery) on the filesystem containing the lambda name granted, or
stored in a simple DB. The cap-manager would be responsible for pickling and
restarting processes that need to survive a reboot.

Is there some reason why this would be problematic?

Below is an example of a possible implementation of persistent caps as files:


If a process has access to a lambda name (name of a file or named pipe), it
can pickle access to that filename by sending a request to cap-manager.
Cap-manager will create a file where the requesting process directs (and has write access). This
file will be owned by cap-manager and contain the lambda-name, a cap ID and
a number N you specify.

If a process has access to a pickled cap (i.e. if the cap is in the namespace
Plash generated for it), and the pickled cap is owned by cap-manager (to
prevent forgery), and the process's user id equals N (so as not to break the
existing Unix security model), and cap-manager can track the cap ID back to
the original grant with no revocations, then the process is allowed to
unpickle the cap.

Using this methodology, if you wanted to pass Joe a directory of persistent
caps you could pickle the desired caps to a directory, using Joe's user ID as
N, and then pass Joe the directory, e.g. by moving it to
~/sharedwith/joe/
or to Joe and some application that Joe allows you to pass caps, e.g.
~/sharedwith/joe/.someapp/
i.e. use a shared directory where at least one app has cap-write capability.

To grant suid-like rights to a particular binary, give the name & path of the
application in the cap file instead of, or as well as, a user ID.

If you want to grant a persistent cap to a particular constrained process
(instance of an application), then on shutdown we could allow the process to
generate a cap-manager file that contains:
the application,
persistent data that the application needs to keep, and
persistent caps that the application needs to keep.
Then, when the cap-manager restarts the process, it passes the process only
those caps stored in the file.
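Putting those rules into a toy Java model (the filesystem, ownership, and uids are simulated with maps and parameters here; a real cap-manager would rely on file ownership for unforgeability, as described above):

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

public class CapManager {
    record Pickle(String lambdaName, String capId, int allowedUid) {}

    private final Map<String, Pickle> files = new HashMap<>();   // path -> cap file
    private final Set<String> revoked = new HashSet<>();

    // Pickle: record the grant under a path the granter chooses,
    // e.g. ~/sharedwith/joe/doc.cap with N = Joe's uid.
    void pickle(String path, String lambdaName, int allowedUid) {
        files.put(path, new Pickle(lambdaName, UUID.randomUUID().toString(), allowedUid));
    }

    void revoke(String path) {
        Pickle p = files.get(path);
        if (p != null) revoked.add(p.capId());
    }

    // Unpickle: allowed only if the cap file exists, the caller's uid
    // matches N, and the cap ID traces back to an unrevoked grant.
    String unpickle(String path, int callerUid) {
        Pickle p = files.get(path);
        if (p == null || p.allowedUid() != callerUid || revoked.contains(p.capId()))
            throw new SecurityException("unpickle refused");
        return p.lambdaName();
    }
}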

--
John McCabe-Dansted
Masters Student.
Mark Seaborn
2005-11-16 22:55:21 UTC
Permalink
Post by Jed at Webstart
I think perhaps the best way to consider this question is to ask
whether one could essentially turn Unix in to a capability system by
using open file descriptors as capabilities the way Plash is using
them. For example, where would one store such permissions - e.g. on
a long term basis for a user/person? With the way Unix is
implemented the only way to store them is essentially in a
process. One could, I suppose, turn Unix processes into something
like a directory.
When a Unix system restarts all its open file descriptors are
invalidated. It seems to me it might take some substantive work to
deal with that issue.
Implementing transparent persistence is not going to be practical on
top of Unix.

However, some simple form of non-transparent persistence is doable,
for the purposes of installing programs. You could have a
"persistable" file object. A process holding a reference to one of
these would be able to save it to a persistent store object (e.g. to an
array of references). The persistent store (and persistable objects)
would be managed by a single process. When serialising the file, the
store would record its filename.

In other words, an object store with the ability to restore references
to files outside the store. This could be provided on top of Plash's
existing object system.
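
A minimal sketch of what I mean, in Python (the class and method names are
made up; Plash's real object system is richer than this):

    import json

    class PersistentStore:
        # Managed by a single process; holds an array of references.
        def __init__(self, store_path):
            self.store_path = store_path  # where the store itself is saved
            self.refs = []                # live references to file objects

        def save(self):
            # Persistable file objects survive a save by having the store
            # record their filenames, as described above.
            json.dump([f.name for f in self.refs], open(self.store_path, "w"))

        def restore(self):
            # Re-establish references to files outside the store by name.
            self.refs = [open(name) for name in json.load(open(self.store_path))]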

However, I don't think this is immediately necessary. Polaris's
approach for setting up an application with a pet name is simple and
looks adequate for installing apps.
Post by Jed at Webstart
Post by Toby Murray
(Previous discussions on this list have dealt with this, there was
one a few months back from memory that talked about file descriptors
as caps and file names mapping to caps via open() calls, such that
file names act as "lambda names"
see
http://www.eros-os.org/pipermail/cap-talk/2005-August/003883.html
but note that I was wrong in my initial posting. Filenames do not
act as petnames but as lambda names).
Well, I would like to hear from Mark on this topic. If people are
really serious about truly turning Unix into a capability
communication system via this route, then I'd like to get a bit more
serious about it and help push it.
Plash has a capability communication system. You can pass object
references from one process to another. The POSIX API and filenames
are implemented on top of Plash's object system. I'm not sure if you
realised that already, but I can answer questions about it.
Post by Jed at Webstart
Do you see a way to transition Plash onto a growth curve to
relevancy?
My next step for Plash is to patch Gtk so that GtkFileChooser invokes
the powerbox rather than providing a file chooser itself. This should
make a large number of programs usable in a POLA environment. I think
the process of patching Gtk could provide an opportunity for
explaining these ideas to some people.

At the moment I'm talking to folks from the GNU Project about making
Plash into "GNU Plash". Today I've been explaining how powerboxes
work and why they're good for security. It might not pan out but it
seems worth exploring. It's early days, but eventually I'd like for
there to be some umbrella "POLA project" that can encompass the
patches to different components, such as Gtk, and explain to those
projects why they should merge them into their official releases.

It's a slow process for me because I have limited time to work on
this. Take the powerbox code, for example: if you do a diff between
Plash 1.13 and 1.14, it's really not that much code, and it was pretty
simple to implement. But it took about a month, counting from the
release before that. So any help would be welcome. :-)

Mark
David Wagner
2005-11-12 07:45:34 UTC
Permalink
Post by Jed at Webstart
What are we missing that might bring the need for POLA
execution environments enough into the main stream to combat the
current scourge of Trojan horses? Certainly there are plenty of
papers that have been written and published. There are plenty of
systems that have been designed and implemented with demonstrable
results (except of course that none are widespread enough
commercially to demonstrate significant benefit from combating Trojan horses).
I would dispute the "Certainly" part. The standard objection to
fine-grained access control, such as is required by POLA, is that
the burden of managing the access control policies will become
overwhelming. That has been the past evidence. I understand that
this community has put forward the argument that object capability
systems will be different. But I believe there is a dearth of papers
evaluating the actual, real-life cost of security in this model.
I can't point to a single one that provides compelling evidence to
show that the standard concern is misplaced. The systems that have
been built so far (e.g., CapDesk) in some sense look like "toys" when
you put them up against the richness and functionality
of an environment like Windows XP.
Post by Jed at Webstart
I'm looking for strategic thoughts. Perhaps we can draw David into
the discussion to get an idea why the lack of a POLA execution
environment didn't occur to him as something to hate about Unix (Windows).
Citing "lack of POLA" as a failing of Unix makes no sense to me. POLA is
a technique, a means to an end, not a goal. It might make sense to say
"I hate Unix because its network performance is too slow", but it would
be crazy to say "I hate Unix because its TCP stack doesn't use single-copy
data structures".

If you amended your statement to something like "Unix has too many
security problems" or "Unix keeps screwing me over with Trojan horses",
well, that would be something to hate about Unix, if true. But I've
rarely heard that complaint. For many people, standard Unices (like
OpenBSD or Linux) are adequately secure for most purposes. Now if we
want to talk about Microsoft, the answer is probably different...
David Hopwood
2005-11-12 17:41:31 UTC
Permalink
Post by David Wagner
If you amended your statement to something like "Unix has too many
security problems" or "Unix keeps screwing me over with Trojan horses",
well, that would be something to hate about Unix, if true. But I've
rarely heard that complaint. For many people, standard Unices (like
OpenBSD or Linux) are adequately secure for most purposes. Now if we
want to talk about Microsoft, the answer is probably different...
Part of the reason why most people don't perceive Unix to be insecure is
that most security failures in a Unix system manifest as application failures.
The argument that capability supporters are making is that most of those
application failures would be prevented or mitigated if a capability OS
were used.

Comparing Unix to Windows, there are also fewer people trying to attack
Unix systems. (Just a fact; not a defence of Windows.)
--
David Hopwood <***@blueyonder.co.uk>
David Mercer
2005-11-14 08:04:50 UTC
Permalink
Post by David Hopwood
Post by David Wagner
If you amended your statement to something like "Unix has too many
security problems" or "Unix keeps screwing me over with Trojan horses",
well, that would be something to hate about Unix, if true. But I've
rarely heard that complaint. For many people, standard Unices (like
OpenBSD or Linux) are adequately secure for most purposes. Now if we
want to talk about Microsoft, the answer is probably different...
Part of the reason why most people don't perceive Unix to be insecure is
that most security failures in a Unix system manifest as application failures.
The argument that capability supporters are making is that most of those
application failures would be prevented or mitigated if a capability OS
were used.
Most of those application failures lead to exploitation of the ambient
authority in the underlying C-based system, or misconfiguration
leading to use of the faulty application's ambient authority. But the
Unix system call API is in C, so there is of course only so far you
can go there. I'll corroborate the 'unix hacks through application
holes' point with my own anecdotal experience: in over a decade of
running unix systems, the few times I've been personally rooted on the
bare net on any unix flavor were through an un-patched bind nameserver
version and a crufty imap server I forgot to turn off. Both were
written in C, and both had some bit that was setuid root to attach
themselves to a privileged tcp port. Which is a bit that I think might
be handled better in a capability-secure OS, no?
Post by David Hopwood
Comparing Unix to Windows, there are also fewer people trying to attack
Unix systems. (Just a fact; not a defence of Windows.)
Well yes of course there are more Windows systems numerically under
attack, but I think that the average value of compromising a unix
system is going to be higher box for box compared to windows machines.
All of the large oil and automotive networks are on unix, as are
nearly all stock exchange and currency trading platforms. Unix lives
in the middle layers of most bank networks (windows on the front and
mainframes on the back....and worse all over). Those are just things
I've been exposed to in my personal experience as a consultant over 20
years.

Oh, and all of the big phone switch platforms (ATT and Nortel) are
on Unix. As is most of the VOIP that's starting to replace the PSTN.
So yeah, if you wanna get zombies to send spam, target Windows. If
you wanna crack big money flow infrastructure and other high value
targets, Unix is the target du jour. Note that Solaris Sparc crackers
are very, very 3133t (elite).

OpenBSD just happens to be the *nix with the most defaults closed off;
the underlying model is still flawed.

Ciao,

David Mercer
David Hopwood
2005-11-14 11:36:28 UTC
Permalink
Post by David Mercer
Post by David Hopwood
Post by David Wagner
If you amended your statement to something like "Unix has too many
security problems" or "Unix keeps screwing me over with Trojan horses",
well, that would be something to hate about Unix, if true. But I've
rarely heard that complaint. For many people, standard Unices (like
OpenBSD or Linux) are adequately secure for most purposes. Now if we
want to talk about Microsoft, the answer is probably different...
Part of the reason why most people don't perceive Unix to be insecure is
that most security failures in a Unix system manifest as application failures.
The argument that capability supporters are making is that most of those
application failures would be prevented or mitigated if a capability OS
were used.
Most of those application failures lead to exploitation of the ambient
authority in the underlying C-based system, or misconfiguration
leading to use of the faulty application's ambient authority. But the
Unix system call API is in C, so there is of course only so far you
can go there.
I suppose I'm considering Unix and C to be inseparable here. After all,
they were basically codesigned, and C is *still* the only language for
which POSIX officially defines an API binding. So it's not surprising that
many criticisms of Unix are really criticisms of the Unix + C combination.

Would Unix be more secure when used from a capability language? Probably,
but only because it is being used with a redesigned OS <-> application
interface.
Post by David Mercer
I'll corroborate the 'unix hacks through application
holes' point with my own anecdotal experience: in over a decade of
running unix systems, the few times I've been personally rooted on the
bare net on any unix flavor were through an un-patched bind nameserver
version and a crufty imap server I forgot to turn off.
I am always dubious about extrapolating from personal experience to how
secure something is. An experience of a security breach tells you about
an insecurity of that system; lack of experience of a security breach
doesn't tell you much of anything.
Post by David Mercer
Both were written in C, and both had some bit that was setuid root to
attach themselves to a privileged tcp port.
Yes, the old and still not fixed "privileged ports" problem -- a textbook
example of a counterproductive security measure.
Post by David Mercer
Which is a bit that I think might be handled better in a
capability secure OS, no?
Right.
Post by David Mercer
Post by David Hopwood
Comparing Unix to Windows, there are also fewer people trying to attack
Unix systems. (Just a fact; not a defence of Windows.)
Well yes of course there are more Windows systems numerically under
attack, but I think that the average value of compromising a unix
system is going to be higher box for box compared to windows machines.
All of the large oil and automotive networks are on unix, as are
nearly all stock exchange and currency trading platforms. Unix lives
in the middle layers of most bank networks (windows on the front and
mainframes on the back....and worse all over). Those are just things
I've been exposed to in my personal experience as a consultant over 20
years.
I think that widespread exploitation of such systems is a problem waiting
to happen. Especially in networks where you can get past the firewall by
compromising a Windows box (by email or something else that goes through
firewalls), then exploit the juicy Unix interior.
--
David Hopwood <***@blueyonder.co.uk>
David Mercer
2005-11-14 21:54:25 UTC
Permalink
Post by David Hopwood
Post by David Mercer
Post by David Hopwood
Comparing Unix to Windows, there are also fewer people trying to attack
Unix systems. (Just a fact; not a defence of Windows.)
Well yes of course there are more Windows systems numerically under
attack, but I think that the average value of compromising a unix
system is going to be higher box for box compared to windows machines.
All of the large oil and automotive networks are on unix, as are
nearly all stock exchange and currency trading platforms. Unix lives
in the middle layers of most bank networks (windows on the front and
mainframes on the back....and worse all over). Those are just things
I've been exposed to in my personal experience as a consultant over 20
years.
I think that widespread exploitation of such systems is a problem waiting
to happen. Especially in networks where you can get past the firewall by
compromising a Windows box (by email or something else that goes through
firewalls), then exploit the juicy Unix interior.
Right, and I've seen worse than that, too. One of the 20 largest
banks in North America, which shall remain nameless, once had a
frickin' ethernet hub plugged into the network in their main
operations center in such a way that it completely bypassed all the
firewalling that was in place. We found it when traffic that
shouldn't have gotten through without ssh tunneling just popped right
on out. This was in the middle of a large email system upgrade, and
of COURSE the BigCo auditing company missed it completely in their
security audit.

In short: however scary and insecure one thinks current networks are,
the reality is almost certainly much, much worse!

-David Mercer
Richard Uhtenwoldt
2005-11-15 20:50:46 UTC
Permalink
Post by David Mercer
In short: however scary and insecure one thinks current networks are,
the reality is almost certainly much, much worse!
One thing I haven't seen mentioned on these lists hosted at eros-os.org
over the last 4 years is that maybe the present situation is _preferable_
to the situation we capability adherents are trying to bring about.

One can make the argument that the world's multinationals
and governments as presently organized are already or will
inevitably become controlled by evildoers, and consequently it is
actually _desirable_ that their computer systems have security
holes that can be exploited by people fighting the evildoers.

Jefferson would of course immediately understand such an argument.

It is unethical IMO to help advance the IT capabilities of
evildoers.

I do not expect this missive to cause anyone on this list to
re-evaluate their career goal, but when talking about the present
state of information security in the real world, I wish people
would in the future avoid language like "scary and insecure"
because such language implies that it is a foregone conclusion
that increasing the security of a computer system is always a
positive thing.

On the contrary: increasing the security of a computer system is
an ethical act only when the computer system will not be used for
destructive ends.
Jed at Webstart
2005-11-16 01:30:12 UTC
Permalink
Post by Richard Uhtenwoldt
Post by David Mercer
In short: however scary and insecure one thinks current networks are,
the reality is almost certainly much, much worse!
One thing I haven't seen mentioned on these lists hosted at eros-os.org
over the last 4 years is that maybe the present situation is _preferable_
to the situation we capability adherents are trying to bring about.
One can make the argument that the world's multinationals
and governments as presently organized are already or will
inevitably become controlled by evildoers, and consequently it is
actually _desirable_ that their computer systems have security
holes that can be exploited by people fighting the evildoers.
Ha! That's a good one.
Post by Richard Uhtenwoldt
Jefferson would of course immediately understand such an argument.
It is unethical IMO to help advance the IT capabilities of evildoers.
Doesn't this argument amount to an argument against any new
technology that might amplify human power? I don't buy that argument
personally. I don't think it has been borne out so far in the development
of human societies. Truly bad things have happened of course, but overall
I would say that most people are better off. I think the same is true of
information security. However, in this case I think the argument against
information security is even weaker. To me it seems that weak
information security is much more likely to be exploited by "evildoers"
against others than vice versa (e.g. why governments sometimes
intentionally try to weaken security mechanisms such as encryption).
Post by Richard Uhtenwoldt
I do not expect this missive to cause anyone on this list to
re-evaluate their career goal, but when talking about the present
state of information security in the real world, I wish people
would in the future avoid language like "scary and insecure"
because such language implies that it is a foregone conclusion
that increasing the security of a computer system is always a
positive thing.
People can differ. I see lack of IT security mostly from a personal
perspective and from that of the systems I personally use.
David Mercer
2005-11-16 08:19:33 UTC
Permalink
(my replies interspersed with jed's points, as he quotes all of
richard's msg -dm)
Post by Jed at Webstart
Post by Richard Uhtenwoldt
Post by David Mercer
In short: however scary and insecure one thinks current networks are,
the reality is almost certainly much, much worse!
One thing I haven't seen mentioned on these lists hosted at eros-os.org
over the last 4 years is that maybe the present situation is _preferable_
to the situation we capability adherents are trying to bring about.
I have had that thought, more than once in the last 10 years. It has
crossed my mind that a secure enough agoric computing environment
could enable all kinds of nastiness by governments and corporations.
Scared the willies out of one of my business partners during the
dot-com-boom (co-founder of my consulting company). The thought of a
possible techno-dystopia brought about by such tech didn't help the
depression that led up to his suicide the month the NASDAQ crashed.
Post by Jed at Webstart
Post by Richard Uhtenwoldt
One can make the argument that the world's multinationals
and governments as presently organized are already or will
inevitably become controlled by evildoers, and consequently it is
actually _desirable_ that their computer systems have security
holes that can be exploited by people fighting the evildoers.
Ha! That's a good one.
Post by Richard Uhtenwoldt
Jefferson would of course immediately understand such an argument.
It is unethical IMO to help advance the IT capabilities of evildoers.
Doesn't this argument amount to an argument against any new
technology that might amplify human power? I don't buy that argument
personally. I don't think it has been borne out so far in the development
of human societies. Truly bad things have happened of course, but overall
I would say that most people are better off. I think the same is true of
information security. However, in this case I think the argument against
information security is even weaker. To me it seems that weak
information security is much more likely to be exploited by "evildoers"
against others than vice versa (e.g. why governments sometimes
intentionally try to weaken security mechanisms such as encryption).
Yeah, the little guy doesn't have a farm of CRAYs to crack the
corporate or govt. encryption, that's for sure. These concerns are
one of the factors influencing the choice of license (GPL2) for an
SHA2 library I'm writing right now in Haskell. It's portable
infrastructure, and I don't want any closed source derivatives with
backdoors built from it.
Post by Jed at Webstart
Post by Richard Uhtenwoldt
I do not expect this missive to cause anyone on this list to
re-evaluate their career goal, but when talking about the present
state of information security in the real world, I wish people
would in the future avoid language like "scary and insecure"
because such language implies that it is a foregone conclusion
that increasing the security of a computer system is always a
positive thing.
Well, I personally do find the fact that things like the volume of
large-attachment spam (the dancing baby, the time I saw it) can cause ATM
transactions to time out to be a bit scary. And dangerous, if
the pre-auth didn't free instantly, and you need cash for, say, a cab
in a scary place. Yes I'm drawing a hypothetical out of an anecdote,
but the anecdote is pretty horrible in reality! I don't think that
words like 'scary and insecure' are actually hyperbole in such a case.

"Fear all browser cookies", now that's fear-mongering hyperbole. Most
of the ATMs in Mexico grinding to a halt being scary is not.
Post by Jed at Webstart
People can differ. I see lack of IT security mostly from a personal
perspective and from that of the systems I personally use.
Richard Uhtenwoldt
2005-11-13 05:18:14 UTC
Permalink
Post by David Wagner
But I've
rarely heard that complaint. For many people, standard Unices (like
OpenBSD or Linux) are adequately secure for most purposes.
I think you'll get several objections to this assertion. My
objection is that it needs the following qualification.

If users were suddenly to start using a hypothetical "completely
secure" operating system, then they will over the next year or
two come to rely more heavily on the security guarantees provided
by the OS they use so that, e.g., users will begin to store
high-value digital money tokens on computers directly connected
to the public internet. After that, standard Unices will not be
adequately secure for most purposes.

Had I more time to invest in cap talk, maybe I would have
contributed something more valuable to this thread than just this
statement of opinion.
--
Richard Uhtenwoldt
Text by me above is hereby placed in the public domain
Karp, Alan H
2005-11-15 01:18:22 UTC
Permalink
Post by Jed at Webstart
Sure, one can work to build something like Plash for 'IX or Polaris
for Windows where you try to use the minimal facilities that are
there and then turn the access control paradigm on its head to get
something more like POLA. As you see in both Plash and Polaris they
create what amounts to an artificial 'user' with no permissions and
then grant a program running as that user limited permissions based
on POLA criteria. However, this amounts to ignoring access control
mechanisms provided by the native APIs and putting something else in
their place using more primitive facilities. Even once such a new
mechanism is in place it finds it constantly fighting the native
access control mechanisms.
One minor correction. Polaris doesn't fight the native access control
mechanisms. It uses them. The problem with being constrained the way
we are is getting "Least" authority. The best we can say is that
Polaris supports the "Principle of Less Authority."

________________________
Alan Karp
Principal Scientist
Virus Safe Computing Initiative
Hewlett-Packard Laboratories
1501 Page Mill Road
Palo Alto, CA 94304
(650) 857-3967, fax (650) 857-7029
https://ecardfile.com/id/Alan_Karp
http://www.hpl.hp.com/personal/Alan_Karp
Karp, Alan H
2005-11-18 21:17:30 UTC
Permalink
Post by John C. McCabe-Dansted
If we needed proper persistent, unforgeable capabilities, it seems it would
be relatively easy to store them as normal files on the filesystem (owned by
the cap-manager to prevent forgery), containing the lambda name granted, or
in a simple DB. The cap-manager would be responsible for pickling and
restarting processes that need to survive a reboot.
Is there some reason why this would be problematic?
This approach is very similar to what we did in the Client Utility
(e-speak Beta). Our problem was somewhat easier, though, because we
weren't trying to emulate an existing operating system.

Each client (process) had a private namespace (file system) consisting
of an ordered set of name frames (directories). Each name frame
contained mappings from lambda (file) names to repository handles (inode
numbers). Name frames could be declared transient or persistent, the
latter surviving across reboots. When one user/process wanted to share
a revocable capability with another, it would create a new name frame,
insert the bindings to be shared, and send a reference to the frame to
the other user/process. Revocation involved destroying that name frame.
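
A toy model of those name frames, as I read the description (the class and
method names are mine, not e-speak's):

    class NameFrame:
        # Maps lambda (file) names to repository handles (inode numbers).
        def __init__(self, persistent=False):
            self.persistent = persistent  # persistent frames survive reboots
            self.bindings = {}
            self.alive = True

        def bind(self, name, handle):
            self.bindings[name] = handle

        def lookup(self, name):
            if not self.alive:
                raise LookupError("name frame has been revoked")
            return self.bindings[name]

        def revoke(self):
            # Destroying the frame revokes everything shared through it.
            self.alive = False
            self.bindings.clear()

    # Sharing a revocable capability: make a frame, insert the bindings,
    # and send a reference to the frame to the other user/process.
    shared = NameFrame()
    shared.bind("report.txt", 4711)  # 4711 is a made-up repository handle
    # ...later, to revoke: shared.revoke()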

________________________
Alan Karp
Principal Scientist
Virus Safe Computing Initiative
Hewlett-Packard Laboratories
1501 Page Mill Road
Palo Alto, CA 94304
(650) 857-3967, fax (650) 857-7029
https://ecardfile.com/id/Alan_Karp
http://www.hpl.hp.com/personal/Alan_Karp
David Wagner
2005-11-25 21:43:46 UTC
Permalink
Post by David Chizmadia (JHU)
Post by Sandro Magi
Is it possible to interject some sort of authentication step before a
capability request is satisfied? In explaining the web-calculus to
others, they've often expressed dismay that the unguessable URL is the
only authentication required to access a resource. Their main concern is
that a user might inadvertently leak a URL to a third party who
shouldn't have access to the resource.
Of course, the underlying problem is a complete failure on
the part of those looking for the extra authentication to
understand the concept of capability discipline, but if this
provides comfort, it is worth it.
I don't understand your comment. This is not about comfort and warm
fuzzies. This is about what seems to me to be a valid real-world concern.
Framing this as about "comfort" leaves the impression that you consider
this to be not a real problem and the questioner not thoughtful enough
to understand why. If the goal is to see capabilities more widely used,
this kind of response seems, quite frankly, more likely to put people off
than to win converts. I'd think that patient education and explanation of
your preferred solution is going to be a lot more effective than telling
people they don't understand capability discipline. Perhaps you could
explain what is a better solution to this problem that is more in keeping
with capability discipline. I like to think that anyone who is reading
this list is likely to be interested enough to listen to your preferred
solution, and thoughtful enough to have the capacity to understand.
David Chizmadia (JHU)
2005-11-25 23:00:32 UTC
Permalink
Post by David Wagner
Post by David Chizmadia (JHU)
Post by Sandro Magi
Is it possible to interject some sort of authentication step before a
capability request is satisfied? In explaining the web-calculus to
others, they've often expressed dismay that the unguessable URL is the
only authentication required to access a resource. Their main concern is
that a user might inadvertently leak a URL to a third party who
shouldn't have access to the resource.
Of course, the underlying problem is a complete failure on
the part of those looking for the extra authentication to
understand the concept of capability discipline, but if this
provides comfort, it is worth it.
I don't understand your comment. This is not about comfort and warm
fuzzies. This is about what seems to me to be a valid real-world concern.
Framing this as about "comfort" leaves the impression that you consider
this to be not a real problem and the questioner not thoughtful enough
to understand why.
I agree that it is a real concern in terms of tradeoffs, but it
is indeed the case that the questioner has either not thought through
or not asked enough questions about the principles of capability
discipline. Caveating that I'm referring to object capabilities
here, my understanding of those principles is as follows:

* An object capability contains both the designation (name) of
a resource and the authority required to access that resource;

* Possessing the capability, such that it can be presented to the
resource server, is the only necessary and sufficient condition
for actually getting access to the resource;

* Revocation and additional accountability are accomplished through
the careful use of facades (membranes?);

As with all security architectures, capability discipline implies
certain obligations on the capability wielder (i.e., client). When
these obligations cannot be met in a particular environment, the
security architect must reconsider the use of capabilities. (When
doing so however, the architect will often find that other approaches
actually impose the same obligations on the client and are therefore
no more or less effective.)
Post by David Wagner
If the goal is to see capabilities more widely used,
this kind of response seems, quite frankly, more likely to put people off
than to win converts.
I agree, but I was speaking to someone who appears to have
been a convert longer than myself, so I was considerably more
blunt than I would be with someone I was trying to convince.
Post by David Wagner
I'd think that patient education and explanation of your preferred
solution is going to be a lot more effective than telling people
they don't understand capability discipline.
Absolutely! I do find the comment somewhat ironic since
education and explanation is usually only required when a
person does, in fact, not understand something; otherwise,
it is either being pedantic or more correctly characterized
as arguing architecture ;-)
Post by David Wagner
Perhaps you could
explain what is a better solution to this problem that is more in keeping
with capability discipline. I like to think that anyone who is reading
this list is likely to be interested enough to listen to your preferred
solution, and thoughtful enough to have the capacity to understand.
I was having trouble understanding the first sentence, when I
realized that the quote above dropped my statement that the solution
to the originally stated problem is to use the SSL protocol (which is
an integral component of HTTPSY) in mutual authentication mode. I
assert that this would provide all of the advantages of passwords
with greater assurance.
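
For what it's worth, demanding mutual authentication is only a few lines
with a standard TLS library; e.g. in Python (the certificate file names are
placeholders):

    import ssl

    # Server side: present our own certificate and require the client to
    # present one signed by a CA we trust, all during the TLS handshake.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
    ctx.load_verify_locations(cafile="trusted-clients.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert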

Looking at the problem from a "pure" capability perspective, the
more correct solution would be to require the server to hand out a
different YURL to each unique client and keep a record of what known
characteristics of the client make that client unique to the server.
Then, when the YURL is presented to the server for access, the server
can check the known characteristics of the client presenting the YURL
against its record for the client that was given the YURL. If the
server had required client login, then it would make perfect sense to
use that login information as a unique characteristic.
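
A minimal sketch of that bookkeeping (the names and the URL shape are
invented for illustration):

    import secrets

    issued = {}  # unguessable YURL suffix -> the client it was minted for

    def issue_yurl(client_id):
        swiss = secrets.token_urlsafe(32)  # the unguessable part of the YURL
        issued[swiss] = client_id          # record who was given this YURL
        return "https://example.yurl.net/cap/" + swiss

    def check_access(swiss, presenting_client_id):
        # Grant access only if the presenter matches the client the YURL
        # was originally minted for (e.g. the login identity).
        return issued.get(swiss) == presenting_client_id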

-DMC
Sandro Magi
2005-11-26 00:34:59 UTC
Permalink
Post by David Chizmadia (JHU)
As with all security architectures, capability discipline implies
certain obligations on the capability wielder (i.e., client). When
these obligations cannot be met in a particular environment, the
security architect must reconsider the use of capabilities. (When
doing so however, the architect will often find that other approaches
actually impose the same obligations on the client and are therefore
no more or less effective.)
Indeed, which is why I posited the need for an authenticated session of
some sort.
Post by David Chizmadia (JHU)
Absolutely! I do find the comment somewhat ironic since
education and explanation is usually only required when a
person does, in fact, not understand something;
Well, you could say I don't understand if there's a proper solution for
capability URLs in the described scenario. :-)
Post by David Chizmadia (JHU)
I was having trouble understanding the first sentence, when I
realized that the quote above dropped my statement that the solution
to the originally stated problem is to use the SSL protocol (which is
an integral component of HTTPSY) in mutual authentication mode. I
assert that this would provide all of the advantages of passwords
with greater assurance.
Except that it authenticates machines, not users. The advantage of web
applications is mobility.

http://eros.cs.jhu.edu/pipermail/cap-talk/2005-November/004200.html

The above is motivated by real-world uses I've encountered. The data
wasn't incredibly important so it didn't warrant truly hardcore
protection, but the business had to have some sort of protection, and
the simple authentication methods available in most web frameworks were
sufficient. Since authentication is not part of the web-calculus, I was
wondering how it could be supported.

I think I've now accepted that "authentication" in the web-calculus can
be accomplished by constructing session-limited facades. Traversing the
object graph builds these new proxies/facades on the fly. Accessing the
original URL causes a redirect to a proxy after successful
authentication(?). These proxies can be garbage collected after a
specified timeout, just like in typical web frameworks. I believe this
strategy is what coderman was describing.
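
Roughly what I have in mind, as a sketch (all the names and the timeout
value are invented):

    import secrets
    import time

    SESSION_TTL = 30 * 60  # seconds, like a typical web-framework timeout

    class Facade:
        # A session-limited proxy for some underlying object.
        def __init__(self, target):
            self.target = target
            self.expires = time.time() + SESSION_TTL

        def invoke(self, *args):
            if time.time() > self.expires:
                raise PermissionError("session facade has expired")
            return self.target(*args)

    facades = {}  # session id -> Facade

    def login(target, credentials_ok):
        # After successful authentication, redirect to a fresh facade URL.
        if not credentials_ok:
            raise PermissionError("authentication failed")
        session_id = secrets.token_urlsafe(16)
        facades[session_id] = Facade(target)
        return "/session/" + session_id

    def collect_garbage():
        # Drop expired facades, like the timeout described above.
        now = time.time()
        for sid in [s for s, f in facades.items() if f.expires < now]:
            del facades[sid]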

So authentication is possible, but is it feasible (in terms of
resources)? :-)

Sandro
Rob J Meijer
2005-11-26 06:19:00 UTC
Permalink
Post by David Chizmadia (JHU)
Post by David Wagner
Post by David Chizmadia (JHU)
Post by Sandro Magi
Is it possible to interject some sort of authentication step before a
capability request is satisfied? In explaining the web-calculus to
others, they've often expressed dismay that the unguessable URL is the
only authentication required to access a resource. Their main concern is
that a user might inadvertently leak a URL to a third party who
shouldn't have access to the resource.
Of course, the underlying problem is a complete failure on
the part of those looking for the extra authentication to
understand the concept of capability discipline, but if this
provides comfort, it is worth it.
I don't understand your comment. This is not about comfort and warm
fuzzies. This is about what seems to me to be a valid real-world concern.
Framing this as about "comfort" leaves the impression that you consider
this to be not a real problem and the questioner not thoughtful enough
to understand why.
I agree that it is a real concern in terms of tradeoffs, but it
is indeed the case that the questioner has either not thought through
or not asked enough questions about the principles of capability
discipline. Caveating that I'm referring to object capabilities
* An object capability contains both the designation (name) of
a resource and the authority required to access that resource;
* Possessing the capability, such that it can be presented to the
resource server, is the only necessary and sufficient condition
for actually getting access to the resource;
* Revocation and additional accountability are accomplished through
the careful use of facades (membranes?);
As with all security architectures, capability discipline implies
certain obligations on the capability wielder (i.e., client). When
these obligations cannot be met in a particular environment, the
security architect must reconsider the use of capabilities. (When
doing so however, the architect will often find that other approaches
actually impose the same obligations on the client and are therefore
no more or less effective.)
At the risk of sounding stupid, I've always kinda understood that being
able to 'transfer' a capability is itself a capability. Thus the fact that
you can both hold and copy/forward an 'unguessable' URL that represents a
capability makes it, to my mind, a dual capability reference, holding also
the implicit capability to forward it.

Thus I would think that if you want a 'single' capability reference, then
this reference should not have a transfer capability to itself implied, and
in order to remove the transfer capability from an unguessable URL, I would
think that linking the URL to its holder would help to overcome the
shortcomings of the use of storable capability references.
Having said this, I believe that in order to be able to create practical
networked capability system architectures using a hybrid set of platforms,
integration with authentication systems (Kerberos, I would think, to be
practical) would actually be THE most crucial part of making this viable.

I agree that in a pure POLA environment authentication on all but the host
level would be without merit, and implicitly side-channel-transferable
capabilities would be of no consequence, but given that in any practical
hybrid networked situation there will be many side channels, the
integration of authentication (Kerberos?) to remove this implicit
transferability would, I think, be of the utmost importance.

Rob
David Wagner
2005-11-25 22:55:18 UTC
Permalink
Post by coderman
if the real concern is insecure design (allowing private YURL's to be
leaked accidentally) no amount of code or process will restore trust
where it cannot be placed.
Unfortunately, the two do not seem to be equivalent. "insecure design"
is not the only way that "private YURL's [can be] leaked accidentally".
The email you are responding to gave another example: if humans ever see
YURLs, then it is very easy to imagine that humans might occasionally
leak those YURLs to others without intending the consequences that follow.
Do you have any thoughts on the best way to deal with this risk?
coderman
2005-11-25 23:11:44 UTC
Permalink
Post by David Wagner
...
Unfortunately, the two do not seem to be equivalent. "insecure design"
is not the only way that "private YURL's [can be] leaked accidentally".
The email you are responding to gave another example: if humans ever see
YURLs, then it is very easy to imagine that humans might occasionally
leak those YURLs to others without intending the consequences that follow.
Do you have any thoughts on the best way to deal with this risk?
leaking a 256bit identifier via spoken transport is difficult; how
would you get someone to read you 64 hex digits in a phishing/social
eng. attack?

when dealing with such identifiers in a user interface they should be
aliased with a pet name; they are meaningless to human brains. a pet
name in this instance could be embodied as a bookmark, a hyperlink,
etc.
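
a petname table can be as simple as this (toy sketch; persistence and UI
omitted, and the example URL is made up):

    petnames = {}  # pet name -> full YURL; only pet names ever reach the UI

    def bookmark(pet, yurl):
        petnames[pet] = yurl  # e.g. bookmark("my bank", "https://xxxx.yurl.example/")

    def follow(pet):
        # the 64-hex-digit identifier never has to be spoken or retyped
        return petnames[pet]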

you can't prevent malicious users from intentionally exposing secret
YURL's but you can make it very difficult to disclose a secret YURL by
accident.
David Wagner
2005-11-26 09:50:38 UTC
Permalink
Post by David Chizmadia (JHU)
* An object capability contains both the designation (name) of
a resource and the authority required to access that resource;
* Possessing the capability, such that it can be presented to the
resource server, is the only necessary and sufficient condition
for actually getting access to the resource;
* Revocation and additional accountability are accomplished through
the careful use of facades (membranes?);
Those are good principles when we're talking about the design of
software. But everything is different when the client is actually
a human. If YURLs are visible to the user, then I'd be concerned
that users might forward those YURLs to others without realizing the
security implications of doing so. That sounds unfortunate from a
human factors point of view.

The original poster asked about how to provide mutual authentication,
but I suspect the real question here is how to deal with the risk of
YURLs being leaked to others. That question makes me worry that the
model of YURLs is at odds with the mental model that users have, and
wonder whether capabilities are really the right solution for human
consumption. It feels like tacking mutual authentication onto YURLs
can't be the right solution (as I think you were suggesting), because
it seems to defeat the whole point of capabilities (why bother with a
capability model in the first place if you're going to add mutual
authentication anyway?).
Sandro Magi
2005-11-26 08:16:01 UTC
Permalink
Post by David Wagner
The original poster asked about how to provide mutual authentication,
but I suspect the real question here is how to deal with the risk of
YURLs being leaked to others.
Yes, that would be a better description of the problem I posed, although
I'd amend "leaked" to "unintentionally leaked". Authentication is simply
a means to achieving that end.

I have no problem with users intentionally passing capabilities since
they can be made accountable for any abuses. I'd simply like to ensure
that a user *knows* when he is passing a capability. If a user is
utilizing a third-party computer (internet cafe?), browser history makes
capability-based web applications useless because it would leak all of
his authority *unintentionally*.
Post by David Wagner
That question makes me worry that the
model of YURLs is at odds with the mental model that users have, and
wonder whether capabilities are really the right solution for human
consumption.
I think there are only a few corner cases that would surprise users
(like the one I proposed).
Post by David Wagner
It feels like tacking mutual authentication onto YURLs
can't be the right solution (as I think you were suggesting), because
it seems to defeat the whole point of capabilities (why bother with a
capability model in the first place if you're going to add mutual
authentication anyway?).
Capability designs have other advantages which make them attractive.

Was I premature in concluding that dynamically constructed,
session-limited facades will adequately solve the problem? I'm certainly
open to other possibilities; session-limited authority is a fairly
common pattern in web applications though.

Sandro
David Wagner
2005-11-26 09:58:03 UTC
Permalink
Post by coderman
Post by David Wagner
if humans ever see
YURLs, then it is very easy to imagine that humans might occasionally
leak those YURLs to others without intending the consequences that follow.
Do you have any thoughts on the best way to deal with this risk?
when dealing with such identifiers in a user interface they should be
aliased with a pet name; they are meaningless to human brains. a pet
name in this instance could be embodied as a bookmark, a hyperlink,
etc.
Petnames are plausible for frequently visited sites. I can see
having a petname for cnn.com. But petnames for one-time URLs don't
seem very attractive, from a usability stand point. If my friend
forwards me a link in email, do I have to set up a petname before I
can follow it? Doesn't sound very attractive. Then, there's also
the issue of how I forward a link to a friend. Let's say I visit a
site I really like and I want to send a link to my friend (but without
giving away my authentication tokens, obviously). I'm not quite sure
how one would go about supporting this kind of use case in a flexible,
general fashion. Right now I just click on a link and copy the URL
into the clipboard, but doing that with YURLs would risk unwanted
leakage of capabilities.
Post by coderman
you can't prevent malicious users from intentionally exposing secret
YURL's but you can make it very difficult to disclose a secret YURL by
accident.
Right. Preventing accidental disclosure of YURLs is the problem I'd
like to learn how to solve. Can it be done without harming usability?