# Schutte’s Critique of the Self-Sovereign Identity Principles
I’m taking a quick pass through Christopher Allen’s 10 principles for Self-Sovereign Identity, with an eye toward highlighting the primary shortcomings that I perceive. Note: I have a very unusual take on this. I understand that. I’m trying to be guided primarily by how mechanisms of coherence formation, perception and interaction amongst agents operate in complex adaptive systems.
I assume that the meta-patterns we can observe in how nature organizes itself are FUNCTIONAL ADAPTATIONS THAT HAVE EMERGED OVER COUNTLESS MILLENNIA OF TRIAL AND ERROR BECAUSE THEY BALANCE TENSIONS OF RESILIENCE (achieved through generation of diversity) AND EFFICIENCY (achieved through curation of diversity via actual interactions with surrounding entities, which cumulatively constitute an environment). This process of GENERATION and CURATION, often referred to as “Evolution,” has come up with some pretty decent patterns: after trying pretty much darn near everything, the patterns that persist are the ones that keep finding formation and activation in our present-day world. And yes, I’m stating that the patterns themselves are a product of evolution. The stuff that worked continues to make an appearance. The patterns that lead to self-extinguishing dead-ends are not so common (though they have the potential to be generated anew).
Chris Allen’s 10 Principles for Self-Sovereign Identity
Schutte’s Take:
Naming it “Self-Sovereign Identity” packs in a bunch of false assumptions. The MetaCurrency Project focuses on generating adaptive capacity for individuals and organizations, and as a result works from a concept of Mutual Sovereignty.
This is not to undermine the importance of the individual, but it is intended to draw attention to the way the word itself is misleading: “in-dividual” means non-divisible, yet even individuals are composed of intricate sets of collaborations between various agents. (And yes, this is a “turtles all the way down” situation.)
A perceived sense of “a self” is the product that emerges from the interactions of these various agents.
Their coherent operation takes form in the world in ways that enable other actors to treat them as if they were a single actor rather than the complex set of collaborations amongst different processes that they actually are.
Of course, this is a heuristic, and like all heuristics, it may be a useful shortcut, but that doesn’t mean it accurately reflects reality.
In truth, our “self” is constantly interacting with agents both externally and internally, and these transform the functioning (and even the perceived boundaries) of the self.
Simple example: Observe the difference in capacity and skillsets when you compare me in a normal state, and me after my body’s cells begin collaborating with the better part of a bottle of tequila. When “collaborating” with strong drink, I may not be as adept at driving as when those cells are being sustained by just water and other nutrients.
## On to the principles!
1. Existence. Users must have an independent existence. Any self-sovereign identity is ultimately based on the ineffable “I” that’s at the heart of identity. It can never exist wholly in digital form. This must be the kernel of self that is upheld and supported. A self-sovereign identity simply makes public and accessible some limited aspects of the “I” that already exists.
Schutte’s Take:
This is the first false assumption: the belief in identity as an object. The perception of an “I” is a heuristic that simplifies information processing and decision making, but it is not an underlying reality that we should be anchoring identity processes to, at least not in total. There is truth in the observation that an entity has a coherence distinct from others, but as Joe Andrieu phrases it, “Identity is in the eye of the beholder.” This is true even when the beholder is the self. Natalie Smolenski’s paper about the shifting boundaries of self touches on this aspect as well.
2. Control. Users must control their identities. Subject to well-understood and secure algorithms that ensure the continued validity of an identity and its claims, the user is the ultimate authority on their identity. They should always be able to refer to it, update it, or even hide it. They must be able to choose celebrity or privacy as they prefer. This doesn’t mean that a user controls all of the claims on their identity: other users may make claims about a user, but they should not be central to the identity itself.
Schutte’s Take:
This assumes that:
1) “identities” are static referents, and
2) identities are maintained at a system-wide scale.
These claims align with past attempts at identity administration architectures, but don’t map to the actual functioning of identity in the real world.
I would argue that:
1) claims are all that exist
2) these claims can be thought of as signals that are “published” (sent) by some actors and “received” (sensed) by others. After receipt, the recipient bears the burden of prioritizing and interpreting the signals that they have sensed.
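
To make this claims-as-signals framing concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption of mine rather than part of the original argument: the `Claim` fields, the trust weights, and the example issuers are invented, and the point is only that two recipients can sense the same published claims yet prioritize them very differently.

```python
# Minimal sketch (not a specification): claims as signals that are published by
# some actors and then weighed and interpreted by each recipient on their own terms.
from dataclasses import dataclass

@dataclass
class Claim:
    issuer: str    # who published (sent) the signal
    subject: str   # who the claim is about
    content: str   # the assertion itself

@dataclass
class Recipient:
    name: str
    trust: dict    # issuer -> weight this recipient gives that issuer's signals

    def interpret(self, claims):
        """Prioritize received signals according to this recipient's own trust weights."""
        weighted = [(self.trust.get(c.issuer, 0.0), c) for c in claims]
        return sorted(weighted, key=lambda pair: pair[0], reverse=True)

claims = [
    Claim(issuer="acme_bank", subject="alice", content="account in good standing"),
    Claim(issuer="random_blog", subject="alice", content="unreliable borrower"),
]

# Two recipients sense the same signals but weigh them very differently.
bob = Recipient(name="bob", trust={"acme_bank": 0.9, "random_blog": 0.1})
eve = Recipient(name="eve", trust={"random_blog": 0.8})
print(bob.interpret(claims))
print(eve.interpret(claims))
```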
There are complex adaptive system dynamics in play here that lead to a differentiation in the sensitivities of various actors.
3. Access. Users must have access to their own data. A user must always be able to easily retrieve all the claims and other data within his identity. There must be no hidden data and no gatekeepers. This does not mean that a user can necessarily modify all the claims associated with his identity, but it does mean they should be aware of them. It also does not mean that users have equal access to others’ data, only to their own.
Schutte’s Take:
Again, though noble in intent, this does not map to reality. Remember: “Identity is in the eye of the beholder.”
If I see you slap a child, that impression gets “written” on my brain. You don’t have access to it.
If I later write it in my notebook, you still might not have access to it.
If it is shared with someone else in private, you won’t necessarily have access to it.
These private channels of impression, interpretation and communication are critically important, and yet they do not lend themselves to the type of “user-centric” identity scheme being proposed here.
4. Transparency. Systems and algorithms must be transparent. The systems used to administer and operate a network of identities must be open, both in how they function and in how they are managed and updated. The algorithms should be free, open-source, well-known, and as independent as possible of any particular architecture; anyone should be able to examine how they work.
Schutte’s Take:
Transparency is useful, but it always comes at a cost. Some levels of detail are irrelevant (until they are not). Other levels of detail can actually be obfuscatory (who among you has read the entire tax code?).
Transparency as a principle is really an attempt to indicate that the processes we rely upon should be auditable, i.e. audit-able. In order to audit (assess) a system, we need not just access to the details, but the literacy to interpret those details. Furthermore, we need the capability to take action based on what we find.
If we are given details that we cannot understand, or that we have no way of acting upon, they don’t do us much good.
A critical part of making something understandable and actionable is the ability to synthesize the details: to convert them from a form filled with many signals into another form with fewer signals but more relevant meaning.
Simple example: 10,000 ratings of restaurants in my neighborhood, each on different attributes and all listed as numbers on a wall, might not do me much good. To my eyes, the volume of “signals” would likely be overwhelming (particularly if each was organized in some not-so-easy-to-interpret-at-a-glance structure like JSON and not ordered in any useful way). I would see just a whole bunch of stuff. Too much, most likely. But run that same information through a filter, distill those ratings down to something like an average score for each restaurant on price, atmosphere, taste and timeliness, and then put two restaurants side by side with those “synthesized assessments,” and we have information that I can act upon.
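
As one way to picture that filtering step, here is a minimal sketch in Python. The rating records, attribute names, and restaurant names are all invented for illustration; the sketch simply collapses many raw rating signals into one synthesized assessment per restaurant so two of them can be compared side by side.

```python
# Minimal sketch: distill many raw rating signals into per-restaurant averages.
# The input records and attribute names are illustrative assumptions only.
from collections import defaultdict
from statistics import mean

ratings = [
    {"restaurant": "Taqueria Luna", "price": 4, "atmosphere": 3, "taste": 5, "timeliness": 4},
    {"restaurant": "Taqueria Luna", "price": 3, "atmosphere": 4, "taste": 5, "timeliness": 3},
    {"restaurant": "Noodle Bar", "price": 2, "atmosphere": 5, "taste": 4, "timeliness": 5},
]

def synthesize(raw_ratings, attributes=("price", "atmosphere", "taste", "timeliness")):
    """Collapse individual rating signals into one summary per restaurant."""
    grouped = defaultdict(lambda: defaultdict(list))
    for record in raw_ratings:
        for attr in attributes:
            grouped[record["restaurant"]][attr].append(record[attr])
    return {
        name: {attr: round(mean(scores), 1) for attr, scores in attrs.items()}
        for name, attrs in grouped.items()
    }

# Side-by-side comparison of two synthesized assessments.
summaries = synthesize(ratings)
for name in ("Taqueria Luna", "Noodle Bar"):
    print(name, summaries[name])
```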
It will be the same for information about the very processes that we rely upon for making, storing, and interpreting claims about agents using a digital system.
We will rely not only upon transparency, but also upon synthesis and upon judgments about when to “dive deeper into the details,” what level of minutiae to ignore altogether, and whom to rely upon for distillation. This process will itself operate in the manner of a complex adaptive system, and it will get us to answers that prove useful, though certainly not to truth. Truth, as I and others have pointed out elsewhere, is too costly to be maintained in all of its painstaking detail.
5. Persistence. Identities must be long-lived. Preferably, identities should last forever, or at least for as long as the user wishes. Though private keys might need to be rotated and data might need to be changed, the identity remains. In the fast-moving world of the Internet, this goal may not be entirely reasonable, so at the least identities should last until they’ve been outdated by newer identity systems. This must not contradict a “right to be forgotten”; a user should be able to dispose of an identity if he wishes and claims should be modified or removed as appropriate over time. To do this requires a firm separation between an identity and its claims: they can’t be tied forever.
Schutte’s Take:
This persistence principle sounds attractive, but introduces risk. It also builds upon the same flawed framework of “an Identity is an object” and “Identity objects will be managed at system scale rather than by individual observers.” These are fatal flaws that do not map to how signals, agents, interpretation and steering operate in complex adaptive systems.
We can choose not to willingly pull our previous interactions into the present relationship. However, we are incapable of preventing others from attempting to correlate our past with our present, or of entirely preventing them from taking steps to make our present interaction more discoverable by those who interact with us in the future.
There are ways we can push others to reduce the level of such sharing, but these work primarily through social pressure, not through technical limitations of the infrastructure we make use of.
6. Portability. Information and services about identity must be transportable. Identities must not be held by a singular third-party entity, even if it’s a trusted entity that is expected to work in the best interest of the user. The problem is that entities can disappear — and on the Internet, most eventually do. Regimes may change, users may move to different jurisdictions. Transportable identities ensure that the user remains in control of his identity no matter what, and can also improve an identity’s persistence over time.
7. Interoperability. Identities should be as widely usable as possible. Identities are of little value if they only work in limited niches. The goal of a 21st-century digital identity system is to make identity information widely available, crossing international boundaries to create global identities, without losing user control. Thanks to persistence and autonomy these widely available identities can then become continually available.
Schutte’s Take (on Portability and Interoperability):
The way that those of us at the MetaCurrency Project might frame this is: our ability to communicate and interoperate should not be enclosable by any third party.
Along these lines, interoperability is certainly a goal, for communication requires that social preferences, rather than technological limitations, constrain whom we interact with. That is not to say that interoperability will come without cost, or without loss of meaning. Any claim is always made in a context, and parts of its meaning depend on that context. When a claim is carved off from its context and shared with others (who by necessity do not completely share that same context), some meaning is lost or altered in the process. This is natural, but it is also worth noting and designing for. At the MetaCurrency Project we think about the way context shapes meaning as analogous to Genotypes (raw code) and Phenotypes (code expressed in a particular context). Once code is in a particular context, it behaves in ways that get shaped by that context. DNA provides a great example of this. More detail is available in the as yet unfinished Ceptr Revelation Document.
8. Consent. Users must agree to the use of their identity. Any identity system is built around sharing that identity and its claims, and an interoperable system increases the amount of sharing that occurs. However, sharing of data must only occur with the consent of the user. Though other users such as an employer, a credit bureau, or a friend might present claims, the user must still offer consent for them to become valid. Note that this consent might not be interactive, but it must still be deliberate and well-understood.
Schutte’s Take:
Again, this seems to be an appropriate principle for an “identity system” in which “identities are objects” that are “managed at the system level.” None of those assumptions feels appropriate to me; see the comments above for more detail. On the other hand, my expectations about how you will use what I share should be upheld. If they are not, I, and others like me, will cease to interact with you (or will become unwilling to trust our assessments of context and risk). This will trigger a reduction of interactions, i.e. the withdrawal of future consent through alternate mechanisms.
9. Minimalization. Disclosure of claims must be minimized. When data is disclosed, that disclosure should involve the minimum amount of data necessary to accomplish the task at hand. For example, if only a minimum age is called for, then the exact age should not be disclosed, and if only an age is requested, then the more precise date of birth should not be disclosed. This principle can be supported with selective disclosure, range proofs, and other zero-knowledge techniques, but non-correlatibility is still a very hard (perhaps impossible) task; the best we can do is to use minimalization to support privacy as best as possible.
Schutte’s Take:
This is a good goal to aim for. To accomplish it, users will likely rely upon others to give them guidance with regard to “how much information is enough information.”
A couple of years ago at IIW, I mapped out a rough protocol for querying an agent under a one-time pseudonym to discover what they had to offer and the mechanisms by which a user could gain access. Based on what was returned, a user could then submit an appropriately narrow set of claims, certifications, signatures, etc. to begin an interaction (a hypothetical sketch follows this list) without:
1) disclosing more than necessary in that one exchange and
2) enabling the party that you are interacting with to build up a profile of you through multiple independent exchanges (counter to your intent).
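
For illustration only, here is a rough Python sketch of how such an exchange might look. It is not the protocol mapped out at IIW; the wallet, claim names, service stub, and discovery calls are all hypothetical, and the sketch only shows the shape of the idea: a fresh one-time pseudonym per exchange, discovery of what the counterparty requires, and then a disclosure limited to exactly those claims.

```python
# Hypothetical sketch only: a one-time pseudonym, discovery of what the other
# party requires, and a disclosure limited to exactly those claims.
import secrets
from dataclasses import dataclass

@dataclass
class Offer:
    """What a counterparty says it offers and which claims it requires."""
    services: list
    required_claims: list  # e.g. ["over_21"]

@dataclass
class Wallet:
    """A user-held set of claims, disclosed as narrowly as possible."""
    claims: dict  # claim name -> (value, attestation)

    def new_pseudonym(self) -> str:
        # A fresh, one-time identifier so separate exchanges cannot be linked.
        return secrets.token_hex(16)

    def minimal_disclosure(self, offer: Offer) -> dict:
        # Send only the claims the offer actually asks for, nothing more.
        return {name: self.claims[name]
                for name in offer.required_claims if name in self.claims}

class DemoService:
    """A toy counterparty used only to exercise the sketch."""
    def describe(self, pseudonym):
        return Offer(services=["age-gated purchase"], required_claims=["over_21"])

    def request_access(self, pseudonym, disclosure):
        return {"pseudonym": pseudonym, "granted": "over_21" in disclosure}

def begin_interaction(wallet: Wallet, service) -> dict:
    pseudonym = wallet.new_pseudonym()             # 1. approach under a one-time pseudonym
    offer = service.describe(pseudonym)            # 2. discover what is offered / required
    disclosure = wallet.minimal_disclosure(offer)  # 3. appropriately narrow set of claims
    return service.request_access(pseudonym, disclosure)

wallet = Wallet(claims={"over_21": ("true", "signed-by-some-issuer"),
                        "home_address": ("123 Elm St", "self-asserted")})
print(begin_interaction(wallet, DemoService()))
```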
10. Protection. The rights of users must be protected. When there is a conflict between the needs of the identity network and the rights of individual users, then the network should err on the side of preserving the freedoms and rights of the individuals over the needs of the network. To ensure this, identity authentication must occur through independent algorithms that are censorship-resistant and force-resilient and that are run in a decentralized manner.
Schutte’s Take:
I agree! Though I would argue that “decentralized” is actually the wrong language, and that what we actually want is distributed systems that can mutate (thus enabling adaptation even at that layer) while still maintaining the possibility of interoperability.