Yesterday, Worldcoin, a crypto project co-founded by OpenAI CEO Sam Altman, officially launched. It is a crypto identity system that generates and verifies a World ID for each user by scanning their iris with a specialized eye-scanning device called the Orb.

On the same day, Ethereum co-founder Vitalik Buterin published an article titled "What do I think about biometric proof of personhood?", in which he laid out his views on Worldcoin and proof of personhood.

One of the trickiest but most valuable things people in the Ethereum community have been trying to build is a decentralized proof-of-personhood solution: a limited form of real-world identity that attests that a registered account is controlled by a real person (and a different real person from every other registered account), ideally without revealing which real person that is.

There have been many efforts in the crypto community to solve this problem: Proof of Humanity, BrightID, Idena, and Circles are representative examples. Some of them come with their own applications (usually a UBI token), and some have found use in Gitcoin Passport for verifying which accounts are valid for quadratic voting. Zero-knowledge technology like Sismo adds privacy to many of these solutions.

Only recently have we seen the rise of a much larger and more ambitious proof-of-personhood project: Worldcoin.

Worldcoin was co-founded by Sam Altman, best known as the CEO of OpenAI. The idea behind the project is simple: AI will create a lot of wealth for humanity, but it may also eliminate many people's jobs and make it nearly impossible to tell who is a human and who is a bot. We therefore need to plug this gap by doing two things:

(1) Building a really robust proof-of-personhood system, so that humans can prove they are actually real people;

(2) Providing UBI to everyone.

Worldcoin is unique in that it relies on highly sophisticated biometrics, scanning each user's iris with a piece of specialized hardware called the Orb.

Worldcoin's goal is to produce a large number of Orbs and distribute them widely around the world, placing them in public locations so that anyone can easily obtain a World ID.

To its credit, Worldcoin is also committed to decentralizing over time. That means decentralized technology: running as an Ethereum L2 built on the OP Stack, and using ZK-SNARKs and other cryptography to protect users' privacy. It also means decentralized governance of the system itself.

Worldcoin has been criticized for privacy and security issues around the Orb, for design issues in its token, and for the ethics of some of the choices the company has made, and the project itself is still a work in progress. However, some critics have raised a more fundamental question: whether biometrics of any kind, not just Worldcoin's iris scanning but also the simpler face-video uploads used by Proof of Humanity and Idena, should be accepted at all.

These criticisms are serious: the risks include unavoidable privacy leaks, further erosion of people's ability to move around the internet anonymously, coercion by authoritarian governments, and the difficulty of being decentralized and secure at the same time.

This article discusses these questions and offers some arguments to help you decide whether scanning your iris in front of Worldcoin's Orb is a good idea, and what the alternative approaches to proof of personhood are.

What is a proof-of-personhood system, and why is it important?

The simplest definition of a proof-of-personhood system is this: it creates a list of public keys, and the system guarantees that each key is controlled by a unique human. In other words, if you are a human, you can put one key on the list, but you can't put two; and if you are a bot, you can't put any keys on the list.
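As a minimal sketch of that guarantee, the whole system boils down to a set of keys plus some uniqueness check. This is a hypothetical interface for illustration, not any project's actual API; how `uniqueness_tag` is produced is exactly what the rest of this article is about:

```python
# Hypothetical sketch of a proof-of-personhood registry's core contract:
# one key per unique human, no keys for bots.

class ProofOfPersonhoodRegistry:
    def __init__(self):
        self._keys = set()         # public keys accepted so far
        self._humans_seen = set()  # opaque per-person tags (e.g. an iris hash)

    def register(self, public_key: bytes, uniqueness_tag: bytes) -> bool:
        """Accept a key only if this human has not registered before."""
        if uniqueness_tag in self._humans_seen:
            return False           # a second key for the same person: rejected
        self._humans_seen.add(uniqueness_tag)
        self._keys.add(public_key)
        return True

    def is_registered(self, public_key: bytes) -> bool:
        return public_key in self._keys
```

Everything difficult lives in producing `uniqueness_tag`: it must be something a bot cannot fake and a human cannot obtain twice.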

Proof of personhood is valuable because it solves many of the anti-spam and anti-concentration-of-power problems that people face, in a way that avoids dependence on centralized authorities and reveals as little information as possible. If proof of personhood is not solved, decentralized governance (including "micro-governance" such as votes on social media posts) becomes much easier for very wealthy actors, including hostile governments, to capture. Many services can only prevent denial-of-service attacks by setting a price for access, and sometimes a price high enough to keep out attackers is also too high for many lower-income legitimate users.

Many of the world's major applications today deal with this problem by using government-backed identity systems such as ID cards and passports. This does solve the problem, but it makes large and perhaps unacceptable sacrifices on privacy, and it can be trivially attacked by governments themselves.

Many proof-of-personhood projects, not just Worldcoin but also Circles and others, have an "everyone gets a token" application built in (the token is often called a "UBI token"): every user registered in the system receives a fixed quantity of tokens each day (or hour, or week). But there are plenty of other applications, including:

- Airdrop mechanism for token distribution

- Token or NFT sales with more favorable terms for less wealthy users

- Voting in DAOs

- A way to seed graph-based reputation systems

- Quadratic voting (and quadratic funding and attention payments); a small numeric illustration follows below

- Protection against bot/sybil attacks in social media

- CAPTCHA alternative to prevent DoS attacks

In many of these cases, the common thread is a desire to create mechanisms that are open and democratic, avoiding both centralized control by a project's operators and dominance by its wealthiest users. The latter is especially important in decentralized governance.
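The quadratic mechanisms make the stakes concrete: under quadratic funding, a project's match grows with the square of the sum of the square roots of individual contributions, so splitting one wallet into many fake "people" inflates the match enormously. A toy illustration (numbers made up):

```python
import math

def quadratic_match(contributions):
    """Quadratic funding match: (sum of square roots of contributions)^2."""
    return sum(math.sqrt(c) for c in contributions) ** 2

print(quadratic_match([100]))      # 100.0   -- one honest donor giving $100
print(quadratic_match([1] * 100))  # 10000.0 -- the same $100 split across
                                   # 100 fake identities: a 100x larger match
```

Without proof of personhood, nothing stops the second scenario, which is exactly the wealth-dominance failure described above.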

In such cases, existing solutions today rely on:

(1) Highly opaque AI algorithms that leave room to undetectably discriminate against users the operators dislike;

(2) Centralized identity, namely “KYC”.

An effective proof-of-personhood solution would therefore be a much better way to achieve the security properties these applications need, without the pitfalls of the existing centralized approaches.

What were some early attempts at proof of personhood?

There are two main families of proof of personhood: social-graph-based and biometric.

Social-graph-based proof of personhood relies on some form of vouching: if Alice, Bob, Charlie, and David are all verified humans, and they all say that Emily is a verified human, then Emily is probably also a verified human.

Such proofs are often reinforced by incentives: if Alice says Emily is real, but it turns out she is not, then both Alice and Emily may be punished.

Biometric proof of personhood involves verifying some physical or behavioral trait of Emily that distinguishes humans from bots (and individual humans from each other).

Most projects use a combination of these two techniques.

The four systems I mentioned at the beginning of this article are roughly as follows:

(1) Proof of Humanity: you upload a video of yourself and provide a deposit. To be approved, an existing user must vouch for you, and a period of time must pass during which you can be challenged. If there is a challenge, Kleros's decentralized court determines whether your video was genuine; if it was not, you lose your deposit and the challenger gets a reward.

(2) BrightID: you join a video-call "verification party" with other users, where everyone verifies each other. Higher levels of verification are available through Bitu, a system in which you are verified if enough other Bitu-verified users vouch for you.

(3) Idena: you play a CAPTCHA game at a specific point in time (to prevent people from participating multiple times); part of the game involves creating and verifying CAPTCHAs that are then used to verify others.

(4) Circles: an existing Circles user must vouch for you. Circles is unique in that it does not attempt to create a "globally verifiable ID"; rather, it creates a graph of trust, where someone's trustworthiness can only be verified from the perspective of your own position in that graph.

How does Worldcoin work?

Each Worldcoin user installs an app on their phone that generates a private and a public key, much like an Ethereum wallet. They then go in person to find an "Orb". The user stares into the Orb's camera while showing it a QR code generated by their Worldcoin app, which contains their public key. The Orb scans the user's eyes and uses sophisticated hardware sensing and machine-learned classifiers to verify two things:

(1) that the user is a real human, and (2) that the user's iris does not match the iris of any other user who has previously used the system.

If both checks pass, the Orb signs a message approving a specialized hash of the user's iris scan. The hash gets uploaded to a database, currently a centralized server, to be replaced by a decentralized on-chain system once they are confident the hashing mechanism works. The system does not store full iris scans; it stores only the hashes, which are used to check for uniqueness. From that point on, the user has a "World ID".

A World ID holder can prove they are a unique human by generating a ZK-SNARK that proves they hold the private key corresponding to some public key in the database, without revealing which key that is. So even if someone re-scans your iris, they cannot see any actions you have taken.
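To make the shape of that mechanism concrete, here is a conceptual sketch in plain Python. It is not Worldcoin's actual code, and the hash calls stand in for what would really be a ZK-SNARK circuit (e.g. a Semaphore-style design); it only spells out the statement being proven:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

registry = set()  # public: the set of registered identity commitments

def register(secret: bytes):
    registry.add(H(b"commit", secret))  # commitment hides the secret key

def prove_personhood(secret: bytes, app_id: bytes) -> bytes:
    # A real ZK-SNARK would prove, WITHOUT revealing `secret`:
    #   1) H("commit", secret) is somewhere in `registry`, and
    #   2) the nullifier below was derived from that same secret.
    assert H(b"commit", secret) in registry
    # One nullifier per (person, application): an app can reject a second
    # use of the same nullifier without ever learning who the person is.
    return H(b"null", secret, app_id)
```

The nullifier is what lets an application enforce "one action per human" while the membership proof keeps the human anonymous.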

What are the main problems with Worldcoin’s construction?

There are four main risks that people are concerned about:

(1) Privacy

A registry of iris scans could potentially reveal information. At a minimum, if someone else scans your iris, they could compare it to the database to determine if you have a World ID. Iris scans could potentially reveal more information.

(2) Accessibility

Unless there are enough Orbs that anyone in the world can easily find one, the World ID will not be reliably accessible.

(3) Centralization

The Orb is a hardware device and we have no way of verifying that it is constructed correctly and has no backdoors. Therefore, even if the software layer were perfect and fully decentralized, the Worldcoin Foundation would still have the ability to insert backdoors into the system, allowing it to create as many fake human identities as it wants.

(4) Safety

A user's phone could be hacked, a user could be coerced into scanning their iris while presenting a public key that belongs to someone else, and it may be possible to 3D-print a "dummy" that passes the iris scan and obtains a World ID.

It is important to distinguish between:

(1) problems specific to choices Worldcoin made;

(2) problems that any biometric proof of personhood will inevitably have;

(3) problems that proof of personhood in general will have. For example, registering for Proof of Humanity means publishing your face on the internet.

Even joining BrightID’s video call “verification parties” doesn’t completely change that, as it still exposes your identity to many other people. Joining Circles exposes your social graph publicly.

Worldcoin does significantly better at preserving privacy than either of those.

On the other hand, Worldcoin depends on specialized hardware, which raises the question of whether you can trust the Orb's manufacturer. This is a problem that Proof of Humanity, BrightID, and Circles do not have. It is even conceivable that in the future someone other than Worldcoin will create a specialized-hardware solution with different tradeoffs.

How do biometric proof-of-personhood systems address privacy issues?

The most obvious and largest potential privacy leak in any proof-of-personhood system is linking each action a person takes to their real-world identity. This data leak is very large, arguably unacceptably large, but fortunately it is easy to solve with zero-knowledge proofs.

Instead of signing directly with a private key whose corresponding public key is in the database, a user can generate a ZK-SNARK proving that they possess the private key corresponding to some public key in the database, without revealing which one. This can be done generically with tools like Sismo, and Worldcoin has its own built-in implementation. Worldcoin deserves credit here as a "crypto-native" proof-of-personhood system: it actually takes this basic step of using ZK-SNARKs to provide anonymization, which almost no centralized identity solution does.

A more subtle privacy leak is the mere existence of a public registry of biometric data. In the case of Proof of Humanity, this is a huge amount of data: you get a video of every participant, making it very clear to anyone in the world willing to investigate who all the Proof of Humanity participants are.

In the case of Worldcoin, the leak is much more limited: the Orb locally computes and publishes only a "hash" of each person's iris scan. This hash is not a conventional hash like SHA-256; rather, it is a specialized algorithm based on machine-learned Gabor filters, designed to cope with the inherent imprecision of any biometric scan and to ensure that successive hashes of the same person's iris produce similar outputs.

(Figure: blue shows the fraction of bits that differ between two scans of the same person's iris; orange shows the fraction that differ between scans of two different people's irises.)
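As a rough illustration of how such fuzzy hashes are compared, here is a minimal sketch; the 256-bit code size and the 0.3 threshold are illustrative stand-ins, not Worldcoin's actual parameters:

```python
def fractional_hamming(a: int, b: int, nbits: int = 256) -> float:
    """Fraction of bits on which two iris codes differ."""
    return bin((a ^ b) & ((1 << nbits) - 1)).count("1") / nbits

def same_person(code1: int, code2: int, threshold: float = 0.3) -> bool:
    # Two scans of the same iris differ in few bits (the blue distribution);
    # scans of different irises differ in roughly half (the orange one).
    return fractional_hamming(code1, code2) < threshold
```

Uniqueness checking at registration is then a scan of the database for any stored hash within the threshold.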

These iris hashes leak only a small amount of data. If an adversary can forcibly (or covertly) scan your iris, they can compute your iris hash themselves and check it against the database of iris hashes to see whether you are in the system. This ability to check whether someone has registered is necessary for the system itself, to prevent people from registering multiple times, but it also has the potential to be abused.

Additionally, iris hashes may leak a certain amount of medical data (sex, ethnicity, perhaps medical conditions), but this is far less than what is captured by almost any other mass data-collection system in use today (even street cameras, for example). Overall, storing iris hashes seems to me adequately private.

If one disagrees with this judgement and decides to design a system with more privacy, there are two ways to do it:

1. If the iris hash algorithm can be improved so that the difference between two scans of the same person is much smaller (e.g. reliably under 10% of bits flipped), then instead of storing the full iris hash, the system can store a smaller number of error-correction bits for it (see: fuzzy extractors; a toy sketch follows this list). If the difference between two scans is under 10%, the number of bits that must be published shrinks by at least 5x.

2. If we want to go further, we can store the iris hash database inside a multi-party computation (MPC) system that can only be accessed by Orbs (with rate limits), making the data completely inaccessible, at the cost of significant protocol complexity plus the social complexity of managing the set of MPC participants. This would have the benefit that users would not be able to prove a link between two different World IDs they held at different times, even if they wanted to.
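Here is a toy sketch of the fuzzy-extractor idea from point 1, using a deliberately weak 3x repetition code (real constructions use far stronger codes). Only the "helper data" is published; a later noisy scan plus the helper data reproduces the enrolled key:

```python
def rep3_encode(bits):  # repeat each key bit three times
    return [b for bit in bits for b in (bit, bit, bit)]

def rep3_decode(bits):  # majority vote per group of three (fixes 1 flip/group)
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def enroll(biometric, key_bits):
    # Published helper data: codeword XOR biometric. The biometric itself is
    # never published; the biometric must be 3x the key length in this toy.
    return xor(rep3_encode(key_bits), biometric)

def reproduce(noisy_biometric, helper):
    # helper XOR noisy scan = codeword XOR (scan differences); decoding
    # removes the differences and recovers the enrolled key.
    return rep3_decode(xor(helper, noisy_biometric))

key = [1, 0, 1, 1]
enrolled = [0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0]  # toy 12-bit "iris code"
helper = enroll(enrolled, key)
noisy = enrolled[:]; noisy[4] ^= 1               # one bit flips on re-scan
assert reproduce(noisy, helper) == key
```

The helper data pins the iris code down only up to a valid codeword, which is why it can leak fewer bits than publishing the hash directly.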

Unfortunately, these techniques do not apply to Proof of Humanity, which requires the full video of each participant to be publicly available so that challenges can be raised when there are signs of fakery (including AI-generated fakery), with more detailed investigation in such cases.

Overall, despite the "dystopian" feel of staring into an Orb and having it deep-scan your eyeballs, specialized-hardware systems do seem to do a fairly good job of protecting privacy. The flip side, however, is that specialized-hardware systems introduce much larger centralization concerns. So we cypherpunks seem to be stuck: we have to trade off one deeply held cypherpunk value against another.

What are the accessibility issues in biometric proof-of-personhood systems?

Specialized hardware introduces accessibility concerns because specialized hardware is not very accessible. Somewhere between 51% and 64% of sub-Saharan Africans own a smartphone today, a share projected to rise to 87% by 2030.

But while there are billions of smartphones in the world, there are only a few hundred Orbs. Even with much more distributed manufacturing, it would be hard to reach a world where there is an Orb within five kilometers of every person.

But to Worldcoin’s credit, they are trying!

It’s also worth noting that the accessibility problem is even worse for many other forms of proof of personhood. It is very difficult to join a social-graph-based system unless you already know someone in the graph, which makes it easy for such systems to remain limited to a single community in a single country.

Even centralized identity systems have learned this lesson: India’s Aadhaar ID system is biometric-based, because that was the only way to quickly onboard the country’s massive population while avoiding massive fraud from duplicate and fake accounts (thus saving enormous costs). Of course, Aadhaar is far less privacy-preserving than anything the crypto community has proposed at scale.

From an accessibility perspective, the best-performing systems are actually ones like Proof of Humanity, which you can sign up for using nothing more than a smartphone. But as we’ve seen, such systems come with all sorts of other trade-offs.

What are the centralization issues in biometric proof-of-personhood systems?

There are three types of risk:

(1) Centralization risk in the system’s top-level governance (especially the risk that the system makes final top-level decisions when different participants disagree in their subjective judgments);

(2) Centralization risks unique to systems using specialized hardware;

(3) Centralization risk from using proprietary algorithms to determine who is a genuine participant.

Any proof-of-personhood system must contend with problem (1), except perhaps one in which the set of “accepted” IDs is completely subjective. If a system uses incentives priced in outside assets (e.g. ETH, USDC, DAI), then it cannot be completely subjective, and so governance risk becomes unavoidable.

Issue (2) is a much greater risk for Worldcoin than for Proof of Humanity (or BrightID) because Worldcoin relies on specialized hardware, while the other systems do not.

Problem (3) is a particular risk for centralized systems where there is a single system doing the verification, unless all algorithms are open source and we have assurance that they are actually running the code they claim to be. This is not a risk for systems that rely entirely on users verifying other users (such as Proof of Humanity).

How does Worldcoin solve the hardware centralization problem?

Currently, Tools for Humanity is the only organization making Orbs. However, the Orb's source code is gradually being opened up: the hardware specs are available in the GitHub repository, and the rest of the source code is expected to be released soon. The license is another of those "shared source but not technically open source until four years from now" licenses, similar to the Uniswap BSL, and in addition to preventing forks it also prohibits what they consider unethical behavior: they specifically list mass surveillance and cite three international human-rights declarations.

The stated goal of Tools for Humanity is to allow and encourage other organizations to create Orbs, and, over time, to transition from Orbs made by Tools for Humanity to some kind of DAO that approves and governs which organizations can make Orbs recognized by the system.

But this design can fail in two ways:

1. It fails to actually decentralize. This could happen through a familiar pitfall: one manufacturer comes to dominate production, causing the system to re-centralize. Presumably, governance could limit how many valid Orbs each maker can produce, but this would require careful management, and it puts a lot of pressure on governance to be simultaneously decentralized, able to monitor the ecosystem, and able to respond to threats effectively: a much harder task than that of a fairly static DAO that only handles top-level dispute resolution.

2. It turns out not to be possible to make such a decentralized manufacturing mechanism secure. Here, I see two risks:

(1) Fragility against bad Orb makers: even one Orb maker that is malicious or hacked can generate an unlimited number of fake iris-scan hashes and assign them World IDs.

(2) Government restriction of Orbs: governments that do not want their citizens participating in the Worldcoin ecosystem can ban Orbs in their country. Going a step further, they can even force citizens to undergo iris scans that give the government access to their accounts, with no way for citizens to respond.

To make the system more robust against bad Orb makers, the Worldcoin team proposes regular audits of Orbs, verifying that they were built correctly, that key hardware components were built to spec, and that they have not been tampered with after the fact. This is a challenging task: it is basically something like the IAEA's nuclear-inspection bureaucracy, but for Orbs. The hope is that even a very imperfect auditing regime could greatly reduce the number of fake Orbs.

A second mitigation is needed to limit the damage any single bad Orb can do: World IDs registered via different Orb makers, and ideally via different Orbs, should be distinguishable from one another. It is fine if this information is private and stored only on the World ID holder's device, but it does need to be provable when required. This lets the ecosystem respond to (inevitable) attacks by removing individual Orb makers, and even individual Orbs, from the whitelist when needed. If we see the North Korean government going around forcing people to scan their eyeballs, those Orbs, and any accounts produced by them, could be immediately and retroactively disabled.

What security issues does proof of personhood face in general?

In addition to issues specific to Worldcoin, there are concerns that affect proof-of-personhood designs in general. The main ones that come to mind are:

(1) 3D-printed dummies: people could use AI to generate photographs, or even 3D prints, of fake people realistic enough to be accepted by the Orb software. If even one group does this, they can generate an unlimited number of identities.

(2) Sale of World IDs: when registering, a person can provide someone else's public key instead of their own, giving that other person control of the registered ID, likely in exchange for money. This appears to be happening already. Beyond outright sale, there is also the possibility of renting an ID out to an application for a short period.

(3) Mobile phone hacking: If someone’s mobile phone is hacked, the hacker can steal the key that controls their World ID.

(4) Government coercion to steal IDs: a government can force its citizens to get verified while displaying a QR code that belongs to the government. In this way, a malicious government can acquire millions of IDs. In biometric systems, this can even be done covertly: governments can use obfuscated Orbs to extract World IDs from everyone entering the country at a passport-control booth.

The first point is specific to biometric proof-of-identity systems. The second and third points are common to both biometric and nonbiometric designs. The fourth point is also common to both, although the techniques required will be very different in the two cases; in this section, I will focus on the issues in the biometric case.

These are all very serious weaknesses. Some have already been addressed in the existing protocol, others may be addressed through future improvements, and some appear to be fundamental limitations.

How do we deal with dummies?

For Worldcoin, the risk here is much smaller than for systems like Proof of Humanity: a live in-person scan can examine many features of a person and is considerably harder to fake than a mere deepfake video. Specialized hardware is inherently harder to fool than commodity hardware, which in turn is harder to fool than digital algorithms verifying pictures and videos sent in remotely.

Could someone eventually 3D-print something that fools the Orb? Very likely. I expect that at some point we'll see growing tension between the goals of keeping the mechanism open and keeping it secure: open-source AI algorithms are inherently more susceptible to adversarial machine learning. At some point in the more distant future, even the best AI algorithms might be fooled by the best 3D-printed dummies.

However, from my discussions with the Worldcoin and Proof of Humanity teams, it appears that neither protocol has yet seen significant deepfake attacks, for the simple reason that hiring real low-wage workers to register on your behalf is extremely cheap and easy.

Can we prevent the sale of IDs?

In the short term, preventing this kind of sale is difficult, because most of the world doesn’t even know about proof-of-personhood protocols, and if you tell people they can get $30 by holding up a QR code and scanning their eyes, they will do it.

Once more people know what proof-of-personhood protocols are, a fairly simple mitigation becomes possible: allowing people who have a registered ID to re-register, canceling the previous ID. This makes "ID selling" much less credible, because the person who sold you an ID can simply re-register, canceling the ID they just sold. However, getting to this point requires the protocol to be very widely known and Orbs to be very widely accessible, so that registration on demand is practical.
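A minimal sketch of the data model this mitigation implies (hypothetical, not any project's actual schema): if the registry is keyed by the iris hash, re-registering atomically invalidates whatever key the person previously sold:

```python
registry: dict[bytes, bytes] = {}  # iris_hash -> currently valid public key

def register(iris_hash: bytes, public_key: bytes) -> None:
    # A person who sold their old ID can simply show up again: the buyer's
    # key silently stops being the valid key for this iris hash.
    registry[iris_hash] = public_key

def is_valid(iris_hash: bytes, public_key: bytes) -> bool:
    return registry.get(iris_hash) == public_key
```

This also shows why the mitigation only works once registration is easy: the seller has to be able to walk up to an Orb and overwrite the record at will.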

This is one reason why it’s valuable to integrate a UBI token into a proof-of-personhood system: the UBI token gives people an easily understood incentive both to learn about the protocol and sign up, and to immediately re-register if they previously registered on behalf of someone else.

Can we prevent coercion in biometric proof-of-personhood systems?

It depends on what kind of coercion we are talking about. Possible forms of coercion include:

- Governments scan people's eyes (or faces, or...) at border controls and other routine government checkpoints, and use this to register (and frequently re-register) their citizens

- Governments ban Orbs in their country to prevent people from independently registering

- People buy IDs, then threaten the sellers with harm if they discover that a seller has re-registered and invalidated the ID they sold

- Applications (possibly government-run) require people to "log in" by signing directly with their public key, letting them see the corresponding biometric scan and thus the link between the user's current ID and any future IDs they obtain by re-registering. A widespread worry is that this makes it too easy to create a "permanent record" that follows a person throughout their life.

For less sophisticated users, this seems hard to prevent completely. Users can leave their country and (re-)register with an Orb in a safer country, but this is a difficult and costly process, and in a truly hostile legal environment, finding an independent Orb seems too difficult and risky.

What is feasible is making this kind of abuse more cumbersome and easier to detect. The Proof of Humanity approach of requiring a person to speak a specific phrase when registering is a good example: it may be enough to thwart hidden scanning, forcing coercion to be far more blatant, and the registration phrase could even include a statement confirming that the respondent knows they have the right to independently re-register and could receive a UBI token or other reward. If coercion is detected, the Orbs used for forced mass registration could have their access revoked.

A proof-of-personhood system can also lock a user's key into trusted hardware, preventing any application from using the key directly without an intermediate anonymizing ZK-SNARK layer. If a government or application developer wants to get around this, they would need to mandate the use of their own custom application.

Through a combination of these techniques and active vigilance, it seems possible to lock out regimes that are truly hostile and keep honest the regimes that are merely mediocre (as much of the world is). This could be done by a project like Worldcoin or Proof of Humanity maintaining its own bureaucracy for the task, or by revealing more information about how an ID was registered (e.g., in Worldcoin's case, which Orb it came from) and leaving the classification task to the community.

Can we prevent ID renting (e.g. renting votes)?

Re-registration doesn’t stop people from renting out their IDs. This is acceptable in some applications: the price of renting out your right to collect the day’s share of UBI tokens will simply be the value of that day’s share of UBI tokens. But in applications like voting, the ability to easily sell your vote is a huge problem.

Systems like MACI can prevent you from credibly selling your vote, by allowing you to later cast another vote that invalidates the earlier one, in such a way that no one can tell whether you actually did so. However, if the briber controls the key you receive at registration time, this doesn’t help.

I see two solutions here:

(1) Run the entire application inside a multi-party computation (MPC). This would also cover the re-registration process: when a person registers with the MPC, the MPC assigns them an ID that is separate from their proof-of-personhood ID and not linkable to it, and when a person re-registers, only the MPC knows which account to deactivate. This prevents users from proving facts about their actions, because every important step is carried out inside the MPC using private information known only to the MPC.

(2) Decentralized registration ceremonies. Essentially, implement an in-person key-registration protocol, similar to this one, requiring four randomly selected local participants to work together to register someone. This ensures that registration is a "trusted" procedure that an attacker cannot snoop on during the process.

Social-graph-based systems may do better here, because they can create decentralized local registration processes automatically as a byproduct of how they work.

Biometric proof of personhood vs. social-graph-based proof of personhood

Aside from biometrics, the other main contender for proof of personhood is social-graph-based verification. Social-graph-based verification systems all rest on the same principle: if a whole bunch of verified identities attest to the validity of your identity, then you are probably valid and should also be verified.

If only a few real users verify fake users (whether by mistake or maliciously), basic graph-theory techniques can be used to put an upper bound on the number of fake users the system will verify.
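The standard argument is a cut argument: however many fake identities an attacker manufactures, they all hang off the small set of "attack edges" where real users vouched for fakes, and that cut bounds what the attacker gains. A toy illustration with networkx (an entirely made-up graph):

```python
import networkx as nx  # pip install networkx

G = nx.Graph()
honest = ["alice", "bob", "charlie", "david"]
bots = [f"bot{i}" for i in range(100)]

# Honest users vouch for each other in a small ring...
G.add_edges_from(zip(honest, honest[1:] + honest[:1]))
# ...the attacker wires up 100 bots that all vouch for each other...
G.add_edges_from((bots[i], bots[(i + 1) % 100]) for i in range(100))
# ...but only fools two honest users into vouching for one bot each:
G.add_edges_from([("alice", "bot0"), ("bob", "bot1")])

# The cut separating any bot from the honest region stays tiny no matter
# how many bots exist, which is what bounds the number of accepted fakes.
print(len(nx.minimum_edge_cut(G, "alice", "bot50")))  # 2
```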

Proponents of social-graph-based proof of personhood often describe it as a better alternative to biometrics, for reasons such as these:

- It does not rely on dedicated hardware, making it easier to deploy;

- It avoids a perpetual arms race between manufacturers trying to make dummies and Orbs that need to be updated to reject such dummies;

- It does not require collecting biometric data, making it more privacy-friendly;

- It is potentially friendlier to pseudonymity: if someone chooses to split their online life across multiple identities that they keep separate from each other, those identities can all potentially be verified (but maintaining multiple genuine, separate identities sacrifices network effects and is costly, so it is not something an attacker can do easily).

Biometric approaches give a binary "is human" or "is not human" score, which is fragile: people who are accidentally rejected can end up with no UBI tokens and potentially no ability to participate in online life. Social-graph-based approaches can give more nuanced numerical scores, which may of course be somewhat unfair to some participants, but are far less likely to completely "unperson" anyone.

My take on these arguments is that I basically agree with them! These are real strengths of social graph-based approaches and should be taken seriously. However, it’s also worth considering the weaknesses of social graph-based approaches:

- Bootstrapping: to join a social-graph-based system, a user must know someone who is already in the graph. This makes large-scale adoption difficult and risks entirely excluding regions of the world that are unlucky during the initial launch.

- Privacy: although social-graph-based approaches avoid collecting biometric data, they often leak information about a person's social connections, which may carry even greater risks. Zero-knowledge techniques can of course mitigate this (see, for example, this proposal by Barry Whitehat), but the interdependency inherent in a graph and the need to perform mathematical analyses on it make it harder to achieve the same level of data hiding as with biometrics.

- Inequality: each person can have only one biometric ID, but a wealthy and well-connected person could use their connections to generate many IDs. In essence, the same flexibility that might let a social-graph-based system give multiple pseudonyms to someone who genuinely needs them (such as an activist) more likely means that the most powerful and best-connected people can acquire more pseudonyms than anyone else.

- Risk of collapse into centralization: most people are too lazy to spend time reporting who is real and who is not in an internet application. As a result, the system risks coming to favor, over time, "easy" onboarding routes that depend on centralized authorities, and the "social graph" of the system's users would de facto become the graph of which countries recognize which people as citizens, giving us centralized KYC with extra steps.

Is proof of personhood compatible with pseudonymity in the real world?

In principle, proof of personhood is compatible with all kinds of pseudonyms. Applications could be designed so that a person with a single proof-of-personhood ID can create up to, say, five profiles within the application, leaving room for pseudonymous accounts. One could even use a quadratic formula: N accounts for a cost of $N². But will applications actually do this?
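As a toy illustration of how a quadratic price schedule preserves pseudonymity for ordinary users while pricing out sybil armies (the $1 unit price is made up):

```python
def cost_of_accounts(n: int, unit_price: float = 1.0) -> float:
    """Quadratic schedule: N accounts cost unit_price * N^2 in total."""
    return unit_price * n ** 2

for n in (1, 3, 10, 10_000):
    print(n, cost_of_accounts(n))  # 1, 9, 100, 100000000
```

A person who wants three personas pays $9; an attacker who wants ten thousand pays $100 million.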

However, a pessimist might argue that it is naive to try to create a more privacy-friendly form of ID and hope it gets adopted in the right way, because the powers-that-be are not privacy-friendly, and if a powerful actor has a tool that can be used to obtain more information about a person, they will use it that way. In such a world, the argument goes, the only realistic approach is, unfortunately, to throw sand in the gears of any identity solution and defend a world of full anonymity and digital islands of high-trust communities.

I understand the reasoning behind this way of thinking, but I worry that such an approach, even if successful, would lead to a world where no one can do anything to counteract wealth concentration and governance centralization, because one person can always pretend to be ten thousand. Such points of centralization would, in turn, be easy for the powerful to capture. Instead, I prefer a more moderate approach: we should strongly advocate for proof-of-personhood solutions with strong privacy, potentially even including an "N accounts for $N²" mechanism at the protocol level if needed, and create something that has privacy-friendly values and a chance of being accepted by the outside world.

So...what do I think?

When it comes to proof of personhood, there is no ideal form. Instead, we have at least three different paradigms of approaches, each with its own unique advantages and disadvantages. A comparison chart might look like this:

Ideally, we should treat these three techniques as complementary and combine them all. Specialized-hardware biometrics have the advantage of security at scale, as India's Aadhaar has demonstrated. They are very weak on decentralization, though this can be addressed by holding individual Orbs accountable.

General-purpose biometrics are easy to adopt today, but their security is declining rapidly, and they may only keep working for another one to two years. Social-graph-based systems bootstrapped from a few hundred people socially close to the founding team are likely to face a constant trade-off between missing large parts of the world entirely and being vulnerable to attacks from communities they have no visibility into. A social-graph-based system bootstrapped from tens of millions of biometric ID holders, however, could actually work. Biometric bootstrapping may work better in the short term, while social-graph-based techniques may prove more robust in the long run and take on a larger share of the responsibility over time as their algorithms improve.

Possible hybrid development paths in the future

All of these teams are likely to make many mistakes, and there are inevitable tensions between corporate interests and the needs of the broader community, so we must remain vigilant. As a community, we can and should push all participants beyond their comfort zones: pushing for open-sourcing of their technology, demanding third-party audits and even third-party-written software, and other checks and balances. We also need more alternatives in every category.

At the same time, it’s also important to recognize the work that’s already been done: many of the teams running these systems have demonstrated a willingness to take privacy more seriously than almost any government or major corporate-run identity system, and this is a success we should learn from.

The problem of making a proof-of-personhood system that is effective and reliable, especially in the hands of people far from the existing crypto community, seems quite challenging. I definitely do not envy those attempting the task, and it may take years to find a formula that works. The concept of proof of personhood seems very valuable in principle, and while the various implementations have their risks, having no proof of personhood at all has risks too: a world without it is more likely to be a world dominated by centralized identity solutions, money, small closed communities, or some combination of all three. I look forward to seeing more progress on all types of proof of personhood, and hope to see the different approaches eventually come together into a coherent whole.
