The Anti-Parler: Principles for decentralized social networking
Last year, the Beaker Browser team spent 6 months on a p2p social network. We dropped it when we realized that we hadn’t solved the hard problems. We had only solved, as I put it at the time, “p2p tweets.”
Today I’m going to talk about those hard problems. How do we design social networks that don’t rely on the total authority of a single service, but which contain better models of authority?
Forking power
FOSS doesn’t always mean “anybody can contribute,” but it does mean that users can fork if the core devs abuse their position. The same principle can apply to any authority. In fact, we’ve seen it multiple times in blockchains, the Ethereum/Ethereum Classic split being a well-known example.
The ability to “fork” an authority means to copy the authority, its data, and its rules into a new identifier which other people can adopt. Adopting the new authority means reconfiguring your software to point at the new endpoint, which is a social process, just as it is when codebases are forked.
Forks are only successful when there’s some consensus among users that it’s necessary, and so there is some barrier to accomplishing it. However, the ability to fork acts as a constant constraint on people in authority. They know that abusing their power could lead to a fork, so they’re incentivized to work with their users and behave accordingly.
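To make this concrete, here is a minimal sketch of what forking could look like, assuming an authority is modeled as an append-only log plus a set of rules, identified by its public key. Every name here (Authority, createAuthority, forkAuthority) is hypothetical, not a real protocol.

```ts
// A sketch, not a spec: an "authority" as an append-only log plus rules,
// identified by its public key. Forking copies the data and rules under
// a new identifier for other people to adopt.
import { generateKeyPairSync, createHash } from "crypto";

interface Authority {
  id: string;      // derived from the public key
  rules: string;   // the governing rules, e.g. a moderation policy
  log: string[];   // the authority's data: an append-only log of operations
}

// Derive a stable identifier from a public key.
function idFromKey(publicKeyPem: string): string {
  return createHash("sha256").update(publicKeyPem).digest("hex").slice(0, 16);
}

function createAuthority(rules: string): { authority: Authority; privateKey: string } {
  const { publicKey, privateKey } = generateKeyPairSync("ed25519", {
    publicKeyEncoding: { type: "spki", format: "pem" },
    privateKeyEncoding: { type: "pkcs8", format: "pem" },
  });
  return { authority: { id: idFromKey(publicKey), rules, log: [] }, privateKey };
}

// A fork copies the original's history and rules (possibly amended) into a
// brand-new identity. Nothing is taken from the original; users "adopt" the
// fork by reconfiguring their clients to follow the new id.
function forkAuthority(original: Authority, amendedRules?: string) {
  const forked = createAuthority(amendedRules ?? original.rules);
  forked.authority.log = [...original.log];
  return forked;
}
```

The point is that a fork carries the data and rules forward under a fresh identifier; adoption is then the social step of users choosing to follow the new id instead of the old one.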
Orthogonality
Social systems should include explicit barriers between various identities and spaces. Moderation in a public space should not affect unrelated systems.
To give an example, Google’s identity system is integrated across YouTube and Gmail. This could create a situation where being banned on YouTube causes you to lose access to your email. (Thankfully, Google has applied orthogonality and this does not appear to be a risk.)
The same principle must apply to any new system; otherwise a moderation action will have unjust consequences. This could have a chilling effect on users AND on moderators, since both will be disincentivized to take actions that carry so much risk. If we want moderation to be effective, it should work only as intended.
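As a sketch of what orthogonality implies at the data level: if ban lists are scoped to each space, a moderation action in one space has no way to cascade into another. The Space class below is hypothetical.

```ts
// A sketch of orthogonal moderation: each space keeps its own ban list,
// so a ban in one space cannot spill into unrelated systems.
type UserId = string;

class Space {
  private banned = new Set<UserId>();
  constructor(readonly id: string) {}

  // Moderation is scoped: banning here says nothing about other spaces.
  banUser(user: UserId): void {
    this.banned.add(user);
  }
  isBanned(user: UserId): boolean {
    return this.banned.has(user);
  }
}

const videos = new Space("videos");
const mail = new Space("mail");
videos.banUser("alice");
console.log(videos.isBanned("alice")); // true
console.log(mail.isBanned("alice"));   // false: no cross-space cascade
```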
Individual rights
Connected to orthogonality is the idea of individual rights. In public and shared spaces, collective rights take precedence. But in private and interpersonal spaces, individual rights should be maintained.
Deciding what individual rights exist is a task for the design process. They might include:
- Rights to an identity
- Rights to host data
- Rights to download data
- Rights to modify data before presentation (think: adblockers)
- Rights to choose software
And so on.
Individual rights may also center on a concept of “cryptographic property rights.” This idea is based on our ability to hold secret keys which are used to encrypt or sign data. Cryptographic operations can prove ownership of those secrets, so we can theoretically anchor a concept of digital property in the ownership of secret keys.
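A minimal sketch of that anchoring, using Ed25519 keys from Node’s built-in crypto module; the “post” and the flow around it are hypothetical:

```ts
// A sketch: the public key acts as the identity, and a signature made with
// the matching secret key proves ownership of whatever was signed.
import { generateKeyPairSync, sign, verify } from "crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The owner signs their post with their secret key.
const post = Buffer.from("hello, world");
const signature = sign(null, post, privateKey);

// Anyone can check the claim against the public key. No server has to
// vouch for ownership; the math does.
console.log(verify(null, post, publicKey, signature));                  // true
console.log(verify(null, Buffer.from("forged"), publicKey, signature)); // false
```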
Constitutional networking
In some systems, it is possible to create “smart contracts”: software whose execution can be audited externally, proving whether the executor is following the code.
Smart contracts constrain the authority of the executor node (a server, a miner, etc.) such that it is only able to execute the operations specified in the contract. If the executor violates the contract, users can detect the violation and revoke its authority by some process (forking, failed consensus, and so on).
Smart contracts therefore introduce a “code as law” mechanism in which users can come to agreement on the definition of the contract before entering a shared space with each other. The law acts as a kind of “constitution” for the network. Choosing a constitution which promotes the interests of everyone in the network would be an ongoing experiment, with amendments and legislative processes, as in any governance.
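Here is a toy sketch of that auditing loop, assuming the contract is a deterministic transition function and the executor publishes its operations along with a hash of the resulting state. The contract itself (a trivial membership list) is hypothetical.

```ts
// A sketch of "code as law" auditing: replay the executor's published
// operations through the agreed-upon contract and compare state hashes.
import { createHash } from "crypto";

type State = { members: string[] };
type Op = { type: "join" | "leave"; user: string };

// The "contract": a pure, deterministic transition function that users
// agree on before entering the shared space.
function applyOp(state: State, op: Op): State {
  switch (op.type) {
    case "join":
      return { members: [...state.members, op.user] };
    case "leave":
      return { members: state.members.filter((m) => m !== op.user) };
  }
}

function hashState(state: State): string {
  return createHash("sha256").update(JSON.stringify(state)).digest("hex");
}

// Auditing: replay the operations and check the executor's claimed hash.
// A mismatch is proof the executor stepped outside the contract.
function audit(ops: Op[], claimedHash: string): boolean {
  let state: State = { members: [] };
  for (const op of ops) state = applyOp(state, op);
  return hashState(state) === claimedHash;
}
```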
Extensibility
To whatever degree possible, we should be empowering users to develop the software of their community. This is no mean feat in a shared space: how often can a subset of users make changes without breaking compatibility with the rest?
But it’s compelling to explore how consensus around those changes could be driven by users. Perhaps we could break the “one size fits all” mentality of companies driving every development decision.
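One possible answer, sketched below, is to make the data model lenient: community extensions live under namespaced keys that unextended clients simply ignore. The extension name and shape here are hypothetical.

```ts
// A sketch of user-driven extensibility: posts carry namespaced extension
// fields, so a subset of users can ship new features without breaking
// everyone else's parsers.
interface Post {
  text: string;
  ext?: Record<string, unknown>; // community-developed extensions
}

function render(post: Post): string {
  let out = post.text;
  // This client understands a hypothetical "poll" extension; others skip it.
  const poll = post.ext?.["com.example.poll"] as { options: string[] } | undefined;
  if (poll) out += `\n[poll: ${poll.options.join(" | ")}]`;
  return out;
}

// A client without poll support would still render the text unchanged.
console.log(render({
  text: "Lunch?",
  ext: { "com.example.poll": { options: ["pizza", "tacos"] } },
}));
```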
Additional thoughts
When people look at Parler or Trump being booted from the Internet, they might say that it’s a concerning show of monopoly power. It is, but that’s not actually what concerns me most. What concerns me is the ten years leading up to this point, when platforms *didn’t* make those choices because they felt they had to remain unbiased.
It’s not a problem of too much moderation, but too little. It led us to say “I don’t want Mark Zuckerberg to decide what is true,” because we know that decision would be biased toward Facebook’s interests as a company, not toward the interests of users. But that doesn’t mean we don’t need moderation. It means we need moderation that’s biased toward the interests of users.
Decentralization is about authority: who has it, and how they can exercise it.
It’s not about removing authority; that idea is a fiction. When users have the ability to modify a shared space, that’s an authority model.
It’s not only about evenly distributing authority either. You can certainly try it, but what happens when somebody builds a popular user registry? How is that managed? Is it evenly distributed too? If not, that creator now has power over other people’s identities. Perhaps they also have the power to ban people by removing them from the registry. That’s not solving questions about authority; it’s ignoring them.
Decentralization trends toward libertarian ideals, but collective rights are as important as individual rights. Shared spaces must have ways to moderate, to decide how and what goes viral, whether somebody can be banned, and even details about how the software is built (how many characters are allowed in a post, or whether the community has a like button).
When we’re talking about somebody’s own data — their posts, their social relationships, perhaps their identity — we’re talking about their rights as an individual. When we’re talking about shared data — posts visible in a forum, the assignment of usernames, the ranking of search results — we’re talking about everyone’s collective rights.
And why do we talk about rights at all? It’s because they’re what make moderation work for users. An individual right tells us what a moderator can’t do to a user. A collective right tells us what a moderator can’t do to the community.
You can find more of my thoughts about these ideas here: https://github.com/pfrazee/infocivics