Productizing P2P

Paul Frazee
Jun 27, 2021

I work with the Hypercore protocol (formerly known as Dat). Previously I worked with Secure Scuttlebutt, which is a similar technology. If I weren’t using Hypercore, I’d probably be using IPFS (though I’m quite happy with Hypercore).

All of these technologies share common properties: they are BitTorrent variants which exchange distributed data structures. I tend to think of Hypercore as a mesh database since it now exposes an API similar to LevelDB.
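
To make that concrete, here is a minimal sketch of what the key-value workflow looks like, assuming the hypercore and hyperbee Node.js modules (Hyperbee is the LevelDB-style layer in the Hypercore ecosystem; exact method names and options vary by release, so treat this as illustrative rather than definitive):

```ts
import Hypercore from 'hypercore'
import Hyperbee from 'hyperbee'

// An append-only log stored on disk, owned by this device's keypair
const core = new Hypercore('./example-db')

// Hyperbee layers a LevelDB-style key-value API over the log
const db = new Hyperbee(core, {
  keyEncoding: 'utf-8',
  valueEncoding: 'json'
})

await db.put('posts/hello', { title: 'Hello, mesh' })
const node = await db.get('posts/hello') // -> { seq, key, value } or null
console.log(node?.value)
```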

I’ve always preferred P2P over blockchains, though I see them as parallel technologies rather than competitors. I’ve been concerned about blockchains’ feasibility; Ethereum successfully adopting Proof-of-Stake will be a big step toward longevity in my mind. Assuming it does survive and thrive, I expect blockchains to trend towards payments, global agreement (names), financial systems (exchanges), and contractual systems (DAOs).

Since I’ve always been more interested in common computing tasks — blogging, calendars, chat, social media — I’ve been more inclined to focus on peer-to-peer, which seems like a more natural fit for the use-cases I’m interested in.

Why P2P?

There are some attributes of p2p which are generally useful. Peer-discovery automates network configuration; hole-punching lets devices behind home routers accept direct connections; encrypted connections provide end-to-end privacy.
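
As a rough illustration of those three properties working together, here is a sketch using the hyperswarm module (the topic string and messages are made up for the example, and the exact API differs slightly between versions):

```ts
import Hyperswarm from 'hyperswarm'
import { createHash } from 'crypto'

const swarm = new Hyperswarm()

// Peers that hash the same string end up on the same 32-byte DHT topic
const topic = createHash('sha256').update('example-p2p-app').digest()

swarm.on('connection', (socket, peerInfo) => {
  // Each connection is an end-to-end encrypted duplex stream, established
  // even when both peers sit behind home NATs (hole-punching)
  console.log('connected to', peerInfo.publicKey.toString('hex'))
  socket.write('hello')
  socket.on('data', (data) => console.log('peer says:', data.toString()))
})

// Announce ourselves and look up other peers on the topic
swarm.join(topic, { server: true, client: true })
await swarm.flush() // wait for the DHT announce/lookup to settle
```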

A p2p applications network would benefit hobby and FOSS programmers by removing the need to host applications as services. End-users would install applications on their own devices and connect to each other directly. This would save programmers time and money, and it would avoid the common problem of services shutting down and taking user-data with them. In some cases, it could lead to simpler software, since scaling would only be required for features which actually depend on scale, as opposed to always needing to scale just to support user growth.

The BitTorrent variants (e.g. Hypercore) are designed to scale horizontally through seeding. This feature can reduce bandwidth costs for hosting services, though the more interesting benefit is that it lets self-hosted content economically withstand a surge of traffic. Arguably seeding is a necessity if self-hosted content has any potential to go viral; otherwise the average user would need to pay for a CDN.
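
In practice, becoming a seeder mostly means replicating somebody else's log, roughly like this (a sketch assuming the hypercore and hyperswarm modules; the writer's public key would be shared out-of-band, and the download-range call may differ by version):

```ts
import Hypercore from 'hypercore'
import Hyperswarm from 'hyperswarm'

// The writer's public key, shared out-of-band (e.g. pasted from a link)
const publicKey = Buffer.from(process.argv[2], 'hex')

// A read-only copy of someone else's log
const core = new Hypercore('./seed-storage', publicKey)
await core.ready()

const swarm = new Hyperswarm()
swarm.on('connection', (socket) => core.replicate(socket))

// The discovery key is derived from the public key, so all readers and the
// writer meet on the same topic without revealing the key itself
swarm.join(core.discoveryKey, { server: true, client: true })

// Fetch every block and keep serving it to other peers
core.download({ start: 0, end: -1 })
```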

Free software activists stand to gain a lot from peer-to-peer. Client/server leads to closed, black-box programs. While p2p apps don’t have to be free and open-source, they do store user-data on the local device and (ideally) communicate via an open protocol, adding political power to end-users. I believe the decentralization effort of today is the modernization of the FOSS efforts of yesterday.

Inherent challenges

Peer-to-peer software faces a lot of novel challenges. The protocols themselves are challenging and time-intensive to develop; from personal observation, I’d say these protocols took 3 to 5 times longer to mature than most of us expected (though I am so happy with the quality of Hypercore that I consider this a worthwhile cost). The protocols also require multiple adjacent solutions, including key-management, data-schema agreement, reliable backup, reliable data availability on the network, user identity, and so on.

Taking full advantage of P2P networks as an open stack, particularly when shared data-storage is involved, requires permissions schemes for sharing access to data, identity, and so on. This can be challenging to design while the technology is immature.

P2P networks demand more resources than usual from the user device; it’s common to idle at 5–10% CPU as various peers are polled and synced, and disk usage will necessarily increase. This is why I generally advise against cross-network integration between IPFS, SSB, and Hypercore — most user devices won’t have the resources for them all.

That said, I’ve lately been convinced the ideal deployment model for p2p might be through dedicated hardware — home servers or “personal data servers” in the cloud. This would counteract some of the novel technology challenges (relieving pressure around device pairing and management, for instance) and would increase the resources available for applications and the protocol stack(s).

Viewed from the opposite direction, peer-to-peer may be the solution that the home server and home automation market needs. Home servers typically lack the means to interconnect with other home servers, limiting them to local file storage/backup, media hosting, and home automation. This leaves a variety of applications on the table (file/photo sharing, mail, document authoring, etc.). A peer-to-peer home server could run any public-facing service a user wants (effectively a private AWS) and would enable other users to interact via their own home servers without sacrificing data ownership.

There is also the common grievance with IoT devices: their habit of connecting through a third-party cloud with all kinds of sensitive information. That, too, could be solved through p2p networking.

Given the relatively low cost of consumer hardware, it seems like a good bet that home servers will be widely adopted over the next five years. This is how I plan to look at P2P in the coming years.

Product/market fit and the need to take risks

Product/Market Fit (PMF) is achieved when your product solves people’s actual needs. Anybody attempting to bring new technology to the world will need PMF.

Decentralization-focused developers face unique challenges in reaching PMF. We’re often looking for open alternatives to software we already use, which means we tend to create undifferentiated clones of market leaders. (I’m guilty of this.) We may also focus on the broader goal of providing a platform for free & open-source software. This runs headlong into the “platform before product” problem, in which the lack of immediate utility slows adoption, while the lack of adoption slows developer interest in the platform.

New technologies are inherently risky, so it may not be appealing to hear that products built on them need to take additional risky bets, but I’m quite sure they must. Peer-to-peer tech is not inherently differentiating; at least, not for clones of existing products. We should instead be looking for novel ideas and use-cases which may not share a direct relationship with the technology itself.

The first iPhone was an amalgamation of new technology bets (touchscreens, wireless internet, mobile computing) and would have been a less remarkable device if it had only improved cell phones in one of those areas. Not every product needs to be an iPhone, but productizing peer-to-peer seems to need similar kinds of holistic product thinking.

The Gemini project is commendable for asserting a strong viewpoint, even if it’s a view that many users might not agree with. What stands out about Gemini is that it’s willing to sacrifice “obviously good” features of the Web, such as images and videos, in order to maintain a simpler, cleaner experience. I think there’s a broad desire to reduce the noise of digital life, and Gemini is staking out a strong position within that desire.

As useful as CSS and JS have been for cheaply delivering applications over the Internet, we’re now mired in the downsides of a “publisher over reader” preference: slow page-loads, user tracking, ads, prompts to download the mobile app, and so on. This is a reflection of the Web’s divided purpose between a document browser and an applications platform. I’m inclined to say that we could peel the document-browsing use-case off and apply a “reader first” mentality. This is especially fitting when end-user software freedom is a priority, since software freedom is an inherently reader-first mentality.
