The Convergence Stack

At Outlier Ventures we support the development of the next era of digital infrastructure: the Convergence Stack.

The Convergence Stack is a set of privacy-protecting, peer-to-peer, and open-source technologies that will decentralize the cloud and unbundle the Internet platforms.

Layer 6: Applications
- Applications (Botanic, Evernym)
- Marketplaces (SEED, IOTA, Ocean Protocol)
- Learning (Fetch.AI)

Layer 5: Interfacing
- UX (Botanic, Evernym, Brave)
- API (SEED)
- Middleware (Ocean Protocol, Aragon)

Layer 4: Verification
- Authentication (Sovrin)
- Query (Haja Networks, FOAM, Chainlink)
- Compute (Agoric, Golem)

Layer 3: Routing
- Scaling
- Databases (Haja Networks)
- Bridges (Agoric, Cosmos)

Layer 2: Distribution
- Ledgers (IOTA, Fetch.AI, Sovrin, Bitcoin, Ethereum)
- Storage
- Networking (Rightmesh)

Layer 1: Hardware
- Hardware Processing
- Hardware Storage
- Hardware Networking

We have identified 18 technologies that we believe are needed to build the next era of digital infrastructure.

They are: Applications, Marketplaces, Learning, UX, API, Middleware, Authentication, Query, Compute, Scaling, Databases, Bridges, Ledgers, Storage, Networking, Hardware Processing, Hardware Storage, and Hardware Networking.

Applications

Applications, apps, or decentralized apps (dapps) are the programs that users interact with.

Dapps are different from today's web or mobile apps because they use the decentralized technologies and networks of the Stack. Dapps will look and feel very similar to web and mobile apps but will have a few unique features under the hood. The peer-to-peer, decentralized architecture of the Stack makes new types of activity possible: digital asset ownership, transferable rewards, self-sovereign identity, peer-to-peer transactions, and autonomous agent activity. This can all get abstract fast; the key is to make applications easy to use, convenient, and useful.

As we move beyond pioneers and early adopters, few customers will care about decentralization, trustlessness, self-sovereignty, or blockchains. People care about what they can now do that makes their lives easier. Today, the vast majority of applications target the core crypto customer base. Despite the broad scope of potential applications, most apps focus on decentralized finance (DeFi) and gaming. People who already understand and hold crypto have new ways to store, swap, trade, lend, borrow, and stake it. The DeFi startup scene is building out an ecosystem of compatible services that make crypto easier to use, which will be vital as the industry grows.

We are interested in applications that focus on consumer benefits rather than the features of the technology. We want to support startups building applications that are laser-focused on onboarding billions of users. Applications need to be as easy to use and navigate as modern web and mobile apps, and must address the underlying complexity of decentralized networks: holding and backing up private keys, recovering forgotten passwords, and dealing with hex addresses. We think applications have an opportunity to hide much of that complexity. The pitch should be simple: just like the apps you know and love, but your identity and data are owned by you, and you are rewarded for usage.

Marketplaces

We see marketplaces as a specific type of application that will flourish in the Convergence Stack because of the shift from a client-server to peer-to-peer technical architecture and the blockchain-based decentralized transaction network.

Marketplaces like Uber, Airbnb and TaskRabbit thrived in the Web2 era, using real-time, high-performance tools to match crowdsourced supply and demand. As these services continue to scale, there has been a backlash against the power these platforms have over their users. And as the hammer of regulation comes down, as is always the case with information and communication technologies, the technologies that will disrupt these services and others are beginning to be deployed.

Open-source, decentralized, peer-to-peer storage, compute, networking, and databases will create an environment in which marketplaces can be built where all users are rewarded equitably for their contributions. In the case of a P2P ride-sharing service, riders and drivers would be rewarded for using the network, and the underlying platform would not be able to extract rent from it. We see all stakeholders as direct network participants who benefit from using the service by holding a stake in the network in the form of a token. All human trade is mediated by marketplaces where buyers and sellers come together. In the real world this happens physically, person to person; the owner of the land or building in which the market takes place may be paid for access, but the market owner does not mediate every transaction.

Web3 marketplaces will move us back to a more natural state of affairs in which trade is peer-to-peer. We are looking for projects that not only seek to replace current marketplace platforms, but to develop new marketplaces around entirely new assets like digital assets, non-fungible tokens (NFTs), and data products. Today there is no way to trade machine or sensor data, nor to format, price, and trade personal or corporate data products in a private and confidential way. We want to support projects building marketplaces that make it easy to issue, manage, and exchange digitized assets and wholly new digital-only assets.

Learning

Machine learning is probably the single most important technology today. Like electricity, machine learning will be embedded in all our products and services and will become invisible.

Nobody today asks: does it use electricity? Machine learning will become a utility tapped into by all our digital services. Ownership of, and access to, machine learning capacity therefore becomes one of the most valuable positions in the 21st century. Machine learning capacity can be split into the algorithm, the hardware, the data, and the trained model. In modern deep learning, the algorithms and hardware are important, but the data and the resulting model are the true assets. The problem with Web2 is that the category leaders in web search, social media, and e-commerce have all the data; Web3 offers the possibility of breaking open these data monopolies.

We expect to see a variety of secure, privacy-preserving, and collectively-owned tools used to deliver machine learning. Techniques like federated learning can take the model to the data rather than bringing all the data into a single place owned by a monopoly. Cryptographic tools like differential privacy, multi-party computation, and homomorphic encryption can all be used so that customers can utilise machine learning services without revealing their data, while maintaining control of the trained models. Access to machine learning shouldn't be conditional on handing over all your data in plaintext and losing any ability to control or monetize the trained model. We are already supporting Fetch.ai, which has developed an entire system designed for software agents to perform machine learning as well as directly exchange value.
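To make the federated learning idea concrete, here is a toy sketch (not any project's actual API): each client fits a one-parameter model on its own private data with a few local gradient steps, and only the updated weight, never the raw data, is shared with the server and averaged.

```python
# Toy sketch of federated averaging: each client fits a one-parameter
# model (y = w * x) on its own data via local gradient descent, and only
# the updated weight -- never the raw data -- is sent to the server.
# All names and numbers are illustrative.

def local_update(w, data, lr=0.01, epochs=50):
    """Run gradient descent on one client's private (x, y) pairs."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    """One round: broadcast the model, average the returned weights."""
    updates = [local_update(w_global, data) for data in clients]
    return sum(updates) / len(updates)

# Two clients whose private data both follow y = 3x.
clients = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = 0.0
for _ in range(10):
    w = federated_round(w, clients)
print(round(w, 2))  # converges towards 3.0
```

Real deployments typically layer secure aggregation and differential privacy on top of this basic loop so the server cannot reconstruct individual client updates.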

Web3 gives us the opportunity to break up the data monopolies and enable peer-to-peer machine learning, in the same way we want to see all digital infrastructure become peer-to-peer. Customers would only reveal as much data as they wanted and could be rewarded for contributing resources like raw data, labelled data, compute, and algorithms. Contributors are rewarded for adding value and receive a proportionate share of the value delivered. Think of it as a machine learning value ecosystem in which value isn't all captured by a platform in the middle but distributed to all stakeholders based on the value of their inputs. We think this is one of the most important parts of the Convergence Stack, and we are keen to support any project building a network for value-aligned machine learning.

UX

User experience in The Stack refers to a range of interfaces that make interacting with Web3 protocols and technologies more convenient and intuitive.

UX includes the browsers and mobile apps that dominated Web1 and Web2, but also emerging interfaces that will become more important, such as augmented reality (AR), virtual reality (VR), voice user interfaces (VUIs), conversational user interfaces (CUIs), and NFC tags. Until recently, the vast majority of crypto products were not designed for the average consumer. They were for other developers, mainly other crypto developers, which made them challenging for non-technical people to use. Web3 experiences have the additional UX challenge of managing tokens. Many Web3 experiences will involve users taking custody of their own identity, data, and money. This is at odds with almost all other experiences in the modern digital world, in which a third party is responsible and liable for users' identity, data, and money. Designers have a task in educating users around this in a low-friction way.

Very few people around the world are used to paying 0.0001 for something, or remembering to add gas fees to transactions, or safeguarding their own passwords. Tools like Radical Address and ZenGo are showing the way here. Managing digital assets in an easy and seamless way from a dashboard is easy to picture. But what about interfaces to manage all your data? Or a dashboard to manage all your digital bots? How can we build interfaces that surface relevant information without overwhelming users?

Beyond making token-based products easier to use, there is a more interesting opportunity in how token products fit into new interfaces such as conversational user interfaces and augmented reality. Browsers and mobile apps aren't going anywhere for many years, but we expect a plethora of exciting new experiences supported by more advanced natural language processing and computer vision. Web3 is a web that needs to support all endpoints, so we want to see more projects exploring how chatbots can make Web3 apps easier, or how augmented reality platforms can be designed to be peer-to-peer and decentralized, supported by a token economy. Different interfaces will suit different interactions, and we want to see entrepreneurs thoughtfully use UX to make interactions seamless and ubiquitous rather than intrusive and addictive.

API

APIs are simply a set of specifications that make it easier for developers to build software.

APIs make it easier for developers to build apps. The Web is not useful if applications and protocols cannot communicate with each other; imagine a neighbourhood with 100+ houses and shops that are not interconnected. To make Web3 software usable, protocols and applications need to be able to request and update data in real time. Developers need this to happen without too much code on their side; there is no need to reinvent the wheel every time they build an app. Every time a device connects to the Internet, data is retrieved from a server. We have over 20,000 live APIs, developed in just the past decade. Yet we currently have over 100 blockchains and over 300 exchanges and wallets that are not connected, limiting the usefulness of the Stack.
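For a flavour of what a ledger API looks like in practice, the sketch below builds and parses an Ethereum JSON-RPC message, the request/response convention that hosted node services expose. The endpoint URL is a placeholder and no network call is made; the response body is a typical example.

```python
import json

# Sketch of the JSON-RPC convention that ledger APIs expose for
# Ethereum nodes. We only build and parse the messages here; ENDPOINT
# is a hypothetical placeholder, and no network call is made.
ENDPOINT = "https://mainnet.example/v3/YOUR-PROJECT-ID"  # placeholder

request = json.dumps({
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",  # standard Ethereum JSON-RPC method
    "params": [],
    "id": 1,
})

# A typical response body: the block height as a hex-encoded quantity.
response = '{"jsonrpc": "2.0", "id": 1, "result": "0x89abcd"}'
block_number = int(json.loads(response)["result"], 16)
print(block_number)  # 9022413
```

The value of API providers is precisely that developers can send requests like this to a reliable hosted endpoint instead of running and indexing their own node.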

The ecosystem for APIs in Web3 centres on three main areas: ledgers (including infrastructure like servers, nodes, and integration points), exchanges and wallets, and market and trading data. Ledger API projects like Infura, Pocket, CryptoAPIs, Blockcypher, Moesif, and Infinito are jostling for market share. Most of the action is with exchanges, wallets, and market and trading data APIs, because there is an immediate market for fast, reliable access to data in these systems. However, there is a general lack of consistent developer experience across Web3 projects outside of trading and finance. Projects like SEED are developing specific APIs enabling frameworks for building chatbots. Others like Fission Codes are focused on developing APIs that connect Web2 software like Wordpress or Heroku with Web3 tools like IPFS and Ethereum. There is plenty of room for improvement, especially around unified gateways. We are not yet seeing extremely narrowly specialised companies, though a good number focus only on Bitcoin or only on Ethereum. As the ecosystem matures, we will start seeing dapps that are fully dependent on third-party APIs. Just as we measure dapp usage today, expect to see lists of startups using popular Web3 APIs soon. The Stack needs more APIs to connect all the parts together, and there is an opportunity for a team to build Convergence Stack APIs that tie all the microservices together.

Middleware

Also known as service-layer protocols, middleware protocols are a complex bundle of APIs sitting below the interface layer.

Middleware provides a range of services to developers to make it easier and faster to develop and deliver applications and marketplaces. Middleware often abstracts away all the complexity of interacting with the lower level protocols and delivers tools that make it much easier to just focus on solving customer problems.

We are seeing an explosion of market- and application-specific middleware: Aragon for DAOs, Loom for games, Ocean Protocol for data markets, and 0x for decentralized exchanges, for example. Each benefits from the network effects surrounding a ledger, as with Ethereum, or a market, as with the DeFi movement. MakerDAO benefits from the Ethereum community; Dharma, Set, 0x, Compound, and dYdX benefit from MakerDAO and Dai. Equally, a lot of this middleware is currently constrained by the performance limitations of Ethereum, but as more layer 1 ledgers come online, we expect to see hundreds of middleware solutions that plug into different ledgers. There is going to be an interesting interoperability challenge for middleware as some projects move to WASM, others to direct LLVM exposure, and others to custom state transition machines. What good is a DAO that only works on one ledger? Instead of regulation limiting global scope, we might see a lack of interoperability limiting global scale.

We are particularly interested in projects developing middleware to solve problems that have gone untackled. Middleware can be seen as a coordination mechanism. In the same way an organisation is a collection of people formed to solve customer problems, and a consortium is a collection of organisations formed to solve an industry problem, middleware is a collection of APIs formed to solve software problems. If this is the case, middleware is arguably the most important part of the Convergence Stack, turning generalized protocols into application-specific tools. What does middleware for education look like? Or middleware for energy? Construction? Robotics? Virtual reality?

Authentication

Authentication is the process of determining whether someone or something is, in fact, who or what it declares itself to be.

This is related to the subsequent process of authorization, the act of checking whether the authenticated person or software has permission to access a particular file, program, or system. As everyone knows, the Internet was never designed with identification in mind, leading to the old adage: "On the Internet, nobody knows you're a dog". But dogs aren't allowed to do online banking, and advertising to them is a waste of money. So either "trusted" third-party organisations and central verifiers sprang up to identify and authenticate users, or every service provider had to check identity for every new user. This process is time-consuming, expensive, and creates a honeypot of data at the verifier for hackers. And for users, it has led to a situation in which they have lost control of their identity and personal data.

We believe digital identity cannot rely on a trusted third party. Individuals need to own their own identity and be self-sovereign. This means projects in the stack need to use open standards such as decentralized identifiers (DIDs) and the verifiable credentials standard. This belief led us to support the Sovrin network and Evernym. Identity will increasingly be needed for machines and agents as well as humans, and we believe the same identity systems can scale to include unique identifiers for non-human entities allowing new levels of automation and economic activity to take place.
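The verifiable-credentials flow can be sketched as follows: an issuer signs a set of claims once, and any verifier can later check the credential without contacting the issuer. This toy uses an HMAC over a shared secret as a stand-in for the public-key signatures, tied to a DID, that real implementations use; all names are illustrative.

```python
import hashlib
import hmac
import json

# Toy illustration of the verifiable-credentials flow. A real system
# uses public-key signatures resolved via the issuer's DID; an HMAC
# over a shared secret stands in for that here.

ISSUER_KEY = b"issuer-secret"  # stand-in for the issuer's signing key

def issue(claims):
    """Issuer: sign a canonical encoding of the claims."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": sig}

def verify(credential):
    """Verifier: recompute the proof and compare in constant time."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

cred = issue({"id": "did:example:alice", "over18": True})
print(verify(cred))               # True
cred["claims"]["over18"] = False  # tampering invalidates the proof
print(verify(cred))               # False
```

The key property is that verification requires no call back to the issuer, which is what removes the trusted third party from each interaction.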

Query

For decentralized systems to be useful, we need to get data in and out. Applications need to solve two problems: querying on-chain data, and querying off-chain data.

For Web3 apps to compete with traditional web apps, they need to be able to query data quickly. And to fulfil the promise of truly trustless applications, we must get accurate and verifiable data into applications from off-chain sources. First, querying on-chain data. Today's web applications already have robust querying capabilities from all the major SQL and NoSQL databases, making it easy and fast for applications to provide information to users: just use MongoDB or RethinkDB and get efficient, functional, and composable indexing and querying. We currently lack equivalent tools, so developers are forced to build their own fragile indexing servers or to rely on Infura, which was never designed for indexing and query services. The Graph and Haja Networks are examples of projects aiming to solve this problem.
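The hand-rolled indexer pattern described above can be sketched like this: scan every block once, build an in-memory index from address to transactions, and answer queries from the index instead of re-scanning the chain. The chain here is synthetic sample data.

```python
# Sketch of a naive per-app chain indexer. Block and transaction
# fields are illustrative sample data, not a real chain format.

chain = [
    {"number": 0, "txs": [{"from": "alice", "to": "bob", "value": 5}]},
    {"number": 1, "txs": [{"from": "bob", "to": "carol", "value": 2},
                          {"from": "alice", "to": "carol", "value": 1}]},
]

def build_index(blocks):
    """One linear pass over the chain: address -> list of transactions."""
    index = {}
    for block in blocks:
        for tx in block["txs"]:
            for addr in (tx["from"], tx["to"]):
                index.setdefault(addr, []).append(tx)
    return index

index = build_index(chain)
print(len(index["carol"]))  # 2: carol appears in two transactions
```

The fragility the text alludes to comes from everything this sketch omits: chain reorganisations, restarts, event decoding, and keeping the index in sync with new blocks, which is exactly what projects like The Graph aim to provide as shared infrastructure.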

Second, querying off-chain data. Unlike centralized web services, decentralized applications can't just pull data from anywhere. The data in blocks on the blockchain has been mutually agreed by network participants, so any external data must go through a verification process before being used in a smart contract or added to a block. This is the realm of oracles, or trusted data feeds. Projects are utilising various cryptographic and game-theoretic techniques to prove where data comes from and to incentivise data to be correct. The market has already split into more general-purpose oracle networks like ChainLink, Shintaku and Oraclize, and networks designed to verify particular types of data, such as location with FOAM, XYO and Fysicial. We expect plenty more experimentation in this field, combining cryptographic tools with well-designed incentive systems to make submitting incorrect data sufficiently costly.
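A minimal sketch of the incentive idea, assuming an illustrative median-plus-slashing rule rather than any specific network's design: several staked reporters submit a value, the network takes the median as consensus, and reporters far from consensus lose part of their stake.

```python
import statistics

# Toy oracle aggregation: take the median of staked reporters'
# submissions and slash reporters who deviate too far from it.
# The tolerance and penalty parameters are illustrative.

def aggregate(reports, stakes, tolerance=0.05, penalty=0.5):
    """reports and stakes are dicts keyed by reporter id."""
    consensus = statistics.median(reports.values())
    for who, value in reports.items():
        if abs(value - consensus) > tolerance * consensus:
            stakes[who] *= (1 - penalty)  # slash outlier reporters
    return consensus, stakes

reports = {"a": 100.0, "b": 101.0, "c": 250.0}  # "c" reports bad data
stakes = {"a": 10.0, "b": 10.0, "c": 10.0}
price, stakes = aggregate(reports, stakes)
print(price, stakes["c"])  # 101.0 5.0
```

This is the "make incorrect data costly" logic in miniature: a lone dishonest reporter cannot move the median, but does lose stake for trying.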

Compute

Compute refers to projects aiming to reduce the reliance on cloud computing platforms by offering peer-to-peer, privacy-preserving, decentralized, and open-source software.

Blockchains are a data structure, not a computer. Smart contract platforms like Ethereum offer the ability to run smart contract programs but are limited in capacity and capability. To run complex programs and be broadly useful, we are seeing experimentation at the smart contract layer and with so-called "off-chain" compute. Instead of participants computing and verifying results on the ledger, calculations are executed by resources outside the blockchain. But now we have recreated off-chain the same problem that was solved on-chain by decentralized consensus algorithms: how can we run calculations in a way that is resistant to censorship and can be trusted?

You can take the Agoric approach and attempt to make smart contracts more secure by reducing the complexity of the contracts and using a more secure language. This doesn't solve the broader trusted compute problem, but it helps make smart contracts more useful. The broader trusted compute problem is being tackled by many projects coming from various directions. Golem is offering a peer-to-peer computing platform. Enigma is creating secret nodes that can run computations over encrypted data. And Truebit is using verification games to check computational outcomes. Different projects are using a variety of cryptographic tools, like homomorphic encryption, differential privacy, multi-party computation, zk-SNARKs, zk-STARKs, and bulletproofs; ultimately, they are all aiming to deliver software that can run computations in a private and verifiable way.
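The naive baseline for verifying off-chain work is full recomputation, sketched below: a solver returns a result, and any verifier can re-run the computation and reject a wrong answer. Verification-game approaches refine this so a challenger only re-executes a small disputed step on-chain; the sketch shows only the naive version, with an illustrative task.

```python
# Verify-by-recompute: the simplest (and most expensive) way to check
# an outsourced computation. The task function is illustrative.

def task(n):
    """The computation outsourced off-chain: sum of squares below n."""
    return sum(i * i for i in range(n))

def solve(n, cheat=False):
    """An off-chain solver, optionally returning a wrong answer."""
    result = task(n)
    return result + 1 if cheat else result

def verify(n, claimed):
    """Full recompute: correct, but as costly as doing the work."""
    return claimed == task(n)

honest = solve(1000)
print(verify(1000, honest))                   # True
print(verify(1000, solve(1000, cheat=True)))  # False
```

The whole research area exists because this baseline defeats the point of outsourcing: the goal is verification that is much cheaper than re-execution, whether via interactive dispute games or succinct proofs.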

Solving private and verifiable compute will require orchestration across numerous other services, including query, learning, APIs, ledgers, and storage, all the way down to the hardware layer with hardware processing. We expect to see flexible tools that offer customers different privacy and security guarantees depending on their needs. We want to see more projects not just focusing on the technology, but with a clear narrative and go-to-market strategy articulating to consumers and businesses exactly why they can't trust their current cloud provider and what benefits come from private, peer-to-peer compute.

Scaling

The ‘Scaling Trilemma’ refers to the trade-offs we make when we optimize for one or two blockchain properties: decentralization, security, and performance.

Scaling solutions are technologies that increase the performance of a blockchain, ideally with only a small decrease in security or decentralization. Some applications simply don't need broad decentralization and can therefore focus on high performance and security guarantees, as is the case with many private blockchain solutions like R3 Corda. But to support applications that require censorship-resistance and decentralized transaction validation, public chains need to find novel ways to increase performance. Some projects are tweaking the ledger itself by experimenting with non-proof-of-work consensus algorithms, such as Ethereum with proof-of-stake, Chia with proofs of space and time, or Tezos with delegated proof-of-stake. Others like Coda are using zk-SNARKs to reduce the size of the blockchain itself.

Another interesting approach is to increase performance using scaling tools, or so-called Layer 2 solutions. Here a second network is created which connects to the main Layer 1 blockchain and is configured to synchronise intermittently using a two-way peg. Transactions can be validated much faster, and depending on the particular implementation, different levels of transaction complexity can be handled. The technical differentiation between sidechain and state channel implementations varies, and it is an open question which particular approach will win out, or indeed whether different approaches will be required for different use cases. Lightning is becoming ever more useful on the Bitcoin network, and Skale, Loom, and Elph, among many others, are pushing forward the usability of Ethereum Layer 2. Bloxroute is another interesting scaling solution, which aims to speed up transaction verification by propagating blocks faster using a network of servers and a token incentive scheme. Fundamentally though, to be globally useful, blockchains have to get faster. We want to support projects finding novel solutions to make this a reality.
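A minimal sketch of the state-channel idea behind Lightning-style Layer 2, with signatures and the on-chain contract elided: two parties lock deposits on-chain, exchange any number of balance updates off-chain, and only the final mutually agreed state is settled on the ledger.

```python
# Toy two-party payment channel. In a real channel each update is a
# mutually signed state and the deposits are locked in an on-chain
# contract; here only the accounting structure is shown.

class PaymentChannel:
    def __init__(self, deposit_a, deposit_b):
        self.balances = {"a": deposit_a, "b": deposit_b}
        self.updates = 0  # off-chain updates; none touch the chain

    def pay(self, sender, receiver, amount):
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.updates += 1

    def close(self):
        """Settle the latest agreed state on-chain as one transaction."""
        return dict(self.balances)

ch = PaymentChannel(10, 10)
for _ in range(5):
    ch.pay("a", "b", 1)   # five payments, zero on-chain transactions
ch.pay("b", "a", 2)
print(ch.close(), ch.updates)  # {'a': 7, 'b': 13} 6
```

The throughput win is visible even in the toy: six payments collapse into two on-chain transactions (open and close), regardless of how many updates happen in between.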

Databases

Customers’ data is stored in centralized databases that face challenges such as security breaches, reduced fault tolerance, and a lack of interoperability.

Users do not own their data, and it is very hard to move that data to a different service. Decentralized databases, just like ledgers, compute, and storage, are moving to a peer-to-peer delivery model with no single database administrator and data distributed across several databases. With the advent of fog computing, the cloud extends to our personal devices and Internet of Things devices that perform part of the compute. We need to improve our database systems to facilitate easier storage and access of data across devices and databases in an error-free way.

Decentralized databases are not controlled by a single administrator, which addresses the problem of data ownership by effectively giving control of the data back to the user. They ensure structured data is stored and shared in an accessible, peer-to-peer way, with the aim of making the decentralized web faster and interoperable. By distributing data over multiple databases, we benefit from increased fault tolerance and interoperability. OrbitDB is building a decentralized database protocol which enables users to build decentralized databases and to own and control their data. The protocol integrates with existing databases, unlocking them for the decentralised web, and enables interoperability between structured data sources. It also provides a marketplace for decentralized database services that allows providers to monetize their software while data-hungry businesses get straightforward access to the widest range of services.

Bridges

Also known as interoperability, bridges are most commonly associated with blockchain-to-blockchain communication but can refer to any technology that enables two or more systems or networks to communicate.

Bridges enable the breaking down of walled gardens and allow individuals to use multiple platforms or protocols with relative ease. For blockchains, atomic swaps are an active research area, but traction is coming predominantly from the likes of Cosmos and Polkadot. These networks are taking slightly different technical and governance approaches, but if successful, they will make it easier for value and data to be shared across blockchains, which should, in theory, make the entire Convergence Stack more efficient and usable.

Interoperability is always a tension in software, and incompatibility is often the result of misaligned incentives, a lack of standards, or businesses actively deciding to create their own standards. Bridges are especially important in the blockchain space to avoid rebuilding the same closed financial system we already have and replicating the Web2 data silos that have emerged. Bridges help solve this problem by allowing blockchains and other networks to continue to innovate independently while benefiting from the security and user base of a larger network of participants. At Outlier we are creating our own bridges between projects, such as ANVIL (a bridge between Sovrin and Fetch) and H2O (a bridge between Ocean and Haja Networks). We expect to see bridges develop across the Convergence Stack, making it easy for systems to communicate and transfer value up and down the entire stack.

Ledgers

Bitcoin and the Bitcoin network ushered in a renewed interest in distributed ledgers, peer-to-peer networking, and decentralized consensus algorithms.

Ethereum added a Turing-complete scripting language to broaden the range and complexity of applications that could utilise blockchain functionality. From there, we have had an explosion of experimentation in designing for different performance, security, and decentralization requirements. Projects like Ethereum, Dfinity, Hashgraph, Sovrin, and Fetch.AI are differentiating on scalability, security, and decentralization, making different trade-offs depending on the targeted applications. Some are prioritizing scalability, like IOTA and Zilliqa; others security, like Tezos and Cardano; and realistically only Bitcoin is committed above all else to maximising decentralization. Private chains like Corda and Quorum, and other blockchain-as-a-service (BaaS) solutions, focus on enterprise customer requirements, where, with known participants, decentralization is less of a concern than security and scalability. Cosmos and Polkadot are making it even easier for developers to build their own custom blockchains that are configurable but benefit from the security protections of a larger network.

We still expect to see a few new ledger projects over the next few years, but most of the innovation will be upgrading existing ledgers in terms of privacy, interoperability, security, and performance. The open-source nature of the industry will make technical differentiation increasingly difficult, instead, we expect network effects to begin to take hold as applications attract users which in turn attract developers. Ethereum has for a long time had the developers but has lacked a real consumer killer-app. Libra certainly has the customers, although appears more focused on financial services than the broader world computer market. We think in the short to medium term there will be a role for numerous ledgers that have technical, crypto-economic, and governance designs specific to the customers, applications and markets they are targeting.

Storage

Storage refers to how and where information is stored and the rules governing how it can be accessed.

Instead of data being stored in the ‘cloud’ on the servers of a select few companies, new storage architectures aim to store data on peer-to-peer decentralised networks. Instead of data sitting on a trusted third-party server, data is stored on a p2p network and private keys are retained by the owner. A blockchain-based storage system prepares data for storage by dividing it into segments, encrypting them, generating a unique hash for each, and then creating extra copies of each segment. These segments are then distributed across the network of nodes. Utilising distributed storage can improve security, management, efficiency, and uptime. Projects like Maidsafe, Storj, and Sia have been building solutions for many years now, and obviously the elephant in the room is Filecoin and its huge ICO back in 2017.
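The segmentation pipeline described above can be sketched as follows. The chunk size, placement rule, and node names are illustrative, and per-segment encryption is elided; the point is that each segment is addressed by its own hash and replicated across several nodes.

```python
import hashlib

# Toy content-addressed storage pipeline: split a blob into fixed-size
# segments, address each segment by its SHA-256 hash, and assign
# replicas to nodes. Encryption of each segment is elided for brevity.

def store(data, nodes, chunk_size=4, replicas=2):
    placement = {}  # segment hash -> list of nodes holding a copy
    for i in range(0, len(data), chunk_size):
        segment = data[i:i + chunk_size]
        digest = hashlib.sha256(segment).hexdigest()
        # simple deterministic placement: spread replicas across nodes
        chosen = [nodes[(int(digest, 16) + r) % len(nodes)]
                  for r in range(replicas)]
        placement[digest] = chosen
    return placement

placement = store(b"hello world!", ["n1", "n2", "n3"])
print(len(placement))  # 3 segments of 4 bytes each
```

Because each segment is named by its hash, any node can prove it holds the right bytes, and the owner can reassemble the file from whichever replicas are online.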

Storage, unlike many other Web3 products today, has a very clear value proposition: cheaper and faster. There is a huge amount of inertia to overcome, especially for enterprises that have spent the last five years moving to the cloud. The last thing they want to hear is: actually, now you need to put all of your data on other people's devices. Not only are there issues of reliability and quality assurance, but there is a cultural shift required in the CIO office from client-server to peer-to-peer. Speed and cost will have to be 10x better to even justify considering the risks associated with p2p storage. That said, as privacy moves higher up the purchasing criteria for consumers, p2p storage could become even more attractive, especially with token incentives. When marketplaces are built on top of decentralized storage, a market dynamic is introduced in which providers compete for users, ensuring economic rents are not charged. Web3 will connect all sorts of endpoints, and as the variety of endpoints grows, each will have storage and compute capacity. Connecting all this idle IoT storage in a network and finding novel ways to generate value is an area we believe is ripe for further experimentation.

Networking

As described with hardware networking, there is a need to ensure telecommunications networks do not become a point of trust in the Convergence Stack.

With open-source hardware projects working on networking equipment, the software that runs on that equipment is the next component that needs to be decentralized. Software-defined networking (SDN) and network function virtualization (NFV) are making networks more flexible and configurable, so the obvious next question is how to make sure the software is both open-source and cryptographically secure.

Projects like OpenFlow, the Facebook Open Switching System (FBOSS), OpenBTS, and UPSat are all working to make open-source software as commonplace in networks and data centres as Linux is in servers. And then there are crypto-related projects bringing cryptographic techniques, and in many cases crypto token incentive designs, to build fully decentralized wireless communication networks. Projects like Rightmesh, Althea, Helium, and Global Mesh Labs are all building peer-to-peer solutions that aim to reduce the reliance on, or even remove the need for, mobile network operators. All of these providers will need to make decisions around how best to deliver usable products today while using as much open-source and peer-to-peer tooling as possible. We think there is a huge opportunity to support new applications that require network edge capacity, as well as to connect the unconnected.

Hardware Processing

In the end, software is only as secure as the hardware it runs on.

The goal of the Convergence Stack is to avoid central points of trust in any technology or protocol, and this extends to hardware too. Software should run on open-source and cryptographically-secure hardware. The cryptographically-secure part is progressing, with ARM's TrustZone and Intel's SGX already deploying secure enclaves in chips. These secure enclaves prevent the chip's main processor from gaining direct access to sensitive data. But vulnerabilities persist, and because the secure enclaves are not open-source, their designs lack transparency and verifiability.

To make secure computation a reality, the design and implementation of hardware secure enclaves need to be open-source. Designs should be RISC-V based, ideally with no reliance on closed-source code. We want to see more projects applying open-source hardware principles rather than simply relying on AWS or Google Cloud, thinking about security and confidentiality all the way down to the hardware. Oasis Labs' development of Keystone is a great example. If enterprise, public sector and personal data are to be moved onto blockchain-based systems, open and secure hardware processing is a foundational component of any successful solution.

Hardware Storage

Equally important as secure computation is secure hardware storage of private keys and digital assets.

Software-based storage will not be sufficient as the industry matures and the value of private keys and digital assets grows. We already have plenty of hardware wallet providers: for consumers from the likes of Trezor, KeepKey and Ledger, and for businesses from Anchorage, Coinbase, and BitGo. There is a trade-off between security and convenience, especially for consumers, and we expect to see experimentation around balancing self-custody and user experience. Consumers are not used to self-custodianship, so novel insurance products and offline vault solutions will be required. Otherwise, a new class of vendors offering private key custodianship services will emerge, undermining the core value proposition of peer-to-peer solutions offering self-custody.

Ultimately, we want to get to a world in which hardware storage is integrated into smartphones and wearables in much the same way the Apple A7 secure enclave stores TouchID and FaceID encrypted biometric data. Potentially, we will see separate secure enclaves for storage and processing on the same device, combined with peer and social recovery like HTC's social key recovery feature in its Exodus phone. As with processing, we want to see projects think deeply about the hardware that is used for their products to avoid introducing a trusted service provider into the solution.

Hardware Networking

The final piece of hardware that could end up as a central point of trust is networking equipment.

This includes all equipment used for communication and interaction between devices on a computer network, including core and edge equipment like routers, gateways, switches and repeaters. Hardware networking is rarely seen as a crypto area, but there is a rich heritage of open-source hardware projects in the space, from NetFPGA to the Open Compute Project's work on open networking switches.

The current dispute between Huawei and the U.S. Government shows that there is an awareness that network equipment vendors have to be trusted to keep information that crosses the network confidential and private. Open-source and cryptographically-secure networking equipment would mitigate these risks by allowing for transparency and verifiability. In a general sense, networking hardware is beginning to look more like small servers, and with software-defined networking making networks more dynamic and efficient, it's crucial these 'servers' use privacy-preserving cryptography and open-source code.

Applications

Applications, apps, or decentralized apps (dapps) are the programs that users interact with.

We are interested in applications that focus on consumer benefits rather than the features of the technology. We want to support startups building applications that are laser-focused on onboarding billions of users. Applications need to be as easy to use and navigate as modern web and mobile apps, and must address the underlying complexity of decentralized networks: holding and backing up private keys, recovering forgotten passwords, and dealing with hex addresses. We think applications have an opportunity to hide much of that complexity. The pitch should be simple: just like the apps you know and love, but your identity and data are owned by you, and you are rewarded for usage.

Marketplaces

We see marketplaces as a specific type of application that will flourish in the Convergence Stack because of the shift from a client-server to peer-to-peer technical architecture and the blockchain-based decentralized transaction network.

Marketplaces like Uber, Airbnb and TaskRabbit thrived in the Web2 era, using real-time and high-performance tools to match crowdsourced supply and demand. As these services continue to scale, there has been a backlash against the power these platforms have over their users. And as the hammer of regulation comes down, as is always the case with information and communication technologies, the technologies that will disrupt these services are beginning to be deployed.

Open-source, decentralized and peer-to-peer storage, compute, networking and databases will create an environment in which marketplaces can be built where all users are rewarded equitably for their contributions. In the case of a P2P ride-sharing service, the riders and drivers would be rewarded for using the network, and the underlying platform wouldn't be able to extract rent from it. All stakeholders are direct network participants who benefit from using the service by having a stake in the network in the form of a token. All human trade is mediated by marketplaces where buyers and sellers come together. In the real world this happens physically, person to person; the owner of the land or building in which the market takes place may be paid for access, but the market owner doesn't mediate every transaction.

Web3 marketplaces will move us back to a more natural state of affairs in which trade is peer-to-peer. We are looking for projects that not only seek to replace current marketplace platforms, but also to develop new marketplaces around entirely new assets like digital assets, non-fungible tokens (NFTs), and data products. Today there is no way to trade machine or sensor data, or to bundle, format, price and trade personal or corporate data products in a private and confidential way. We want to support projects that are building marketplaces that make it easy to issue, manage and exchange digitized assets and wholly new digital-only assets.

Learning

Machine learning is probably the single most important technology today. Much like electricity, machine learning will be embedded in all our products and services and will become invisible.

Nobody today asks: does it use electricity? Machine learning will become a utility tapped into by all our digital services. Ownership of and access to machine learning capacity therefore becomes one of the most valuable positions in the 21st century. Machine learning capacity can be split into the algorithm, the hardware, the data, and the trained model. In modern deep learning, the algorithms and hardware are important, but the data and the resulting model are the true assets. The problem with Web2 is that category leaders in web search, social media, and e-commerce have all the data; Web3 offers the possibility to break open these data monopolies.

We expect to see the use of a variety of secure, privacy-preserving, and collectively-owned tools to deliver machine learning. Tools like federated learning can be used to take a model to the data rather than bringing all the data into a single place owned by a monopoly. Cryptographic tools like differential privacy, multi-party computation, and homomorphic encryption can all be used to ensure that customers can utilise machine learning tools without revealing their data and while maintaining control of trained models. Access to machine learning shouldn't be conditional on handing over all data in plaintext and giving up any ability to control or monetize the trained model. We are already supporting Fetch.ai, which has developed an entire system designed for software agents to perform machine learning as well as directly exchange value.
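
The core federated learning idea, taking the model to the data, can be sketched in a few lines. This is a deliberately minimal illustration with made-up names and a one-parameter model, not any project's actual protocol: each data holder fits locally and shares only its weight, never the raw data.

```python
# Toy sketch of federated averaging: each client fits a one-parameter
# linear model y = w * x on its own private data and shares only the
# resulting weight. All names here are illustrative.

def local_fit(data):
    """Least-squares fit of w for y = w * x on one client's data."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data)
    return num / den

def federated_average(client_datasets):
    """Average the locally trained weights -- the server never sees raw data."""
    weights = [local_fit(d) for d in client_datasets]
    return sum(weights) / len(weights)

# Three clients whose private data all follow y = 2x.
clients = [
    [(1, 2), (2, 4)],
    [(3, 6), (4, 8)],
    [(5, 10)],
]
global_w = federated_average(clients)
```

Real systems add secure aggregation and differential-privacy noise so individual updates also reveal nothing, but the data-stays-local structure is the same.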

Web3 gives us the opportunity to break up the data monopolies and enable peer-to-peer machine learning, in the same way we want to see all digital infrastructure become peer-to-peer. The customer would only have to reveal as much data as they wanted and could be rewarded for submitting resources like raw data, labelled data, compute and algorithms. Contributors are rewarded for adding value and receive a proportionate share of the value that is delivered. Think of it as a machine learning value ecosystem in which value isn't all captured by the platform in the middle; rather, it is distributed to all stakeholders based on the value of their inputs. We think this is one of the most important parts of the Convergence Stack, and we are keen to support any project that is building a network for value-aligned machine learning.

UX

User experience in The Stack refers to a range of interfaces that make interacting with Web3 protocols and technologies more convenient and intuitive.

UX includes the browsers and mobile apps that dominated Web1 and Web2, but also emerging interfaces that will become more important, such as augmented reality (AR), virtual reality (VR), voice user interfaces (VUIs), conversational user interfaces (CUIs) and NFC tags. Until recently, the vast majority of crypto products were not designed for the average consumer. They were for other developers, mainly other crypto developers, which made them challenging for non-technical people to use. Web3 experiences have the additional UX challenge of managing tokens. Many Web3 experiences will involve users having custody of their own identity, data and money. This is at odds with almost all other experiences in the modern digital world, in which a third party is responsible and liable for the user's identity, data and money. Designers have a task in educating users about this in a low-friction way. Very few people around the world are used to paying 0.0001 of a token for something. Or making sure to add gas fees to transactions. Or remembering their passwords. Tools like Radical Address and ZenGo are showing the way here. Managing digital assets in an easy and seamless way from a dashboard is easy to picture. But what about interfaces to manage all your data? Or a dashboard to manage all your digital bots? How can we build interfaces that surface relevant information without overwhelming users?

Beyond making token-based products easier to use, there is a more interesting opportunity in seeing how tokens fit into new interfaces like conversational user interfaces and augmented reality. Browsers and mobile apps aren't going anywhere for many years, but we expect to see a plethora of exciting new experiences supported by more advanced natural language processing and computer vision. Web3 is a web that needs to support all endpoints, so we want to see more projects exploring how chatbots can be leveraged to make Web3 apps easier to use, or how augmented reality platforms can be designed to be peer-to-peer and decentralized, supported by a token economy. Different interfaces will be more appropriate for different interactions, and we want to see entrepreneurs thoughtfully use UX to make interactions seamless and ubiquitous rather than intrusive and addictive.

API

APIs are simply a set of specifications that make it easier for developers to build software.

APIs make it easier for developers to build apps. The Web is not useful if applications and protocols cannot communicate with each other; imagine a neighbourhood with 100+ houses and shops that are not interconnected. To make Web3 software useable, protocols and applications need to be able to request and update data in real-time. Developers need this to happen without too much code on their side; there is no need to reinvent the wheel every time they code an app. Every time a device connects to the Internet, data is retrieved from a server. We have over 20,000 live APIs, developed in just the past decade. We currently have over 100 blockchains and over 300 exchanges and wallets that are not connected, limiting the usefulness of The Stack.
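
Most ledger APIs in practice are JSON-RPC endpoints. As a concrete flavour of what "request and update data in real-time" looks like at the wire level, here is a minimal sketch of building a JSON-RPC 2.0 request and parsing a typical response; `eth_blockNumber` is a standard Ethereum method, while the sample response value is invented for illustration.

```python
import json

def jsonrpc_request(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request body, the wire format most ledger APIs use."""
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "params": params, "id": request_id}
    )

# In practice this body would be POSTed to a node or a provider such as Infura.
body = jsonrpc_request("eth_blockNumber", [])

# Ethereum nodes return the block number as a hex string in "result".
sample_response = '{"jsonrpc":"2.0","id":1,"result":"0x4b7"}'
block_number = int(json.loads(sample_response)["result"], 16)  # 0x4b7 == 1207
```

API providers compete largely on hiding this plumbing behind friendlier, higher-level client libraries.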

The ecosystem for APIs in Web3 is centred around three main areas: ledgers (including infrastructure like servers, nodes and integration points), exchanges and wallets, and market and trading data. Ledger API projects like Infura, Pocket, CryptoAPIs, Blockcypher, Moesif, and Infinito are jostling for market share. Most of the action is with exchanges, wallets, and market and trading data APIs, because there is an immediate market for fast, reliable access to data in these systems. However, there is a general lack of consistent developer experiences across Web3 projects outside of trading and finance. Projects like SEED are developing specific APIs enabling frameworks for building chatbots. Others like Fission Codes are focused on developing APIs that connect Web2 software like Wordpress or Heroku with Web3 tools like IPFS and Ethereum. There is plenty of room for improvement, especially around unified gateways. We are not yet seeing extremely narrowly specialised companies, though a good number focus only on Bitcoin or only on Ethereum. Eventually, as the ecosystem matures, we will start seeing dapps that are fully dependent on third-party APIs. Just as we measure dapp usage today, expect to see lists of startups using popular Web3 APIs soon. The Stack needs more APIs to connect all the parts together, and there is an opportunity for a team to build Convergence Stack APIs that tie all the microservices together.

Middleware

Also known as service-layer protocols, middleware protocols are a complex bundle of APIs sitting below the interface layer.

Middleware provides a range of services to developers to make it easier and faster to develop and deliver applications and marketplaces. Middleware often abstracts away all the complexity of interacting with the lower level protocols and delivers tools that make it much easier to just focus on solving customer problems.

We are seeing an explosion of market- and application-specific middleware: Aragon for DAOs, Loom for games, Ocean Protocol for data markets, and 0x for decentralized exchanges, for example. Each benefits from the network effects surrounding a ledger, as with Ethereum, or a market, as with the DeFi movement. MakerDAO benefits from the Ethereum community, and Dharma, Set, 0x, Compound and dYdX benefit from MakerDAO and Dai. Equally, a lot of this middleware is currently constrained by the performance limitations of Ethereum, but as more layer 1 ledgers come online, we expect to see hundreds of middleware solutions that plug into different ledgers. There is going to be an interesting interoperability challenge for middleware as some projects move to WASM, others to direct LLVM exposure, and others to custom state transition machines. What good is a DAO that only works on one ledger? Instead of regulation limiting global scope, we might see a lack of interoperability limiting global scale.

We are particularly interested in projects that are developing middleware to solve problems that have gone untackled. Middleware can be seen as a coordination mechanism. In the same way an organisation is a collection of people formed to solve customer problems, and a consortium is a collection of organisations formed to solve an industry problem, middleware is a collection of APIs formed to solve software problems. If this is the case, middleware is arguably the most important part of the Convergence Stack, turning generalized protocols into application-specific tools. What does middleware for education look like? Or middleware for energy? Construction? Robotics? Virtual reality?

Authentication

Authentication is the process of determining whether someone or something is, in fact, who or what it declares itself to be.

This is related to the subsequent process of authorization, which is the act of checking whether the authenticated person or software has permission to access a particular file, program or system. As everyone knows, the Internet was never designed with identification in mind, leading to the old adage "On the Internet, nobody knows you're a dog". But dogs aren't allowed to do online banking, and advertising to them is a waste of money. So either "trusted" third-party organisations and central verifiers sprang up to identify and authenticate users, or every service provider had to check identity for every new user. This process is time-consuming and expensive, and creates a honeypot for hackers at the verifier. And for the user, it has led to a situation in which they have lost control of their identity and personal data.

We believe digital identity cannot rely on a trusted third party. Individuals need to own their own identity and be self-sovereign. This means projects in the stack need to use open standards such as decentralized identifiers (DIDs) and the verifiable credentials standard. This belief led us to support the Sovrin network and Evernym. Identity will increasingly be needed for machines and agents as well as humans, and we believe the same identity systems can scale to include unique identifiers for non-human entities allowing new levels of automation and economic activity to take place.
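
The issue-then-verify flow behind verifiable credentials can be sketched as follows. This is a toy illustration only: real systems such as Sovrin use public-key signatures bound to DIDs, whereas here an HMAC over the claim stands in for the signature to keep the example stdlib-only, and all keys and identifiers are invented.

```python
import hashlib
import hmac
import json

# Toy verifiable-credential pattern: an issuer attests to a claim about a
# subject DID, and a verifier later checks the attestation without asking
# the issuer again. HMAC with a shared key stands in for a real signature.

ISSUER_KEY = b"issuer-secret"  # illustrative; a real issuer holds a private key

def issue_credential(subject_did, claim):
    payload = json.dumps({"sub": subject_did, "claim": claim}, sort_keys=True)
    proof = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "proof": proof}

def verify_credential(cred):
    expected = hmac.new(ISSUER_KEY, cred["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["proof"])

cred = issue_credential("did:example:alice", {"over18": True})
```

The key property is selective disclosure: the subject can hand this credential to any service, and the service can check the proof without building its own identity-verification honeypot.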

Query

For decentralized systems to be useful, we need to get data in and out. Applications need to solve two problems: querying on-chain data, and querying off-chain data.

For Web3 apps to compete with traditional web apps, they need to be able to query data quickly. And to fulfil the promise of truly trustless applications, we must get accurate and verifiable data into applications from off-chain sources. First, querying on-chain data. Today's web applications already have robust querying capabilities from all the major SQL and NoSQL databases, making it easy and fast for applications to provide information to users: just use MongoDB or RethinkDB and get efficient, functional, and composable indexing and querying capabilities. Blockchains currently lack the same tools, so developers are forced to build their own fragile indexing servers or rely on Infura, which was never designed for indexing and query services. The Graph and Haja Networks are examples of projects aiming to solve this problem.

Second, querying off-chain data. Unlike centralized web services, decentralized applications can’t just pull data from anywhere. The data in blocks on the blockchain has been mutually agreed by network participants, so any external data must go through a verification process before being used in a smart contract or added to a block. This is the realm of oracles or trusted data feeds. Projects are utilising various cryptographic and game theory techniques to prove where data comes from and incentivise data to be correct. The market has already split into more general-purpose oracle networks like ChainLink, Shintaku and Oraclize, and networks designed to verify particular types of data like location as with FOAM, XYO and Fysicial. We expect to see plenty more experimentation in this field combining cryptographic tools with well-designed incentive systems to make submitting incorrect data sufficiently costly.
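
One common way to combine aggregation with incentives, as the paragraph describes, is to take the median of staked reporters' answers and slash reporters who deviate too far. This is a generic sketch of that pattern, not any named project's mechanism; the tolerance and stake figures are arbitrary.

```python
import statistics

# Toy oracle aggregation: the median resists a minority of bad reporters,
# and deviating reporters lose stake, making incorrect data costly.

def aggregate(reports, stakes, tolerance=0.05):
    """reports/stakes: dicts keyed by reporter id. Returns (value, new_stakes)."""
    value = statistics.median(reports.values())
    new_stakes = {}
    for reporter, answer in reports.items():
        if abs(answer - value) > tolerance * value:
            new_stakes[reporter] = stakes[reporter] // 2  # slash outliers
        else:
            new_stakes[reporter] = stakes[reporter]
    return value, new_stakes

reports = {"r1": 100.0, "r2": 101.0, "r3": 250.0}  # r3 is reporting bad data
stakes = {"r1": 10, "r2": 10, "r3": 10}
value, stakes = aggregate(reports, stakes)
# The agreed value is the median (101.0) and r3 loses half its stake.
```

Real designs add commit-reveal rounds so reporters cannot simply copy each other's answers.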

Compute

Compute refers to projects aiming to reduce the reliance on cloud computing platforms by offering peer-to-peer, privacy-preserving, decentralized, and open-source software.

Blockchains are a data structure, not a computer. Smart contract platforms like Ethereum offer the ability to run smart contract programs but are limited in capacity and capability. In order to run complex programs and be broadly useful, we are seeing experimentation at the smart contract layer and with so-called "off-chain" compute. Instead of participants computing and verifying the results on the ledger, calculations are executed by resources outside the blockchain. But now we have recreated off-chain the same problem that was solved on-chain by decentralized consensus algorithms: how can we run calculations in a way that is resistant to censorship and can be trusted?

You can take the Agoric approach and attempt to make smart contracts more secure by reducing the complexity of the contracts and using a more secure language. This doesn't solve the broader trusted compute problem, but it helps make smart contracts more useful. The broader trusted compute problem is being tackled by many projects coming from various directions. Golem is offering a peer-to-peer computing platform. Enigma is creating secret nodes that can run computations over encrypted data. And Truebit is using oracles to verify computational outcomes. Different projects are using a variety of cryptographic tools like homomorphic encryption, differential privacy, multi-party computation, zk-SNARKs, zk-STARKs, and bulletproofs; ultimately, they are all aiming to deliver software that can run computations in a private and verifiable way.
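
The simplest form of outsourced-compute verification is spot-checking: the solver returns results plus a commitment, and a verifier re-executes a random sample, so cheating on any input risks detection. The sketch below illustrates that general pattern only; it is not how Truebit or any specific protocol works, and all names are invented.

```python
import hashlib
import random

# Toy off-chain compute with spot-check verification: commit to the full
# result set, then let a verifier re-execute a random subset of inputs.

def task(x):
    return x * x  # the computation being outsourced

def solve(inputs):
    results = [task(x) for x in inputs]
    commitment = hashlib.sha256(repr(results).encode()).hexdigest()
    return results, commitment

def spot_check(inputs, results, commitment, samples=2, seed=0):
    # The results must match the commitment the solver published...
    if hashlib.sha256(repr(results).encode()).hexdigest() != commitment:
        return False
    # ...and must survive re-execution on a random sample of inputs.
    rng = random.Random(seed)
    return all(task(inputs[i]) == results[i]
               for i in rng.sample(range(len(inputs)), samples))

inputs = list(range(10))
results, commitment = solve(inputs)
```

Crypto-economic designs pair this with staking so that a failed spot check costs the solver money.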

Solving private and verifiable compute will require orchestration across numerous other services, including query, learning, APIs, ledgers and storage, all the way down to the hardware layer with hardware processing. We expect to see flexible tools that can offer customers different privacy and security guarantees depending on their needs. We want to see more projects not just focusing on the technology, but having a clear narrative and go-to-market strategy articulating to consumers and businesses exactly why they can't trust their current cloud provider and the benefits that come from private and peer-to-peer compute.

Scaling

The ‘Scaling Trilemma’ refers to the trade-offs we make when we optimize for one or two of three blockchain properties: decentralization, security, and performance.

Scaling solutions are technologies that increase the performance of a blockchain, ideally with only a small decrease in decentralization or security. Some applications simply don't need broad decentralization and can therefore focus on high performance and security guarantees, as is the case with many private blockchain solutions like R3's Corda. But to support applications that require censorship-resistance and decentralized transaction validation, public chains need to find novel ways to increase performance. Some projects are tweaking the ledger itself by experimenting with non-proof-of-work consensus algorithms, such as Ethereum with proof-of-stake, Chia with proofs of space and time, or Tezos with delegated proof-of-stake. Others like Coda are using zk-SNARKs to reduce the size of the blockchain itself.

Another interesting approach is to increase performance using scaling tools, or so-called Layer 2 solutions. Here a second network is created which connects to the main Layer 1 blockchain and is configured to synchronise intermittently using a two-way peg. Transactions can be validated much faster and, depending on the particular implementation, can handle different levels of transaction complexity. The technical differentiation between sidechain and state channel implementations varies, and it is an open question which particular approach will win out, or indeed whether different approaches will be required for different use cases. Lightning is becoming ever more useful on the Bitcoin network, and Skale, Loom and Elph, as well as many others, are pushing forward the usability of Ethereum Layer 2. Bloxroute is another interesting scaling solution, which aims to speed up transaction verification by propagating blocks faster using a network of servers and a token incentive scheme. Fundamentally though, to be globally useful, blockchains have to get faster. We want to support projects that are finding novel solutions to make this a reality.
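
The state-channel idea can be made concrete with a toy two-party payment channel: the parties exchange balance updates off-chain and only the final state is settled on the Layer 1 chain. Signatures and dispute windows are omitted here, and the class and figures are illustrative; the point is simply that N payments cost one on-chain transaction.

```python
# Toy state channel: off-chain balance updates with a single on-chain
# settlement. Real channels sign each state and use the nonce to resolve
# disputes (newest signed state wins); that machinery is omitted here.

class PaymentChannel:
    def __init__(self, deposit_a, deposit_b):
        self.balances = {"a": deposit_a, "b": deposit_b}
        self.nonce = 0  # higher nonce = newer state at settlement time

    def pay(self, sender, receiver, amount):
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.nonce += 1

    def settle(self):
        """Only this final state would be written to the Layer 1 chain."""
        return dict(self.balances), self.nonce

channel = PaymentChannel(deposit_a=100, deposit_b=100)
for _ in range(5):
    channel.pay("a", "b", 10)   # five instant, fee-free off-chain payments
final_balances, updates = channel.settle()  # one on-chain settlement
```

Routing payments across many such channels is what turns this pairwise trick into a network like Lightning.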

Databases

Customers' data is stored in centralized databases that face challenges such as security breaches, reduced fault tolerance and a lack of interoperability.

Users do not own their data, and it is very hard for them to move their data to a different service. Decentralized databases, just like ledgers, compute, and storage, are also moving to a peer-to-peer delivery model with no single database administrator and data distributed across several databases. With the advent of fog computing, the cloud extends to our personal devices and the Internet of Things, which perform parts of the compute. We need to improve our database systems to facilitate easier storage and access of data across devices and databases in an error-free way.

Decentralized databases are not controlled by a single administrator, which addresses the problem of data ownership by effectively giving control of the data back to the user. They ensure structured data is stored and shared in an accessible, peer-to-peer way, and the aim is to make the decentralized web faster and interoperable. By distributing data over multiple databases, we benefit from increased fault tolerance and interoperability. OrbitDB is building a decentralized database protocol which enables users to build decentralized databases and to own and control their data. The protocol integrates with existing databases, unlocking them for the decentralised web, and enables interoperability between structured data sources. It also provides a marketplace for decentralized database services that allows providers to monetize their software whilst data-hungry businesses have straightforward access to the widest range of services.
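
A key building block for databases with no single administrator is the conflict-free replicated data type (CRDT), which OrbitDB and similar projects build on. The grow-only set below is the simplest CRDT and is only a sketch of the core idea: replicas accept writes independently and converge by merging, regardless of merge order.

```python
# Toy CRDT: a grow-only set. Two replicas accept writes while partitioned
# and converge via set union, with no database administrator to arbitrate.
# Real decentralized databases use richer CRDTs (logs, key-value stores).

class GSetReplica:
    def __init__(self):
        self.items = set()

    def add(self, item):
        self.items.add(item)

    def merge(self, other):
        # Union is commutative, associative, and idempotent, so replicas
        # converge no matter how often or in what order they sync.
        self.items |= other.items

replica_a, replica_b = GSetReplica(), GSetReplica()
replica_a.add("alice:profile-v1")   # written while offline from replica_b
replica_b.add("bob:profile-v1")

replica_a.merge(replica_b)
replica_b.merge(replica_a)
# Both replicas now hold the same two items.
```

The trade-off is that CRDTs only support operations with these algebraic properties, which is why deletes and counters need more elaborate constructions.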

Bridges

Also known as interoperability, bridges are most commonly associated with blockchain-to-blockchain communication but can refer to any technology that enables two or more systems or networks to communicate.

Bridges break down walled gardens and allow individuals to use multiple platforms or protocols with relative ease. For blockchains, atomic swaps are an active research area, but traction is coming predominantly from the likes of Cosmos and Polkadot. These networks are taking slightly different technical and governance approaches, but if successful they will make it easier for value and data to be shared across blockchains, which should, in theory, make the entire Convergence Stack more efficient and usable.
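
The primitive behind atomic swaps is the hashed timelock contract (HTLC): funds on both chains are locked to the same hash, so revealing the secret to claim one side necessarily reveals it for the other, and neither party can take both. The sketch below shows only the hashlock half of the mechanism; timeouts, refunds and signatures are omitted, and the class is illustrative.

```python
import hashlib

# Toy HTLC: a lock that releases funds only to whoever presents the
# preimage of its hashlock, and only once.

class HTLC:
    def __init__(self, amount, hashlock):
        self.amount = amount
        self.hashlock = hashlock
        self.claimed = False

    def claim(self, preimage):
        if hashlib.sha256(preimage).hexdigest() == self.hashlock and not self.claimed:
            self.claimed = True
            return True
        return False

secret = b"my-swap-secret"
lock = hashlib.sha256(secret).hexdigest()

# The same hashlock guards both sides of the swap on two different chains.
htlc_chain_a = HTLC(amount=1, hashlock=lock)
htlc_chain_b = HTLC(amount=50, hashlock=lock)

claimed_b = htlc_chain_b.claim(secret)  # claiming publishes the secret...
claimed_a = htlc_chain_a.claim(secret)  # ...which unlocks the other side
```

The missing timeout clause is what makes the swap atomic in practice: if the secret is never revealed, both sides can reclaim their funds after a deadline.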

Interoperability is always a tension in software, and incompatibility is often the result of misaligned incentives, a lack of standards, or businesses actively deciding to create their own standards. Bridges are especially important in the blockchain space to avoid building the same closed financial system that we already have and to avoid replicating the Web2 data silos that have emerged. Bridges help solve this problem by allowing blockchains and other networks to continue to innovate independently while benefiting from the security and user base of a larger network of participants. At Outlier, we are creating our own bridges between projects, such as ANVIL (a bridge between Sovrin and Fetch) and H2O (a bridge between Ocean and Haja Networks). We expect to see bridges develop across the Convergence Stack, making it easy for systems to communicate and transfer value up and down the entire stack.

Ledgers

Bitcoin and the Bitcoin network ushered in a renewed interest in distributed ledgers, peer-to-peer networking, and decentralized consensus algorithms.

Ethereum added a Turing-complete scripting language to broaden the range and complexity of applications that could utilise blockchain functionality. From there we have had an explosion of experimentation designing for different performance, security and decentralization requirements. Projects like Ethereum, Dfinity, Hashgraph, Sovrin and Fetch.AI are differentiating on scalability, security and decentralization, making different trade-offs depending on the targeted applications. Some are prioritizing scalability, like IOTA and Zilliqa; others security, like Tezos and Cardano; and realistically only Bitcoin is committed above all else to maximising decentralization. Private chains like Corda and Quorum and other blockchain-as-a-service (BaaS) solutions focus on enterprise customer requirements where, with known participants, decentralization is less of a concern than security and scalability. Cosmos and Polkadot are making it even easier for developers to build their own custom blockchain that is configurable but benefits from the security protections of a larger network.

We still expect to see a few new ledger projects over the next few years, but most of the innovation will be in upgrading existing ledgers in terms of privacy, interoperability, security, and performance. The open-source nature of the industry will make technical differentiation increasingly difficult; instead, we expect network effects to begin to take hold as applications attract users, which in turn attract developers. Ethereum has long had the developers but has lacked a real consumer killer app. Libra certainly has the customers, although it appears more focused on financial services than the broader world computer market. We think in the short to medium term there will be a role for numerous ledgers with technical, crypto-economic, and governance designs specific to the customers, applications and markets they are targeting.

Storage

Storage refers to how and where information is stored and the rules governing how it can be accessed.

Instead of data being stored in the ‘cloud’ on the servers of a select few companies, new storage architectures aim to store data on peer-to-peer decentralised networks. Rather than residing on a trusted third-party server, data is stored on a p2p network and private keys are retained by the owner. A blockchain-based storage system prepares the data for storage by dividing it into segments, encrypting them, generating a unique hash for each, and then creating extra copies of each segment. These segments are then distributed to the network of nodes. Utilising distributed storage can improve security, management, efficiency and uptime. Projects like Maidsafe, Storj and Sia have been building solutions for many years now, and obviously the elephant in the room is Filecoin and its huge ICO back in 2017.
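
The segment-encrypt-hash-replicate pipeline described above can be sketched in a few lines. This is a deliberately simplified illustration, not any project's actual format: XOR with a hash-derived keystream stands in for real encryption to stay stdlib-only, and segment sizes and replica counts are arbitrary.

```python
import hashlib

# Toy prepare-for-storage pipeline: split data into segments, "encrypt"
# each one, hash it to get a content address, and replicate the copies
# for distribution across nodes.

def keystream(key, length):
    """Hash-derived keystream; a stand-in for real encryption (e.g. AES)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def prepare(data, key, segment_size=4, replicas=2):
    segments = [data[i:i + segment_size] for i in range(0, len(data), segment_size)]
    stored = []
    for seg in segments:
        enc = bytes(a ^ b for a, b in zip(seg, keystream(key, len(seg))))
        digest = hashlib.sha256(enc).hexdigest()  # content address for lookup
        stored.append({"hash": digest, "copies": [enc] * replicas})
    return stored

chunks = prepare(b"hello decentralized storage", key=b"owner-private-key")
```

Because each chunk is addressed by the hash of its ciphertext, any node can prove it holds an intact copy without ever learning the plaintext.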

Storage, unlike many other Web3 products today, has a very clear value proposition: cheaper and faster. But there is a huge amount of inertia to overcome, especially for enterprises that have spent the last five years moving to the cloud. The last thing they want to hear is that now they need to put all of their data on other people’s devices. Not only are there issues of reliability and quality assurance, but a cultural shift is required in the CIO office from client-server to peer-to-peer. The speed and costs will have to be 10x better for enterprises to even consider the risks associated with p2p storage. That said, as privacy moves higher up the purchasing criteria for consumers, p2p storage could become even more attractive, especially with token incentives. When marketplaces are built on top of decentralized storage, they introduce a market dynamic in which providers compete for users, ensuring economic rents are not charged. Web3 will connect all sorts of endpoints, and as the variety of endpoints grows, each will have storage and compute capacity. Connecting all this idle IoT storage in a network and finding novel ways to generate value is an area we believe is ripe for further experimentation.

Networking

As described with hardware networking, there is a need to ensure telecommunications networks do not become a point of trust in the Convergence Stack.

With open-source hardware projects working on networking equipment, the software that runs on that equipment is the next component that needs to be decentralized. Software-defined networking (SDN) and network function virtualization (NFV) are making networks more flexible and configurable, so the obvious next question is how to make sure the software is both open-source and cryptographically secure.

Projects like OpenFlow, the Facebook Open Switching System (FBOSS), OpenBTS, and UPSat are all working to make open-source software as commonplace in networks and data centers as Linux is in servers. And then there are crypto-related projects that are bringing cryptographic techniques, and in many cases crypto token incentive designs, to build fully decentralized wireless communication networks. Projects like Rightmesh, Althea, Helium, and Global Mesh Labs are all building peer-to-peer solutions that aim to reduce reliance on, or even remove the need for, mobile network operators. All of these providers will need to decide how best to deliver usable products today while using as much open-source and peer-to-peer tooling as possible. We think there is a huge opportunity to support new applications that require network edge capacity, as well as to connect the unconnected.

Hardware Processing

In the end, software is only as secure as the hardware it runs on.

The goal of the Convergence Stack is to avoid central points of trust in any technology or protocol, and this extends to hardware too. Software should run on open-source and cryptographically-secure hardware. The cryptographically-secure part is progressing, with ARM’s TrustZone and Intel’s SGX already deploying secure enclaves in chips. These secure enclaves prevent the chip’s main processor from gaining direct access to sensitive data. But vulnerabilities persist, and because the secure enclaves are not open-source, the designs lack transparency and verifiability.

To make secure computation a reality, the design and implementation of hardware secure enclaves need to be open-source. Designs should ideally be RISC-V based, with no reliance on closed-source code. We want to see more projects applying open-source hardware principles rather than just relying on AWS or Google Cloud, thinking about security and confidentiality all the way down to the hardware. Oasis Labs’ development of Keystone is a great example. If enterprise, public-sector, and personal data is to be moved onto blockchain-based systems, open and secure hardware processing is a foundational component of any successful solution.

Hardware Storage

Equally important as secure computation is secure hardware storage of private keys and digital assets.

Software-based storage will not be sufficient as the industry matures and the value of private keys and digital assets grows. We already have plenty of hardware wallet providers: for consumers from the likes of Trezor, KeepKey, and Ledger, and for businesses from Anchorage, Coinbase, and BitGo. There is a trade-off between security and convenience, especially for consumers, and we expect to see experimentation around balancing self-custody and user experience. Consumers are not used to self-custody, so novel insurance products and offline vault solutions will be required; otherwise, a new class of vendors offering private-key custodianship services will emerge, undermining the core value proposition of peer-to-peer solutions offering self-custody.

Ultimately, we want to get to a world in which hardware storage is integrated into smartphones and wearables in much the same way the Apple Secure Enclave stores Touch ID and Face ID encrypted biometric data. Potentially, we will see separate secure enclaves for storage and processing on the same device, combined with peer and social recovery like the social key recovery feature in HTC’s Exodus phone. As with processing, we want to see projects think deeply about the hardware that is used for their products to avoid introducing a trusted service provider into the solution.
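Social recovery schemes of this kind are typically built on threshold secret sharing: a key is split into shares held by trusted peers, and any quorum of shares can reconstruct it while fewer reveal nothing. A minimal sketch of Shamir's Secret Sharing over a prime field (the function names and field size are our own illustrative choices; a production wallet would use a vetted library):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a 16-byte secret

def split_secret(secret: int, shares: int, threshold: int):
    """Split `secret` into `shares` points; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    points = []
    for x in range(1, shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        points.append((x, y))
    return points

def recover_secret(points):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Usage: split a key across 5 friends, any 3 of whom can restore it.
key = 123456789
friend_shares = split_secret(key, shares=5, threshold=3)
restored = recover_secret(friend_shares[:3])
```

The appeal for self-custody is that no single friend (or vendor) ever holds the key, which is exactly the property that keeps a trusted custodian out of the loop.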

Hardware Networking

The final piece of hardware that could end up as a central point of trust is networking equipment.

This includes all equipment used for communication and interaction between devices on a computer network: core and edge equipment such as routers, gateways, switches, and repeaters. Hardware networking is rarely seen as a crypto area, but there is a rich heritage of open-source hardware projects in the space, from NetFPGA to the Open Compute Project’s work on open networking switches.

The current dispute between Huawei and the U.S. Government shows there is an awareness that network equipment vendors have to be trusted to keep the information that crosses the network confidential and private. Fully open-source and cryptographically-secure networking equipment would mitigate these risks, allowing for transparency and verifiability. In a general sense, networking hardware is beginning to look more like small servers, and with software-defined networking making networks more dynamic and efficient, it’s crucial these ‘servers’ use privacy-preserving cryptography and open-source code.

Work with us