Can you use ChatGPT to moderate your Web3 community?


At Outlier Ventures, we have been discussing the convergence of AI, blockchain and other technologies for many years. In 2018, we developed a refined version of our Convergence thesis, which looked ahead to a future where data would be captured by the Internet of Things, managed by blockchains, automated by artificial intelligence, and incentivised using crypto-tokens.

Within the wider Outlier ecosystem, projects such as Ocean Protocol and Fetch.ai have long been pushing the boundaries of this intersection of blockchain technology and AI.

In a ground-breaking series of blog posts, Trent McConaghy, founder of Ocean Protocol, envisaged networks of DAOs run by AI entities, autonomously collaborating for the good of humankind. Meanwhile, recent news about AI has focused almost entirely on the headline-grabbing abilities of ChatGPT.

Recent media stories about ChatGPT – its potential to replace workers who carry out routine writing and communication tasks, and the ethical questions around political guardrails and the use of copyrighted material for training – have highlighted why we need conversations about open-source, permissionless protocols for the practical deployment of AI.

Future articles in this series will explore the various intersection points of these two revolutionary technologies. But for this post, we focus simply on when and how it may be appropriate to use ChatGPT as a tool for moderation and engagement. 

Why do we need community managers – and what is ChatGPT?

For any successfully decentralized project, one task is key: nurturing an engaged, proactive community of smart people who understand your goals and want to help take your product forward.

Any great decentralized project benefits from the ideas and contributions of volunteer developers and ambassadors, who conduct their conversations in the open and inspire others in the community to contribute their own skills to the collective effort. Examples of this include the Aave and Compound communities, as well as IPFS – and, of course, Ethereum itself.

Community-building and moderation differ from, say, software engineering in that they suffer from the perception that anyone can do them – and cash-strapped startups will often economize by hiring a junior into the role or outsourcing to a third party who offers to do the job at a lower cost. This easily creates a vicious cycle as engagement drops and founders see poor results as a reason to cut back even further.

When OpenAI released ChatGPT, social media went wild over the possibilities of automating away millions of time-consuming tasks that are currently done by humans. Indeed, at Outlier Ventures, we recently published an in-depth post by our head of engineering Lorenzo Sicilia, who put ChatGPT through its paces by prompting it to write, refactor and interact with smart contracts written in Solidity.

One key community-management skill is knowing how to automate routine chores on Discord or Telegram using bots. Bots can issue warnings, post information, schedule events, welcome new users and integrate with other social channels.
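As a minimal illustration, a welcome bot of the kind described above can be written in a few lines with the discord.py library. This is a sketch, not a production setup: the token and the welcome message are placeholders, and the members intent must also be enabled in the Discord developer portal.

```python
# Minimal Discord welcome bot sketch using the discord.py library.
import discord

intents = discord.Intents.default()
intents.members = True  # required to receive member-join events

client = discord.Client(intents=intents)

@client.event
async def on_member_join(member: discord.Member):
    # Greet new members in the server's system channel, if one is configured
    channel = member.guild.system_channel
    if channel is not None:
        await channel.send(
            f"Welcome to the community, {member.mention}! "
            "Check the pinned messages for our contribution guidelines."
        )

client.run("YOUR_BOT_TOKEN")  # placeholder token
```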

The conversational abilities of ChatGPT, as showcased in its user interface, have become a major talking point. Not only does ChatGPT’s output sound natural, but its ability to sustain a conversation across multiple inputs and “remember” what it was asked previously distinguishes it from other, more rudimentary, chatbots.

It is also important to remember that this is just the beginning of the ChatGPT story. It is currently available as a free beta preview, but at some point this year the official API will launch and pricing details will be revealed. When that happens, expect a free-for-all as startups and enthusiasts build utilities and applications on top of it, which will undoubtedly include moderation tools that simulate human interaction. GPT-4, the next-generation model, is also in the pipeline, with even greater capabilities.
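The ChatGPT API itself is not yet public, but OpenAI already exposes a separate moderation endpoint that hints at what such tooling might look like. Below is a minimal sketch using the openai Python library; the helper function and the example messages are our own illustration, not an official integration.

```python
# Sketch: flagging community messages with OpenAI's moderation endpoint.
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder key

def is_flagged(message: str) -> bool:
    """Return True if OpenAI's moderation model flags the message."""
    response = openai.Moderation.create(input=message)
    return response["results"][0]["flagged"]

if __name__ == "__main__":
    # Illustrative inputs only; a real bot would feed in live chat messages
    for text in ["gm, excited to contribute!", "some abusive message"]:
        print(text, "->", "flagged" if is_flagged(text) else "ok")
```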

The challenges of using ChatGPT for Web3 moderation

Founders with an eye on their runway are more likely to view ChatGPT as a cost-cutting tool at this point, looking at their staffing budgets and weighing up how much community management they can hand off to an AI-powered moderation tool, rather than looking ahead to a future of AI-powered DAOs. 

This is fine as long as ChatGPT is used to enhance a Web3 moderation toolkit rather than as a substitute for a real person. While the economic mechanisms in decentralized systems are designed to work in a trustless manner, building an energized, engaged community inspired enough by a project’s vision to take it forward requires personality, purpose and passion.

An AI can provide information that it has learned from humans on the project, but it cannot inject its own personality, purpose or passion into the message.

Always be transparent about chatbot-generated content

Key to this discussion is how much “personality” an AI is supposed to demonstrate – and the answer is that an AI should always be clearly distinguishable from a human. 

In 1970, robotics professor Masahiro Mori proposed the hypothesis of bukimi no tani genshō, later translated as ‘uncanny valley’, to describe the negative response to objects or entities that closely resemble people. It surprised some researchers to discover that a robot or android that did not resemble a human at all could provoke a more empathetic response than one that was – superficially at least – hardly distinguishable from the observer.

In other words, when we are not sure whether we are talking to a human or not, it makes us uneasy. While ChatGPT’s natural-language capabilities and conversational style can in some cases mimic a human response, the distinction between bots and people in any community should always be made clear. 
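One lightweight way to enforce this is to make every machine-generated reply announce itself. A minimal sketch, assuming our own (hypothetical) labeling convention:

```python
AI_DISCLOSURE = "[AI assistant] "  # our own labeling convention, not a standard

def label_ai_reply(reply: str) -> str:
    """Prefix a bot-generated reply so members know it is not from a human."""
    return AI_DISCLOSURE + reply

print(label_ai_reply("The next governance call is on Thursday at 16:00 UTC."))
```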

AI alone cannot drive innovation – yet

While large language models can indirectly accelerate technological innovation by speeding up research and other tasks, AI implementations cannot yet make the leap of imagination needed to come up with new ideas. Often, the most innovative ideas arise from random observations, or as a side effect of something else entirely.

Humans discussing different ideas involving nebulous concepts in a fast-changing environment such as Web3 tend to generate the type of creative chaos from which great ideas spring.

This is not to say that DAOs will never operate entirely autonomously – but that day has not yet arrived.

Can ChatGPT mitigate human bias?

The TL;DR is “No.”

The Holy Grail of AI-powered decision-making is a process that is entirely rational and free of all the biases and emotional baggage that can sway human decisions. 

However, a large language model, like any other type of computer program, is only as good as its inputs – and if the data used to train the model is skewed towards one particular cultural mindset, we can end up in a situation where these biases are propagated and amplified, rather than smoothed out.

There has been a lot of conversation on Twitter over the last few weeks about ChatGPT’s ability to answer questions in multiple languages, or to translate from one language to another in a much more natural and free-flowing style than its predecessors.

However, these cross-border capabilities are misleading. In a recent post, Jill Walker Rettberg, professor of digital culture at the University of Bergen, described ChatGPT as “multilingual but monocultural”. Large language models are trained on vast quantities of data, from web content to social media and books – and in the case of OpenAI, the overwhelming majority of this has been generated from the Anglosphere – specifically the US. Why does this matter?

As Rettberg points out, answers written in, say, Norwegian may fail to reflect Norwegian sensibilities or culture. Web3 online communities are diverse, encompassing a wide range of countries and attitudes. Simply transposing one dominant culture into other languages can create miscommunication and invisible barriers, even leaving aside the issues with such cultural hegemony.

To give a concrete example, research by Pew found that Americans described themselves as more self-interested and less collectivist than respondents from many other cultures did.

AI is only as good as the inputs it consumes, and it is crucial to consider the content it has been trained on before allowing it to represent the values of your organization.

What does ChatGPT think?

Strangely, ChatGPT agrees with these reservations.

I asked whether it could manage a Web3 community, and it replied: “Yes, ChatGPT can help you manage your web3 community by providing automated responses to frequently asked questions, moderating discussions, and providing general information about your community. However, it’s important to note that ChatGPT is a tool and its effectiveness will depend on how it is configured and integrated into your community platform. Additionally, it’s important to note that ChatGPT is just a machine learning model, and it’s not fully able to replace human moderating.”

While the convergence of Web3 and AI is one of the most exciting developments in technology right now, and one that we will explore in future blog posts, we should remember that communities are ultimately made up of humans. Unless you are happy to build a community of bots, the inspiration, nurturing and storytelling that make up a successful moderation team’s workload cannot yet be entirely devolved to AI.

Rhian Lewis is a software engineer and author who has been active in the cryptocurrency and decentralized technology space since 2013. Her book, The Cryptocurrency Revolution, was published in 2020 and she advises Outlier startups on developer relations and documentation best practices.
