NFT for Eternity

Hadar Y. Jabotinsky & Michal Lavi*

* Hadar Y. Jabotinsky, Ph.D. (Law & Economics), Research Fellow at the Hadar Jabotinsky Center for Interdisciplinary Research of Financial Markets, Crises and Technology; Research Fellow, School of Law, Zefat Academic College. Michal Lavi, Ph.D. (Law), Research Fellow at the Hadar Jabotinsky Center for Interdisciplinary Research of Financial Markets, Crises and Technology; Research Fellow, School of Law, Zefat Academic College.
We thank Emily Cooper for helpful comments and excellent editorial work; special thanks are due to Makayla Okamura, Dana S. Florczak, Daniel Byrne, Wesley Ward, Jack Shapiro, Spencer Darling and to the Journal’s editors Amy O’Connell and Lauren Gallagher and their colleagues on the University of Michigan Journal of Law Reform staff for helpful comments and suggestions.
This Article is dedicated to the memory of Michal’s mother—Aviva Lavi—who died suddenly and unexpectedly. She will always be loved, remembered, and dearly missed.


Non-fungible tokens (NFTs) are unique tokens stored on a digital ledger – the blockchain. They are meant to represent unique, non-interchangeable digital assets, as there is only one token with that exact data. Moreover, the information attached to the token cannot be altered as it can be in a regular database. While copies of these digital items are available to all, NFTs are tracked on blockchains to provide the owner with proof of ownership. The possibility of buying and owning digital assets in this way can be attractive to many individuals.

NFTs are presently at the stage of early adoption and their uses are expanding. In the future, they could become a fundamental and integral component of tomorrow’s web. NFTs bear the potential to become the engine of speech: as tokenized expressions cannot be altered or deleted, they enable complete freedom of expression, free from censorship. However, tokenized speech can also carry significant costs and risks, which can threaten individual dignity and the public interest. Anyone can tokenize a defamatory tweet, a shaming tweet, or a tweet that includes personal identifying information, and these tokenized expressions can never be deleted or removed from the blockchain, risking permanent damage to the reputations of those involved. Even worse, anyone can tokenize extremist political views, such as alt-right incitement, which could ultimately result in violence against minorities and infringe on the public interest.

To date, the literature has focused on harmful speech that appears on dominant digital platforms, but has yet to explore and address the benefits, challenges, and risks of tokenized speech. Such speech cannot be deleted from the web in the way traditional internet intermediaries currently remove content. Thus, the potential influence of NFTs on freedom of expression remains unclear. This Article strives to fill the gap and contribute to the literature in several ways. It introduces the idea of owning digital assets by using NFT technology and surveys the main uses of tokenizing digital assets and the benefits of such practices. It aims to raise awareness of the potential of tokenized speech to circumvent censorship and to act as the engine of freedom of expression. Yet it also addresses the challenges and risks posed by tokenized speech. Finally, it proposes various solutions and remedies for the abuse of NFT technology, which has the potential to perpetuate harmful speech. As we are well aware of the challenges inherent in our proposals for mitigation, this Article also addresses First Amendment objections to the proposed solutions.


On March 5, 2021, about five years after posting his first tweet on Twitter, former Twitter CEO Jack Dorsey put the tweet up for sale as an NFT, a non-fungible token.1See Jay Peters, Please Do Not Give Billionaire Jack Dorsey Money for His Tweet, The Verge (Mar. 5, 2021) []. The tweet, which stated: “just setting up my twttr”, was offered on a platform called “Valuables” and was sold for about USD 2.9 million to Mr. Sina Estavi.2 Valuables by Cent, (last visited Mar. 28, 2023) []. The buyer compared the tweet to the “Mona Lisa” and believed it was a wise investment.3See Justin Harper, Twitter: Buyer Defends Paying $2.9m for ‘Mona Lisa’ of Tweets, BBC News (Mar. 25, 2021) [] (“‘It’s a piece of human history in the form of a digital asset. Who knows what will be the price of the first tweet of human history 50 years from now,’ Malaysia-based Sina Estavi said.”).

Shortly afterwards, on March 25, a New York Times column on NFTs written by Kevin Roose was converted into an NFT4See Kevin Roose, Buy This Column on the Blockchain!, N.Y. Times (Jun. 30, 2021), []. and auctioned on a marketplace named Foundation,5 Foundation, (last visited Mar. 29, 2023) []. with all proceeds going to the New York Times Neediest Cases Fund. The profile of the auction’s winner who bought the NFT containing the column “was linked to a Twitter profile belonging to a Dubai-based music production company, and to an Instagram account identified as that of Farzin Fardin Fard”.6See Kevin Roose, Why Did Someone Pay $560,000 for a Picture of My Column?, N.Y. Times (Mar. 26, 2021) []. It is, however, not clear whether the winner is Mr. Fard, some other individual, or multiple people. The winner bought the column for the sum of USD 560,000 and added it to his NFT collection.7See id.

Non-fungible tokens (NFTs) are unique tokens commonly produced using the ERC-721 standard, a free and open token standard that describes how to build non-fungible tokens.8See ERC 721, []; ERC721 Tokens (Non-Fungible) Explained, dist.0x Educ. Portal (last visited Jan. 29, 2023) []; Joshua Fairfield, Tokenized: The Law of Non-Fungible Tokens and Unique Digital Property, 97 Ind. L.J. 1261, 1272 (2022). The tokens allow their buyers to become owners of scarce, unique, non-interchangeable digital assets, as each token links to a unique asset that cannot be duplicated or replaced.9Qin Wang, Rujia Li, Qi Wang & Shiping Chen, Non-Fungible Token (NFT): Overview, Evaluation, Opportunities and Challenges, arXiv preprint arXiv:2105.07447 (2021). The uniqueness of the NFT is inherent in the fact that no two tokens have the exact same data.10See Elad Elrom, The Blockchain Developer: A Practical Guide for Designing 469, 470 (2019). Tokens containing the digital assets are stored on a digital ledger – the blockchain – and can be transferred within it. The blockchain connects users to each other through a chain of digital blocks. A public blockchain is usually maintained by its peers and is therefore considered a distributed ledger without a central authority.11Hadar Y. Jabotinsky, The Regulation of Cryptocurrencies: Between a Currency and a Financial Product, 31 Fordham Intell. Prop. Media & Ent. L.J. 118, 138 (2020); Hadar Y. Jabotinsky, Israel Klein & Roee Sarel, Globalize Me: Regulating Distributed Ledger Technology, 56 Vand. J. Transnat’l. L. 435 (2023) []. The programs that create and transfer NFTs are called smart contracts. These smart contracts are an additional layer of technology, basically comprised of command instructions such as “if X happens, execute Y”.12Jabotinsky, supra note 11, at 139.
As such, these contracts are automatically executable.13Fairfield, supra note 8, at 49–50 (criticizing this terminology: “The contract analogy for blockchain-based code has largely been an unfortunate false start. Nearly every legal analysis of smart contracts concludes that while the code might help execute a contract, smart contract programs are not themselves contracts.”). The blockchain thereby enables the transfer of value or assets without the help of intermediaries. It is, at its core, a system based on trust: in the technology of the blockchain and in the peer-to-peer network that allows cryptographically secured record keeping,14See Fakhar ul Hassan, Anwaar Ali, Mohamed Rahouti, Siddique Latif, Salil Kanhere, Jatinder Singh, Ala Al-Fuqaha, Umar Janjua, Adnan Noor Mian, Junaid Qadir & Jon Crowcroft, Blockchain and the Future of the Internet: A Comprehensive Review, arXiv (Nov. 13, 2020) [] (“The original premise of blockchain is to establish trust in a peer-to-peer (P2P) network circumventing the need for any sort of third managing parties.”); see also Roman Beck, Beyond Bitcoin: The Rise of Blockchain World, 51 Computer 54 (Feb. 2018) (“Permanent record-keeping that can be sequentially updated but not erased creates visible footprints of all activities conducted on the chain. This reduces the uncertainty of alternative facts or truths, thus creating the ‘trust machine’ The Economist describes.”). guaranteeing the authenticity and uniqueness of assets.15Id. See Beck, supra note 14 (“Data storage on the blockchain is secured by cryptographic hashes in which data being hashed return a fingerprint that verifies the authenticity of the data.”). NFTs offer full, real, digital ownership over the assets attached to the tokens.16Fairfield, supra note 8, at 1272–73. As the asset is attached to a token, when the asset is sold, the token it is attached to is transferred on the distributed ledger technology (“DLT”) and the transaction is recorded in a transparent manner.
This is a way to ensure that all users of the DLT can keep track of registered transactions and to provide the owner with proof of ownership.17See Michal Lavi & Hadar Y. Jabotinsky, Speak Out: Verifying and Unmasking Cryptocurrency User Identity, 32 Fordham Intell. Prop. Media & Ent. L.J. 518, 549–50 (2022). NFTs are basically a way to allow the exchange of digital assets in return for more tangible ownership.
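The ownership-tracking and conditional-transfer logic described above can be sketched in a few lines of Python. This is an illustrative model only, not real smart-contract code: an actual ERC-721 contract is written in a language such as Solidity and its state lives on the blockchain, and the names used here (`NFTRegistry`, `mint`, `transfer`) are our own illustrative choices.

```python
class NFTRegistry:
    """A minimal, in-memory sketch of ERC-721-style ownership tracking."""

    def __init__(self):
        self.owners = {}   # token_id -> current owner address
        self.history = []  # append-only transfer log (the transparent record)

    def mint(self, token_id: str, owner: str) -> None:
        # Uniqueness guarantee: no two tokens may share an identifier.
        if token_id in self.owners:
            raise ValueError("token already exists")
        self.owners[token_id] = owner
        self.history.append(("mint", token_id, None, owner))

    def transfer(self, token_id: str, sender: str, recipient: str) -> None:
        # The "if X happens, execute Y" pattern: the transfer executes
        # only if the sender actually owns the token.
        if self.owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self.owners[token_id] = recipient
        self.history.append(("transfer", token_id, sender, recipient))
```

Note how the transfer executes automatically once its condition is met, mirroring the command structure of a smart contract, and how the append-only history plays the role of the transparent transaction record that lets every participant trace ownership.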

The benefits offered by blockchain technology make it possible to reduce the costs, risks, constraints, and fraud associated with traditional trading systems. This technology can also help improve security, reliability of originality verification, and traceability of digital assets.18See Ferdinand Regner, André Schweizer & Nils Urbach, NFTs in Practice–Non-Fungible Tokens as Core Component of a Blockchain-Based Event Ticketing Application 1, 11–13 (Fortieth International Conference on Information Systems, Munich, 2019), []. Given that “NFT” was the Collins English Dictionary’s 2021 word of the year,19See David Shariatmadari, Get Your Crypto at the Ready: NFTs are Big in 2021, Collins Language Lovers Blog (Nov. 24, 2021), []. and that we hear this buzzword over and over again, it stands to reason that NFTs are worth noticing. This relatively new technology enables engagements that were previously impossible, disrupts traditional business models, and can be the impetus for new business models and markets, such as the market for digital artwork, collectibles, and “emotional assets” or “investments of passion.”20Noelle Acheson, Opinion, Crypto Long & Short: What NFT ‘Markets for Emotion’ Say About Tech Business Models, CoinDesk (Sept. 14, 2021, 8:26 AM), []; see Nyane Ezekiel Macdonald Mofokeng & Thapeli Kenny Matima, Future Tourism Trends: Utilizing Non-Fungible Tokens to Aid Wildlife Conservation, African J. Hospitality Tourism and Leisure (Sept. 2018).
In this way, NFTs can promote efficiency and innovation, as they offer entrepreneurs and artists an additional means of financial reward for their digital work.21See Matt Hougan & David Lawant, CFA Institute Research Foundation, Cryptoassets: The Guide to Bitcoin, Blockchain, and Cryptocurrency for Investment Professionals 8–9 (2021).

As our Article will further demonstrate,22See discussion infra Part I.2. NFTs can have many uses. These tokens, however, are primarily used in the art, music, and gaming industries.23See Fairfield, supra note 8, at 1273; see also Brian Elzweig & Lawrence J. Trautman, When Does a Non-Fungible Token (NFT) Become a Security?, Ga. St. L. Rev. 295, 295 (2023) (explaining that there are many uses of NFTs, such as art, sports, and securities, and that the preferred mode of regulation should depend on the use, since “it is the particular use of a given NFT that will determine its appropriate regulatory regime, since it may take the form of a collectible, data associated with a physical item, financial instrument, or a permanent record associated with a person, such as marriage license, or property deed”). For example, perhaps the most famous NFT sale occurred when Mike Winkelmann, the digital artist known as Beeple, put an NFT digital art photo titled “Everydays: The First 5000 Days” up for auction and sold it for USD 69 million.24See Jacob Kastrenakes, Beeple Sold an NFT for $69 Million, The Verge (Mar. 11, 2021, 9:09 AM), []. One might wonder – why would anyone pay such a large sum for a photo which can easily be downloaded by anyone in a Google search? The answer relates to human psychology – people have an urge to own things and derive some sort of mental benefit from their possessions. In fact, this even plays a role in defining identity.25See Bruce Hood, Mine! Ownership of Objects Plays a Critical Role in Human Identity, Sci. Am. Mind, 56 (Sept./Oct. 2011); see also Rossenpavlov, Money, Happiness and Eternal Life : Greed, steemit (Nov. 3, 2017, 5:36 PM), []. Studies show that possession not only contributes to the sense of self but is also perceived as an extension of the self.26See Russell W. Belk, Possessions and the Extended Self, 15 J. Consumer Rsch. 139 (1988).
Thus, buyers of NFTs know that they have become the owner of an authentic asset (as an NFT cannot be forged) and in turn are rewarded by the satisfaction they get from owning an original.27See Leena Kim, WTF Is an NFT—and Why Should We Care?, Town & Country (Mar. 25, 2021), []. Such ownership also helps the owner gain endorsement of the author and social recognition.28See Brian L. Frye, After Copyright: Pwning NFTs in a Clout Economy, 45 Colum. J.L. & Arts 341, 348–49 (2022).

NFTs are also starting to gain popularity in solving practical legal issues such as copyright. Registering copyrights on the blockchain helps ensure global protection for the copyrighted item on a system which is both transparent and open.29Felipe Erazo, Italian Copyright Agency Selects Algorand to Create Over Four Million NFTs to Represent Author Rights (Mar. 26, 2021), []. Moreover, NFTs allow efficient management of rights in the music industry as they enable a Tokenized Music License (“TML”) and can be an efficient tool for distributing royalties for use of musical works or other copyrighted items.30See generally Charles Adjovu & Ewa Fabian, Blockchain-Mediated Licensing (2020), []; Tim Ingham, NFTs for Copyrights: Why Non-Fungible Tokens Could Transform Who Gets Paid from Music Rights, and How, Music Bus. Worldwide (Mar. 15, 2021), []. For example, a new blockchain music streaming platform named ROCKI recently launched “royalty income right music” NFTs.31Rocki, ROCKI, the New Music Streaming Platform on the Blockchain Just Launched Their First Royalty Income Right Music NFT (Nonfungible Token) for a Song, Which Sold for a Record 40 ETH (,800 USD at the Time of Writing)!, Cision (Dec. 16, 2020), []. Soon, other streaming platforms followed suit, as they discovered the potential of NFTs in royalty collection.32See Jolene Creighton & Langston Thomas, How to Stream in Web3: A Guide to the Best NFT Streaming and Music Marketplaces, NFT Now (Oct. 21, 2022), [].

The gaming industry is another prominent NFT user. One example is CryptoKitties,33See CryptoKitties, (last visited Mar. 5, 2023) []. a blockchain-based game played on the Ethereum blockchain. Players can buy and sell pictures of cats, each tied to a particular NFT. Each cat has unique features and a unique digital identity.34See Bryan Wilson, Blockchain and the Law of the Cat: What Cryptokitties Might Teach, 88 UMKC L. Rev. 365, 379 (2019). The game allows users to “breed, trade, and play with the virtual cats in a rich online community.”35Fairfield, supra note 8, at 1276.

NFTs are the driving force behind a new wave of crypto adoption.36See Cooper Turley, If You Haven’t Followed NFTs, Here’s Why You Should Start, TechCrunch (Feb. 27, 2021), []. People who were never interested in cryptocurrencies as a financial asset discovered a sudden interest in NFTs because they are interested in the assets attached to them.37Peter Carstairs & Lucy Sanderson, Non-Fungible Tokens and Digital Assets – What’s the Deal?, Minter Ellison (Oct. 6, 2021), [] (“Because NFTs are non-fungible, and because they are stored in a blockchain (more on this below), this has enabled an entirely new market in digital artworks to be created whereby digital works can be readily bought and sold.”). Just like any new technology, one can expect the use of NFTs to expand beyond early adopters.38See Mofokeng & Matima, supra note 20, at 14. Given past patterns of technological adoption, the late majority and the laggards can be expected to quickly follow suit,39See Everett M. Rogers, Diffusion of Innovations 27 (Free Press, 5th ed. 2003) (explaining the process of diffusion of innovation). and NFT use is likely to thrive.

Wide adoption might happen sooner than we think. This is because blockchain-based social media platforms and blockchain-based infrastructures are developing as well.40Currently the early adopters already use NFTs and we are at the early majority stage of adoption. See ul Hassan et al., supra note 14, at 13. One such example is the blockchain-based social media platform known as “Steemit.”41 Steemit (last visited Jan. 3, 2023) []. For expansion, see Chao Li & Balaji Palanisamy, Incentivized Blockchain-based Social Media Platforms: A Case Study of Steemit, in 11th ACM Conference on Web Science 145 (2019). This platform is operated by a decentralized community in which users create content. The platform rewards creators of popular posts with its own generated cryptocurrencies.42Li & Palanisamy, supra note 41, at 150. If more mainstream platforms such as Facebook and Twitter follow, most of us will come to encounter and use decentralized blockchain platforms on a daily basis. Moreover, NFTs are expected to be integrated with the internet via API (“Application Programming Interface”).43Dinuka Ravijaya Piyadigama & Guhanathan Poravi, Exploration of the Possibility of Infusing Social Media Trends into Generating NFT Recommendations, in ACM 3 (2022), []. This set of definitions and protocols allows computers, products, or services to connect and communicate with other products and services.44See Petr Gazarov, What is an API? In English, Please., FreeCodeCamp (Dec. 19, 2019), []. The integration of NFTs with the internet is expected to accelerate as programmers develop more APIs that can be connected and comprehended by smart contracts on a blockchain.45ul Hassan et al., supra note 14, at 11 (explaining that APIs can be programmed in a way that could allow blockchain infrastructure to “talk” with the internet infrastructure). For example, in the future, ticketing companies might create NFT concert tickets.
Buying such a ticket would allow the owner to keep the digital ticket as a souvenir, add an attestation on social media to connect with other attendees, and of course sell it in the future if its value increases as a collectible.46See Loïc Lesavre et al., A Taxonomic Approach to Understanding Emerging Blockchain Identity Management Systems 42 (National Institute of Standards and Technology, 2020). DLT-based systems are likely to become a substantial component of the architecture of tomorrow’s Web 3.0, a decentralized, blockchain-based world wide web free of intermediation,47See Bobby Allyn, People are Talking about Web3. Is it the Internet of the Future or just a Buzzword?, NPR (Nov. 21, 2021), []. leaping “forward to open, trustless and permissionless networks”.48Max Mersch & Richard Muirhead, What Is Web 3.0 & Why It Matters, Fabric Ventures (Dec. 31, 2019) []. If this indeed occurs, it would bring NFT use to the forefront of the digital stage.
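The bridging role of APIs described above can be illustrated with a toy read-only endpoint handler in Python. Everything here is hypothetical: the token identifier, the owner address, and the `api_get_token` function are illustrative stand-ins, not any real marketplace's API, and a production service would read from a blockchain node or indexing service rather than an in-memory dictionary.

```python
import json

# Hypothetical on-chain state; a real API layer would query a blockchain
# node or an indexing service instead of this in-memory stand-in.
CHAIN_STATE = {
    "ticket-042": {"owner": "0xa1b2c3", "event": "Concert", "seat": "A12"},
}

def api_get_token(token_id: str) -> str:
    """Translate on-chain token state into the JSON responses that
    web and mobile applications expect from an HTTP API."""
    record = CHAIN_STATE.get(token_id)
    if record is None:
        return json.dumps({"status": 404, "error": "token not found"})
    return json.dumps({"status": 200, "token_id": token_id, **record})
```

An adapter of this shape is what lets an ordinary website or social media platform display who owns an NFT ticket without its users ever touching the blockchain directly.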

NFTs open up a new world for legal literature and thought as well. Yet there is an absence of core research on the relevant legal issues pertaining to NFTs. Most studies on NFTs focus on smart contracts, property rights, and ownership.49See, e.g., Fairfield, supra note 8; Brian L. Frye, After Copyright: Pwning NFTs in a Clout Economy, 45 Colum. J.L. & Arts 341 (2022); Brian L. Frye, NFTs & the Death of Art, SSRN (April 19, 2021), []. The literature currently neglects to address the expressive values of NFTs and the unique free speech implications of these tokens. NFTs are different from other types of speech published on internet websites and social networks. Both the NFT and the transactions associated with it are freely available on the digital platform for anyone to see, but the information attached to the token cannot be altered as it can be in a regular database or on an intermediated internet platform.50 Elrom, supra note 10, at 471; Po-Wei Chen, Bo-Sian Jiang & Chia-Hui Wang, Blockchain-Based Payment Collection Supervision System Using Pervasive Bitcoin Digital Wallet, Fifth International Workshop on Pervasive and Context-Aware Middleware, 141 (2017). This is because the blockchain has no single central authority which manages the system. The blockchain’s distributed ledger system is jointly administered by a network of communication endpoints, known as “nodes”, for transmission or redistribution of data.51See Unal Tatar, Yasir Gokce & Brian Nussbaum, Law Versus Technology: Blockchain, GDPR, and Tough Tradeoffs, 38 Comput. L. & Sec. Rev. (2020). See generally Ben Lutkevich & Kate Gerwig, Network Node, Tech Target (Oct. 2021). The connectedness of the entire chain of blocks protects the blockchain and the tokens it transfers.52Fairfield, supra note 8, at 1270. The permanency of the information on NFT tokens, the inability to remove it from the public blockchain, and the fact that the blockchain never forgets, all substantially limit the ability of governments53
See Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest Ch. 2 (2017) (addressing governmental censorship). and private actors54See e.g., Sara Harrison, No One’s Happy with YouTube’s Content Moderation Policies, Wired (Aug. 28, 2019, 7:00 AM) [] (YouTube deletion of many Prager U. videos). to censor speech. Off the blockchain, most private actors currently respond to government regulations and guidelines and cooperate with their takedown requests.55See Tomer Shadmy & Yuval Shany, Protection Gaps in Public Law Governing Cyberspace: Israel’s High Court’s Decision on Government-Initiated Takedown Requests, LawFare (Apr. 23, 2021, 10:55 AM) [] (expanding on the practice of the Israeli government to request intermediaries to voluntarily remove content that criticizes public officials). In addition, most social media platforms also respond to takedown requests by private individuals, removing legitimate content as a result of collateral censorship.56Collateral censorship happens “when a (private) intermediary suppresses the speech of others in order to avoid liability” for such speech. See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Immunity, 87 Notre Dame L. Rev. 293, 295–96 (2011). Thus, they silence opposing opinions or expressions that are unsuitable to the agendas of governments and private individuals. Another aspect of removing content relates to content that does not comply with the platform’s terms of service. Yet sometimes platforms even remove content in what seems to be an arbitrary move.57See id. at 323–24; infra II.A.3 (as the article explains later, this happens because of categorical classification that overlooks nuances of context). NFTs restrict the power of state actors and private intermediaries to remove content. Thus, they can help foster a vibrant marketplace of ideas as they allow the free flow of information. In other words, NFTs have the potential to become the engine of speech.
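The claim that the blockchain "never forgets" rests on the hash-chaining noted in the sources above: each block's cryptographic fingerprint commits to its predecessor's, so editing any past record invalidates every later link. A minimal sketch in Python (using SHA-256, as Bitcoin- and Ethereum-family chains do; the function names are our own):

```python
import hashlib
import json

def block_hash(index, data, prev_hash):
    """Fingerprint a block's contents with SHA-256."""
    payload = json.dumps({"i": index, "data": data, "prev": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Chain blocks so each one commits to its predecessor's hash."""
    chain, prev = [], "0" * 64
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"i": i, "data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; any edit to past data breaks the links."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or \
           block["hash"] != block_hash(block["i"], block["data"], prev):
            return False
        prev = block["hash"]
    return True
```

Tampering with any earlier block's data causes `verify` to fail, which is why a network of independent nodes comparing hashes can detect, and refuse, any attempt to alter or delete a recorded expression.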

The permanency of tokenized speech, however, jeopardizes efforts to combat harmful speech. If, for example, John Doe tokenizes a defamatory post about Jane Doe, or tokenizes a tweet with private information about her, and then becomes the owner of the tweet, it would be visible to all blockchain users. Such private information can include her home address, phone number, social security number, credit card and bank account numbers, and more. Such doxing can also expose her to physical danger, stalking, and online and offline harassment. Due to the decentralized structure of the blockchain and the lack of dependency on a central server, Jane would not be able to report it to a central intermediary and request that it be taken down. As mentioned, the structure of the blockchain system does not allow alteration of the information published on it.58See Tatar et al., supra note 51, at 3. A tokenized tweet could also contain information that could shame Jane in public and harm her dignity and reputation.

The potential harm of tokenized speech extends beyond personal reputations and can encroach on the public interest. First, such permanent, tokenized speech is likely to infringe on the dignity of minorities and other marginalized populations. As such populations suffer from a higher-than-average degree of shaming, perpetuating speech that shames these groups could increase socio-economic gaps.59See e.g., Thomas H. Koenig & Michael L. Rustad, Digital Scarlet Letters: Social Media Stigmatization of the Poor and What Can Be Done, 93 Neb. L. Rev. 592 (2015). Second, it could cause data pollution that would contaminate the flow of accurate information to the general public and even erode democracy,60See Omri Ben Shahar, Data Pollution, 11 J. Legal Analysis 104, 105, 112–13 (2019) (treating “fake news” as “data pollution” that disrupts social institutions and public interests in a similar manner to environmental pollution). especially when a false rumor about a public official is tokenized. Worst of all, when incitement or hate speech is tokenized, it could even result in violence outside the digital sphere.

In addition, even though the possibility of keeping data permanent and visible on the blockchain can mitigate governmental and private censorship, those very same features of permanency and visibility can backfire. For example, they might enable oppression by totalitarian governments, which might track and collect tokenized information on protestors and sentence them to prison.61It should be noted that even in democracies, governments might use records collected on opponents to blackmail them. See Neil Richards, Why Privacy Matters 132–133 (2021) (describing how the U.S. NSA wanted to surveil “radicalizers” who are not terrorists but merely radical critics of U.S. policy. Richards explains that this practice “raises troubling questions about the government’s ability and willingness to blackmail its critics for nothing more than sincerely speaking on core matters of political speech protected by the First Amendment”). NFTs are thus expected to enhance governments’ capabilities to collect information on their opponents and to use that information to oppress them.

As NFTs become more integrated with social media platforms, tokenized speech is expected to gain more visibility and attract more attention.62See ul Hassan et al., supra note 14, at 11. As a result, tokenized speech can exacerbate harm. Moreover, such tokens allow eternal memory of the tokenized speech, which in theory could live forever. In other words, tokenized harmful speech could be shackled to unwitting victims and could have a long-term impact on their ability to live in peace.

Tokenized NFT speech changes the balance between free speech and the individual rights to dignity, reputation and privacy. This leaves a lacuna which the law has yet to address. Currently, tokenized speech remains forever on the blockchain, without any ability to remove it, and thus victims are likely to be left without any remedy. This Article aims to fill this void by proposing various solutions that would be available to victims in order to mitigate the harm caused by tokenized speech. It proposes ex ante solutions that should be adopted by NFT marketplaces and intermediaries that allow integration of NFTs in their infrastructure, in order to decrease visibility of the harmful token and mitigate any damage. It also proposes ex post solutions by developing remedies through direct legal action against the owner of the harmful tokenized speech, inter alia in order to minimize the profitability of owning tokenized harmful speech. Keeping these goals in mind, the Article is divided into the following parts:

Part I demonstrates the new possibility to fully own digital property. It explains what non-fungible tokens (NFTs) are and surveys their features. It illustrates the distinctions between NFTs and fungible tokens, as each type of token possesses a different value. It also elucidates the benefits of the decentralized blockchain-based system in providing transparency and protection against fraud. Subsequently, it provides an overview of the primary applications of NFTs in art, music, collectibles, gaming, and other industries,63Fairfield, supra note 8, at 1273–78. —demonstrating that NFTs provide a way of achieving the previously impossible, thus allowing for innovation and growth.64See Matt Hougan & David Lawant, CFA Inst. Rsch. Found., Cryptoassets: The Guide to Bitcoin, Blockchain, and Cryptocurrency for Investment Professionals 10 (2021).

Part II focuses on online content moderation. It explains that, on the internet, intermediaries control the flow of information. “While it seems as if everyone ‘can publish freely and instantly online,’ many intermediaries in fact ‘actively curate the content’ that their users post on their platforms”.65See Michal Lavi, Publish, Share, Re-Tweet, and Repeat, 54 U. Mich. J. L. Reform 441, 486 (2021). As private entities, intermediaries are not considered public forums and thus are not subject to judicial scrutiny under the First Amendment.66Prager Univ. v. Google LLC, 951 F.3d 991, 995 (9th Cir. 2020); Fyk v. Facebook Inc., No. 19-16232, 2020 WL 3124258, at *1 (9th Cir. June 12, 2020) (noting that § 230 protects intermediaries’ editorial discretion to moderate content); Divino Group LLC v. Google LLC, No. 19-cv-04749-VKD, 2021 WL 51715 (N.D. Cal. Jan. 6, 2021); Lewis v. Google LLC, No. 20-16073, 2021 WL 1423118 (9th Cir. April 15, 2021); Brock v. Zuckerberg, No. 20-cv-7513-LJI, 2021 U.S. Dist. LEXIS 119021 (S.D.N.Y. June 25, 2021). Section 230(c) of the Communications Decency Act (“CDA”), titled “protection for Good Samaritan private blocking and screening of offensive material,” encourages online intermediaries to use their editorial discretion and grants them immunity from liability for editorial decisions.67Communication Decency Act, 47 U.S.C. § 230(c); Danielle Keats Citron, The Fight For Privacy: Protecting Dignity, Identity and Love in the Digital Age 86 (2022) (explaining that Section 230(c) of the CDA originally aimed “to incentivize private efforts aimed at combating ‘offensive’ material.” Section 230(c)(1) addresses immunity for under-removal of content and Section 230(c)(2) conversely addresses immunity for over-removal of content by intermediaries.) [hereinafter Citron, The Fight for Privacy]. Intermediaries do indeed remove harmful speech from their platforms.
However, in some cases they may also remove legitimate content because it is not compatible with their agendas, due to algorithmic misclassification,68Sal Bardo, YouTube Continues to Restrict LGBTQ Content, HuffPost (Jan. 17, 2018); Niva Elkin-Koren & Maayan Perel, Democratic Contestation by Design: Speech Governance by AI and How to Fix It, Fla. St. U. L. Rev. (forthcoming 2023) (manuscript at 67–68), []. or because the content was flagged as a “false positive” for harmful content.69See Daphne Keller, Filtering Facebook: Introducing Dolphins in the Net, a New Stanford CIS White Paper —Or—Why Internet Users and EU Policymakers Should Worry About the Advocate General’s Opinion in Glawischnig-Piesczek, Ctr. for Internet & Soc’y (Sept. 5, 2019, 8:51 AM), []; Elkin-Koren & Perel, supra note 68, at 50; Jack M. Balkin, Free Speech Versus the First Amendment, UCLA L. Rev. (forthcoming 2023) (draft at 39), []. Moreover, they can remove content in cooperation with government requests70Shadmy & Shany, supra note 55. or even due to requirements by authoritarian governments.71 The Nat’l Bureau of Asian Rsch, Social Media Platforms and Authoritarian Censorship in Asia (Dec. 23, 2020) [] (“authoritarian governments absolutely do try to find ways to enforce censorship on platforms, with or without the witting help of the platforms themselves…. Platforms have an incentive to avoid challenging local authorities or offending cultural sensitivities, even when that leads to suppression of political speech.”). Subsequently, the Article puts the spotlight on one prominent NFT feature—the fact that the blockchain never forgets, and the information carried by a token can never be deleted or altered. The fact that NFTs cannot be altered is a game changer for speech and offers great promise to the area of freedom of expression, as the unique features of NFTs can remove barriers to the flow of information caused by private censorship. In other words, NFTs can be an engine of speech.

Part III focuses on the flip side of tokenized speech. It explains that the very same features that make NFTs an engine of speech can also magnify the harm that tokenized speech can cause. In contrast to regular user-generated content in an intermediated system, where intermediaries moderate through human moderators or algorithmic enforcement, the blockchain is truly decentralized and has no central intermediaries. Moreover, the features of NFTs, namely that they are traded on the blockchain, combined with the blockchain's eternal memory, limit enforcement options against the tokenization of harmful expressions. Subsequently, this Part maps out the main types of harmful speech that can be tokenized and the types of harm that can be caused, such as dignitary harm, infringement of the public interest, and even violence and physical harm. It also surveys the types of remedies that are available to, and applied by, online intermediaries, which are not available on the blockchain. This roadmap illustrates how NFTs can exacerbate the severity of damage caused by various types of harmful content and takes the first step towards developing a framework for mitigating this harm.

Part IV proposes solutions to mitigate the problem of misuse of NFTs to perpetuate harmful expression. The first type of solution is aimed at NFT marketplaces and proposes that these marketplaces adopt a safety-by-design concept. In other words, it proposes a technological design that would mitigate, ex ante, the damage that tokens containing harmful information cause to victims.72On the concept of mitigating the harm of dissemination of defamation by design, see Lavi, Publish, supra note 65, at 494. For expansion on a similar concept in the realm of privacy protection, see Chris Jay Hoofnagle, Federal Trade Commission Privacy Law and Policy 190–91 (Cambridge Univ. Press, 2016). For example, an NFT marketplace could pre-screen the content of the speech before tokenizing it, or choose to store the content outside the blockchain and include on the blockchain only a reference that points to it.73ul Hassan et al., supra note 14, at 17; Fairfield, supra note 8, at 1272 ("[A]n NFT stands for ownership of something not directly stored on the blockchain . . . ."). Such solutions could be adopted voluntarily by NFT marketplaces or imposed on them through regulation. The second type of solution focuses on obscuring harmful speech. It is aimed mainly at internet intermediaries, which can reduce the visibility of tokenized expressions that have been integrated into the internet. The third type of solution is directed at the original seller of the token containing harmful speech and the buyers of such tokens. It argues that in some circumstances, especially when the token contains unprotected speech, the identities of the token's original creator and subsequent buyers should be unmasked. This would allow the victim to file an action against them in court. Courts could apply the remedy of compensation and obligate the original creator and the buyers to compensate victims, or apply a remedy of disgorgement.
Under such a remedy, creators and buyers would be deprived of the profits gained from the commercialization and sale of the token. Part IV also addresses First Amendment concerns with respect to the constitutionality of the proposed solutions.

I. NFT Tokens: Creating Unique, Rivalrous Assets

People want to own things and have a deep passion for possessions, which also plays a role in defining their identity.74See Hood, supra note 25, at 56; Rossenpavlov, supra note 25. People can own different types of property. One type can be classified as "essentials," i.e., things people consume and use which are necessary for survival. However, people also surround themselves with irreplaceable emotional goods that relate to their lives and experiences. These items carry sentimental and personal value. The fact that emotions are attached to possessions makes them pleasurable to consume.75Fairfield, supra note 8, at 1264. In the past, sentimental assets were unique and rivalrous. However, the shift to the digital era made it harder to preserve these features, as "[e]very file sent is sent by making a copy."76Id. The value attached to unique assets has vanished almost completely in the digital environment.77Id. The development of non-fungible tokens (NFTs), unique blockchain-based tokens in use since 2018 that can serve as verifiable proof of origin, has the potential to change this and resurrect the concept of unique, rivalrous assets in the digital sphere.

A. What Are NFTs? How Do They Work?

NFTs are unique tokens meant to represent digital assets that people can buy and sell. These tokens are characterized by special features that differentiate them from fungible tokens, or currencies. First, as each token carries specific data that is unique to it, NFTs are not interchangeable.78Elrom, supra note 10, at 470. Thus, in contrast to fungible tokens such as USD or fungible cryptocurrencies, where any unit can be exchanged with another without affecting the holder, NFTs cannot be replaced with other non-fungible tokens of the same type.79Mofokeng & Matima, supra note 20, at 12; Diana Qiao, This is Not a Game: Blockchain Regulation and Its Application to Video Games, 40 N. Ill. U. L. Rev. 187 (2020) ("Each token carries 'unique information and varying levels of rarity,' meaning that the value of one NFT is not equivalent to the value of another."). Second, they are unique and not uniform. Each token is distinct; no one token is like another.80Mofokeng & Matima, supra note 20, at 11 ("A crypto collectable is non-fungible, a cryptographically unique, non-replicable digital asset . . . ."); ul Hassan et al., supra note 14, at 6 ("[E]very token is associated with a unique identifier, rendering each token unique to its respective owner."); see also Lawrence J. Trautman, Virtual Art and Non-Fungible Tokens, 50 Hofstra L. Rev. 361, 364 (2021) ("NFT technology leverages digital uniqueness in a way that makes a new social phenomenon possible. There is only one Mona Lisa in the Louvre: owning a copy doesn't provide the same thrill."). The uniqueness of NFTs makes them valuable.81Fairfield, supra note 8, at 1263 ("What makes these assets valuable is that they are one-of-a-kind."). Third, they cannot be divided, as the basic unit is one token only.82Thippa Reddy Gadekallu, Thien Huynh-The, Weizheng Wang, Gokul Yenduri, Pasika Ranaweera, Quoc-Viet Pham, Daniel Benevides da Costa & Madhusanka Liyanage, Blockchain for the Metaverse: A Review, Future Generation Comput. Sys., June 2023, at 404 ("[E]ach NFTs are non-interchangeable and cannot be divided.").

How does one create an NFT? Blockchain technology enabled the creation of NFTs and the storage of their value, as tokens are stored on the blockchain and their uniqueness can be certified. The original premise of the blockchain is to establish trust in a network, circumventing the need for any sort of managing third party.83See ul Hassan et al., supra note 14, at 2. There are NFT exchanges, where anyone can create an account, name an NFT, and offer it for sale, either for a fixed price or in an auction.84See Frye, Death of Art, supra note 49, at 4. Because the blockchain eliminates the need for intermediaries and relies on transparency, it has been dubbed a "trust machine."85See generally The Trust Machine, The Economist (Oct. 31, 2015). The blockchain is maintained by an online peer-to-peer network. It utilizes a distributed ledger technology that tracks transactions, maintains a complete history of verified transactions,86Jabotinsky & Lavi, supra note 11, at 549–50. and contains "algorithms that ensure consistency of data across storage locations."87Shaanan Cohney & David A. Hoffman, Transactional Scripts in Contract Stacks, 105 Minn. L. Rev. 319, 332 (2020). Thus, ownership is listed on the blockchain. To prevent fraud, blockchain technology uses a consensus mechanism which makes attempts to falsify the ledger tremendously expensive or too risky.88Fairfield, supra note 8, at 1269. This structure makes the blockchain tamper-proof.89Elrom, supra note 10, at 471.

The verification and permanency of information on the blockchain are made possible thanks to a "mathematical relationship called a hash, and a consensus mechanism for verifying hashes."90Fairfield, supra note 8, at 1269. Each block on the blockchain includes, among other data, a hash, which can be attached to anything: pictures, text, lists of transactions, and more.91Id. at 1270. The hash is what makes it possible to display who owns what.92See id. Once a transaction is made, it is recorded on a block on the blockchain. Each new block identifies the previous one through its unique hash.93Id. The process of "mining" creates these new blocks. In this process, several computers compete to solve a mathematical problem, and the computer that wins is given the opportunity to form a block and be rewarded (usually by receiving crypto).94Id. Hashing is the process that converts a grouping of digital data into a single "number": a hash that serves as the unique identifier of the source data, a digital fingerprint that cannot be tampered with.95See Tonya M. Evans, Cryptokitties, Cryptography, and Copyright, 47 AIPLA Q.J. 219, 239 (2019). This number is then used as the base for the mathematical problem presented to the next miner.96Fairfield, supra note 8, at 1278. Each subsequent block likewise mathematically references the hash of the block before it, and so on.97Id.; Evans, supra note 95. In order to alter one block, it is therefore necessary to change every block that comes after it. This connectivity of the entire chain protects the blockchain, enables it to remember, and makes it resistant to fraud and censorship.98Fairfield, supra note 8, at 1279; Evans, supra note 95, at 239.
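The chaining mechanism described above can be illustrated with a minimal sketch. This is a deliberately simplified model for illustration only (the block contents and names are hypothetical, and no real consensus mechanism or mining is modeled); it shows only why rewriting one block invalidates every block that follows it:

```python
import hashlib
import json

def hash_block(block: dict) -> str:
    """Compute a block's digital fingerprint from its full contents."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    """Each new block records the hash of the block that came before it."""
    return {"data": data, "prev_hash": prev_hash}

def chain_is_valid(chain: list) -> bool:
    """The chain is valid only if every block matches its predecessor's hash."""
    return all(
        chain[i]["prev_hash"] == hash_block(chain[i - 1])
        for i in range(1, len(chain))
    )

# Build a three-block chain recording hypothetical NFT transactions.
genesis = make_block("genesis", "0" * 64)
block1 = make_block("Alice mints token #1", hash_block(genesis))
block2 = make_block("Alice sells token #1 to Bob", hash_block(block1))
chain = [genesis, block1, block2]
assert chain_is_valid(chain)

# Tampering with an early block breaks every link after it:
# block1 still stores the hash of the ORIGINAL genesis block.
genesis["data"] = "rewritten history"
assert not chain_is_valid(chain)
```

Because each block's fingerprint depends on the previous block's fingerprint, an attacker would have to recompute every later block (and win the consensus contest for each) to hide a change, which is what makes the ledger's memory effectively permanent.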

Distributed apps ("Dapps") and software known as smart contracts can be programmed by developers to run on the blockchain.99Fairfield, supra note 8, at 1280; Evans, supra note 95, at 240. Ethereum, which runs a self-executing software programming language, is one example.100Evans, supra note 95, at 239. Dapps can establish their own token systems by creating a smart contract. Such tokens "consist of a hash of the token's transaction history, and a series of basic standard functions and features."101Fairfield, supra note 8, at 1272. In the case of NFTs, the tokens consist of a hash containing the transaction history of the token, some basic standard features and functions, and a URL which allows tracking of the file related to the token (the image or text the token represents).102Id. The most prominent standard for creating an NFT is the ERC-721 standard, which produces unique, non-fungible tokens.103Id.; Evans, supra note 95, at 248 ("ERC-721 is a finalized coding standard interface for non-fungible tokens, referred to early on as deeds, evidencing ownership of both wholly digital crypto assets and physical assets represented in token form (i.e., tokenized assets)."). Gains or losses in the value of each and every token are different, as every token is unique.104Fairfield, supra note 8, at 1283.
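To make the ERC-721 structure more concrete, the following sketch models, in ordinary Python rather than a smart-contract language, the core ideas the standard exposes: each token ID maps to exactly one owner and one URL pointing at the underlying file, and only the current owner can transfer the token. All class and method names here are illustrative assumptions loosely mirroring the standard's ownerOf and tokenURI functions, not the standard's actual on-chain interface:

```python
class SimpleNFTRegistry:
    """A toy, off-chain model of an ERC-721-style token registry."""

    def __init__(self):
        self._owners = {}  # token_id -> owner address
        self._uris = {}    # token_id -> URL of the file the token represents

    def mint(self, token_id: int, owner: str, token_uri: str) -> None:
        """Create a new token; each ID may exist only once (uniqueness)."""
        if token_id in self._owners:
            raise ValueError("token already exists: each NFT is unique")
        self._owners[token_id] = owner
        self._uris[token_id] = token_uri

    def owner_of(self, token_id: int) -> str:
        """Who owns this token (cf. ERC-721 ownerOf)."""
        return self._owners[token_id]

    def token_uri(self, token_id: int) -> str:
        """Where the associated file lives (cf. ERC-721 tokenURI)."""
        return self._uris[token_id]

    def transfer(self, sender: str, to: str, token_id: int) -> None:
        """Only the current owner may transfer; no central party approves it."""
        if self._owners[token_id] != sender:
            raise PermissionError("only the owner can transfer this token")
        self._owners[token_id] = to


# Hypothetical usage: mint a token pointing at an image, then resell it.
registry = SimpleNFTRegistry()
registry.mint(1, "alice", "https://example.com/artwork.png")
registry.transfer("alice", "bob", 1)
```

The design choice the real standard makes, mirrored here, is that the registry stores only ownership and a pointer to the file, not the file itself; as the Article notes, the image or text typically lives outside the token.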

B. This is Mine! Main Uses of NFT Tokens

1. Crypto Art and Historical Documentation of Inventions

NFTs are a novel medium to distribute and exhibit digital art.105Elzweig & Trautman, supra note 23, at 306. They are a game changer, as they connect artworks to a digital file that only one person can own, thereby facilitating proof of ownership of unique, original digital artworks.106Frye, Death of Art, supra note 49, at 4 (criticizing the concept of NFTs: "The problem with 'owning' a 'unique' copy of a digital artwork is there's nothing to own."); see id. at 9 ("NFTs mean the art market doesn't need art anymore."). Tokenizing art thus makes it possible for the owner to use their token and display it in various online social spaces, or even in real life utilizing novel physical display methods.107Fairfield, supra note 8, at 25; Dominic Bayley, How Can I Display My NFT Artworks in 2021?, PC World (Aug. 17, 2021) []. One method of display, which allows the artwork to adorn the owner's physical living or working space, uses special screens that resemble picture frames (such as Samsung's "The Frame").108Id. If the art is displayed in public, this also fulfills the owner's need to gain recognition as the owner.109Fairfield, supra note 8, at 1274 ("The desire to display motivates people to buy digital NFT art just as the desire for recognition motivates displays on social media or in a video game.").

Verifying the artwork on the blockchain helps authenticate the artwork with a unique identification code that distinguishes it from other tokens and protects creators. This protection is achieved in ways which were previously impossible through traditional copyright law and technological protection measures, by allowing proof of ownership through transfer of the token.110Evans, supra note 95, at 259. This new technological development allows artists to maximize profits and render intermediaries (such as art galleries, art dealers and online platforms) redundant.111Trautman, supra note 80, at 415.

Buying NFTs also empowers buyers, who gain different types of value from the purchase: (1) the first buyer gains satisfaction from financing a special art project; (2) all buyers gain pride and recognition from their connection to a work of art;112See id. at 364, 372 ("Collecting art is social."). and (3) buyers become owners of a tradeable asset which can easily be traded globally.113Id. at 372 (quoting Jonathan Zittrain & Will Marks, What Critics Don't Understand About NFTs, The Atl. (Apr. 7, 2021), []). However, as Professor Frye explains, purchasing an NFT is different from purchasing ownership of a copyrighted item. As a buyer, one is basically buying unique access to a digital file.114Frye, Death of Art, supra note 49, at 3 ("[O]nly the owner of the NFT can access or transfer that particular unit of data. Essentially, an NFT is a unique 'digital object' that someone can own, sell, or buy."). In other words, buyers of an NFT own only the NFT.115Id. at 6 ("You aren't buying a unique digital artwork, because it's impossible. There's no such thing. And even if the digital artwork associated with the NFT is protected by copyright, you aren't buying copyright ownership or any kind of copyright interest. All you own is the NFT."). As Frye explains, it is ownership without control of the art itself.116See Frye, After Copyright, supra note 49. And yet, many people buy and sell NFTs anyway,117Frye, Death of Art, supra note 49, at 7. perhaps because, as mentioned,118See supra § I; see also Frye, Death of Art, supra note 49, at 9 ("It doesn't really matter what you're buying, all that matters is the demand to own it."). NFTs are emotional goods that fulfill the individual's inherent need to possess. As such goods exist and are valued in a social context,119Trautman, supra note 80, at 364 ("humans value rarity and uniqueness particularly in a social context."). ownership helps the owner gain social recognition and the endorsement of the author120Frye, After Copyright, supra note 49, at 341–42 ("I call this 'pwnership,' because it consists of 'clout,' rather than control. NFT owners don't need copyright, because pwnership depends on the endorsement of the author, rather than control of the use of the work."). and of society.

The hype around tokenizing artwork is driven by buyers' motivation to gain recognition as owners of the artwork.121Fairfield, supra note 8, at 26 ("[T]he desire to display motivates people to buy digital NFT art just as the desire for recognition motivates displays on social media or in a video game."). Thus, creators who possess the copyright for a work of art (such as a painting, photograph, or video) can gain huge sums of money from selling tokenized crypto art. For example, Mike Winkelmann, the digital artist known as Beeple, sold his NFT digital artwork in an auction for USD 69,000,000.122Kastrenakes, supra note 24. The artwork "Sophia the Robot," produced through a collaboration between Sophia, the humanoid robot, and Andrea Bonaceto, an Italian artist, was tokenized and sold for USD 688,888.123Mike Ives, The Latest Artist Selling NFTs? It's a Robot, N.Y. Times (Mar. 25, 2021), []. The image of Zoe Roth, taken in January 2005 by her father when she was four years old, was sold as an NFT in April 2021 for a whopping USD 473,000. In the photograph, Zoe is standing in front of a burning house. The picture became a viral internet meme synonymous with disaster.124See Edward Helmore, Woman in Disaster Girl Meme Sells Original Photo as NFT for $500,000, The Guardian (Apr. 30, 2021), []. A viral video from 2007, showing the baby Charlie Davis biting his big brother's finger and laughing,125Charlie Bit My Finger – Again!, YouTube (May 22, 2007) (the video achieved almost 900 million views) []. was tokenized and sold for USD 760,999.126Christina Morales, 'Charlie Bit My Finger' Is Leaving YouTube After $760,999 NFT Sale, N.Y. Times (May 24, 2021), []. Even though the video was sold as an NFT on the blockchain, it remains on YouTube for public access.127Scott Nover, Charlie Bit Me Won't Be Leaving YouTube After All, Quartz (May 27, 2021), [].

The phenomenon of tokenized art is expanding to inventions and tokens containing documents of historical value. For example, UC Berkeley recently offered the opportunity to purchase an NFT of the patent disclosures at the heart of two Nobel Prize-winning inventions from the university's research labs, as well as an NFT that links to digitized internal forms and correspondence documenting the initial research findings that led to highly significant biomedical breakthroughs: CRISPR-Cas9 gene editing and cancer immunotherapy. The NFT was sold for nearly USD 55,000.128Robert Sanders, First-Ever Auction of NFT Based on Nobel Prize Nets UC Berkeley $55,000, Berkeley News (June 8, 2021), []. UC Berkeley intends to use the profits to fund research.129Robert Sanders, UC Berkeley Will Auction NFTs of Nobel Prize-Winning Inventions to Fund Research, Berkeley News (May 27, 2021), []; Trautman, supra note 80, at 373–74.

Similarly, Tim Berners-Lee, the founder of the World Wide Web, auctioned the original code for the web as an NFT: "[t]he NFT includes original time-stamped files containing the source code written by Berners-Lee, an animated visualization of the code, a letter written by Berners-Lee on the code and its creation, and a digital 'poster' of the full code."130Sam Shead, The Web's Source Code is Being Auctioned as an NFT by Inventor Tim Berners-Lee, CNBC (June 15, 2021), []. All of the components are digitally signed by Berners-Lee.131Id. Auction publications announced: "[e]ver thought about what it would be like to own the World Wide Web? Now you sort of can — well, a digital representation of its source code anyway."132See Josie Fischels, The Father of the Web Is Selling the Source Code As An NFT, NPR (June 17, 2021), []. This NFT was ultimately sold for USD 5,400,000.133Tim Berners-Lee Sells Web Source Code NFT for $5.4m, BBC News (June 30, 2021), [].

Another landmark auction is that of the first text message, which Vodafone programmer Neil Papworth sent to Richard Jarvis, then a director at Vodafone, 29 years ago. Vodafone turned this first text message into an NFT, which was sold at a Paris auction house for about USD 150,000. The company will donate the revenues to the United Nations Refugee Agency.134Kris Holt, The First Text Message is Now a $150,000 NFT, Engadget (Dec. 23, 2021, 12:05 PM) [].

Such recent landmark auctions and token purchases can raise awareness among art buyers, creators, and investors of the market for unique digital art represented by NFTs.135See Trautman, supra note 80, at 363.

2. Collectible Game Cards and Virtual Goods

Markets for collectibles, such as stamps, rely mainly on the rarity of goods.136See Jonathan E. Hughes, Demand for Rarity: Evidence from a Collectible Good, 70 J. Indus. Econ. 147, 160 (2022) [] (explaining that rarity is a prominent reason for collecting collectibles); Mofokeng & Matima, supra note 20, at 2 ("[P]eople have traditionally placed their money in collectibles that were scarce or rare . . . ."). By utilizing the features of the blockchain and the ERC-721 standard, NFTs enable a kind of rarity for digital assets which was not previously possible, as without NFTs digital assets can easily be replicated.137See Mofokeng & Matima, supra note 20, at 4–5. Thus, people may buy collectible NFTs as they would buy traditional collectibles. They value collectibles not only because they gain satisfaction from owning and displaying them, or due to the potential to profit from selling them at a higher price,138Id. at 4 (referring to displaying and gaining profits from trading as central reasons for owning collectibles). but also, and even mainly, because of the social context of collecting and trading. This same context also underlies the use of collectible NFTs in the gaming industry.139See Selen Türkay & Sonam Adinolf, Friending to Flame: How Social Features Affect Player Behaviors in an Online Collectible Card Game, CHI Conference (May 4–9, 2019).

One of the first examples of such collectible, blockchain-based, unique, and tradable digital assets is CryptoKitties.140CryptoKitties, (last visited Dec. 29, 2022); ul Hassan et al., supra note 14, at 6; Wilson, supra note 34, at 365; Fairfield, supra note 8, at 1276 ("'Kitties' are pictures of cats tied to a particular ERC-721 token."). This game emerged at the end of 2017 on the Ethereum blockchain-based platform. In CryptoKitties, tokens are attached to animated pictures of cats, which are thereby secured against replication. These pictures can only be transferred with their owner's permission.141ul Hassan et al., supra note 14, at 6; Fairfield, supra note 8, at 1276 ("When an owner wanted to sell a kitty, the token was traded from one wallet to another, and the ownership interest recorded by the smart contract."). Players "buy and sell cats, breed them to make a new cat or rent them out to breed . . . . Each cat has a unique digital identity, and from that it gets its unique features, like green eyes and spots."142Wilson, supra note 34, at 379 (quoting VICE News, This Game Combines the Internet's Favorite Things: Cats and Cryptocurrency (HBO), YouTube, at 00:26 (Dec. 5, 2017), []). Such features affect each cat's price. CryptoKitties gained popularity because of its social context. It became a game for a rich online community of users who interacted in breeding, trading, and playing with virtual cats, creating an entirely new market.143Fairfield, supra note 8, at 1276. Such a market generates revenue from trading CryptoKitties NFTs. The volume of sales testifies to the development of this new market: over a period of approximately two years, there were 538,043 sales of 431,680 unique CryptoKitties, for a total of USD 27,233,377.144Wilson, supra note 34, at 370.

Other collectible NFTs were developed for more idealistic goals. For example, "Honu" was developed with the objective of using auction sale profits to promote environmental goals. Created in 2018 by Axiom Zen, the same company which developed CryptoKitties, Honu is a sea turtle, and profits from its sales are used to raise money for sea turtle preservation in the Caribbean.145Mofokeng & Matima, supra note 20, at 12. Similarly, the Panda Project aims to preserve the endangered panda. This blockchain-based app creates unique digital copies of real pandas. These tokens can be exchanged with other tokens in the project.146Id. at 13; Panda Earth, Dapp Radar [].

Collectible NFTs have also begun to attract attention in the sports field, where teams can use them as a source of revenue. One such example is NBA Top Shot,147 NBA Top Shot, []. a marketplace where basketball fans and collectors can buy, sell, and trade NFTs of the most memorable moments in NBA history.148See Trautman, supra note 80, at 386–87.

NFTs have also upgraded online virtual reality worlds and are expected to gain more popularity with the development of the Metaverse, as the two are tightly intertwined; such tokens are expected to be an integral part of the metaverse.149Metaverse refers to a 3D virtual shared world where all activities can be carried out with the help of augmented and virtual reality services. For further information, see generally Muhammet Damar et al., Metaverse Shape of Your Life for Future: A Bibliometric Snapshot, 1 J. of Metaverse 1 (2021). Undoubtedly, the economy of the metaverse depends to a large extent on the possibility of authenticating visual assets. This authentication can be achieved by creating NFTs, which are blockchain based.150See NFTs: The Metaverse Economy, Financial Times. NFTs allow real ownership of collectible assets, rather than a mere license that applies to one specific game only.151Trautman, supra note 80, at 400–01 ("[I]f a person's avatar earned the Sword of Admiration, the game creator's database reflected that she owned that sword (and thus the avatar would display it, and the sword appeared in the avatar's inventory), but no-one else would have such a sword available. But the player could not take the item out of the game, or into any other virtual environment."). For example, the virtual sword Dragon Saber can only be owned in the virtual world Legend of Mir and cannot be used in other virtual worlds.152Justin M. Ackerman, An Online Gamer's Manifesto: Recognizing Virtual Property Rights by Replacing End User Licensing Agreements in Virtual Worlds, Phoenix L. Rev. 137, 138 (2012). Moreover, since NFTs are eternal, they do not disappear when the virtual world shuts down.153Trautman, supra note 80, at 424. In the online world "Decentraland," for instance,154Decentraland, [].
the virtual assets are NFTs, "from its virtual plots of land to the art on the walls in the virtual galleries."155Alexandra Marquez, Welcome to Decentraland, Where NFTs Meet a Virtual World, NBC News (Apr. 5, 2021, 7:54 AM), []. This ownership allows players to interact with the digital assets. The NFTs also allow owners to influence how the virtual world operates and to trade the virtual assets in the Decentraland Marketplace.156Investing in Decentraland in 2021, Republic (Mar. 29, 2021), [].

Lastly, collectible NFTs are used in the digital trading space. "Gods Unchained,"157Gods Unchained, The Trading Card Game that Pays to Play (last visited Mar. 20, 2023), []. for example, is a blockchain-based game that incorporates NFTs and gives players complete ownership over in-game items. In this game, players collect and sell playing cards, which can then be resold for potential profit.158Fairfield, supra note 8, at 1298–99; Shafin Rizvi, How to Buy and Play Gods Unchained Trading Cards, Trust Wallet (Mar. 29, 2021), []. The context of the game enhances the value of the cards, as ownership is verifiable on the blockchain.159Rizvi, supra note 158.

Thus, it might be said that the advantages of NFTs have opened new economic markets for investment in collectibles and other virtual assets.

3. Markets for Physical Items/Distribution of Resources and Ticket Brokers

Many NFTs are limited to activity in the digital sphere and function only as digital assets. However, this is not always the case. Some types of NFTs are indeed used for the distribution or exchange of real-life items or resources.160See Fairfield, supra note 8, at 1277 ("NFTs have uses beyond digital assets as well. Zora, an NFT-enabled 'Everything Exchange,' allows users to sell rare or unique items on their marketplace using tokens. Every item is given an NFT, and users are able to buy, sell, and trade the tokens on the exchange. Rather than being tied to a piece of digital art, the token is tied to a physical item. The token allows users on Zora to trade the item and capture its rise in value without ever having to ship the item. Once someone wants to buy the physical item, they can redeem the token and the physical item will be sent to them."). For example, Zora is a marketplace to "buy, sell and trade limited-edition goods. All of these goods are launched as tokens."161@OurZora, Twitter (Mar. 2, 2020, 8:59 AM); Fairfield, supra note 8, at 1277. These tokens are tied to a physical item instead of a digital asset. Users can trade items without taking physical possession of them. If an owner wants to physically possess the item, they return the token to Zora, and the tangible item is sent to them.162Id. This saves the costs and trouble associated with shipping, as users trade without taking physical possession of the item. Zora and other marketplaces like it allow integration of the physical and digital worlds. For example, users might buy real baseball cards or other trading cards and play with them against others in online baseball games.163Id. Such NFT collectible products are tied to a rich social context in digital spaces.

Tokens can also be tied to real-world experiences and activities. One of the most promising areas for NFTs is event ticket sales.164Lesavre et al., supra note 46, at 42. Live Nation, the owner of the world's largest ticket marketplace, Ticketmaster, recently made predictions regarding the potential of NFTs in the concert market.165Ledger Insights, Ticketmaster Owner, Live Nation Envisages NFT Marketplace for Concert Video Moments (May 7, 2021). Ticketing companies could use NFT tokens to create transferable tickets. Once an NFT ticket is created, it can be transferred without the involvement or approval of the ticketing company. Thus, ticket owners would be able to exchange the ticket for another date or event, transfer it to a friend, or sell it.166Id.; Onkar Singh, What is NFT Ticketing and How Does it Work?, CoinTelegraph (Feb. 14, 2023) ("[T]he adoption of blockchain technology enables transparency and traceability, making it simpler to trace the ownership and origin of the ticket. NFT tickets can also be sold or exchanged on online exchanges, with their value depending on how much interest there is in the event.") []. NFT systems could even be instrumental in creating an additional social dimension to event attendance. People who attended a play, movie, or rock concert could retain the tokenized ticket, post their attendance on social media platforms, and connect with others who attended the same event.167Ledger Insights, supra note 165.

4. The Music and Film Industries – Ownership and Rights Management

NFTs are also starting to gain popularity in the music industry,168See Anne Steele, Musicians Turn to NFTs to Make Up for Lost Revenue, Wall St. J. (Mar. 23, 2021, 5:30 AM), []. both as collectibles and as a tool for gaining royalties.169See Lawrence Wintermeyer, Lindsay Lohan On Why NFTs Are Destined For Hollywood, Forbes (Mar. 26, 2021) [] ("Tokenization through NFTs can help content creators and musicians actually own the property rights for what they create, and allow them to profit accordingly. . . . Royalties are likely to be another avenue artists explore. Having been a Disney child star, Lohan is intimately familiar with the revenue model and its long-standing inefficiencies."). For instance, thirty-year-old electronic musician Justin Blau, known as 3LAU, sold 33 NFTs for varying prices,170Abram Brown, Largest NFT Sale Ever Came From A Business School Dropout Turned Star DJ, Forbes (Mar. 3, 2021), []. and obtained USD 17,000,000 from NFTs, "helped in part by a tokenized release of his three-year-old album 'Ultraviolet,' which grossed $11.6 million and briefly held the record for the highest price paid for a single NFT, $3.6 million."171Trautman, supra note 80, at 409. Recently, Quentin Tarantino tokenized a collection based on his film Pulp Fiction, which included ". . . high-resolution scans from his original handwritten screenplay of Pulp Fiction, plus a drawing inspired by some element of the scene . . . ."172Adi Robertson, Miramax Sues Quentin Tarantino over Pulp Fiction NFTs, The Verge (Nov. 17, 2021), []. However, Miramax, the film's production company, filed an action against him in a California court. Miramax claimed that Tarantino violated its copyright and trademark rights, as these NFTs do not fall under Tarantino's reserved rights for the film. Therefore, Miramax demanded that Tarantino halt the upcoming sale of these NFTs.173Id.
In a recent motion, Tarantino argued that he is not infringing any of Miramax’s copyrights, since the NFTs exploit the screenplay of Pulp Fiction and not the film itself.174Winston Cho, Quentin Tarantino Tries for an Early Court Win in ‘Pulp Fiction’ NFT Legal Battle, Hollywood Reporter (June 23, 2022), [].

But why would artists want to buy or create an NFT? NFTs allow users to own the assets underlying the NFT, but they can also allow users to receive a license while ownership remains with the original owner or artist.175Fairfield, supra note 8, at 1295. Like most other blockchain applications, NFTs allow artists to circumvent middlemen, such as record labels, that might dictate unfavorable terms when establishing contracts with most artists.176See Music and Blockchain: Introducing the Future of Sound, Near (Apr. 11, 2022), []. NFTs allow artists to create a license that protects their intellectual property rights while enabling them to monetize their work.177Evans, supra note 95, at 264; Adjovu & Fabian, supra note 30, at 8 (“The blockchain supply chain is expected to allow musicians to directly publish their music on a blockchain, then have their music reached by consumers on blockchain-based platforms.”). For instance, musical works can be tokenized and shared through a Tokenized Music License (“TML”).178See Adjovu & Fabian, supra note 30. Thanks to blockchain-based platform characteristics such as immutability, transparency, programmability, and decentralization, TML can provide innovative models for music distribution.179Id. at 70; see, e.g., Rocki, supra note 31; Introducing Streaming Payments for Ujo with Connext Payments Channels and Dai, UJO (Mar. 28, 2019), []. Two blockchain-based music streaming and downloading services, Rocki and Ujo, operate on a direct fan-to-artist model, where music is tied to the NFT and cannot be deleted. Under this system, musicians can quickly and efficiently receive royalty payments since “each time a music file is played, a transaction can be recorded on the blockchain with the applicable royalties due to rights holders.”180Adjovu & Fabian, supra note 30, at 36.
Ujo Music was “one of the first blockchain-based music platforms to issue royalty payments to musicians on the Ethereum blockchain.” This allows artists to collect their royalties and enjoy financial compensation for their work within days or months, instead of the years the process previously took. Moreover, these platforms allow musicians to obtain information about royalty earnings in real time.181Id. at 10.
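The per-play royalty mechanism described above can be sketched as a simple append-only log. This is a toy illustration only, not the actual Rocki or Ujo implementation; the per-play rate, the ownership split, and the function names are all invented for the example.

```python
# Illustrative sketch of per-play royalties: each play is appended as a
# "transaction," and each rights holder's earnings can be tallied in
# real time. Names, rate, and split are hypothetical.

plays = []  # stands in for an append-only blockchain transaction log

def record_play(track: str, rights_holders: dict, rate: float = 0.01) -> None:
    """Append one play; split the per-play royalty by ownership share."""
    payout = {holder: round(rate * share, 6)
              for holder, share in rights_holders.items()}
    plays.append({"track": track, "royalties": payout})

holders = {"artist": 0.7, "producer": 0.3}  # assumed 70/30 split
for _ in range(3):
    record_play("Ultraviolet", holders)

# Real-time earnings: sum what each holder is owed so far.
earned = {}
for tx in plays:
    for holder, amount in tx["royalties"].items():
        earned[holder] = round(earned.get(holder, 0) + amount, 6)
print(earned)  # {'artist': 0.021, 'producer': 0.009}
```

Because every play is its own recorded transaction, rights holders can query their accrued royalties at any moment rather than waiting for periodic statements, which is the efficiency gain the text attributes to these platforms.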

C. The Future: Tokenizing Speech – Sharing NFTs
as an Integral Part of the Internet

There is no doubt that more NFT uses will emerge in the future.182See Katya Fisher, Once Upon a Time in NFT: Blockchain, Copyright, and the Right of First Sale Doctrine, 37 Cardozo Arts & Ent. L.J. 629, 631 (2019). This part spotlights one NFT use that will undoubtedly expand and become commonplace: tokenizing speech. Speech is already tokenized today, as demonstrated by the examples detailed in the introduction to this Article: Jack Dorsey’s first tweet183Peters, supra note 1. and Kevin Roose’s New York Times column, both of which were tokenized as NFTs.184Roose, supra note 6. However, to date, the aim of tokenizing speech has mainly been to sell such NFTs as collectibles for profit, due to their current or prospective historical or artistic value. In other words, the token was created to allow digital scarcity. The expressive value of the token as a unique form of communication was secondary. This Article predicts that NFTs will become a common means for communicating information.

As explained in Part I, blockchain transactions and data cannot be altered or deleted from the blockchain. The blockchain never forgets the information a token carries.185Elrom, supra note 10, at 471; Po-Wei Chen & Bo-Sian Jiang, supra note 50 (“all the transaction records, which have been stored into the blockchain, cannot be modified and deleted”). Such unchangeable, eternal information can have highly significant implications for communication and freedom of expression, since it removes barriers and impediments to the free flow of information. This potential of NFTs will likely be realized quite soon, due to the anticipated incorporation of NFTs into social media. APIs have already begun to integrate NFTs with the internet,186See, e.g., ul Hassan et al., supra note 14, at 11 (explaining that an API can be programmed in a way that could allow blockchain infrastructure to “talk” with the internet infrastructure). as tokenized tickets can be used as digital souvenirs and “add an attestation of it on social media to connect with other attendees and artist.”187Lesavre et al., supra note 46, at 42. NFTs are becoming an integral part of the next generation of social media, particularly in the wake of the metaverse – a blockchain-based Web 3.0 network of three-dimensional virtual or augmented reality.188Jon M. Garon, Legal Implications of a Ubiquitous Metaverse and a Web3 Future, 106 Marq. L. Rev. 163 (2022); Muhammet Damar et al., Metaverse, Shape of Your Life for Future: A Bibliometric Snapshot, 1 J. Metaverse (2021). Such social networks are expected to develop into key players in an open internet without any central intermediary.189See Mersch & Muirhead, supra note 48.
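The immutability claim above rests on a simple structural property: each block commits to a cryptographic hash of its predecessor, so quietly altering old content breaks every later link. The sketch below is a minimal, generic illustration of that property, not the design of any particular blockchain; all names are invented for the example.

```python
# Minimal hash-chain sketch: why content recorded on a blockchain cannot
# be silently altered or deleted. Illustrative only, not a real chain.

import hashlib
import json

def make_block(prev_hash: str, content: str) -> dict:
    """Create a block that commits to its content and its predecessor."""
    payload = {"prev": prev_hash, "content": content}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {"prev": prev_hash, "content": content, "hash": digest}

def chain_valid(chain: list) -> bool:
    """Recompute every hash and check each block links to its predecessor."""
    for i, b in enumerate(chain):
        expect = hashlib.sha256(json.dumps(
            {"prev": b["prev"], "content": b["content"]},
            sort_keys=True).encode()).hexdigest()
        if b["hash"] != expect:
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0", "tokenized tweet #1")]
chain.append(make_block(chain[-1]["hash"], "tokenized tweet #2"))
print(chain_valid(chain))          # True

chain[0]["content"] = "deleted"    # attempt to censor the first expression
print(chain_valid(chain))          # False -- tampering is immediately evident
```

Tampering is therefore detectable by anyone holding a copy of the ledger, which is why expressions recorded this way are, as a practical matter, beyond deletion or censorship.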

The widespread distribution of NFTs has tremendous implications and is a game changer for freedom of expression. NFTs make it possible to create eternal expressions that cannot be censored or forgotten. Because the expressions are recorded on the blockchain, such expressions would perpetually remain accessible to all Web 3.0 blockchain-based network users.190See Aurélie Bayle, Mirko Koscina, David Manset & Octavio Perez-Kempner, When Blockchain Meets the Right to be Forgotten: Technology Versus Law in the Healthcare Industry 789 (2018) IEEE/WIC/ACM International Conference on Web Intelligence (WI) Paper, 2018) (“Effectively, the blockchain immutability allows considering that by design, anything cannot be deleted from the ledger”). For more information on Web 3.0 as a decentralized version of the internet, see supra notes 46–47.

The next part of this Article will focus on the centralized nature of online websites that depend on intermediaries and the practice of gatekeeping. It will explain the main motives intermediaries have for censoring user speech and will explore these types of censorship. It will demonstrate that the law immunizes intermediaries from liability for their editorial decisions and allows them to shape the online discourse.

II. Public-Private and Private Online Censorship
and The NFT Engine of Speech

Contrary to what many people may believe, the internet is not a sovereign-free medium controlled by its users from the “bottom-up.” Although it may seem as if anyone “can publish freely and instantly online,”191Lavi, Publish, supra note 65, at 486. many intermediaries “actively curate the content”192Id. their users post on their platforms. For this reason, intermediaries have been dubbed the “New Governors” of online speech.193See Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1603 (2018). Intermediaries use diverse strategies to moderate user-generated content. Unfortunately, there is still not enough transparency regarding these strategies.194Lavi, Publish, supra note 65, at 486. Yet there is no dispute over the fact that moderation is a fundamental aspect of any platform.195See Michal Lavi, Do Platforms Kill?, 43 Harv. J.L. & Pub. Pol’y 477, 496 (2020); see also Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation and the Hidden Decisions that Shape Social Media 24–45 (2018). This type of intermediary intervention in content is needed for the sound operation of the internet.196See Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadow of Social Media 165 (2019). Without it, platforms would overflow with spammers, which would interfere with users’ ability to find real and relevant content and to engage with other users. Moreover, without moderation, platforms would be abused by “bad actors,” which, as the next part demonstrates, has the potential to flood platforms with negative value content.197See James Grimmelmann, The Virtues of Moderation, 17 Yale J.L. & Tech. 42, 53–54 (2015); see also Roberts, Behind the Screen, supra note 196, at 165 (“[I]f you open a hole on the internet, it gets filled with shit”).

One main problem with platform moderation is the fact that there is not enough transparency as to what content is removed, when, and why. This can result in removal of legitimate content and censoring of specific types of agendas without accountability, ultimately impeding the marketplace of ideas. The following subsections will review several motives that could lead to censoring of legitimate content and provide examples.

A. Motivations for Removing Content

1. Removal of Harmful Content to Comply with the Law

The first motivation for intermediaries to remove content is compliance with legal rules. In such cases, the removal is a form of collateral censorship, which occurs “when a (private) intermediary suppresses the speech of others in order to avoid liability”198See Felix T. Wu, Collateral Censorship and the Limits of Intermediary Immunity, 87 Notre Dame L. Rev. 293, 295–96 (2011). for such speech. As this Article will explain in detail,199See Part II.B. in the U.S., even if intermediaries refrain from taking action and removing illegal content on their site, they generally remain immune from liability. This is true even where there is knowledge of potentially illegal content.200See 47 U.S.C. § 230(c)(1) (2018). However, intermediaries might voluntarily take down content in response to reports that unlawful content has offended users. For example, intermediaries may remove a defamatory expression following a report, or a takedown notice, by a person who was defamed. They might do so out of a sense of corporate responsibility, or as an attempt to enhance their social standing and profits, if they think this will make users view them as family friendly.201See Lavi, Publish, supra note 65, at 512. They might also remove content as a preventive measure to steer clear of murky legal areas and diminish the likelihood of claims against them, such as in the case of speech that constitutes a criminal offense. Although intermediaries are shielded from civil liability, such as in instances of defamation, they are not immune from liability for third-party expressions that constitute federal criminal offenses.202The immunity for third-party content also applies to state crimes, but not to federal criminal offenses. See Jeff Kosseff, A Users’ Guide to Section 230, and a Legislators’ Guide to Amending it, or Not, 22 Berkeley Tech. L.J. 757, 770 (“Section 230 does not exempt state criminal laws, though in 2018, Congress amended the law to create an exception for certain state criminal prosecutions involving sex trafficking and prostitution, as well as some federal civil actions involving sex trafficking”); Eric Goldman, The Implications of Excluding State Crimes from 47 U.S.C. § 230’s Immunity 2 (Santa Clara Univ. Legal Studies Research Paper No. 23-13, 2013); Michal Lavi, The Good, the Bad, and the Ugly Behavior, 40 Cardozo L. Rev. 2597, 2636 (2019). Immunity also does not apply to copyright infringement, which is subject to the Digital Millennium Copyright Act’s notice-and-takedown regime. Pub. L. No. 105-304, § 202, 112 Stat. 2860, 2877 (1998) (codified in scattered sections of 17 U.S.C.).

Outside the U.S., there is no overall immunity for intermediaries for content published by third parties. Many jurisdictions require intermediaries to remove illicit content. For example, the German government drafted the Network Enforcement Act (“NetzDG”) to target hate speech and fake news.203Netzwerkdurchsetzungsgesetz [NetzDG] [Network Enforcement Act], Oct. 1, 2017 (Ger.), trans. at []; Heidi J. S. Tworek, Fighting Hate with Speech Law: Media and German Visions of Democracy, 35 J. Holocaust Rsch. 106 (2021); id. at 113 (“Despite the law, it remains unclear whether NetzDG has made significant headway in stopping hate speech, which was one of its main stated aims”); id. at 118 (“Singapore, a state that has often used criminal law to suppress speech, stated that NetzDG served as an example in drafting its law to tackle fake news.”). The Act obligates intermediaries to delete content that is “evidently unlawful” within 24 hours of the filing of a complaint.204Lavi, Publish, supra note 65, at 476–77. Similarly, the European Commission has launched a legislative bill for a regulation designed to prevent the dissemination of terrorist content online. This proposed regulation is intended to complement Directive 2017/541 on combating terrorism. The bill requires intermediaries to remove terrorist content within “the first hours after it appears online because of the speed at which it spreads.”205State of the Union 2018: Commission Proposes New Rules to Get Terrorist Content Off the Web, European Comm’n (Sept. 12, 2018), []. In addition, the proposed regulation would require platforms to “adopt more proactive measures to prevent the spread of terrorist content in the first place.”206See Hannah Bloch-Wehba, Content Moderation as Surveillance, 36 Berkeley Tech. L.J. 1297, 1313 (2021) (referring to Regulation (EU) 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online (“TERREG”)).

Another motive that may lead intermediaries to remove content is a safe haven provision. For example, Article 14 of the EU’s e-Commerce Directive dictates the framework for intermediary liability.207Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on Certain Legal Aspects of Information Society Services, in Particular Electronic Commerce, in the Internal Market, 2000 O.J. (L 178) 1, Art. 14(1). It subjects intermediaries to a “notice-and-takedown” regime for hosting illegal content; under this regime they are obligated to remove the illegal content in order to avoid liability.208Lavi, Publish, supra note 65, at 476. Although the e-Commerce Directive does not stipulate a general monitoring obligation for online platforms, the European Court of Justice (“ECJ”) has held Facebook and other intermediaries liable for content that was identical to existing content which had previously been declared unlawful.209Case C-18/18, Glawischnig-Piesczek v. Facebook Ir. Ltd., ECLI:EU:C:2019:821 (Oct. 3, 2019) (reviewing the decision of the Vienna Commercial Court). For further information on this case, see Lavi, supra note 65, at 478. A similar regime was adopted by the E.U. in the context of online infringing content.210See Directive 2019/790, Article 17 of the Digital Single Market (DSM) Directive: Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on Copyright and Related Rights in the Digital Single Market and Amending Directives 96/9/EC and 2001/29/EC, 2019 O.J. (L 130), Art. 17 (DSM Directive) (Article 17 requires the platforms to make “best efforts” to either obtain a license for the material or block unauthorized content in order to avoid liability for infringing content uploaded by users). In order to meet this requirement, intermediaries will need to use screening technology to monitor their platforms.

Another aspect of the discussion of content removal relates to the “Right to be Forgotten.” In the EU, citizens benefit from a “Right to be Forgotten,” which was first established by the Data Protection Directive (“DPD”).211Council Directive 95/46, 1995 O.J. (L 281) 31 (EC). In Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos,212Case C-131/12, ECLI:EU:C:2014:317 (May 13, 2014). the ECJ held that search engines, such as Google, must remove search results that link to personal information. This includes removal of defamatory content found on third-party websites upon request.213Meg Leta Jones, Ctrl + Z: The Right to be Forgotten 10, 41, 46 (2016); Lavi, Publish, supra note 65, at 479; Lavi, The Good, the Bad, and the Ugly Behavior, supra note 202, at 2630–34. The General Data Protection Regulation (“GDPR”) replaced the DPD in May 2018.214Regulation 2016/679, of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46, art. 17, 2016 O.J. (L 119) 33. This Regulation includes a specific provision titled “right to erasure (‘right to be forgotten’)”2152016 O.J. (L 199) 4.5, Art. 17 [hereinafter “GDPR”]. that obligates data controllers to erase data relating to specific identified individuals.216Id. The GDPR imposes more specific obligations regarding information processing.217See Michael L. Rustad & Thomas H. Koenig, Towards a Global Data Privacy Standard, 71 Fla. L. Rev. 365, 418 (2019) (“The GDPR requires companies ‘to take into account the protection of the rights of individuals, both before and during their processing activities, by implementing the appropriate technical and organization measures to ensure that they fulfil their data protection obligations’”).

State regulation, as detailed above, does not focus on the direct wrongdoer but rather on digital infrastructure owners, encouraging “them to regulate and surveil end-user speech according to the government’s purposes.”218Balkin, supra note 69, at 8; id. at 37 (“[G]overnments seek to coopt or coerce infrastructure owners to govern speech in ways that governments like.”). This type of regulation has been dubbed “new-school speech regulation”219Id. at 8. and is directed at online infrastructure. Such regulation is becoming increasingly important in the algorithmic society,220See id. at 8–9. in which individuals and organizations disseminate content at enormous speed and scale.221Id. at 5 (“The speed and scale of digital speech have transformed how speech is governed, regulated, and protected. Social media companies have developed an algorithmic-administrative system for governing speech that does not view speech in terms of rights.”).

In short, intermediaries can, and many times do, take down lawful expressions because of uncertainty regarding their liability. Liability creates an incentive to remove more content than necessary.222See Lavi, Platforms, supra note 195, at 537. Concerns regarding such collateral censorship increase with the use of algorithms for content moderation. Algorithms will likely fail to capture context accurately, resulting in “over-removal” or “false positives”: erroneous removal of lawful content.223Id. at 499.

2. Public-Private Cooperation in Content Removal

Removing harmful content published by private parties in order to obey the law is one thing, but what about removing content due to a direct request from government officials? In such cases, the motive for removing content is cooperation with the government. In fact, platforms cooperate with governments on these matters extensively.224See Elena Chachko, National Security by Platform, 25 Stan. Tech. L. Rev. 55, 107–130 (2021) (referring to such practices as the privatization of national security). Removal of content in accordance with government requests, even when the content at stake falls outside the legal definition of unlawful, is not uncommon around the world. In nations with authoritarian governments that control the internet, the government can directly censor speech.225See Tufecki, supra note 53, at 234 (“The Chinese government’s strategy for managing the internet is also centered on a deep understanding of the importance of attention and capacity to movements, rather than merely blocking information”). However, even in democracies, governments cooperate informally with intermediaries to remove content. Such informal cooperation has been dubbed “the invisible handshake.”226See Michael D. Birnhack & Niva Elkin-Koren, The Invisible Handshake: The Reemergence of the State in the Digital Environment, 8 Va. J. L. & Tech. 1, 14 (2003). The problem with this cooperation is that it allows governments to bypass constitutional constraints.227See Richards, supra note 61, at 139 (2021) (explaining that government agencies often collaborate with private actors in order to do things they would otherwise be constitutionally forbidden from doing). In the U.S., for example, intermediaries are not bound by the First Amendment and can censor speech.228Lavi, Publish, supra note 65, at 488.

A prominent example of such an invisible handshake is the cooperation between the Cyber Unit in Israel’s Office of the State Attorney and online platform intermediaries. A special voluntary enforcement track was developed for the removal of allegedly unlawful content such as incitement or harassment of public officials.229Viki Auslender, Who Gets to Define Incitement?, CTech by Calcalist (May 27, 2021, 2:42 PM) [] (“When Israeli security sees posts they do not like online, they send them to the cyber unit, which, in turn, selects those it believes violate a platform’s policy and asks, rather than demands, that they will be removed. Since the cyber unit’s inception in 2015, platform compliance rates have risen steadily”). Accordingly, it has been reported that the Cyber Unit conducts an internal procedure for reviewing the legality of allegedly unlawful content and requests that the intermediaries remove it.230Shadmy & Shany, supra note 55 (“The Cyber Unit then conducts an internal process for reviewing the legality of the content in question and the propriety of seeking its removal”). In practice, about 90 percent of requests made by the Cyber Unit to the platforms are accepted.231Id. In Adalah v. Cyber Unit,232HCJ 7846/19 Adalah v. Cyber Unit PD (2019) (Isr.). two petitioners claimed that the Cyber Unit’s activities infringed on their constitutional right to freedom of expression.233For further information on this case, see Shadmy & Shany, supra note 55. However, Israel’s High Court of Justice held that no infringement of constitutional rights had occurred, thus legitimizing the practice de facto. This type of private-public cooperation is also becoming common in other parts of the world.
For example, police agencies in Europe have formed special “Internet Referral Units” to report and flag violations of platform content rules for takedown.234See Brian Chang, From Internet Referral Units to International Agreements: Censorship of the Internet by the UK and EU, 49 Colum. Hum. Rts. L. Rev. 114, 120–22 (2018); see also Bloch-Wehba, Content Moderation, supra note 206, at 1300. In the U.S., “[e]ven with Section 230’s liability shield intact, government agencies often engage in efforts to coerce, compel, or convince intermediaries to take down harmful content ….”235See Bloch-Wehba, Content Moderation, supra note 206, at 1305. Thus, even though U.S. law does not obligate intermediaries to take down content and even shields them from liability for editorial decisions, they often obey government agencies’ requests and take down user-generated content from their platform.

3. Removal of Content that Does not Comply with the
Terms of Service or the Intermediary’s Agenda

As private entities in the U.S., intermediaries are not constrained by the constitutional limitations of the First Amendment.236Balkin, supra note 69, at 18 (“In general, U.S. free speech doctrine has a difficult time dealing with the conflicting speech rights of private parties”). They are entitled to set “community guidelines” limiting certain types of speech on their platforms, and they can enforce these restrictions even if the content concerned is a legitimate expression. Thus, intermediaries are able to remove speech, even speech that is protected by the First Amendment, while they themselves have no legal obligation to remove harmful speech.237Eric Goldman & Jess Miers, Online Account Terminations/Content Removals and the Benefits of Internet Services Enforcing Their House Rules, 1 J. of Free Speech L. 191, 201 (2021) (“the Internet services probably did not have a legal obligation to take action, but they did so anyway to protect their community or their interests”). In some instances, intermediaries act out of a sense of corporate responsibility: they take down content they believe to be harmful to the public good. In other cases, they may believe that a removal policy can help improve their image and increase profitability.238Lavi, Publish, supra note 65, at 512. They may also restrict content to distance themselves from specific agendas. Intermediaries outline the terms under which users and platforms interact in their “terms of service.” These address “not just appropriate content and behavior but also liability, intellectual property, arbitration, and other disclaimers.”239Gillespie, supra note 195, at 46. Intermediaries also outline “community guidelines” that present the platform’s expectations of what kinds of content users should be allowed to share, clarifying the platform’s principles and setting forth a list of prohibited types of content.240Id.

One common example of such prohibited content is sexual content, such as nudity or photographs of sexual activity.241Id. at 52. Another type of prohibited content proscribed by nearly all social media intermediaries is violent or obscene content.242Id. at 54 (referring to the policy of platforms not to allow violent content, for example, violent terrorist videos); see Kate Klonick, Inside the Team at Facebook that Dealt with the Christchurch Shooting, The New Yorker (Apr. 25, 2019), []. A third prohibited category is harassment and threats, such as exposing a person’s private information, address, or location.243Gillespie, supra note 195, at 56–57. Fourth, most social media platforms prohibit hate speech such as racism, white supremacy, Holocaust denial, and other hateful expressions against groups in society. Although some platforms fall short of naming hate speech explicitly, it can be included within umbrella categories such as “personal attacks.”244Id. at 58. Intermediaries also proscribe posts that encourage illegal activities such as “instructional bomb making, choking games, hard drug use, or other acts where serious injury may result.”245Id. at 60. Another category restricted by some social media intermediaries is discussion of self-harm, such as goading users to injure themselves, commit suicide, or develop an eating disorder such as anorexia or bulimia.246Id. at 61. Some social media platforms, such as Facebook, have a “real name policy” and remove content or accounts of users who do not use their real names.247See Jeff Kosseff, The United States of Anonymous 235–39 (2022) (expanding on Facebook’s “real name policy”); Tal Z. Zarsky & Norberto Nuno Gomes de Andrade, Regulating Electronic Identity Intermediaries: The “Soft eID” Conundrum, 74 Ohio St. L.J. 1335 (2013). Other intermediaries prohibit commercial content, such as self-promotion, in an effort to prevent platform exploitation for commercial traffic.248Gillespie, supra note 195, at 63–64.
Lastly, intermediaries encourage users to post high-quality content and refrain from transmitting fake news.249Id. at 64.

Intermediaries must moderate content, as explained above. In fact, moderation is a necessary part of any platform; without it, platforms could be abused by malicious users who might flood them with negative content or overload them with cacophony, making it difficult for users to weed out irrelevant content.250See Lavi, Platforms, supra note 195. Such moderation, however, is conducted without transparency and due process. Often it is even executed through opaque algorithms251See Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information 142–43 (2015); Hannah Bloch-Wehba, Access to Algorithms, 88 Fordham L. Rev. 1265, 1295 (2020) (referring to the problem of opaque algorithmic processes in general). that determine “what speech to take down and leave up, amplify, and demote.”252Balkin, supra note 69, at 33. Social networks may adopt a categorical approach to content removal and fail to differentiate between nuances of context in the content published on their platforms. Moreover, although they may aim to remove negative value content and harmful expressions, due to “false positives” they could remove legitimate speech instead. Finally, intermediaries may remove legitimate content that runs counter to their own agendas, essentially chilling legitimate speech. The following subsections will demonstrate these practices.

(a) Categorical Classification that Overlooks Nuances of Context

In some cases, intermediaries limit specific types of content in their terms of service and community guidelines. For example, most intermediaries restrict the publication of nude pictures on their platforms. In many cases, removing nude images has benefits, as such content can be offensive. Yet some intermediaries do not permit any kind of nudity; where the approach is to ban all nudity, intermediaries categorically remove all nude images from their platforms, without considering the image’s context.253See Evelyn Douek, Governing Online Speech: From “Posts-as-Trumps” to Proportionality and Probability, 121 Colum. L. Rev. 759, 774–75 (2021) (“The contestation around nudity on Facebook has been about definition of the category. Activists have had some success in getting Facebook to create limited exceptions to the general ban, but they remain narrow. The classification of a post as nudity is generally outcome-determinative: It comes down.”). Thus, even images that have historical, cultural, or artistic value are prohibited on their platforms. The following three examples demonstrate this:

(1) Nudity with Historical Value – “Napalm Girl”

In 1972, several photographers managed to capture photos of children fleeing from napalm bombing during the Vietnam War. The most famous picture was of Kim Phuc. The picture caught her naked, with napalm burns over her back, neck, and arm.254Gillespie, supra note 195, at 1. The photo won the Pulitzer Prize. As this photo had a tremendous effect on the history of warfare, Tom Egeland, a Norwegian writer, included it in a September 2016 article he uploaded to Facebook.255Id. Facebook’s moderators then deleted Egeland’s post, as it included nudity and graphic suffering. Egeland refused to accept the deletion of his post and reposted the image, criticizing Facebook’s decision to take it down. Facebook then reacted by suspending him. But the story does not end here: a Norwegian daily newspaper reported Egeland’s suspension and published the photo. In response, Facebook instructed the newspaper to remove the photo, then went ahead and deleted it.256Id. The Norwegian newspaper then republished the picture and criticized Facebook on its front page. The incident further escalated when the Prime Minister of Norway posted the story on her page and Facebook removed her post as well. Only after extensive global news coverage criticizing the decision did Facebook finally reinstate the photo.257Id. at 3. In this case, Facebook preferred to remove the photo, as it depicts fully naked children screaming in pain, which would fall under the category of nudity. However, it would appear that Facebook did not grant enough weight to the photo’s context and its great historical and emotional value.

(2) Breastfeeding

Similar to the “Napalm Girl” case, the breastfeeding case also demonstrates categorical removal of content without addressing nuances of context. In this case, breastfeeding mothers posted photos on their personal profiles, or posted them in support groups for motherhood or breastfeeding. These support groups provide mothers with a forum where they can find advice about the challenges involved in breastfeeding and can belong to a community of other mothers. Facebook removed photos of mothers breastfeeding their babies, arguing that such photos violated its general policy restricting nudity.258Id. at 147. Some of the mothers whose photos were removed wrote angry blog posts and turned to the local press to criticize Facebook. The mothers felt humiliated that the platform had removed photos depicting their experiences, judging these photos as obscene.259Id. at 148. Following these incidents, women set up a group called “Hey Facebook, Breastfeeding Is Not Obscene!”, which drew more and more members.260Id. at 158. In response, Facebook stated that it would only take down photos containing fully exposed breasts.261See id. at 160. The controversy expanded until finally, in June 2014, Facebook changed its policy and clarified that the company’s intention is to allow images of breasts for medical or health purposes, such as raising awareness of breast cancer, and to allow photos of women actively engaged in breastfeeding.262See id. at 168.

(3) Artistic Nudity

Another prominent example of content affected by bans in platform terms of service and community policies that neglect nuances of context is artwork, such as paintings and sculptures. Here too, the no-nudity policy has been implemented without making any distinctions.263See id. Even after Facebook changed its policy regarding nudity in June 2014, declaring it would allow artwork depicting nude figures,264Id. controversy remains, and Facebook still removes artwork containing nudity in some cases. For example, Elisa Barbari posted a picture of a naked statue of the Roman god Neptune, created in the 1560s by a Flemish sculptor and located in front of a church in Bologna’s Piazza del Nettuno. The statue has an interesting story: it is believed that after the church forbade the artist from making the sculpture’s genitals larger, deeming that insufficiently respectable for a statue placed in front of a church, the artist retaliated by redesigning the statue so that Neptune’s hand points in a certain direction. Consequently, visitors who stand at a specific angle might think that the statue has a huge erection.265See Silvia Donati, The Story Behind Bologna’s Fountain of Neptune, Bologna Uncovered (Jan. 7, 2017) [ ] (“[F]rom a certain angle, the outstretched thumb of his left hand seems to stick out from the lower abdomen, similar to an erect penis.”); Facebook Under Fire After it Censors ‘Explicitly Sexual’ Statue of Neptune, RT (Jan. 3, 2017), []. Facebook removed an image of this famous statue on the grounds that it was “sexually explicit” and thus violated the platform’s guidelines.266See Facebook Removes Famous 16th Century Statue for Nudity, Times of Isr. (Jan. 3, 2017), []. Facebook added in its statement that “the use of images or video of nude bodies or plunging necklines is not allowed, even if the use is for artistic or educational reasons.”267Gianluca Mezzofiore, Facebook Apologizes for Removing Photo of Nude Neptune Statue, Mashable (Jan. 3, 2017) []. In response, Barbari wrote on her Facebook page “yes to Neptune, no to censorship.”268Edward Helmore, Facebook Blocks Photo of Neptune Statue for Being ‘Explicitly Sexual’, Guardian (Jan. 2, 2017), []. Facebook admitted that the picture did not violate its policy and apologized.269Id. (“A Facebook spokesperson later said in a statement that the censorship was a mistake.”). Although Facebook ultimately characterized the removal as a mistake, it was a consequence of categorical removal that neglects nuances of context.

In another incident, Facebook suspended the account of a French teacher, Durand-Baïssas, after he posted a photo of The Origin of the World, a famous 19th century painting depicting a naked woman’s vulva and abdomen.270Douek, supra note 253, at 774 (referring to Amar Toor, 19th Century Vagina Sparks French Lawsuit Against Facebook, Verge (Mar. 6, 2015), []). In response, the teacher filed a legal claim against Facebook for closing his account. In 2018, a Paris civil court ruled against Facebook on the censorship claims and concluded that the social media company should not have suspended the account; however, the court dismissed all claims for damages.271Victoria Stapley-Brown, French Court Makes Mixed Ruling in Courbet ‘Censorship’ Case, Art Newspaper (Mar. 16, 2018), []. According to Durand-Baïssas’s lawyer, Facebook agreed to make an undisclosed donation to the arts.272See Gareth Harris, Long-Running Facebook Battle Over Censored Courbet Painting Gets Happy Ending, Art Newspaper (Aug. 6, 2019), [].

Such removals are not mere mistakes; they represent an ongoing negotiation with the platform and a public reckoning about society’s shared values.273See Gillespie, supra note 195, at 142.

(b) Removing Content That Is Inconsistent with the Intermediary’s Agenda

Intermediaries can remove content that is inconsistent with the platform’s agenda, and in doing so silence free speech. The case of Prager University (“PragerU”) serves as a good example.274See Prager Univ. v. Google LLC, 951 F.3d 991 (9th Cir. 2020). PragerU, a non-profit organization, shared conservative ideas by publishing short videos on YouTube. The videos gained popularity and were viewed by millions of people. YouTube categorized the videos as containing potentially mature content; following this categorization, it limited access to the videos, and PragerU could no longer monetize them. As a result, PragerU lost advertising revenues. PragerU challenged YouTube’s subjective classification in court on a constitutional basis, arguing that this classification singles out PragerU’s videos and violates its First Amendment rights.275Id. at 995. For expansion on this case, see Chase Edwards, Developments in Intermediary Liability, 76 Bus. L. 339, 344 (2021) and Lavi, Publish, supra note 65, at 489. The Ninth Circuit rejected PragerU’s claims, explaining that the “‘First Amendment prohibits the government—not a private party—from abridging speech’ and neither YouTube nor its parent company Google are state actors.”276Edwards, supra note 275, at 344 (quoting Prager, 951 F.3d at 991); see Isobel Asher Hamilton, YouTube Isn’t Bound by the First Amendment and Is Free to Censor PragerU Videos, a Court Ruled, Bus. Insider (Feb. 27, 2020), []. The court ruled that YouTube’s internal policies are not subject to First Amendment scrutiny.

4. False Positives

As explained above, various motives drive intermediaries to intentionally remove content. It should, however, be noted that in many cases intermediaries remove content by mistake, believing it belongs to one of the categories above when in fact it does not. Such removals differ from a policy of categorical removal, which represents a negotiation with the platform over shared social values. These removals are mere mistakes and do not concern any type of content that the intermediary intends to police.

When discussing mistakes, we must first address the different types of moderation. There are two kinds of moderation: ex ante moderation, which reviews content before it is published on the platform, and ex post moderation, which interferes with content that has already been uploaded to the platform.277Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1635 (2018). Moderation can react to user notices of offensive content278Moderation can remove content ex post following a user notice and can prevent the re-upload of the content ex ante following such notice. See id. or seek such content out proactively. It can be executed automatically, using learning algorithms and artificial intelligence, or manually, by human moderators.279See Klonick, supra note 193, at 1625–30; Lavi, Platforms, supra note 195, at 497. All methods of moderation may lead to the removal of false positives.

Humans make mistakes, and human moderators sometimes make moderation mistakes. There are different types of relationships between human moderators and platforms. Many moderators are contract workers who toil long hours for low wages, for example, outsourced workers in Nairobi, Kenya,280Billy Perrigo, Inside Facebook’s African Sweatshop, Time (Feb. 17, 2022, 10:04 AM), [] (“workers in this Nairobi office are among the lowest-paid workers for the platform anywhere in the world, with some of them taking home as little as $1.50 per hour”). and outsourced workers in the United States.281Roberts, Behind The Screen, supra note 196, at 71; Casey Newton, Bodies in Seats, The Verge (June 19, 2019, 10:37 AM), []. These workers must work quickly to complete their tasks on time and satisfy the platform’s requirements.282See Roberts, Behind The Screen, supra note 196, at 71–72 (describing how different types of low-wage human contractors moderate content) (“twenty-first century configurations of labor are undergoing a globalized race to the bottom in search of ever cheaper, faster more human and material resources to compete in the 24/7 networked marketplace…The postindustrial labor economy has demanded great geospatial rearrangements and migrations of people whose ‘flexibility’ is often synonymous with ‘instability’, ‘precarity’ or ‘marginality’”). Gillespie noted that “[b]ecause this work is distributed among different labor forces, because it is unavailable to public and regulatory scrutiny, and because it is performed under high pressure conditions, there is a great deal of room for slippage, distortions, and failure.”283Gillespie, supra note 195, at 117.
Moderators can thus make mistakes and fail to remove content that the intermediary is interested in removing according to their terms of service (“false negatives”), or remove content that is compliant with the platform’s terms of service and community guidelines, and does not conflict with the intermediary’s agenda (“false positives”).

The problem of moderation errors increases with the use of algorithmic tools designed to moderate content. Algorithms’ sensitivity to the context of speech is still insufficient; thus, automated decisions are often inaccurate.284See Keller, Filtering Facebook, supra note 69. Because algorithms are poor at understanding indications of context, such as tone or intended audience, using them to detect harmful speech is likely to result in far more false positives than false negatives.285Danielle Keats Citron & Neil M. Richards, Four Principles for Digital Expression (You Won’t Believe #3!), 95 Wash. U. L. Rev. 1353, 1362 n.53 (2018); Danielle Keats Citron, Extremist Speech, Compelled Conformity, and Censorship Creep, 93 Notre Dame L. Rev. 1035, 1054–55 (2018); see also Gillespie, supra note 195, at 99, 107. False positives have extensive implications, especially due to technological advances that allow intermediaries to identify content that has previously been removed and either prevent users from reposting it or take it down immediately. Prevention of dissemination of such content can extend beyond a single platform. Voluntary cooperation between intermediaries “allows them to share unique digital fingerprints,286For further discussion on such technology and its uses, see Rafal Kuchta, The Hash—A Computer File’s Digital Fingerprint, NewTech. L. (Oct. 9, 2017), []; Susan Klein & Crystal Flinn, Social Media Compliance Programs and the War Against Terrorism, 8 Harv. Nat’l Sec. J. 53, 79–81 (2017). that they automatically assign to videos or photos of offensive content that they have removed from their websites.”287Lavi, Platforms, supra note 195, at 567. This allows peer intermediaries to duplicate their moderation (i.e., to identify the same content on their platforms and prevent re-posting or initiate removal). Such detection tools, however, are inaccurate in interpreting the context of expressions. This is especially problematic for text-based content, as opposed to images, because algorithms remain poor at textual interpretation.288Gillespie, supra note 195, at 98–108. As these inaccuracies could lead not only to mistaken removal of the original expression but also to the removal of its shares and replications throughout the web, the end result would be extensive censorship of legitimate speech.
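The fingerprint-sharing mechanism described above can be sketched in a few lines of code. This is a deliberately simplified toy model, not any platform’s actual system: real deployments use perceptual hashes (PhotoDNA-style fingerprints) that survive re-encoding, whereas the plain cryptographic hash used here matches only byte-identical copies.

```python
import hashlib

# Shared blocklist of digital fingerprints (hashes) that cooperating
# platforms exchange after removing a piece of content.
shared_blocklist = set()

def fingerprint(content: bytes) -> str:
    """Return a hex digest identifying these exact bytes."""
    return hashlib.sha256(content).hexdigest()

def report_removed(content: bytes) -> None:
    """One platform removes content and shares its fingerprint with peers."""
    shared_blocklist.add(fingerprint(content))

def block_on_upload(content: bytes) -> bool:
    """Peer platforms check each upload against the shared blocklist."""
    return fingerprint(content) in shared_blocklist

# One platform removes an image; peers then block the identical file...
report_removed(b"<offensive image bytes>")
assert block_on_upload(b"<offensive image bytes>") is True

# ...but a slightly re-encoded copy evades exact matching entirely.
assert block_on_upload(b"<offensive image bytes, re-encoded>") is False
```

The sketch illustrates the context problem discussed above: the fingerprint carries no information about tone, intent, or setting, so a news report embedding the same bytes is blocked exactly like the original abusive post.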

B. CDA Section 230’s Immunity – No Intermediary Liability for Content Removal, No Liability for Failing to Remove Content

Section 230 of the Communications Decency Act28947 U.S.C. § 230. Notably, this Section was exported to Canada and Mexico through the United States-Mexico-Canada Trade Agreement, which went into force on July 1, 2020. See Article 19.17 of the U.S.-Mexico-Canada Agreement; Citron, The Fight For Privacy, supra note 67, at 89 (referring to the agreement and stating that “It pledges signatories to refrain from measures that treat interactive computer services as liable for content created by others.”). represents “the mindset of internet exceptionalism, differentiating the internet from the media before…”290Lavi, Publish, supra note 65, at 472. This section of U.S. legislation generally blocks lawsuits against online intermediaries for any harm caused by content posted by users.291Citron, The Fight For Privacy, supra note 67, at 86 (“Section 230(c), adopted in 1996 has two key provisions. Section 230(c)(1) addresses the under removal of content…Section 230(c)(2), conversely, concerns the over-removal of content”); Jeff Kosseff, The Twenty-Six Words that Created the Internet 64–65 (2019); Lavi, Publish, supra note 65, at 472. Section 230(c)(1), titled “Protection for ‘Good Samaritan’ blocking and screening of offensive material,” dictates that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”292Communications Decency Act, 47 U.S.C. § 230(c)(1). In passing this section of the CDA, Congress basically “declared that online service providers could never be treated as publishers of material they did not directly develop.”293Lavi, Publish, supra note 65, at 472; Anupam Chander, How Law Made Silicon Valley, 63 Emory L.J. 639, 651 (2014). Section 230(c)(1) absolves online service providers, including website operators, of primary and secondary liability for a host of claims.294Id.
Section 230(c)(1) encourages online publishers to exercise their editorial discretion, guaranteeing that online actors can favor some types of content over others.295See Eric Goldman, Per Section 230, Facebook Can Tell This Plaintiff To Piss Off–Fyk v. Facebook, Tech & Mktg. L. Blog (June 14, 2020), []. The CDA takes the immunity from liability for third-party content stipulated in Section 230(c)(1) one step further in the ensuing subsection, Section 230(c)(2), which directs that online services are not liable for blocking or removing third-party content.29647 U.S.C. § 230(c)(1)–(2); Kosseff, Users’ Guide, supra note 202, at 769 (“Section (c)(2) provides further protection for moderation, as well as for providing tools, such as website blockers, that allow users to control harmful content.”). Despite this, defendants in court have tended to rely more on the overall immunity in Section 230(c)(1), since Section 230(c)(2) requires that blocking and removal decisions be made “in good faith.”297See Kosseff, Users’ Guide, supra note 202, at 778 (“In a 2020 review of more than 500 Section 230 decisions over two decades, the Internet Association found only 19 that involved Section (c)(2).”); Eric Goldman, The Ten Most Important Section 230 Rulings, 20 Tul. J. Tech. & Intell. Prop. 1, 6–7 (2017); Eric Goldman, Online User Account Termination and 47 U.S.C. § 230(c)(2), U.C. Irvine L. Rev. 659, 666 (2012) (“Several § 230(c)(2) cases have held that good faith is determined subjectively, not objectively. In that circumstance, courts should accept any justification for account termination proffered by the online provider, even if that justification is ultimately pretextual.”) [hereinafter Goldman, Online User Account Termination]. However, even when defendants rely on Section 230(c)(2), courts have provided extensive protections to different internet services.298E.g., Goldman, The Ten Most Important Section 230 Rulings, supra note 297, at 6 (referring to Zango, Inc. v. Kaspersky Lab, Inc., 568 F.3d 1169, 1176–78 (9th Cir. 2009)); Danielle Keats Citron, How to Fix Section 230, B.U. L. Rev. (forthcoming) (manuscript at 15) (proposing to leave Section 230(c)(2) as is). Furthermore, a leading scholar on Section 230, Professor Goldman, has stated that no online provider has lost Section 230(c)(2) immunity through failure to make a good-faith filtering decision.299Goldman, Online User Account Termination, supra note 297, at 665.

The broad discretion Section 230 offers social media platforms to screen content and block material has come under scrutiny. President Trump’s Executive Order on “Preventing Online Censorship,”300Exec. Order No. 13,925, 85 Fed. Reg. 34,079 (May 28, 2020). pertaining to online platforms, aimed to narrow Section 230’s immunity. The Executive Order declared that “[i]t is the policy of the United States that large online platforms, such as Twitter and Facebook, as the critical means of promoting the free flow of speech and ideas today, should not restrict protected speech.”301Id. at Sec. 4(a). However, the Biden Administration revoked this Order and it is no longer in force.302See Revocation of Certain Presidential Actions and Technical Amendment, Exec. Order No. 14,029, 86 Fed. Reg. 27,025 (May 14, 2021); Eric Goldman & Jess Miers, Online Account Terminations/Content Removals and the Benefits of Internet Services Enforcing Their House Rules, 1 J. Free Speech L. 191, 193 n.5 (2021). A 2021 U.S. Supreme Court opinion also voiced criticism of the broad discretion enjoyed by intermediaries and their power to screen content and block material, even drawing an analogy between private platforms and public accommodations,303Biden v. Knight First Amend. Inst. at Columbia Univ., 141 S. Ct. 1220, 1221 (2021) (Thomas, J., concurring) (criticizing the discretion of social media platforms in Section 230 to screen content) (“[I]t seems rather odd to say that something is a government forum when a private company has unrestricted authority to do away with it.”). Justice Thomas concluded that the Supreme Court will have to address the current situation, in which a few digital platforms have unprecedented control over large amounts of speech, and determine how to apply legal doctrines such as common carrier status and public accommodation to “highly concentrated, privately owned information infrastructure.” Id. at 1221; see also Michal Lavi, Targeting Exceptions, 32 Fordham Intell. Prop. Media & Ent. L.J. 65, 140–42 (2021). a stance echoed by legal scholars.304E.g., K. Sabeel Rahman, The New Utilities: Private Power, Social Infrastructure, and the Revival of the Public Utility Concept, 39 Cardozo L. Rev. 1621, 1668 (2018) (proposing to apply the public utilities concept to online platforms); see generally Orit Fischman-Afori, Online Rulers as Hybrid Bodies: The Case of Infringing Content Monitoring, 23 U. Pa. J. Const. L. 351 (2021) (proposing that online platforms be treated as hybrid bodies and subjected to public law standards). But see Jack M. Balkin, Fixing Social Media’s Grand Bargain (Hoover Working Grp. on Nat’l Sec., Tech. & L., Aegis Series Paper No. 1814), [] (explaining that imposing the full spectrum of public forum obligations on intermediaries is undesirable and would actually make things worse). Moreover, the U.S. Supreme Court recently agreed to hear a case over the interpretation of Section 230, presenting the argument that “international technology companies no longer shirk responsibility for online terrorist content.”305Ariel Kahana, Israeli NGO Gets U.S. Supreme Court Nod in Bid to Hold Social Media Accountable for Terror, Israel Hayom (Oct. 10, 2022), []; Ex-Israeli Intel Officials to SCOTUS: Social Media Platforms Aid, Abet Terrorism, Jerusalem Post (Dec. 12, 2022), []; Michal Lavi, Manipulating, Lying, and Engineering the Future, 33 Fordham Intell. Prop. Media & Ent. L.J. 221, 328 (2023). The case, Gonzalez v. Google LLC (No. 21-1333), could lead to a narrowing of the immunity granted by Section 230. Yet at present, unless an intermediary is a governmental website or is operated directly by government officials,306See Biden v. Knight First Amend. Inst. at Columbia Univ., 141 S. Ct. 1220, 1222 (2021) (Thomas, J., concurring). it will continue to enjoy broad immunity for editorial choices without being subjected to constitutional scrutiny.

C. A World Without Censorship – NFTs as the Engine of Speech

Intermediaries’ ability to govern speech and define the standard of free speech globally can chill legal and legitimate expression. As outlined above,307See supra Part II.A. intermediaries have various motives for taking down content. Beyond harmful content, private-public cooperation can lead intermediaries to take down political speech or protest posts against the government. Frequently, intermediaries remove content categorically, neglecting nuances of context, or mistakenly remove legitimate content that does not violate any community guideline. As elucidated above, the problem of “false positives” becomes more acute as monitoring by algorithm increases.308See supra Part II.A(4). “These dynamics may transform online intermediaries into engines of unaccountable private censorship.”309See Bloch-Wehba, Content Moderation, supra note 206, at 1305. Consequently, there is a danger that extremely valuable speech could be removed, including political speech, as well as speech with artistic, historical, or cultural value. This is precisely where NFTs step in and change the rules of the game.

As NFTs create eternal, irreversible, and unalterable expressions that cannot be deleted from the blockchain, they could become a new communications tool and an engine of speech. No more secrets, no more censorship. NFTs could be a means of achieving real freedom of expression, enabling anyone to upload protest posts against governments, act as a whistleblower and uncover corruption, or post images of provocative art or images that express their cultural values, without worrying about censorship or being silenced. Because speech cannot be removed from the blockchain, it is immune to censorship. Anyone can create an NFT containing an expression they want to remain eternally accessible to all blockchain users, thus making their expression unforgettable. Moreover, the more NFTs become integrated with social media,310For example, NFTs can be integrated into social media via APIs. See ul Hassan et al., supra note 14, at 11 (explaining that an API can be programmed in a way that could allow blockchain infrastructure to “talk” with the internet infrastructure). the greater the potential for a censorship-free world, with a multitude of important individual expressions. This will become possible as more blockchain-based social networks develop and as Web 3.0 becomes a mainstream, integral part of social media. When this happens, an open, trusting internet without any central intermediary could be created, assisting the free flow of information.311See Mersch & Muirhead, supra note 48. Thus, as stated above, NFTs could truly become the engine of speech and perhaps enable the utopia of free expression advocates – a world without censorship. But, as we know very well from other technological advances, every positive advance has its pitfalls.
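Why tokenized speech resists deletion can be illustrated with a toy hash-chained ledger. This is a deliberately simplified sketch, not a real NFT standard (actual NFTs typically follow a contract standard such as ERC-721 and record a URI pointing to the content rather than the content itself); it shows only the core mechanism: each block’s hash commits to the previous block, so retroactively editing a tokenized expression invalidates every later hash.

```python
import hashlib
import json

def block_hash(fields: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()

chain = []  # the shared, append-only ledger

def mint_token(message: str) -> dict:
    """Append a tokenized expression, linked to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "token_uri": message}
    block["hash"] = block_hash({"prev": prev, "token_uri": message})
    chain.append(block)
    return block

def chain_is_valid() -> bool:
    """Honest nodes verify every link and every stored hash."""
    for i, b in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if b["prev"] != expected_prev:
            return False
        if b["hash"] != block_hash({"prev": b["prev"], "token_uri": b["token_uri"]}):
            return False
    return True

mint_token("protest post")
mint_token("defamatory tweet about Jane Doe")
assert chain_is_valid()

# "Deleting" the tokenized tweet breaks the stored hash, so honest
# nodes reject the tampered ledger -- the expression persists.
chain[1]["token_uri"] = "[removed]"
assert not chain_is_valid()
```

The same property that makes censorship impossible in the first assertion is what makes takedown impossible in the second: there is no privileged party who can rewrite the ledger without the tampering being detectable by everyone else.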

III. The Flip Side of NFTs: Permanency of Abusive Speech

NFTs have many beneficial uses, and the innovative use of blockchain promises to be a game changer for promoting freedom of expression, functioning as an engine of speech. As illustrated in the previous section, by creating a permanent expression that cannot be altered or deleted from the blockchain, NFTs could lead to the realization of a utopian world without barriers or limitations on speech. This utopia, however, could easily turn into a dystopia. Indeed, a system that makes it impossible to delete information published on the web can promote freedom of expression. Yet the inability to delete information has a flip side: it could also enable the dissemination of abusive speech, without any ability to remove it from the blockchain. Thus, it also has the potential to exacerbate existing types of harm resulting from the dissemination of harmful expression.

Although NFTs are not yet integrated into search engines like Google, the permanency of tokenized abusive speech could become a major problem when blockchain-based systems become a substantial component of the architecture of tomorrow’s web (Web 3.0). This could cause victims tremendous harm, of an even more severe type than currently exists. Publication of permanent, unalterable, and unerasable speech could even infringe upon the victim’s own free speech—once their good name has been tainted by an NFT, they may self-exclude from online and even offline forums.312See Lavi, Publish, supra note 65, at 469; Jack M. Balkin, How to Regulate (and Not Regulate) Social Media, 1 J. Free Speech L. 71 (2021); Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, 86 Fordham L. Rev. 401, 420 (2017) (“Individuals have difficulty expressing themselves in the face of online assaults.”); Danielle Keats Citron, Cyber Mobs, Disinformation, and Death Videos: The Internet as It Is (and as It Should Be), 118 Mich. L. Rev. 1073, 1083 (2020) (“Online falsehoods, privacy invasions, and threats imperil targeted individuals’ life opportunities, including their ability to express themselves.”). Consequently, their ability to engage with others as equals would be curtailed and the public debate suppressed. Furthermore, victims might suffer continuous severe harm to reputation, economic harm, and even emotional distress. In some cases, they may even suffer harassment and violent physical attacks, which could become an ongoing problem, as the expression is perpetuated on the blockchain. Furthermore, when the tokenized abusive expression concerns a public representative or a public matter, the damage would affect not only the individual but the public interest as well.313See Omri Ben-Shahar, Data Pollution, 11 J. Legal Analysis 104, 105, 112–13 (2019) (treating “fake news” as “data pollution” that disrupts social institutions and public interests in a manner similar to environmental pollution).

As stated above, courts have ruled that social media companies are not state actors and are not bound by the First Amendment.314U.S. Const. amend. I (“Congress shall make no law . . . abridging the freedom of speech, or of the press”); Prager Univ. v. Google LLC, 951 F.3d 991 (9th Cir. 2020); Prager Univ. v. Google LLC, 2022 WL 17414495, at *12 (Cal. App. Ct. Dec. 5, 2022) (“State action is absent here, because social media platforms are generally permitted to decide for themselves what content to publish.”). Therefore, they can remove not just unprotected speech but also speech that is protected by the First Amendment,315For the categories of unprotected speech, see Victoria L. Killion, Cong. Rsch. Serv., IF11072, The First Amendment: Categories of Speech (Jan. 16, 2019), []. and overcome structural constitutional constraints.316Chachko, supra note 224, at 111 (“Given the constitutional and institutional constraints on government, government is not as well placed as platforms to directly manage platform-enabled threats and geopolitical challenges.”). Moreover, as explained in Part II.C, Section 230 of the CDA31747 U.S.C. § 230; see Eric Goldman, Why Section 230 Is Better than the First Amendment, 95 Notre Dame L. Rev. 33 (2019). provides online intermediaries with immunity from civil liability for content uploaded by third parties. They are thus immune from liability both for failing to remove harmful content and for removing legitimate content. In other words, intermediaries are free to remove expressions, even when those expressions are protected by the First Amendment.318See supra Part II.B; Citron, Fix Section 230, supra note 298, at 15 (proposing to leave Section 230(c)(2) as is); Kosseff, Users’ Guide, supra note 202, at 12 (“courts would soon interpret Section (c)(1) to mean that platforms are not responsible for the content that their users post, whether or not they moderated user content . . . Section (c)(2) provides further protection for moderation, as well as for providing tools, such as website blockers, that allow users to control harmful content. The provision states that interactive computer service providers cannot be liable due to ‘any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected’ or enabling or providing ‘the technical means to restrict access’ to such material.”). As demonstrated, intermediaries have various motives for removing user content from their platforms.319See Part II, supra. Indeed, there is a danger that intermediaries could censor speech in cooperation with governments or to promote a commercial agenda. Yet they also play an important role in removing offensive expressions and harmful communication, in order to allow civil communication that is not violent or insulting. In fact, one of the main functions of social media is to enforce community standards and, by doing so, shape public opinion.320See Jack M. Balkin, To Reform Social Media, Reform Informational Capitalism, in Social Media, Freedom of Speech and the Future of our Democracy 101, 111 (Lee Bollinger and Geoffrey R. Stone eds., 2022). As explained in Part II, outlining such norms and enforcing them are crucial to the operation of any platform, which would otherwise be filled with negative-value content that is offensive to users and third parties.321See supra notes 193–94. Thus, even though in practice content is often removed inaccurately and categorically, and there are frequent “false positives,” existing types of moderation are preferable to the alternative of no moderation at all.322Ashutosh Bhagwat, The Law of Facebook, 54 U.C. Davis L. Rev. 2353, 2354 (2021) (proposing that the right path to address the inaccuracies and fallacies of moderation is to support “sensible, narrow reforms” and leave it to social media companies to correct the flaws).

NFTs are relatively new, so we do not yet know all the ways they could be abused to aggravate harmful speech. The following subsections demonstrate in detail the types of harm that can be caused by different kinds of offensive content, harms that are projected to intensify should tokens be abused to disseminate offensive content without mitigation. Current content removal methods would be ineffective in moderating blockchain content.

A. Tokenized Speech Abused – A Roadmap of Aggravated Harm

1. Harm to Reputation

a. Defamation

Imagine John Doe tokenizes a social media post about Jane Doe accusing her of yelling racist insults, even though she did nothing of the sort.323See La Liberte v. Reid, 966 F.3d 79 (2d Cir. 2020) (plaintiff filing a defamation suit because social media posts of a T.V. personality suggested that she was a racist). To make his claims seem real, John also tokenizes a believable deep-fake video of Jane yelling racist insults,324Deep-fakes are believable videos, photos, and audio of people doing and saying things they never did. See Robert Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. 1753, 1759–60 (2019) (explaining the emergence of machine learning through neural network methods that increases the capacity to create false images, videos, and audio, and that generative adversarial networks can lead to the production of increasingly convincing and nearly impossible to debunk deep fakes). even though she never did any such thing. Furthermore, imagine that the post goes viral and generates media attention. Had the post not been tokenized, social media platforms would likely remove it upon Jane’s request, despite the lack of any legal obligation to do so under U.S. law.32547 U.S.C. § 230. For example, Facebook’s community standards forbid posting expressions that degrade others.326Community Standards, Facebook, [] (last visited Jan. 28, 2023) (calling on users to “respect” and “not harass or degrade others”). Moreover, Facebook aims to remove misleading and manipulated media such as deep-fakes.327Facebook Community Standards: Misinformation, Facebook (last visited Mar. 4, 2023), [] (“In other cases, the manipulation isn’t apparent and could mislead, particularly in the case of video content. We aim to remove this category of manipulated media when the criteria laid out below have been met.”).
Because social media platforms often use embedded link technology for sharing posts, all the replications generated by users who click “share” would also be removed from the platform,328Lavi, Publish, supra note 65, at 511. and would consequently be obscured in search queries of Jane’s name. Thus, in the long term, Jane’s reputational harm would be mitigated.

However, in the case described above, John has abused NFT technology to disseminate defamation. Because the post was tokenized, it will remain on the blockchain for eternity, even after it has been refuted. The result would be an ongoing attack on Jane’s reputation, potentially curtailing her professional and economic opportunities. To make matters worse, the post would likely gain extensive exposure with the advance of Web 3.0 blockchain-based social media. Jane could, of course, file a lawsuit against John under defamation law.329 Restatement (Second) of Torts § 558 (Am. L. Inst. 1977). To succeed in the defamation suit, Jane has to prove that John posted a “false defamatory statement concerning” her; that the publication does not benefit from a defamation law privilege; that the publisher bears at least negligent fault for the publication; and “either actionability of the statement irrespective of special harm or the existence of special harm caused by the publication.” However, the damage to her reputation would likely persist, as the defamatory expression would remain on the blockchain. Moreover, if the person posting was anonymous, Jane would have to meet the requirements for unmasking their identity.330See Nathaniel Gleicher, John Doe Subpoenas: Toward a Consistent Legal Standard, 118 Yale L.J. 320, 344 (2008) (explaining the considerations and standards that U.S. courts apply when considering whether to order John Doe subpoenas). In the context of blockchain, unmasking can become more complicated. For a proposal to allow the unmasking of cryptocurrency users in a related context, see Jabotinsky & Lavi, supra note 11. Such requirements would make it more difficult to obtain monetary redress.

b. Shaming

In the previous section, we addressed an instance of defamation on the blockchain. However, another possibility is shaming – John Doe might tokenize a true statement in order to shame Jane Doe and cause her tremendous reputational and dignitary harm. Shaming aims to punish individuals who have deviated from social norms or violated the law. It makes such behavior public, criticizing and denouncing it.331See Jacob Rowbottom, To Punish, Inform, and Criticise: The Goals of Naming and Shaming, in Media And Public Shaming: Drawing the Boundaries of Disclosure 1 (Julian Petley ed., 2013). Yet, shaming is subjective and based on individual values. It can target certain segments of society just because they are different. Furthermore, shaming can cause disproportionate harm to human dignity, particularly when it reaches a wide audience. In addition, the shameful information is disseminated without due process.332Lavi, Good, Bad, Ugly, supra note 199, at 2613. The shamed person is often not given the opportunity to justify their behavior and clarify that it does not violate norms as it might have first seemed.333Id. at 2621 (“A person may share videos without knowing the context or the full picture. This may lead to misinterpretation and condemnation of a nonreprehensive behavior”); Daniel J. Solove, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet 83 (2007) (referring to shaming a person for stealing a cell phone, even though he was not the thief; instead, he might have received it as a gift without knowing that it was stolen). Finally, shaming can be taken out of context, and defamatory comments can be added to an existing text. Once shaming snowballs, it can even lead to mob justice, harassment, and violence.334Lavi, Good, Bad, Ugly, supra note 202, at 2613.

For example, if John had posted the shaming expression on a social media website without tokenizing it, the social media website might have removed it due to a user report or Jane’s request. Shaming generally does not violate the law; as long as it is not defamatory and does not belong to an unprotected category of speech, shaming is protected under the First Amendment. Even so, social networks such as Facebook perceive shaming as uncivil and consider it a violation of their community standards. Thus, they can, and indeed do, mitigate shaming’s potential reputational and dignitary harm.335See Community Standards: Bullying and Harassment, Facebook, (last accessed Mar. 3, 2023) [] (Facebook will “remove content that’s meant to degrade or shame, including, for example, claims about someone’s sexual activity.”). Moreover, in the EU there is a legal “Right to be Forgotten.”336See Case C-131/12, Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos, ECLI:EU:C:2014:317 (May 13, 2014). Accordingly, a person who does not feel comfortable seeing a post about him appear in search results can claim that the post is irrelevant and request its deletion.337Lavi, Good, Bad, Ugly, supra note 202, at 2663. As mentioned,338See supra notes 209–211. the GDPR includes a specific provision titled “Right to Erasure (‘Right to be Forgotten’)”3392016 O.J. (L 199) 4.5, Art. 17. which imposes obligations on the data controller to erase the requested data.340Id. The GDPR expanded this right, imposing more specific obligations regarding information processing, which are interpreted broadly by the European Court of Justice (ECJ). However, when shaming is tokenized, it remains on the blockchain and can never be erased or altered.
Without deploying appropriate cryptographic techniques that help minimize the risk to data subjects, NFTs could violate the EU’s “Right to be Forgotten”341See Anne Rose, GDPR Challenges for Blockchain Technology, 2 Interactive Ent. L. Rev. 35, 39 (2019). because “the core immutable ledger technology of blockchain prevents data subjects from exercising these rights.”342Anisha Mirchandani, The GDPR-Blockchain Paradox: Exempting Permissioned Blockchains from the GDPR, 29 Fordham Intell. Prop. Media & Ent. L.J. 1201, 1204, 1224 (2019) (“It may be argued that an immutable blockchain ledger inherently contradicts the pivotal right to be forgotten, or right of “erasure,” under the GDPR, which is too significant of a right granted to data subjects to be compromised for blockchain technology.”). In other words, the current regulations fit poorly with blockchain technology. This is a clear case where legal regulation lags behind evolving technology.343See id. at 1204. Thus, shaming cannot be forgotten once it is tokenized as an NFT, and the harm caused to the shamed individual through NFT abuse would be aggravated in comparison to the harm that might have been caused by shaming in a traditional social media post.
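The immutability at the heart of this conflict can be illustrated with a toy example. The sketch below is a deliberately simplified, hypothetical model of a hash-linked ledger, not the implementation of any real blockchain: each block commits to the hash of the block before it, so “erasing” a tokenized post changes its hash and invalidates every subsequent block, which is why the rest of the network would reject the altered copy.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    # Hash the record together with the previous block's hash,
    # chaining every block to the one before it.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # placeholder "genesis" hash
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    # Recompute every hash; any edited record breaks the chain.
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or block_hash(blk["record"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

chain = build_chain([
    {"token": 1, "content": "harmless post"},
    {"token": 2, "content": "shaming post"},
])
assert verify(chain)

# "Erasing" the shaming post invalidates its hash and therefore
# every later block, so honest nodes reject the altered ledger.
chain[1]["record"]["content"] = "[deleted]"
assert not verify(chain)
```

Real blockchains add digital signatures, Merkle trees, and distributed consensus on top of this basic chaining, but the erasure problem described in the GDPR literature stems from this structure.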

2. Harm to Public Interest – The Case of Fake News

NFTs can be abused to spread fake news stories and increase their harmful effects. Such harm is not necessarily reputational or dignitary harm, as the fake story could also praise a person. For example, during the 2016 U.S. election campaign, fake news was posted on social networks reporting that the Pope had endorsed Donald Trump. This post was shared a million times.344Zeynep Tufekci, Opinion, Mark Zuckerberg Is in Denial, N.Y. Times (Nov. 15, 2016), []. As fake stories spread across global cyberspace, they pollute the flow of information,345See Omri Ben-Shahar, Data Pollution, 11 J. Legal Analysis 104, 105, 112–13 (2019) (treating “fake news” as “data pollution” that disrupts social institutions and public interests in a similar manner to environmental pollution). making it more difficult to distinguish between true and false information and to conduct a truthful dialogue on matters of public importance. Thus, fake news can be detrimental to politics, democracy, and the public interest.346Lavi, Publish, supra note 65, at 444. Beyond the general “truth bias” that causes people to believe what they hear,347See Cass Sunstein, Liars: Falsehoods And Free Speech in an Age of Deception 73 (2021). when fake news stories circulate, they come to feel so true that people believe them even after being presented with evidence of their falsity.348Lavi, Publish, supra note 65, at 445 (referring to Whitney Phillips, The Toxins We Carry, Colum. Journalism Rev. (Fall 2019)). Often, efforts to correct fake news by publishing the truth not only fail to counter the falsehood’s impact but can even increase its credibility.349See id. Even when attempts to correct a falsehood reduce its influence, the effectiveness of such corrections is limited because they are often less widely viewed than the original falsehood.350Id. at 445. Thus, the best remedy for fake news stories is to remove them from the platform entirely.

Even though fake news constitutes protected speech as long as it does not reach the point of defamation,351See United States v. Alvarez, 567 U.S. 709, 718–19 (2012); see also Louis W. Tompros et al., The Constitutionality of Criminalizing False Speech Made on Social Networking Sites in a Post-Alvarez, Social Media Obsessed World, 31 Harv. J.L. & Tech. 65 (2017); but see Sunstein, supra note 347, at 48. many social media companies prohibit disseminating it on their platforms,352See, e.g., Terms of Service, Meta, (last visited Jan. 29, 2023) (prohibiting users from using the company’s products to do or share anything “unlawful, misleading, malicious, discriminatory or fraudulent.”); Gillespie, supra note 195, at 66 (“Then Google and Facebook barred fraudulent news sites from profiting from their advertising networks. Facebook also partnered with fact-checking organizations like Snopes, Politifact, and to mark some stories as ‘disputed,’ and added a way for users to flag stories they believe to be false.”). and encourage users to create high-quality content.353 Gillespie, supra note 195, at 35 (giving the example of YouTube as a platform that “want[s] everyone to post, but it also partners with professional content producers and offers incentives to high-quality and popular amateurs.”). Intermediaries usually prefer to decrease the visibility of misleading content by labelling it354Id. at 66. or fact-checking it in cooperation with fact-checking organizations,355Id. so as to avoid playing the role of arbiters of truth. However, they can and do delete content, or suspend accounts that repeatedly spread such content, thus impairing access to it.356See Rachel Lerman, Facebook Takes Down Three Coordinated Networks Using Fake Accounts, Wash. Post (Oct. 27, 2020), []. In contrast, when a fake news story is tokenized, it remains on the blockchain for eternity.
Social media companies can label it as false or reduce its visibility by algorithmically downgrading it in searches or newsfeeds, but the most efficient tool – taking the content down – is unavailable.

3. Privacy Harm

a. The Problem of Doxing – Violence, Economic Harm, and Emotional Distress

An additional problem might arise when personal information is tokenized, in a practice known as “doxing.” Take, for example, a case where John Doe tokenizes Jane Doe’s personal information, such as her home address, phone number, social security number, and even her credit card and bank account numbers. Posting such personal details about an individual could pose a threat to the victim’s safety and expose them to physical danger, burglary, stalking, and online and offline harassment.357On doxing, see generally Danielle Keats Citron & Daniel J. Solove, Privacy Harms, 102 B.U. L. Rev. 793, 833–34 (2022). Such violations of privacy could lead to economic harm because identity thieves could misuse such personal information to conduct fraudulent transactions in the victim’s name.358See Carissa Véliz, Privacy is Power: Why and How You Should Take Back Control of Your Data 107 (2020); Citron & Solove, supra note 357, at 834–35. Finally, such privacy violations could have emotional repercussions that impact the victim’s relationships and everyday life.359Citron & Solove, supra note 357, at 859.

To date, the U.S. still lacks a comprehensive federal data privacy law.360Daniel J. Solove & Paul M. Schwartz, ALI Data Privacy: Overview and Black Letter Text, 68 UCLA L. Rev. 1252 (2022). Moreover, constitutional rights are characterized as negative rights vis-à-vis the state, and very few constitutional rights apply against private actors.361Neil M. Richards & Woodrow Hartzog, Privacy’s Constitutional Moment and the Limits of Data Protection, 61 B.C. L. Rev. 1687, 1727–39 (2020). However, although in the U.S. there is no general legal obligation to remove private identifying information, many platforms set out their privacy rules in their community guidelines.362 Gillespie, supra note 195, at 52. For example, according to Facebook’s community guidelines, the platform will remove content that “shares, offers or solicits personally identifiable information or other private information that could lead to physical or financial harm . . . .”363See Privacy Violations, Facebook Community Standards, Meta, (last accessed Mar. 7, 2023) []. Thus, if John Doe had posted the private identifying information on a social media website without tokenizing it, the social media website might have removed it in response to a user report or Jane’s request.

As mentioned above, in contrast to U.S. law,364See Jones, supra note 213. in the EU citizens benefit from the “Right to be Forgotten.” The GDPR somewhat expanded this right in a specific provision titled “Right to Erasure (‘Right to be Forgotten’)”365Regulation (EU) 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), art. 17. that imposes obligations on data controllers to erase data.366Id. Thus, in the EU, citizens have a right to request the erasure of private information that is irrelevant, and are less dependent on voluntary social media community standards to remove such content.

However, when private identifying information is tokenized, it remains on the blockchain and cannot be removed. Victims thus have no relief from, or mitigation of, posts that violate their privacy, and they remain exposed to doxing, violence, economic harm, and emotional distress.

b. Sexual Privacy and Intimacy

Yet another problem could arise in the area of sexual privacy and intimacy. John Doe could tokenize a nude photo or video of Jane Doe without her consent and distribute it on the blockchain. The practice of distributing nude pictures without consent has been dubbed “revenge porn.”367See Citron, The Fight For Privacy, supra note 67, at 105–30 (expanding on the right to intimate privacy); Danielle Keats Citron, Sexual Privacy, 128 Yale L.J. 1870, 1918 (2019); Mary Anne Franks, “Revenge Porn” Reform: A View from the Front Lines, 69 Fla. L. Rev. 1251, 1261–70 (2017); Zak Franklin, Justice for Revenge Porn Victims: Legal Theories to Overcome Claims of Civil Immunity by Operators of Revenge Porn Websites, 102 Calif. L. Rev. 1303 (2014). Beyond the violation of privacy caused by revealing private information, nonconsensual “revenge porn” also violates the victim’s intimate privacy, and particularly their sexual privacy.368See Citron, Sexual Privacy, supra note 367, at 1918; Danielle Keats Citron, Privacy Injunctions, 71 Emory L.J. 955 (2022) (explaining in detail how nonconsensual dissemination of nude photos violates the autonomy of victims, leads to distrust in others, makes it difficult for victims to form intimate relationships, deprives victims of social and economic opportunities, causes emotional harm and distress, and infringes on victims’ identity).

U.S. law currently lacks the tools to address some of today’s sexual privacy violations.369Citron, Sexual Privacy, supra note 367, at 1929. If the victim wants to sue the person who posted their nude photos or videos in a civil court, they will need to invest significant resources. Moreover, victims are sometimes reluctant to sue, fearing that the case will cause the perpetrator to upload even more photos or videos and attract more unwanted attention. Many states have passed laws making nonconsensual porn a crime.370See Richards, supra note 61, at 74 (2021). Criminal proceedings are possible, but they require law enforcement to invest resources and time, which only happens when the case is very clear-cut and the evidence is strong.

If the nude photos or videos are uploaded to social media platforms, these platforms can somewhat mitigate the harm by taking them down and blocking the IP address of the publisher or disseminator.371 Citron, The Fight For Privacy, supra note 67, at 88 (“…Grindr [is] in the best position to minimize the harm by fixing its software so that the app could block IP addresses …”). Indeed, as explained previously,372See supra Part II.C. Section 230(c)(1) exempts online service providers, including website operators, from primary and secondary liability for content published by others. However, websites are still exposed to federal intellectual property claims.373Citron, Sexual Privacy, supra note 367, at 1935. Therefore, “copyright law can provide an effective tool in sexual privacy cases involving the distribution of intimate images created by victims.”374Id. As a result, when victims submit a request, social media platforms are likely to promptly remove images or videos, as the alternative could be facing pecuniary damages under the Digital Millennium Copyright Act (DMCA).375Id.; Digital Millennium Copyright Act, 17 U.S.C. § 512. Moreover, many platforms voluntarily forbid pornography or nudity in their policies.376 Gillespie, supra note 195, at 52–54. For example, Facebook’s terms of service have prohibited the upload of nonconsensual sexual content to the platform since 2014.377Citron, Sexual Privacy, supra note 367, at 1955. Platforms that use embedded link technology, such as Facebook, can remove all the shares of such photos when the original post is removed.378Lavi, Publish, supra note 65, at 511–12.

Intermediaries can further mitigate harm by using learning algorithms to remove images that are identical, or even potentially similar, to content that has previously been recognized as offensive, and thus remove all replications of such content. Since revenge porn is based on images rather than text, technology identifies fewer “false positives” in this context.379 Daphne Keller, Stanford Center For Internet And Society, Dolphins in the Net: Internet Content Filters and the Advocate General’s Glawischnig-Piesczek v. Facebook Ireland Opinion 18–19 (2019); Lavi, Publish, supra note 65, at 478. Although intermediaries can and do remove revenge porn images, abusers can and sometimes do repost them. Technology, however, has a solution for this problem as well – using hash techniques to prevent the cycle of reposting. Thus, following user reports, if a social media platform determines that an image violates its terms of service, it assigns the image a unique digital fingerprint – a hash.380Citron, Sexual Privacy, supra note 367, at 1955 (describing hashing as “a mathematical operation that takes a long stream of data of arbitrary length, like a video clip or string of DNA, and assigns it a specific value of a fixed length, known as a hash. The same files or DNA strings will be given the same hash, allowing computers to quickly and easily spot duplicates.”). Such technology can block these images from reappearing on any of its platforms, and holds great promise for mitigating the harm caused to victims of revenge porn.381Id. at 1956.
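The hash-based re-upload blocking described above can be sketched in a few lines of code. This is a simplified, hypothetical illustration only: it uses a cryptographic hash (SHA-256), which catches exact copies, whereas production systems use perceptual hashing so that resized or slightly edited copies also match.

```python
import hashlib

# Set of fingerprints of images confirmed to violate policy.
blocklist: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    # SHA-256 maps a file of any length to a fixed-length value;
    # identical files always receive the identical fingerprint.
    return hashlib.sha256(image_bytes).hexdigest()

def report_and_block(image_bytes: bytes) -> None:
    # Called after moderators confirm the image violates policy.
    blocklist.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    # Reject any upload whose fingerprint is already blocklisted.
    return fingerprint(image_bytes) not in blocklist

original = b"...offending image bytes (placeholder)..."
report_and_block(original)
assert not allow_upload(original)        # an exact repost is blocked
assert allow_upload(original + b"\x00")  # one changed byte evades SHA-256
```

The final line shows the limitation that motivates perceptual hashing: changing a single byte of a file yields an entirely different cryptographic fingerprint.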

However, when a nonconsensual pornographic image or video is tokenized, most of these technological mitigation efforts are rendered useless. The tokenized image remains on the blockchain and cannot be removed. Because the original post cannot be removed, embedded link technology cannot propagate its removal to replications. Algorithmic techniques might allow removal of replications, but only if the replications themselves are not tokenized and the social media platform is not blockchain-based. Consequently, the original token remains on the platform, and in the case of permissionless blockchain-based social media, equivalent NFTs can be created even if hash technology has been implemented. Thus, when NFTs are involved, victims of tokenized revenge porn are left without any meaningful legal recourse or practical tools to combat the wrongdoing, as the content cannot be removed from the blockchain and continues to inflict harm.

4. Extremist Speech that Encourages Physical Violence, Harm to Autonomy and Democracy

a. Hate Speech

Now assume Jane Doe is an alt-right white supremacist who wants to spread hate speech. Using NFTs, Jane can tokenize hate speech382See Hate Speech, Cambridge Dictionary, [] (last visited Jan. 15, 2023) (“public speech that expresses hate or encourages violence towards a person or group based on something such as race, religion, sex, or sexual orientation (the fact of being gay, etc.)”). For more attempts to define hate speech by academics and legal frameworks, see generally Andrew F. Sellars, Defining Hate Speech 15–20 (2016), []. against African Americans, Hispanics, Arabs, Jews, or other groups, insulting their ethnic backgrounds and calling for them to leave her country.383Such racist nationalist calls are common online and offline. See, e.g., Jason Murdock, West Virginia Woman Tells Mexican Restaurant Manager: ‘Get the F*** out of My Country. . . You Need to Speak English, Newsweek (Feb. 20, 2019), []. Hate speech can affect all aspects of targeted groups’ lives and infringe on their autonomy and sense of safety. Expressions such as these can promote stereotypes that then become part of the culture. They cause economic harm by promoting discrimination against members of a particular group, depriving them of opportunities for housing, employment, and services.384See Kathleen Mahoney, Hate Speech Equality and the State of Canada Law, 44 Wake Forest L. Rev. 321, 326 (2009). By denying such opportunities, hate speech can curtail the group’s participation in democracy and in democratic culture. Finally, hate speech can result in violence and even genocide.385Id. For example, in 2018, the military in Myanmar (Burma) launched a hate speech campaign on Facebook against the Rohingya minority, inciting violence against them.
This online incitement led to physical attacks that took the lives of thousands of people, including 730 children under the age of five, while 700,000 other Rohingya fled to Bangladesh.386 Facebook Admits It Was Used to ‘Incite Offline Violence’ in Myanmar, BBC News (Nov. 6, 2018), [].

Under Section 230(c)(1), online service providers, including website operators, are immune from primary and secondary liability for content published by others.387See supra Part II.B. Moreover, U.S. law does not ban hate speech as long as the speech does not specifically incite imminent violence or constitute a true threat of harm.388See Evelyn Mary Aswad, The Future of Freedom of Expression Online, 17 Duke L. & Tech. Rev. 1, 43 (2018). However, hate speech is punishable in many countries outside the U.S.389These countries include France, Germany, and Canada. See, e.g., Rotem Medzini & Tehilla Shwartz Altshuler, Dealing with Hate Speech on Social Media: Towards an Interoperable Model for Addressing Racism and Strengthening Democratic Legitimacy, in Reducing Online Hate Speech: Recommendations for Social Media Companies and Internet Intermediaries 25, 44–45 (Yuval Shany ed., 2020). In Europe, for example, the European Court of Human Rights has often upheld bans on hateful speech.390Id.

Although the protection U.S. law provides against hate speech is very narrow, social media companies have voluntarily adopted policies to reduce hate speech on their platforms. “All social media platforms prohibit hate speech; the few that don’t mention it in a specific rule include it under a broader category like ‘personal attacks.’”391 Gillespie, supra note 195, at 58. For example, Facebook’s community standards forbid attacks based on “race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition.”392Hate Speech, Facebook, (last visited Mar. 30, 2023) []. Twitter’s hate speech prohibitions “appear under the ‘hateful conduct’ and ‘hateful imagery and display names’ headings of its speech code.”393Aswad, supra note 388, at 45; The Twitter Rules, Twitter, []. With respect to hateful conduct, users may not promote “violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”394The Twitter Rules, supra note 393. Platforms can and do remove hateful content when it is reported. Moreover, some platforms use artificial intelligence to detect hate speech and flag it for human review or removal.395 Gillespie, supra note 195, at 97 (explaining that most types of hate speech are removed, except for holocaust denial, which is blocked only in some countries). Another tool that platforms use to mitigate the harm of hate speech is to make it less visible using algorithmic tools that hide posts or tweets.396Aswad, supra note 388, at 46 (focusing on Twitter). Moreover, platforms learn from past incidents of harmful speech and improve their moderation going forward.
For example, following the mass atrocities against the Rohingya minority, Facebook created a Strategic Response team to tackle escalation to violence in conflict areas; “[t]he team has been described as the ‘latest evolution in the Silicon Valley’s culture: less ‘move fast and break things,’ and more thinking through the harm they are adding to half a world away.’”397Chachko, supra note 224, at 80.

The permanency of NFTs, however, is a major problem where hate speech is concerned. As the expressions cannot be removed, they are likely to continue to spread, exacerbate hate and infringe on the autonomy of minorities and entire races without any mitigation. This could give rise to a risk of perpetuating racism, discrimination, or distorted values. Consequently, the autonomy and dignity of such groups could be diminished. Moreover, such groups could suffer from stigma, economic harm and loss of opportunities, and their right to participate in democracy could be curtailed. Even worse, when hate speech is tokenized and cannot be removed from the blockchain, incitement to violence triggered by such expressions could intensify. Ultimately, this could expose groups to ongoing suffering, violence, physical harm, and even death.

b. Incitement to Terrorism

Yet another potential detrimental use of NFTs is incitement to terrorism. We already know that terrorists sometimes use anonymous cryptocurrencies to fund attacks.398Jabotinsky & Lavi, supra note 17, at 526. Now any aspiring terrorist can incite blockchain users to commit acts of terrorism. Suppose, for example, that Jane Doe is now a member of an alt-right group. Imagine that she has managed to tokenize a video of a terrorist attack, or even a massacre, filmed in real time.399Gamifying terror attacks is already something that occurs in real life. See, for example, the Christchurch mosque attacks. Associated Press, New Zealand Shooting: More than 200 Users Watched Live Stream Video of Christchurch Mosque Attacks, but Nobody Reported it, Says Facebook, S. China Morning Post (Mar. 20, 2019, 10:07 AM), []. On the role of gamifying terror attacks in radicalization, see Robert Evans, The El Paso Shooting and the Gamification of Terror, Bellingcat (Aug. 4, 2019), [] (referring to the live-streaming of the Christchurch attack, “Brenton Tarrant livestreamed his massacre from a helmet cam in a way that made the shooting look almost exactly like a First Person Shooter video game. This was a conscious choice, as was his decision to pick a sound-track for the spree that would entertain and inspire his viewers.”). Gamifying terror attacks is a common practice among white supremacist terrorists. See also Lizzie Dearden, Germany Synagogue Shooting: Suspect ‘Broadcast Attack Livestream on Twitch’ and Ranted about Holocaust, Jews and Immigration, Independent (Oct. 9, 2019, 6:12 PM), []. To this token, Jane has attached a manifesto inciting further terrorist attacks. In recent years, terrorist organizations have exploited social media to spread fear and disseminate propaganda that reaches potential recruits, incites sympathizers, and inspires people inclined to radicalization to take part in organizing and committing deadly attacks.
The terrorist attack in El Paso, Texas serves as an example of the grave consequences of online incitement. The perpetrator of the El Paso attack, Patrick Crusius, announced the start of his rampage on 8chan, a racist alt-right message board,4008chan is an image board site popular with extremists. See Ian Sherr & Daniel Van Boom, 8chan Is Struggling to Stay Online After El Paso Massacre, CNET (Aug. 7, 2019, 6:21 AM), []. through a post that included a four-page manifesto.401See Tim Arango, Nicholas Bogel-Burroughs & Katie Benner, Minutes Before El Paso Killing, Hate-Filled Manifesto Appears Online, N.Y. Times (Aug. 3, 2019), []. “The manifesto and posts on 8chan demonstrate Crusius’s radicalization and turn towards white supremacy.”402Lavi, Publish, supra note 65, at 483. Bellingcat, an investigative journalism website, reviewed and analyzed the 8chan posts and concluded that Crusius drew profound inspiration from an earlier manifesto by a Christchurch, New Zealand gunman and the video of his attack.403See id. at 483 (referring to Evans, supra note 399).

As discussed previously, in the U.S., website operators are immune from primary and secondary liability for content published by others.404See supra Part II-C. With respect to the person who incites terrorist attacks, certain forms of incitement are not protected by the First Amendment. These First Amendment exceptions include promotion of “imminent lawless action,” “true intentional threats against individuals or groups, or posts that seek to cooperate, legitimize, recruit, coordinate, or indoctrinate on behalf of groups listed on the State Department’s list of designated terrorist organizations.”405Lavi, Publish, supra note 65, at 529. Thus, on traditional social media, many expressions of incitement can be regulated by the government. Outside of the U.S., in many other countries, online platforms do not benefit from an overall immunity and are exposed to litigation for failing to remove posts that incite terrorism.406See id. at 506–07 (reviewing laws that impose obligations on intermediaries regarding inciting content); see, e.g., the U.K. Terrorism Act 2006, c. 11 (Eng.). Under this law, platforms have only two days to comply with a takedown request; otherwise, they are deemed to have “endorsed” the terrorist content. In addition, the Counter-Terrorism and Border Security Act of 2019 updates terrorism offences for the digital age and grants the authorities more power to tackle the threat posed to the U.K. by terrorism. See also Eur. Comm’n, Press Release, State of the Union 2018: Commission Proposes New Rules to Get Terrorist Content Off the Web, European Commission (Sept. 12, 2018), [] (describing the European Commission proposal that would require hosting service providers to remove terrorist content online upon an order); Gillespie, supra note 195, at 37.

Despite platforms’ broad immunity from material support claims, social media websites prohibit content that incites terrorism and remove content under their own policies out of an abundance of caution.407Chachko, supra note 224, at 45. For example, Facebook bans such content in its community standards, under the general subtitle “violence and incitement[,]”408Facebook Community Standards: Violence and Incitement, Facebook, (last visited Mar. 30, 2023) []. and under the subtitle “dangerous individuals and organizations.” Facebook also removes such content when it appears on the platform.409Dangerous Individuals and Organizations, Meta, (last visited Mar. 30, 2023) [] (“In an effort to prevent and disrupt real-world harm, we do not allow any organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook. This includes organizations or individuals involved in the following: Terrorist activity… We also remove content that expresses support or praise for groups, leaders, or individuals involved in these activities. Learn more about our work to fight terrorism online here…We do not allow the following people (living or deceased) or groups to maintain a presence (for example, have an account, Page, Group) on our platform: Terrorist organizations and terrorists…”).

Beyond removal of such content due to users’ reports or takedown requests, intermediaries also take proactive steps to detect such content and prevent it from being uploaded again. These efforts are voluntary or preventive, taken due to concerns about potential legal repercussions.410See Lavi, Publish, supra note 65, at 508. For example, technology companies are currently cooperating to establish a database to detect banned violent content. This database would include unique digital fingerprints of banned content, and files could be flagged and removed instantly.411Id. (referring to Citron, Extremist Speech, supra note 285, at 1043–45). Some technology companies have issued guidelines limiting use of such a database to only the most extreme terrorist images that violate the content policies of all companies. Regardless of the database’s final characteristics, these are important steps to create an industry database “to help prevent the spread of violent terrorist imagery.”412See id. at 1045 (explaining that according to the guidelines the removal of hashed material would not be automatic but rather subjected to a review by the tech company according to its own specific policies). Another developing tool to mitigate terrorist content is the use of AI to detect such content.413See Lavi, Publish, supra note 65, at 568–69. These measures can help mitigate the dissemination of content that incites terrorism. Thus, extremist content would be reduced, and fewer people would draw inspiration to commit terrorist attacks from live-streaming or online manifestos.
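The shared fingerprint database described above can be illustrated with a short sketch. This is a simplified model under our own assumptions — exact-match cryptographic fingerprints and hypothetical class and method names — not any company’s actual system, which may rely on perceptual hashing and human review instead.

```python
import hashlib


class SharedHashDatabase:
    """Hypothetical industry-shared database of fingerprints of banned content."""

    def __init__(self):
        self._banned_hashes = set()

    def add_banned(self, content: bytes) -> str:
        """Fingerprint a banned file and record it for all participating platforms."""
        digest = hashlib.sha256(content).hexdigest()
        self._banned_hashes.add(digest)
        return digest

    def is_banned(self, content: bytes) -> bool:
        """Exact-match check: any re-upload of identical bytes is flagged."""
        return hashlib.sha256(content).hexdigest() in self._banned_hashes


db = SharedHashDatabase()
db.add_banned(b"<banned terrorist propaganda file>")
assert db.is_banned(b"<banned terrorist propaganda file>")  # identical re-upload is caught
assert not db.is_banned(b"<unrelated lawful content>")      # other content passes
```

Because a cryptographic hash changes completely if even one byte of the file changes, an exact-match database of this kind catches only identical re-uploads — one reason the guidelines described above subject flagged material to company-specific review rather than automatic removal.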

However, when terrorist content is tokenized, the situation is entirely different, as the content cannot be deleted or altered. This means that inciting content could potentially reach larger audiences and goad them into committing acts of violence. Being exposed to such content might even lead individuals who might not otherwise conceive of committing a terrorist attack to adopt extremist views and act on them.

In summary, when speech is tokenized, there is no Right to be Forgotten and no escape. As the innovation of NFTs is relatively new, a systematic understanding of how they can be abused to aggravate the implications of harmful speech has not yet been developed. The above section endeavored to provide such a systematic analysis, explaining why existing online content removal methods are ill-equipped to confront the challenges of the blockchain.

IV. Towards a Comprehensive Framework of Solutions and Remedies for Abusive Tokenized Speech

Abusive NFTs are created, sometimes even without thinking of the consequences,414For example, Lithuanian designer Erikas Mališauskas sold a non-fungible token (NFT) of a digital collage made from homophobic online comments for 00 with the proceeds going to Lithuanian LGBTQ+ charities. Mališauskas had good intentions, but by creating this NFT he in fact perpetuated hate speech. See Ian Smith, Lithuanian Designer Turns Homophobic Hate Into NFT and Sells It for 00, GCN (May 6, 2021), and the consequences would become more severe with the advance of Web 3.0 blockchain-based social media and the development of the Metaverse. The industry is taking first steps to respond. Thus, Opensea, the main market for trading tokens, is making efforts to delist abusive tokens from its listings. For example, it recently delisted NFTs of cartoon characters with allegedly right-wing interpretations, making it difficult to trade them on this specific marketplace, although the creator can seek other markets.415See Lachlan Keller, Does Content Moderation on Platforms Like Opensea Amount to Censorship? Forkast (Dec. 21, 2021), The industry is only starting to realize the problem. The law, however, lags further behind. To date, there is no comprehensive framework of legal and technological solutions for abusive NFT use. This part proposes mechanisms to bridge this gap, and endeavors to provide a comprehensive framework for mitigating the harm caused by such abuse.

This part also addresses free speech concerns raised by the framework, since the First Amendment protects freedom of speech against government censorship, including even raw data and source code.

A. Safety-by-Design – Ex Ante Prevention

In recent years, technological solutions have been developed and deployed to prevent harm inflicted by the free flow of information. The way online platforms are designed can help prevent and mitigate harm, as well as shape attitudes towards violations of law and norms ex ante.416See Daniel Solove, The Digital Person: Technology and Privacy in the Information Age 100 (2004); see also Lavi, Publish, supra note 65, at 493. There are currently ongoing debates regarding the morality and ethics behind coding of online platforms. Decisions made by engineers can unleash new technology, which may affect fundamental rights such as free speech and the right to privacy.417See Lavi, Publish, supra note 65, at 494. Literature on the influence of technology and its potential to protect privacy is already gaining momentum.418See, e.g., Chris Jay Hoofnagle, Fed. Trade Comm’n Privacy Law & Policy 190–91 (2016); Kenneth A. Bamberger & Deirdre K. Mulligan, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe 32, 178 (2015); Ann Cavoukian, Privacy by Design: The 7 Foundational Principles 1, 2 (2009); Ira S. Rubinstein, Regulating Privacy by Design, 26 Berkeley Tech. L.J. 1409, 1420–21 (2011); Frederic Stutzman & Woodrow Hartzog, Obscurity by Design, 88 Wash. L. Rev. 385, 395, 402–417 (2013). The concept of privacy by design advocates regulation of technological design ex ante, rather than providing ex post remedies to victims of dissemination of harmful information.419Hoofnagle, supra note 418; Bamberger & Mulligan, supra note 418; Cavoukian, supra note 418. Researchers have described how privacy-protective features can be designed to play a central role in functionality and address threats to privacy.420See, e.g., Rubinstein, supra note 418.

Regulators around the world have discovered the benefits of privacy by design and set forth guidelines that include privacy by design, alongside efforts to incentivize stakeholders to adopt this approach within their business models.421See, e.g., Fed. Trade Comm’n, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers (2012), []; Hoofnagle, supra note 418, at 191 (“The FTC is embracing [privacy by design].”); see also Communication from the Commission on a Comprehensive Approach on Personal Data Protection in the European Union 12, Eur. Parl. Doc. (COM) (2010) 609 final, []. A central example is Article 25 of the EU GDPR, which addresses “data protection by design and by default,”422Regulation 2016/679, of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and of the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), art. 25, 2016 O.J. (L 119) 48 (EU). building systems meant to protect and advance privacy starting at the beginning of the design process.423See Lilian Edwards & Michael Veale, Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For, 16 Duke L. & Tech. Rev. 18, 77 (2017) (explaining that by doing so, it recognizes that “a regulator cannot do everything by top down control, but . . . controllers must themselves be involved in the design” of systems that minimize invasion of privacy). Accordingly, at the stage of system development, controllers must implement “appropriate technical and organizational measures”424GDPR Recital 78 Regulation 2016/679, of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and of the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. 
(L 119) 15 (EU) (“The protection of the rights and freedoms of natural persons with regard to the processing of personal data require that appropriate technical and organisational measures be taken to ensure that the requirements of this Regulation are met. In order to be able to demonstrate compliance with this Regulation, the controller should adopt internal policies and implement measures which meet in particular the principles of data protection by design and data protection by default.”). in order to protect data subjects’ rights. “Data protection by default” is required to ensure that data that is unnecessary for processing is not collected.425Edwards & Veale, supra note 423 (quoting GDPR, art. 25).

The principle of privacy by design can be applied to NFTs. Learning from concepts rooted in privacy protection, “safety-by-design” can provide ex ante protection against the harm of abusive tokenization, promote privacy, and mitigate that harm.

1. Minimizing What is Stored on the Blockchain

As reiterated above, tokenized information cannot be altered or deleted and remains on the blockchain for all to see. However, the expression itself does not have to be stored on the blockchain. Instead, the actual expression could be stored off-chain while the token would only store hashes, which are considered pseudonymized data.426Hassan et al., supra note 14, at 17; Aurelie Bayle, David Manset, Octavio Perez Kempner & Mirko Koscina, When Blockchain Meets the Right to be Forgotten: Technology Versus Law in the Healthcare Industry, IEEE/WIC/ACM International Conference on Web Intelligence 788, 789 (2018); see, e.g., Fairfield, supra note 8, at 1272–73 (“Often, an NFT stands for ownership of something not directly stored on the blockchain—a piece of digital art, for example. The token contains a pointer to find the digital art file, and a hash of the file as proof. So a token representing digital art might contain a URL pointing to the art and a hash of the art file. In this way, an NFT might convey an ownership interest in a piece of digital art, an asset in an online game, a card in a collectible trading card game (think rare baseball cards here), or a plot of land in a virtual world.”). In other words, the digital item itself would be stored in a centralized server, while the token would merely keep a record of who owns that item and a pointer that indicates that the token is tied to a specific digital item.427Fairfield, supra note 8, at 43. Focusing on the design of the tokenization process of marketplaces would allow protection against harmful speech. Storing the information off-chain would allow deletion of the information. As the information is off-chain, it would not be eternalized. Deletion of the information would be possible by “deleting the off-chain stored record while keeping the hash of it intact on blockchain.”428Hassan et al., supra note 14, at 17. 
If the data is stored off-chain, then a data subject could request erasure of the data or file a lawsuit. To delete the information, all one would have to do is “erase any off-chain data that could be used to identify the subject, if linked to the on-chain hash.”429Rose, supra note 341, at 41.
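The off-chain design described above — an on-chain record holding only a hash and a pointer, with the expression itself on a deletable centralized server — can be sketched in a few lines. The names (`OffChainStore`, `mint_token`) are hypothetical illustrations, and a real marketplace’s minting pipeline would of course differ.

```python
import hashlib


class OffChainStore:
    """Hypothetical centralized server holding the actual digital items."""

    def __init__(self):
        self._items = {}  # item_id -> content bytes

    def put(self, item_id: str, content: bytes):
        self._items[item_id] = content

    def get(self, item_id: str):
        return self._items.get(item_id)  # returns None once erased

    def erase(self, item_id: str):
        # Honors an erasure request or court order; the chain itself is untouched.
        self._items.pop(item_id, None)


def mint_token(store: OffChainStore, item_id: str, content: bytes, owner: str) -> dict:
    """Store the expression off-chain; only a pointer and a hash go 'on-chain'."""
    store.put(item_id, content)
    return {"owner": owner, "pointer": item_id,
            "hash": hashlib.sha256(content).hexdigest()}


store = OffChainStore()
token = mint_token(store, "item-1", b"some tokenized expression", owner="0xABC")
assert store.get(token["pointer"]) == b"some tokenized expression"

store.erase(token["pointer"])                # "Right to be Forgotten"-style deletion
assert store.get(token["pointer"]) is None   # the token now points to nothing
assert token["hash"]                         # the immutable hash record remains intact
```

After `erase`, the on-chain record (here, the `token` dictionary) survives unchanged, but its pointer leads nowhere — which is precisely the property that makes erasure requests and takedown orders technically satisfiable.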

This solution offers the best possibility of mitigating the damage of abusive NFTs. However, it also has shortcomings which should be considered. First, storing the data off-chain might eliminate some of the benefits of using the blockchain.430Mirchandani, supra note 337, at 1229. Storing the data on an off-chain database would separate the hash that is stored on the blockchain from the original data. When a person exercises the “Right to be Forgotten” (in the EU), or when a judicial decision orders removal of the information, the hash and corresponding personal data in the off-chain database would be destroyed.431Id. Such a design would impede the benefits of immutability and integrity of information and accuracy that can be achieved through storage on the blockchain, and would contradict the very rationale of storing information on the blockchain in the first place.432Id. One could, however, argue that deletion should not be possible in every circumstance, but rather should require a judicial decision declaring that the abusive NFT violates the law, and a judicial order to remove it. Requiring a judicial decision could restrict this technical solution’s potential to lead to censorship.

Another shortcoming is the problem of cybersecurity and data protection. Using a third-party centralized entity to store the data could result in loss of the security achieved when information is stored on a decentralized blockchain.433Id. at 1230 (“By storing data subjects’ personal data in the off-chain database, the database still has the same cybersecurity issues as a regular database”). Storing information outside the blockchain might also interfere with the transparency achieved when information is stored on the blockchain, since off-chain storage would make it impossible to know who is accessing the database off-chain.434Id. Moreover, separating pointers from information would increase the risk of errors, cyberattacks, and data breaches.435Id. There is also no guarantee that the company that stores the data will survive – should such a company cease to exist, the result might be a loss of data. Indeed, storing the information off-chain would compromise some of the benefits of storage on the blockchain and would raise cybersecurity and data protection concerns.436See Fairfield, supra note 8, at 1283 (“When a digital item is stored on-chain, the art itself is hashed directly into the token. On-chain storage allows for the item to continue existing even if the original company hosting the item on its servers no longer exists. On-chain storage thus provides greater security for a purchaser because the value of the NFT is no longer tied to the continued existence of any one particular server, or company”). Such concerns could, however, be partially mitigated by investing in securing the systems, and obligating marketplaces to use only storage companies that comply with adequate cybersecurity standards and guarantees regarding the company’s stability.

Alternatively, policy makers could create incentives to use only trusted data storage companies. For instance, fiduciary intermediaries could grant a stamp of approval (such as a trust mark) to marketplaces that use data storage companies that comply with adequate security standards and guarantees regarding company stability.437See Lavi, Publish, supra note 65, at 503 (discussing trust marks in a related context). Potential owners that consider tokenizing speech or other items would consider whether the marketplaces have a trust mark, and would prefer to tokenize expressions with a reputable company, to make sure that the information the token points to is less vulnerable to breaches. Such marketplaces would be superior to marketplaces that do not comply with adequate security standards, thus allowing them to gain a competitive advantage.

Finally, storing off-chain draws criticism with respect to notions of property rights and ownership of NFTs that do not store information directly on the blockchain. In such cases, owners have less control over the context in which their own property is used and less control over the property itself.438Fairfield, supra note 8, at 1283–84. Such a method of storage conflicts with traditional notions of ownership and could lower the value of such tokens. Yet in any case, NFTs have been criticized for upending notions of ownership and uniqueness, even when they are stored on-chain.439Frye, supra note 84, at 4 (criticizing the concept of NFTs). Thus, one might say that off-chain storage does not change much. Moreover, it is likely that over time, new social norms and conceptions will come into being to address new types of digital property,440New norms regarding digital property could focus on gaining social recognition rather than control. For expansion, see Frye, After Copyright, supra note 49. even if these new norms might challenge conventional notions of property. Moreover, one might argue that off-chain data storage is necessary anyway in cases where the transaction data exceeds “the current storage limitations of the Ethereum blockchain.”441Fairfield, supra note 8, at 1284 n.104 (referring to Devin Finzer, The Non-Fungible Token Bible: Everything You Need to Know About NFTs, Opensea (Jan. 10, 2020),

This design would enable deletion of the information on the central server, leaving the NFT with a token that leads nowhere. Deletion would only occur following a court order, when there is prima facie evidence that the speech the token points to is unprotected under the First Amendment, or upon request by the token owner themselves. Since deletion of information would be subject to a court order, it is likely that only the worst types of speech that clearly violate laws would be deleted. Thus, such a solution would enable mitigation of the harm of abusive tokens by allowing the technical possibility to delete the information that is stored outside the blockchain. However, as explained, deletion would be limited to rare circumstances and truly abusive NFTs, thus avoiding the extensive continuing moderation, and even the public–private censorship, in which traditional intermediaries engage.442On private–public censorship by traditional intermediaries, see discussion supra Part II.

One might argue that this proposal infringes on the marketplace’s right to free speech, and its right to conduct business, because marketplaces would be forced to change their technological code, and code is considered speech. However, the proposed framework does not specifically forbid storing information directly on the token. Marketplaces can choose between minimization of what is stored, prescreening, or any other method that would prevent the perpetuation of abusive speech on the blockchain. We believe that many marketplaces might prefer to adopt this proposal even without specific regulation, out of a sense of corporate responsibility,443See Danielle Keats Citron & Helen Norton, Intermediaries and Hate Speech: Fostering Digital Citizenship for Our Information Age, 91 B.U. L. Rev. 1435, 1456 (2011) (explaining that voluntary regulation can be justified by doctrines of corporate law that allow the managers of corporation to consider public interests); Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1627 (2018). and sensitivity to the potential harm that could be caused by permanent expressions in the absence of a Right to be Forgotten. Another reason marketplaces might elect to adopt this proposal might be a desire to distance themselves from murky legal areas and minimize the potential for legal claims such as fines for non-compliance with the provisions of the GDPR, or liability for their role as marketplaces, as outlined below in this Part.

One might, however, argue that the very possibility of imposing liability for negligent design would violate the marketplace’s free speech right to shape its systems as it sees fit, even if they may still have a choice between several options. Yet, even if technological tools are treated as speech,444See generally Jane Bambauer, Is Data Speech?, 66 Stan. L. Rev. 67 (2014) (arguing that even raw data can enjoy First Amendment protection when it promotes the right to create knowledge). the value of such speech is not absolute. Programming a tool that shapes NFT marketplaces is not direct participation in the marketplace; rather, it is a form of market behavior that utilizes “speech.”445In a related context of algorithmic speech, see Dennis D. Hirsch, From Individual Control to Social Protection: New Paradigms for Privacy Law in the Age of Predictive Analytics, 79 Md. L. Rev. 439, 502 (2020). Due to the commercial nature of code, as part of a product, or a tool in the marketplace, possible liability for negligent design should not be subject to strict scrutiny. Rather, it should only be subject to intermediate scrutiny. Due to the importance of regulation to reduce unsafe and abusive design that could result in irreparable harm, there are substantial interests in allowing liability. This liability would be content neutral, independent of the nature of speech tokenized, and would not discriminate between viewpoints. Storing NFT information off the chain would also not interfere with the overall operation of the system. As such, it is narrowly tailored to pass intermediate scrutiny.

Although the solution of storing information off-chain has its shortcomings, we believe it is more efficient than other possible solutions that will be presented in this Article, and that the benefits outweigh the risks and costs. Many marketplaces may choose to adopt this solution voluntarily, preferring to store information off the blockchain and keep only a pointer to the NFT on-chain. This would be due to a sense of corporate responsibility446See Citron & Norton, supra note 443. and sensitivity to the possible dangers of permanent expressions without the Right to be Forgotten. Another reason why some marketplaces might choose to adopt such a solution is a desire to shy away from murky legal areas and minimize legal claims. First, as marketplaces conduct business and tokenize expressions of EU citizens, inter alia, they are often subject to the EU General Data Protection Regulation (GDPR), which regulates the controlling and processing of personal data.447See 2016 O.J. (L 119) 1, art. 17. The GDPR applies “wherever the use of personal data by a business relates to the offering of goods or services to individuals in the EU, irrespective of whether a payment is required, or monitoring of those in the EU.” Mirchandani, supra note 337, at 1219. The GDPR applies “…anytime a business, or data controller, collects personal data,”4482016 O.J. (L 119) 1, art. 4. namely data that relates to an identified or identifiable natural person. As NFTs store personal data on the blockchain, the GDPR would appear to apply. Data subjects therefore have the right to request rectification of personally identifying information,449Id. art. 16. and have a “right to erasure,”450Id. art. 17. namely they can request that their “personal data be erased from a business’ storage without undue delay.”451Mirchandani, supra note 342, at 1221. Furthermore, under the GDPR, data subjects have rights to data portability4522016 O.J. (L 119) 1, art. 20. 
and they can request that “their data be transmitted from one data controller to another, without any hindrance from the original data controller.”453Mirchandani, supra note 342, at 1222. Theoretically, the entire blockchain ecosystem is subject to the GDPR. However, the blockchain’s structure of immutability makes GDPR compliance impossible. Moreover, due to the decentralized nature of the blockchain, enforcement raises difficulties.454Rose, supra note 341, at 38 (explaining that if every node on the blockchain was classified as controller, enforcement of GDPR would be virtually impossible, due to the sheer volume of nodes in a permissionless blockchain). Marketplaces that strive to reduce the risk of fines due to GDPR violations will therefore opt to store NFT content off the blockchain.

Indeed, it might be argued that hashed data can still be considered personal data, because if an entity possesses such a hash, “it might be able to reconstruct the information and decipher what was stored on blockchain in the first place”.455See Hassan et al., supra note 14, at 17. However, it is possible to work around this problem by using “hash peppering whereby a random and secretly kept nonce is appended to the blob of data before taking its hash and storing it on blockchain.”456Id. As explained,457See discussion supra Part IV.A. this method has its shortcomings, yet it increases the likelihood that marketplaces will voluntarily comply with the GDPR.
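The “hash peppering” workaround quoted above can be sketched in a few lines. This is a minimal illustration under the assumption that the nonce (the “pepper”) is stored secretly off-chain; the function name is ours, not standard terminology from any library.

```python
import hashlib
import secrets


def peppered_hash(data: bytes) -> tuple[str, bytes]:
    """Append a random, secretly kept nonce ("pepper") to the data blob before
    hashing, so the on-chain hash alone cannot be linked back to the data."""
    pepper = secrets.token_bytes(32)  # kept secret off-chain; never published
    digest = hashlib.sha256(data + pepper).hexdigest()
    return digest, pepper


onchain_hash, secret_pepper = peppered_hash(b"personal data blob")

# Without the pepper, guessing the input and hashing it will not match:
assert hashlib.sha256(b"personal data blob").hexdigest() != onchain_hash

# With the pepper, the holder can still verify the link to the original data:
assert hashlib.sha256(b"personal data blob" + secret_pepper).hexdigest() == onchain_hash
```

Without the pepper, an adversary cannot confirm a guessed input by hashing and comparing it, which strengthens the argument that the on-chain hash is not personal data; discarding the pepper altogether severs the link entirely.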

The second murky area that might incentivize marketplaces to store information off the blockchain is the possibility of lawsuits. If courts were to treat abusive tokens which contain unprotected speech as defective products, the marketplace could be held responsible for the damage. It is true that Section 230 of the CDA provides immunity for content created by other content providers.45847 U.S.C. § 230. However, as scholars argue, the internet has developed beyond a mere communication medium for speech. Therefore, Section 230 should be redefined to meet the reality of marketplaces, which can be the cheapest cost avoiders459See Catherine M. Sharkey, Products Liability in the Digital Age: Online Platforms as “Cheapest Cost Avoiders”, 73 Hastings L.J. 1327 (2022). and should be held liable for their own conduct, or for their contributory liability.460See Agnieszka McPeak, Platform Immunity Redefined, 62 Wm. & Mary L. Rev. 1557 (2021); Gregory M. Dickinson, Rebooting Internet Immunity, 89 Geo. Wash. L. Rev. 347, 347 (2021) (referring to online marketplaces and arguing that “Where a claim is preventable other than by content moderation—for example, by redesigning an app or website—a plaintiff could freely seek relief, just as in the physical world. This approach empowers courts to identify culpable actors in the virtual world and treat like conduct alike wherever it occurs.”).

In the context of marketplace liability, different courts currently disagree about both the scope and substance of immunity. Many courts have applied immunity and dismissed a variety of claims against online marketplaces.461Dickinson, supra note 460, at 377. For example, claims of negligence in Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093, 1097–101 (9th Cir. 2019), unfair competition in Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1265–66, 1272 (D.C. Cir. 2019) and product liability in Gartner v. Amazon.com, Inc., 433 F. Supp. 3d 1034, 1045 (S.D. Tex. 2020). Yet, recently some courts have allowed such claims to progress beyond preliminary stages. For example, in Bolger v. Amazon.com, Inc.,462Bolger v. Amazon.com, LLC, 53 Cal. App. 5th 431 (Cal. App. 2020). a defective laptop battery that the plaintiff purchased from a third-party seller through Amazon marketplace exploded, causing the plaintiff physical harm. The court held that Amazon could be held strictly liable because of its role in the transaction. The court did not apply Section 230, because the claim was for product liability, not publication of third-party speech.463Id.; see McPeak supra note 460, at 1610. Similarly, in Loomis v. Amazon.com LLC,464Loomis v. Amazon.com LLC, 277 Cal.Rptr.3d 769 (Cal. App. 2021). the California Court of Appeal recently held Amazon liable for marketplace items. In this case, a hoverboard was sold by a third-party retailer via Amazon’s website. After the plaintiff plugged it in to charge the battery, a fire broke out in the plaintiff’s bedroom, causing the plaintiff burns. The court held that since Amazon is in the “vertical chain of distribution” between the seller and the buyer, it can be held strictly liable.465Eric Goldman, California Court Holds Amazon Strictly Liable for Marketplace Items Amazon Didn’t Fulfill–Loomis v. Amazon, Tech. & Mktg. Law Blog (Apr. 28, 2021), []. 
In the same way that marketplaces bear liability for the products they list on their websites, even when they are not the sellers, NFT marketplaces could bear liability for abusive tokens.

The third murky area is liability for unsafe design that encourages violations of law and harmful behavior.466See Michal Lavi, Evil Nudges, 21 Vand. J. Ent. & Tech. L. 1, 16 (2018). In Lemmon v. Snap Inc.,467Lemmon v. Snap, Inc., 995 F.3d 1085, 1088 (9th Cir. 2021). three boys lost control of their vehicle and were killed in a fatal accident. They had been speeding at 123 MPH after using Snapchat’s speed filter, a feature designed to calculate the user’s speed and display it in a photograph. The boys’ parents argued that the speed filter was an unsafe, negligently designed product, and brought a claim against Snapchat.468Id. at 1089. The U.S. Court of Appeals for the Ninth Circuit allowed the civil suit to proceed, as a negligent design claim is independent of Snapchat users’ messages.469Id. at 1092 (“That Snap allows its users to transmit user-generated content to one another does not detract from the fact that the Parents seek to hold Snap liable for its role in violating its distinct duty to design a reasonably safe product.”). Thus, the court differentiated it from claims that focus on content published by other content providers.470See id.; see Eric Goldman, The Ninth Circuit’s Confusing Ruling Over Snapchat’s Speed Filter–Lemmon v. Snap, Tech. & Mktg. L. Blog (May 12, 2021) []. On remand from the Ninth Circuit’s Section 230 denial, the Lemmon district court denied Snapchat’s motion to dismiss and decided that the plaintiffs had adequately alleged that the design of the Speed Filter itself encouraged them to engage in reckless driving.471Lemmon v. Snap, Inc., CV 19-4504-MWF, 2022 WL 1407936, *1 (C.D. Cal. March 31, 2022); In a similar case Maynard v. Snap, Inc., 2022 WL 779733 (Ga. Sup. Ct. 
March 15, 2022), the Georgia Supreme Court ruling did not even address Section 230 and concluded that “manufacturer has a statutory duty to ensure that products it sells are not defectively designed…” This case demonstrates that Section 230 cannot protect online marketplaces from all types of claims. This exposure may lead them to seek ways to reduce litigation and possible liability. In the context of NFT marketplaces, ex ante storage of information off the blockchain might help provide them the protection they seek.

2. Prescreening Tokens and Preventing NFTs Containing
Unprotected Speech from Entering the Blockchain

A second solution by design is prescreening abusive tokens that contain speech that is unprotected under the First Amendment472 Killion, supra note 315. before they enter the blockchain. Such a solution would make it difficult for abusive speech to enter the blockchain in the first place and would reduce the likelihood of perpetuating abusive speech. Unlike the previous design-based solution, this solution would not curtail the benefits of blockchain immutability and integrity. As it would enable full storage of NFTs that have not been prescreened out, it does not raise cybersecurity concerns and it conflicts less with traditional notions of property. The blockchain itself is unable to prescreen works before adding them. The blockchain can safeguard against data tampering, but it cannot guard against inaccurate or unverified data.473See Adjovu & Fabian, supra note 30. Marketplaces that allow the creation of NFTs, however, are able to do so and have the ability to prevent the creation of abusive tokens.

Despite this, prescreening has more shortcomings than the previous solution. First, it does not provide an overall solution for the problem of perpetuating speech. Once a pre-screened NFT enters the blockchain it still cannot be deleted at a later stage. Thus, this solution still does not provide a Right to be Forgotten474GDPR art. 17, Regulation 2016/679, 2016 O.J. (L 119) 1 (EU) (General Data Protection Regulation). and would therefore be incompatible with the provisions of the GDPR and unsuitable where EU citizens are involved in the transaction. Moreover, as the context changes, the meaning of an expression can change too. A statement that was true at the time of tokenizing could constitute defamation at a later stage, when time passes, or when new facts are revealed.475See Lavi, Good, Bad, Ugly, supra note 202, at 2640. Thus, such a solution could still potentially allow the perpetuation of defamatory and other unprotected speech, as the decision to allow the token to enter the blockchain is based on the context and facts known at a specific point in time.

Prescreening also curtails the benefits of the blockchain as a decentralized system: by assigning the roles of mediation and moderation to the marketplaces, it in fact creates new gatekeepers.476See Rory Van Loo, The New Gatekeepers: Private Firms as Public Enforcers, 106 Va. L. Rev. 467 (2020) (referring to the rise in regulation that gives a prominent role to the administrative state’s newest gatekeepers). Even though marketplaces are external to the blockchain system, one might argue that gatekeeping by marketplaces, making them the arbiters of what does and does not enter the chain, would conflict with the idea of crypto-democracy, which is based on the absence of intermediaries. Such a solution would replace traditional intermediaries and shift power to marketplaces, which would come to control the type of information included on the blockchain.

Beyond the conceptual conflict with the idea of crypto-democracy, assigning marketplaces the role of gatekeeper could de facto result in chilling legitimate expression and private censorship. Much as described in Part II with respect to online intermediaries, marketplaces, as private entities, are likely to prescreen protected speech in order to avoid murky legal areas and to comply with laws both within and outside the U.S.477See supra Part II.A.1. It is reasonable to assume that they would be likely to do so in cooperation with the government478See supra Part II.A.2. and to bar speech that is not compatible with their agendas or their community standards.479See supra Part II.A.3. Indeed, like intermediaries, different marketplaces would adopt different policies regarding barring expressions from being tokenized, and a degree of diversity would exist. One cannot, however, ignore the possibility that such a practice could chill legitimate protected speech, make it difficult to perpetuate it, and undermine the benefits of NFTs as the engine of speech. Moreover, even if marketplaces aim to weed out only unprotected speech, like online intermediaries they would not have the capacity to manually prescreen the high volume of tokenized content and would need to use algorithms. As explained, this method of automatic prescreening results in inaccuracies in identifying the context of messages, leads to many “false positives,” and can chill protected speech.480See supra Part II.A.4.

In short, both design-based approaches proposed above have a certain degree of potential to reduce harm caused by perpetual, abusive NFTs ex ante. Storing all information off the blockchain and using pointers, or prescreening only NFTs that contain unprotected, abusive expressions, are potential solutions to the problem of perpetual tokenized abusive expressions. Each of these possibilities has its benefits and shortcomings. We, however, believe that the solution of storing all information off the blockchain is preferable, as it would allow deletion of information at any given point in time, not only at the date of tokenization. Yet it remains compatible with the decentralized crypto-democracy idea, preserving the function of NFTs as the engine of speech.

It should be noted that, like the solution of storing information off-chain, the solution of prescreening tokens also gives rise to free speech concerns. One might argue that legislation imposing liability for a failure to prescreen abusive NFTs is not content neutral and should be subject to strict scrutiny. However, this proposal is directed only at unprotected, low-value speech that does not benefit from First Amendment protections.481See Lavi, Do Platforms Kill, supra note 195, at 528–29 (on the categories of unprotected speech); Geoffrey R. Stone, Privacy, the First Amendment, and the Internet, in The Offensive Internet: Privacy, Speech, and Reputation 174, 177 (Saul Levmore & Martha C. Nussbaum eds., 2010). Yet it could still be argued that, even so, mandating prescreening of only unprotected speech raises constitutional problems, as such algorithmic prescreening could erroneously weed out NFTs that include legitimate protected speech as well.482See Ashcroft v. Free Speech Coal., 535 U.S. 234, 237 (2002) (“The overbreadth doctrine prohibits the Government from banning unprotected speech if a substantial amount of protected speech is prohibited or chilled in the process.” (citing Broadrick v. Oklahoma, 413 U.S. 601, 612 (1973))). Algorithms are inaccurate in identifying the context of messages, resulting in “false positives,”483Id. which could infringe on the freedom of expression of NFT owners. All that may be true, but if this indeed occurs, courts would be able to instruct the marketplaces to reinstate the content as an NFT where needed. Indeed, we are well aware that the solution of prescreening could raise more First Amendment concerns than our first proposal if it is imposed on marketplaces through mandates. However, if marketplaces elect to prescreen tokens, this is their choice, and they would be responsible for conducting risk management.
This is not very different from moderation of intellectual property violations, which can also be seen as a type of speech but are legally subject to a notice-and-takedown regime.484See Digital Millennium Copyright Act, 17 U.S.C. § 512 (imposing a notice and takedown regime with respect to intermediary liability for intellectual property infringements). Nor is it different from moderation in states that do not grant immunity to intermediaries.485See, e.g., Netzwerkdurchsetzungsgesetz [NetzDG] [Network Enforcement Act], Oct. 1, 2017, § 3(2)(4) (Ger.).

Marketplaces can choose not to prescreen tokens and adopt the first proposal instead. Marketplaces that store the information elsewhere, separate from the NFT, and include only a pointer in the NFT,486Marketplaces might prefer this option over storing the information off the blockchain and using a pointer because storing on the blockchain does not require them to secure a separate database, and it allows a structure that is more in line with traditional conceptions of ownership. See discussion supra Part IV (explaining the security problems of storing information outside the blockchain and the benefits of doing so from a property law doctrine perspective); see also Mirchandani, supra note 342; Fairfield, supra note 8. would be exempt from the proposal to prescreen. Potential NFT owners could also choose between marketplaces that use different mitigation methods: prescreening or storage off the blockchain.
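To make the distinction between the two architectures concrete, the off-chain-storage proposal can be sketched in a few lines of code. The sketch below is our own simplified illustration (the store, the function names, and the use of a content hash as the on-chain pointer are all hypothetical), not the design of any particular marketplace:

```python
import hashlib

# Hypothetical sketch: the marketplace keeps the expressive content in its own
# mutable database; only a pointer (here, a content hash) is written on-chain.
OFF_CHAIN_STORE = {}   # marketplace-controlled, deletable storage
CHAIN = []             # append-only ledger holding immutable pointers only


def mint_nft(content: str) -> str:
    """Store the content off-chain and record only its hash on the chain."""
    pointer = hashlib.sha256(content.encode()).hexdigest()
    OFF_CHAIN_STORE[pointer] = content
    CHAIN.append(pointer)  # the pointer itself is perpetual
    return pointer


def resolve(pointer: str):
    """Dereference a pointer; returns None once the content is deleted."""
    return OFF_CHAIN_STORE.get(pointer)


def delete_content(pointer: str) -> None:
    """Honor, e.g., an erasure request: the chain itself is untouched."""
    OFF_CHAIN_STORE.pop(pointer, None)


p = mint_nft("allegedly defamatory statement")
assert resolve(p) is not None
delete_content(p)
assert p in CHAIN            # the token (pointer) survives on the ledger
assert resolve(p) is None    # but the expression itself is now gone
```

The design choice the sketch highlights is the one discussed in the text: deletion remains possible at any point in time, while the ledger entry, and hence proof of ownership, remains immutable.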

B. Ex Post Mitigation – Obscuring Harmful NFTs

1. “Superseding” Old, Inaccurate Data with New Data on the Blockchain

This Article focuses on technological mitigation directed towards NFT marketplaces that can store all content off the blockchain, or at least prevent the tokenization of abusive speech. Such ex ante solutions are preferable to ex post solutions. However, although we believe that ex post solutions are less efficient, they can still provide a level of mitigation when ex ante steps have not been taken. The following subsections will focus on ex post mitigation through obscuration.487Hartzog & Stutzman, supra note 418, at 388 (“[T]he concept of online “obscurity,” defined here as a context in which information is relatively difficult to find or understand, is a much more defined and attainable goal for social technology designers”).

Although tokenized information on the blockchain cannot be deleted or altered, the victim can still obscure it. In other words, they can make the information “relatively difficult to find or understand.”488Id.; see Woodrow Hartzog & Frederic Stutzman, The Case for Online Obscurity, 101 Calif. L. Rev. 1 (2013). This concept of obscuration was first defined by Hartzog and Stutzman with regard to harmful online information. The scholars understood that the risks posed by harmful information cannot always be completely eliminated; they can, however, be mitigated if the information becomes less accessible.489See Hartzog & Stutzman, supra note 418. Obscured information lacks essential factors necessary for discovery or comprehension. The literature has identified four such factors which can be obscured: (1) search visibility, (2) unprotected access (free, universal access to the information), (3) identification (the ability to connect the information to a specific person), and (4) clarity (complete information that is not vague).490Id. at 32–39. Limiting the visibility of information, restricting access, obscuring the identity of the data’s subjects, or publishing partial information can all reduce the potential harm of exposure to the information.

Obscuration can help mitigate the harm caused by information that has entered the blockchain. As data cannot be removed from the blockchain, victims of harmful data cannot request removal of their personal data, even when the blockchain is a permissioned or private blockchain, that is, one where users must register and gain permission to become part of the network.491See Melody Moh, David Nguyen, Teng Sheng Moh & Brian Khieu, Blockchain for Efficient Public Key Infrastructure and Fault-Tolerant Distributed Consensus, 79 Blockchain Cybersecurity, Tr. and Priv. 69, 83 (Kwang et al. eds., 2020); see also Horst Treiblmaier, Toward More Rigorous Blockchain Research: Recommendations for Writing Blockchain Case Studies, in Frontiers in Blockchain 2, 3 (May 2019) (explaining that a permissioned blockchain can limit who can publish on the blockchain and controls access to information, as configuration is performed by a trusted user who also controls runtime network reconfigurations). Such private blockchains are distinct from permissionless networks that are open to the public and allow anyone to participate.492See Dirk A. Zetzsche, Ross P. Buckley & Douglas W. Arner, The Distributed Liability of Distributed Ledgers: Legal Risks of Blockchain, 2018 U. Ill. L. Rev. 1361, 1372 (2018) (“[P]ermissioned systems require an organization and governance structure regulating at least who is permitted to participate and usually the basis upon which they may participate…In contrast, permissionless blockchains such as Bitcoin operate on public domain software and allow anyone who downloads and runs the software to participate.”). On both permissioned and permissionless blockchains, the information cannot be deleted or altered.

On permissioned blockchains, however, NFTs can be obscured and made less visible or accessible. This can be done by “superseding”: a process that replaces old, inaccurate data with new data.493Mirchandani, supra note 342, at 1223 (“The most that a permissioned blockchain node can do is supersede old, inaccurate data, with new data. It cannot alter or delete previously entered data. Even though old data is not physically deleted from the chain, it becomes a part of an older block and is not added upon directly and monitored by nodes.”). This process works as follows: permissioned blockchains record transactions with a timestamp.494See Randhir Kumar & Rakesh Tripathi, Secure Healthcare Framework Using Blockchain and Public Key Cryptography, 79 Blockchain Cybersecurity, Tr. and Priv. 185, 196 (2020). Every blockchain has a special algorithm that scores different versions of the history. When one of the participants on the blockchain receives a higher scoring version, “they extend their own database and transmit the improvements to their peers.”495Vikrant Lahanpal et al., SPE International, Implementing Blockchain Technology in Oil and Gas Industry: A Review 2–3 (2018). As blockchains are built to add the score of new blocks onto old blocks, older entries are likely to be superseded. In such cases, the old data physically remains on the blockchain. However, it becomes part of an older block and is “not added upon directly and monitored by nodes.”496Id. When the blockchain is permissioned, superseding abusive NFTs by adding new information to the blockchain upon the victim’s request could be an option to reduce the abusive token’s potential for harm. Although this would not eliminate the damage of harmful speech, it could, however, substantially reduce it, as the expression would remain local. Superseding the information would obscure it, reduce its visibility and accessibility, and decrease the likelihood that the information would spread to a wider audience.
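The superseding mechanism described above can be sketched abstractly. The following toy ledger is our own simplification (real permissioned chains group timestamped entries into scored blocks rather than a flat list): a newer entry about the same subject eclipses the older one at read time, while the older entry physically remains on the ledger.

```python
# Minimal sketch of "superseding" on an append-only, timestamped ledger.
# Nothing is ever altered or deleted; readers simply follow the latest entry.
LEDGER = []  # list of (timestamp, subject, data) tuples; append-only


def append(subject: str, data: str, ts: int) -> None:
    """Add a new timestamped entry; earlier entries are never touched."""
    LEDGER.append((ts, subject, data))


def latest(subject: str):
    """Return the most recent entry for a subject (newest timestamp wins)."""
    entries = [(ts, d) for ts, s, d in LEDGER if s == subject]
    return max(entries)[1] if entries else None


append("alice", "inaccurate, harmful claim", ts=1)
append("alice", "corrected record", ts=2)  # supersedes, never deletes

assert latest("alice") == "corrected record"
assert len(LEDGER) == 2  # the old data physically remains on the ledger
```

As the text notes, this obscures rather than erases: a reader who deliberately walks the full ledger can still find the old entry, but it is no longer what ordinary lookups surface.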

This solution can be likened to the practice of reputation management systems and search engine optimization, which obscure harmful information by flooding the internet with new content that downgrades the page rank of the harmful material.497See Lavi, Good, Bad, Ugly, supra note 202, at 2624. Yet this method of mitigation will only work if regulation is put in place instructing search engines to collect data only from the most recent blocks on the blockchain. Such obscuration would not raise First Amendment concerns, as superseding is the victim’s own self-help initiative and is not in any way limited by state or federal legislation.

2. Harnessing Online Intermediaries to Obscure Abusive Tokens on the Level of the World Wide Web

Right now, tokenized speech posted on the blockchain might result only in limited harm, because it is exposed only to a limited audience – blockchain users. Indeed, blockchain-based search engines are developing,498See What is a Blockchain Explorer?, CoinMarketCap, [] (“Blockchain explorers facilitate one of the most important features of blockchain technology — browsing through the records of the distributed ledger.”). yet they are directed towards blockchain users and the results do not yet appear on prominent search engines. When NFTs integrate into the internet via APIs, which allow computers and applications to communicate with one another,499On APIs, see generally Gazarov, supra note 44. they will start to appear on social media and in the search results of general search engines. This means that everyone will be able to access and see them. Abusive NFTs will become easier to search, access, and view if Web 3.0 blockchain-based social media becomes more popular. Such a situation would make it easier to share NFTs with a wide audience, allowing them to gain vast circulation and credibility.500See Whitney Phillips, The Toxins We Carry, Colum. Journalism Rev. (Fall 2019) [].

The practice of obscuring, introduced in the previous section, can be useful not only for obscuring information on the blockchain, but also for making it less visible, accessible, or searchable on the level of the World Wide Web. Such a solution would not eliminate the potential damage of harmful speech. However, it could substantially mitigate it, as the expression would remain local. Superseding the information with new information would reduce the likelihood that the older information would spread to a wider audience. Making information “relatively difficult to find or understand” in fact obscures it.501Hartzog & Stutzman, supra note 418.

Spreading information has many benefits, as explained previously.502See supra Part III. However, dissemination can also be devastating when harmful information is spread and cannot be deleted. Yet deleting content is not the only available remedy.503See Eric Goldman, Content Moderation Remedies, 28 Mich. Tech. L. Rev. 1 (2021) (introducing a toolkit of moderation that includes much more than taking down content). Just as abusive speech that has been tokenized can be obscured on the blockchain, online intermediaries can obscure abusive speech on the web, reduce its visibility, and make it harder to find in searches. There are various ways to do this. First, intermediaries can “place a ‘no-index’ tag on a page so that the page does not appear in external search indexes like Google, even though it remains fully accessible on the service.”504Id. at 32–33. This remedy can be likened to the Right to be Forgotten, which obscures access to the information.505See Finn Brunton & Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest 87 (2015) (expanding on practices for obfuscating information); Woodrow Hartzog, The Public Information Fallacy, 99 B.U. L. Rev. 459, 515 (2019) (referring to obscurity as an essential component of modern notions of privacy). Another option is to downgrade the visibility of the integrated token on the internal search results page of the website with which it was integrated, and to block autosuggestions and autocompletion of search queries containing the token’s name in that search engine.506Brunton & Nissenbaum, supra note 505, at 44. Intermediaries can also use AI algorithms to reduce the visibility of abusive NFTs on newsfeeds, or reduce their virality, thus reducing the ability to disseminate them widely.507Id. at 45.
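The visibility-downgrading option can be illustrated with a toy ranking function. This is a hypothetical sketch of the general technique, not any intermediary’s actual algorithm: flagged tokens are not removed from the index at all; they are merely sorted below every unflagged result, so they remain reachable but obscure.

```python
# Hypothetical sketch of visibility downgrading: a flagged token is neither
# deleted nor delisted; its ranking is penalized so it surfaces last.
FLAGGED = {"nft:123"}  # IDs a court decision or moderation policy marked abusive


def rank(results):
    """Order search results, pushing flagged tokens to the bottom.

    The sort key is a tuple: unflagged (False) sorts before flagged (True),
    and within each group higher relevance scores come first.
    """
    return sorted(results, key=lambda r: (r["id"] in FLAGGED, -r["score"]))


results = [
    {"id": "nft:123", "score": 0.9},  # abusive but highly relevant
    {"id": "nft:456", "score": 0.5},
]
assert [r["id"] for r in rank(results)] == ["nft:456", "nft:123"]
```

The sketch captures the legal point made above: nothing is taken down, so the chilling effect is milder than deletion, yet exposure of the abusive token is substantially reduced.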

Obscuring is not optimal, as harmful expressions remain on the blockchain. However, this practice can reduce their exposure and provide some relief.

One might argue that such obscuration could chill free speech and infringe on the token owner’s right to free expression and the public’s right to receive information. Indeed, mandating that intermediaries obscure abusive tokens on the internet raises further challenges, as Section 230 immunizes intermediaries from liability.508See 47 U.S.C. § 230(c)(1). Although obscuring is different from removing content altogether, and would lead to a milder chilling effect,509See generally supra Part II.B (on obscuring in the related context of the right to be forgotten). such proposed mandates could conflict with Section 230, since they impose liability on editorial discretion exercised by intermediaries regarding third-party content. However, we believe that the interpretation of the law should adapt to a changing reality which includes emerging technologies and their implications. In light of the unprecedented damage abusive tokens could potentially cause, a narrow interpretation of Section 230 should be adopted in order to allow the imposition of a nuanced obligation to obscure. This is a relatively moderate practice, since it does not involve deleting content altogether, but merely reducing its visibility.

It should be noted that outside the U.S. this solution could be made mandatory. Obscuration obligations already exist under current regulation in related contexts throughout the world, as in most states freedom of expression does not take precedence over the right to privacy.510Neil Richards & Woodrow Hartzog, Privacy’s Constitutional Moment and the Limits of Data Protection, 61 B.C. L. Rev. 1687, 1729–30 (2020) (“In Europe, free expression is safeguarded by Article 10 of the European Convention and Article 11 of the EU Charter. Like other European fundamental rights, these provisions are subject to proportionality analysis – where they conflict with another fundamental right such as the right to privacy or to data protection, courts must balance the rights on an equal footing. By contrast, in the United States, the fundamental right of free expression protected by the First Amendment is not subject to proportionality analysis. . . .”). In fact, a similar solution has already been applied to search engines regarding information on EU citizens.511See supra Part II(1). In the EU, citizens benefit from the “Right to be Forgotten.” Upon request, search engines such as Google are required to delist search results that link to information that is personal, inaccurate, or irrelevant, as well as excessive data found on third-party websites.512See Case C-131/12, Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos, ECLI:EU:C:2014:317 (May 13, 2014). This is a model of obscuration, since the data remains on the website of origin and is only removed from search results. Despite differences between the EU and the U.S. with respect to the status of freedom of expression, in the context of the “Right to be Forgotten,” scholarly work advocates the adoption of nuanced obscuration obligations even in the United States.513See Jones, supra note 213, at 164 (“Even in the US there are ways to make the digital age a forgiving era. But the ways must be within the boundaries of what makes Americans most free.”). Such obligations become particularly critical in the context of NFTs.

Indeed, delisting search results that link to information that is personal, inaccurate, or irrelevant can curb free speech, even though it only obscures the expression without removing it. A limited right to delist does, however, strike a proportional balance between values and can even promote speech. First, an abusive NFT could cause the victim to withdraw from conversations, and delisting it could in fact achieve a balance between the free speech of the owner and that of the victim. Second, obscuring information does not delete it. The information would remain on the blockchain, and even on the internet, and would not be removed completely. As a result, there would be less of a chill on the token owner’s speech. Finally, although NFTs do indeed contain speech, there could be some debate about whether the value of this speech should be treated as absolute. “With time, data may express fewer elements of free speech” and can be removed from the original context.514Lavi, Good, Bad, Ugly, supra note 202, at 2644. Therefore, the law should provide nuanced protection of speech which accounts for the extended period of time that may have elapsed since the token’s creation, as well as the differences between deletion and obscuration.

As explained above, since it is impossible to remove expressions once they have entered the blockchain, there is justification for a mandatory obligation to obscure directed at intermediaries. However, even if such a mandate were to be adopted, it should not be automatic, and should have limitations. Obligating intermediaries to obscure based on victim allegations of violations would result in extensive collateral censorship, as anyone might argue that an NFT violates their rights. Therefore, in order to avoid such consequences, a duty to obscure should be imposed only after a judicial decision has determined that the token does indeed violate a legal right.

We are well aware of the challenges that could be posed by mandating obscuration of NFTs that contain unprotected speech. Accordingly, we propose that if binding regulations are not promoted, intermediaries could voluntarily adopt the solution of obscuring content on their platforms. Intermediaries already apply similar voluntary measures in other contexts and adopt obscuring practices in their community standards and moderation policies. In these contexts, intermediaries use algorithms to downgrade some types of content and make them less visible.515See supra Part III; see generally Goldman, supra note 503. Intermediaries could use the same practices to voluntarily obscure abusive NFTs.

C. Prosecution or Lawsuits Against Abusive Token Owners – Remedies

The third proposed method of mitigation is direct lawsuits or prosecution against the token owner. Unlike online intermediaries and individuals who share or transfer the original token, who are immune from liability,51647 U.S.C. § 230; see Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003) (Section 230 also exempts internet users who share content published by others); see also Barrett v. Rosenthal, 146 P.3d 510, 525, 528–29 (Cal. 2006) (“[C]ongressional purpose of fostering free speech on the Internet supports the extension of section 230 immunity to active individual ‘users.’”). the victim can file a legal claim against the person who created the token.517See generally Patrick H. Hunt, Comment, Tortious Tweets: A Practical Guide to Applying Traditional Defamation Law to Twibel Claims, 73 La. L. Rev. 559 (2013) (reviewing defamation lawsuits over expressions posted on Twitter). This option is not meant to replace the technological solutions proposed above in Part IV.A. It cannot prevent the harm ex ante, as the NFT would remain on the blockchain, available for blockchain users to see. Unlike obscuring, direct legal action against the owner also would not mitigate harm. It can, however, provide some recourse against the damage caused by abusive NFTs, or deprive the owner of profit from the abusive token. Thus, it has the potential to partially rectify the injustice and allow relief. Moreover, the threat of possible damage claims or disgorgement penalties, which could strip the owners of abusive NFTs of every penny of ill-gotten gain,518See Sarah Worthington, Reconsidering Disgorgement for Wrongs, 62 Mod. L. Rev. 218 (1999). could constitute a substantial negative incentive to avoid creating and transferring abusive tokens in the first place. However, as the following subsection will explain, substantive and procedural difficulties stand in the way of prosecution or direct legal claims.

1. Difficulties and Obstacles to Direct Lawsuits

a. Harmful Expressions Do Not Necessarily Violate the Law

The first difficulty is a substantive one: in order to prosecute or file an action against the NFT’s creator, the NFT’s content has to violate the law. Thus, even if an NFT has caused tremendous harm and has violated the community standards of all mainstream platforms, the police would not interfere, and the victim would not have a direct cause of action, unless the NFT violates the law. Not all categories of behavior and damages presented in Part III violate laws, especially in the U.S., where there is a presumption against restrictions on speech. This is because freedom of speech enjoys stronger protection in the U.S. than in other democracies.519See Evelyn Douek, Governing Online Speech: From “Posts-as-Trumps” to Proportionality and Probability, 121 Colum. L. Rev. 759, 775 (2021). A legal action is therefore likely to succeed mainly with respect to categories that constitute civil wrongs or criminal offenses.

Abusing NFTs to commit defamation is unlawful, since defamation is considered unprotected speech.520See Killion, supra note 315. Therefore, victims could file an action against the person who first posted an NFT containing defamation against them. Shaming, however, is different: if the NFT contains shaming but does not reach the level of defamation, it can be posted without legal liability. This is because shaming is not considered a legal wrong, even though intermediaries often voluntarily remove non-tokenized shaming from their social networks.521See supra Part III.A.1.b. Fake news that does not reach the level of defamation is also considered protected speech.522United States v. Alvarez, 567 U.S. 709, 718–19 (2012); Louis W. Tompros, Richard A. Crudo, Alexis Pfeiffer & Rahel Boghossian, The Constitutionality of Criminalizing False Speech Made on Social Networking Sites in a Post-Alvarez, Social Media-Obsessed World, 31 Harv. J.L. & Tech. 65 (2017). For criticism, see Cass R. Sunstein, Liars: Falsehoods and Free Speech in an Age of Deception 48 (2021) (“the plurality in Alvarez was myopic in focusing largely on established categories of cases, such as defamation, in which false statements of fact can sometimes be regulated or sanctioned. In the modern era, false statements falling short of libel are causing serious problems for individuals and society; if they cause such problems, there is a legitimate argument that they should be regulable.”). Therefore, even if NGOs might have legal standing regarding an infringement of the public interest, such a lawsuit is likely to fail. Victims can file actions regarding violations of privacy that meet the requirements of Prosser’s identified privacy invasions. These violations fall under the Restatement (Second) of Torts.523See Restatement (Second) of Torts §§ 652A–652E (Am. L. Inst. 1977). Action can therefore be filed for intrusion upon seclusion or public disclosure of embarrassing private facts.
Victims can also file actions for infliction of emotional distress.524See Avlana K. Eisenberg, Criminal Infliction of Emotional Distress, 113 Mich. L. Rev. 607, 623–24 (2015). When nude photos are tokenized and disseminated without consent, victims can file actions based on privacy laws, infliction of emotional distress, or specific laws that address revenge porn.525See Nonconsensual Pornography Laws, Cyber C.R. Initiative (last visited Jan. 29, 2023) (demonstrating that, as of today, at least 48 states have revenge porn laws that make it a crime to disseminate such photos); Franks, supra note 367, at 1256; Citron, Sexual Privacy, supra note 367, at 1932. Surprisingly, speech that encourages physical violence does not always violate the law. For example, U.S. law does not ban hate speech unless it specifically incites imminent lawless action, such as incitement to terrorism, or includes a true threat of harm.526Evelyn Mary Aswad, The Future of Freedom of Expression Online, 17 Duke L. & Tech. Rev. 26, 43 (2018).

Even before the creation of NFTs, victims suffered harm from hate speech and lacked legal remedies. Yet tokenizing speech can aggravate this harm substantially, and victims remain empty-handed. The problem will get worse with the advance of Web 3.0 blockchain-based social media. In fact, once the harmful expression has entered the blockchain, the only way to mitigate harm is to try to obscure the information by adding more information in the case of a permissioned blockchain, or by harnessing online intermediaries to make it less visible.

b. The Problem of Recourse, Reluctance to File Legal Action, and Collection of Monetary Remedies

Even where a direct legal cause of action regarding the abusive token is present, filing a legal action requires significant resources. Criminal procedures require law enforcement entities to invest resources, and they do so only in very clear-cut and extreme cases.527See Citron, supra note 367, at 1930. Private civil suits require victims to invest significant resources and, in many cases, victims cannot afford to hire a lawyer.528Id. at 1929–30. Victims may be reluctant to take legal action in some cases, such as invasions of privacy and violations of a sexual nature. In these instances, victims of abusive NFTs may fear that a civil suit could attract more unwanted attention to the harmful expression and exacerbate harm. Therefore, they might elect to refrain from taking action.

Furthermore, legal proceedings can be complicated, cumbersome, and lengthy. Many requirements must be met for a court to reach a decision, and the contexts in which remedies are available can be limited,529Lavi, Good, Bad, Ugly, supra note 202, at 2622. as some causes of action require plaintiffs to prove damage, or to demonstrate evidence that the defendant has profited from the abusive speech.

c. The Problem of Anonymity and the Procedural Difficulties of Unmasking the Violator

Even if wrongdoing or criminal offenses have been committed, and even if the authorities or the victim of the abusive NFT are willing to invest the necessary resources to prosecute or file a lawsuit and embark upon a cumbersome legal process, in many cases the victim faces yet another obstacle – token user anonymity.

NFT owners do not have to identify themselves on the blockchain. The token’s anonymity makes it possible to hide and protect the identity of the chain of owners. The original creator’s identity might be known to the marketplace (usually due to regulation that requires this); however, if the owner does not identify themselves voluntarily, their identity will remain unknown to blockchain users, as will the identities of subsequent buyers.

In previous work, we expanded on the right to anonymity as a constitutional right and criticized such anonymity in the context of fungible cryptocurrencies.530Jabotinsky & Lavi, supra note 17, at 560–61. For expansion on the constitutional right to anonymity as part of free speech, see Jeff Kosseff, The United States of Anonymous 37–55 (2022); McIntyre v. Ohio Elections Comm’n, 514 U.S. 334 (1995). We proposed that corporations that issue cryptocurrencies be required to verify the identity of blockchain users.531Jabotinsky & Lavi, Out Speak, supra note 17, at 564–65. In order to balance legitimate privacy and free speech interests in maintaining anonymity against crime prevention, we proposed that unmasking blockchain users’ identities should be subject to a court warrant.532Id. at 569.

Similarly, we believe that NFT owners’ identities should be verified by marketplaces, as well as on the blockchain, and that the law should allow unmasking of NFT owners. In order to balance privacy and free speech interests against the interest in protecting against violations, unmasking should not be a routine matter. Instead, it should be subject to a judicial decision and balancing tests.533See id. at 577–79 (expanding on unmasking cryptocurrency users and the right to anonymity). Otherwise, individuals would avoid creating and transferring NFTs for the beneficial purposes described in Part I, such as artwork and collectibles. Moreover, automatic unmasking of NFT users would impair their function as “the engine of speech.” If users knew ex ante that they could be exposed to extensive government and private intervention and could be unmasked for tokenizing speech that is protected under the First Amendment, they would be disincentivized from using NFTs to perpetuate their speech. Consequently, they might stop using NFTs to tokenize political speech, words of protest, criticism of governmental institutions, or whistleblower leaks beyond the reach of censorship by governments or private intermediaries. Without limitations on unmasking, such as subjecting it to a judicial decision, the innovation achieved by NFTs would grind to a halt, and society would be deprived of its benefits.534Id. at 582 (referring to the related context of unmasking cryptocurrency users and explaining that, without limitations, unmasking obligations could hinder innovation: “such obligations might impair user trust in the system and hinder innovation.”).

In the related context of using fungible cryptocurrencies to fund illicit activities, we proposed that unmasking users of fungible tokens should be subject to a warrant.535The conclusion that a warrant is needed is based on the U.S. Supreme Court decision in Carpenter v. United States, 138 S. Ct. 2206 (2018). Where there is probable cause to suspect that a cryptocurrency user’s activities support crime or infringe on national security, courts should order cryptocurrency issuers to disclose the identity of users on the blockchain.536Jabotinsky & Lavi, supra note 17, at 564–65. Similar standards should apply where there is suspicion that a person is using an NFT to commit or support crime.

Furthermore, we propose that procedures should also be developed to unmask NFT owners in the context of civil law. Similar procedures have been developed for unmasking cybersmear on the internet.537Kosseff, supra note 530, at 93–111. Obligations to store information on users, such as IP addresses, and general unmasking procedures are not uncommon in other civil law contexts, and U.S. scholars have proposed conditioning Section 230 immunity on platforms maintaining IP logs.538Id. at 137–39. People posting defamatory comments on websites might thus be exposed to unmasking through their Internet Service Provider (ISP).539Jabotinsky & Lavi, supra note 17, at 583. However, careful thought is needed, as overly broad unmasking of NFT owners could chill expression protected under the First Amendment. Similar to the widely employed standard for unmasking in traditional defamation actions,540The widely employed standard was established in Dendrite Int’l, Inc. v. Doe, 775 A.2d 756, 760 (N.J. Super. Ct. App. Div. 2001); see Kosseff, supra note 530, at 119–33 (addressing the need to set rules and balancing tests for online anonymity); see also Madeline Lamo & Ryan Calo, Regulating Bot Speech, 66 UCLA L. Rev. 988, 1022 (2019) (“First, plaintiffs must notify anonymous speakers in order to provide them with a reasonable opportunity to contest a potential unmasking; second, they must identify precisely which statements are allegedly defamatory; third, they must produce prima facie evidence supporting every element of their claim; and finally, the court must weigh the risk of unmasking the defendant against the harm to the plaintiff on a case-by-case basis.”). On the different standards for unmasking anonymous speakers, see Ronen Perry & Tal Zarsky, Liability for Online Anonymous Speech: Comparative and Economic Analyses, 5 J. Eur. Tort L. 205, 215 (2014).
unmasking should be subject to a judicial decision and should be exercised only after identifying precisely which statements constitute a tort. Additionally, plaintiffs should be required to produce prima facie evidence supporting every element of the claim. The court would be tasked with balancing the risks connected with unmasking the defendant against the potential harm to the plaintiff, in light of the specific circumstances of each case.

Courts should weigh “the risk of unmasking the defendant against the harm to the plaintiff on a case-by-case basis.”541Lamo & Calo, supra note 540, at 1022. Similar considerations should be weighed when police or government suspicions arise that an NFT supports crime. Courts should order unmasking only where there is probable cause that the NFT supports crime.

To illustrate the above, imagine that a plaintiff files a John Doe subpoena seeking to unmask the identity of anonymous speakers through their ISP, or through a website on which they have posted defamatory comments. Such subpoenas are not new. The idea of unmasking anonymous users should be all the more applicable to NFT owners, who not only publish harmful expressions but also perpetuate them on the blockchain. As unmasking is possible only under specific circumstances, and the anonymous speaker’s right to free speech is balanced against the plaintiff’s claim, such unmasking would not violate the First Amendment.

It should be emphasized that prosecution or direct legal action against token owners is not intended to replace the other types of mitigation outlined above, but rather to supplement them.542See supra Part IV.A–B (preventing the entry of expressions into the blockchain by using a pointer to a cloud outside the blockchain, pre-screening abusive expressions ex ante, or obscuring abusive expressions ex post facto). Prosecution can punish violators who abuse tokens. Direct legal action can provide remedies such as compensation or disgorgement. Neither path, however, prevents the entry of abusive tokens into the blockchain system ex ante or obscures them ex post. Thus, in these cases, the tokenized expressions would remain on the blockchain. Moreover, as explained, there are difficulties and obstacles to direct prosecution.543See, e.g., Citron, Sexual Privacy, supra note 367, at 1930–31. These obstacles are significant.544As explained, prosecution can occur only if legal authorities invest effort in prosecuting owners of tokens containing expressions that constitute criminal offenses. Civil legal actions can succeed only if publishing the speech on the NFT is considered a wrong, and the plaintiff is willing to invest time and money in filing a legal action and can prove every element of the case. Moreover, an additional procedural process of unmasking might be required before the plaintiff could file a direct action, and the plaintiff would have to meet a standard of evidence to support the claim in order to balance free speech against the plaintiff’s constitutional rights, such as reputation and dignity. However, these obstacles are not fatal. Plaintiffs who are able to surmount them would be entitled to damages for harm caused by tokens that include unprotected harmful expressions, or to the remedy of disgorgement, which would deprive the owners of such NFTs of profit.

New technologies emerge, markets change, and the harm that can result from the use of such technologies is greater than ever. We therefore believe that policymakers must rethink the path forward. This Part has outlined a comprehensive framework of solutions for abusive tokenized speech. An understanding that the harm of abusive tokens can be perpetual, and can exceed any type of harm known before, should lead to the conclusion that the regulation of tokenized speech cannot be left to market self-regulation. This is where the law should step in. In the interim, until the regulator steps in, intermediaries and marketplaces should voluntarily self-regulate tokenized speech beyond the threshold of unprotected speech. However, due to the severity of the harm that abusive tokens could potentially cause, we believe that legal regulation is preferable to self-regulation in the long run.

We are well aware of possible conflicts that could arise between a mandated framework that includes marketplace liability and Section 230.545Communications Decency Act, 47 U.S.C. § 230(c)(1). Yet we are confident that most of these conflicts can be reconciled by adopting a narrow interpretation of Section 230 that adapts it to emerging technologies and NFT markets. Thus far, courts have used two primary arguments as the basis for denying Section 230 immunity: “(1) the platform at least partly developed or created the content; or (2) the claim did not treat the platform as the publisher or speaker of third-party content.”546Kosseff, Users’ Guide, supra note 202, at 21; see also Michal Lavi, Targeting Exceptions, 32 Fordham Intell. Prop. Media & Ent. L.J. 65, 110–22 (2021).

A possible interpretation that might allow narrowing the immunity granted in Section 230 would be to treat the act of tokenizing as an act of content development.547Under 47 U.S.C. § 230(f)(3), “information content provider” is defined as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the internet or any other interactive computer service.” Congress declared that online intermediaries could never be treated as “publishers” of material they did not develop. 47 U.S.C. § 230(c)(1) (emphasis added). Thus, the marketplace would be seen as a content developer, excluded from Section 230 immunity.548See, e.g., Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157, 1164 (9th Cir. 2008); Fed. Trade Comm’n v. Accusearch, Inc., 570 F.3d 1187 (10th Cir. 2009); Kosseff, Users’ Guide, supra note 202, at 22–23. Another interpretation that could enable a narrowing of Section 230 immunity is that Section 230 does not apply to platform design. As mentioned above, some courts have already ruled that Section 230 does not shield online intermediaries in all circumstances and have distinguished negligent design claims from claims that focus on content published by other content providers.549Kosseff, Users’ Guide, supra note 202, at 25; see, e.g., Lemmon v. Snap, Inc., 995 F.3d 1085, 1087 (9th Cir. 2021); Loomis v. Amazon.com LLC, No. B297995, 2021 WL 1608878 (Cal. Ct. App. Apr. 26, 2021); Bolger v. Amazon.com, Inc., 2020 WL 4692387 (Cal. Ct. App. Aug. 13, 2020). This narrow interpretation could be adopted with respect to platform design and marketplaces, thereby allowing most lawsuits regarding platform design and marketplaces to advance beyond the preliminary stages.

It might further be argued that, due to the irreparable harm that could be caused by NFTs containing unprotected speech, and the fact that the proposed obligations target marketplaces of commerce and not traditional platforms, there is justification for treating NFT marketplaces as an exception to Section 230.550See Alexander Tsesis, Marketplace of Ideas, Privacy, and Digital Audiences, 94 Notre Dame L. Rev. 1585, 1588 (2019) (differentiating between marketplace behavior and freedom of expression; a broad interpretation of marketplace behavior might be a possible path to justify prescreening by marketplaces, as the act of tokenizing is marketplace behavior that allows the owner of the token to transact with it). It is important to remember that the same expressions could still be shared on traditional platforms without prescreening obligations.


NFTs emerged about three years ago and ascended rapidly in 2021, with a growing sales volume reaching billions of U.S. dollars.551Elizabeth Howcroft, NFT Sales Volume Surges to $2.5 Bln in 2021 First Half, Reuters (July 6, 2021) [] (“[T]he market for non-fungible tokens (NFTs) surged to new highs in the second quarter, with $2.5 billion in sales so far this year, up from just $13.7 million in the first half of 2020, marketplace data showed.”). These unique digital assets are a game changer in the digital world, revolutionizing conventions of ownership, artwork, and music, and developing new markets around the world. Because expressions can be tokenized and stored on the blockchain, and these expressions cannot be changed or amended at a later stage, NFTs could become a common means of communicating information and an engine of speech. As NFTs gain popularity, it becomes more likely that the world could finally attain genuine freedom of information and expression, freeing the globe from the shackles of censorship and the control of intermediaries, which often remove content erroneously or disregard context.

This Article has identified the promise of such a world, yet it has also recognized that this utopia could easily deteriorate into a dystopia. NFTs have the potential to tokenize harmful expressions and allow the dissemination of abusive expression, without any ability to alter or remove this abuse from the blockchain. Abuse would thus be perpetuated, and the E.U. Right to be Forgotten could no longer be upheld. Moreover, because of the structure of the blockchain, voluntary practices of content removal are inapplicable in many cases. As a result, NFTs could substantially aggravate existing types of harm caused by the dissemination of harmful expressions. Thus, NFTs are not just an engine of speech – they could also become an engine of defamation, shaming, incitement to terrorism, and fake news. They have the potential to curtail privacy and intimacy and to spread hate and incitement that could even lead to physical violence. Tokenizing speech allows people to publish harmful expression without accountability, leaving victims empty-handed, without recourse to any remedy or mitigation by third parties (such as online platforms) that otherwise remove expressions that do not comply with their community standards.

NFT use is growing exponentially, and NFTs are soon expected to be integrated into social media. Thus, there is real potential that token abuse aimed at perpetuating harmful expressions, and disseminating them to a wider audience, could intensify. Yet a systematic understanding of the potential of abusive NFTs to cause harm, and of the suitable legal mechanisms to mitigate such harm, is currently absent from academic scholarship. This Article endeavors to bridge this gap and offers a comprehensive framework designed to meet the challenges presented by abusive NFTs, allowing different avenues for mitigation and remedy. We also address possible free speech objections to the framework we have proposed and demonstrate that it is in line with the First Amendment.

The solutions we have identified and proposed include an ex ante safety-by-design approach that prevents storage of the information on the blockchain in the first place. Alternatively, the Article has advocated for the regulation of marketplaces, making them serve as gatekeepers by prescreening expressions so that abusive expressions would not be tokenized to begin with. A second avenue we have identified for mitigating harm is ex post obscuration of abusive information, either by adding more information to the blockchain (a solution that could have an impact especially in permissioned blockchains), or by harnessing online intermediaries to reduce the visibility of such information. Lastly, we have proposed that the complementary remedies of compensation for damage, or disgorgement, be made available through legal action against NFT owners.