
I'm All-In on Server-Side SQLite


I'm Ben Johnson. I wrote BoltDB, an embedded database that is the backend for systems like etcd. Now I work at Fly.io, on Litestream. Litestream is an open-source project that makes SQLite tenable for full-stack applications through the power of ✨replication✨. If you can set up a SQLite database, you can get Litestream working in less than 10 minutes.

The conventional wisdom of full-stack applications is the n-tier architecture, which is now so common that it's easy to forget it even has a name. It's what you're doing when you run an "application server" like Rails, Django, or Remix alongside a "database server" like Postgres. According to the conventional wisdom, SQLite has a place in this architecture: as a place to run unit tests.

The conventional wisdom could use some updating. I think that for many applications – production applications, with large numbers of users and high availability requirements – SQLite has a better place, in the center of the stack, as the core of your data and persistence layer.

It's a big claim. It may not hold for your application. But you should consider it, and I'm here to tell you why.

A Brief History of Application Databases

50 years is not a long time. In that time, we've seen a staggering amount of change in how our software manages data.

In the beginning of our story, back in the '70s, there were Codd's rules, defining what we now call "relational databases", also known today as "databases". You know them, even if you don't: all data lives in tables; tables have columns, and rows are addressable with keys; C.R.U.D.; schemas; a textual language to convey these concepts. The language, of course, is SQL, which prompted a Cambrian explosion of SQL databases, from Oracle to DB2 to Postgres to MySQL, throughout the '80s and '90s.

It hasn't all been good. The 2000s got us XML databases. But our industry atoned by building some great columnar databases during the same time. By the 2010s, we saw dozens of large-scale, open-source distributed database projects come to market. Now anyone can spin up a cluster and query terabytes of data.

As databases evolved, so too did the strategies we use to plug them in to our applications. Almost since Codd, we've divided those apps into tiers. First came the database tier. Later, with memcached and Redis, we got the caching tier. We've got background job tiers and we've got routing tiers and distribution tiers. The tutorials pretend that there are 3 tiers, but we all know it's called "n-tier" because nobody can predict how many tiers we're going to end up with.

You know where we're going with this. Our scientists were so preoccupied with whether or not they could, and so on.

See, over these same five decades, we've also seen CPUs, memory, & disks become hundreds of times faster and cheaper. A term that practically defines database innovation in the 2010s is "big data". But hardware improvements have made that concept slippery in the 2020s. Managing a 1 GB database in 1996? A big deal. In 2022? Run it on your laptop, or a t3.micro.

When we think about new database architectures, we're hypnotized by scaling limits. If it can't handle petabytes, or at least terabytes, it's not in the conversation. But most applications will never see a terabyte of data, even if they're successful. We're using jackhammers to drive finish nails.

The Sweet Release of SQLite

There's a database that bucks a lot of these trends. It's one of the most popular SQL databases in the world, so standardized it's an official archival format of the Library of Congress, it's renowned for its reliability and its unfathomably encompassing test suite, and its performance is so good that citing its metrics on a message board invariably starts an argument about whether it should be disqualified. I probably don't have to name it for you, but, for the one person in the back with their hand raised, I'm talking about SQLite.

SQLite is an embedded database. It doesn't live in a conventional architectural tier; it's just a library, linked into your application server's process. It's the standard bearer of the "single process application": the server that runs on its own, without relying on nine other sidecar servers to function.

I got interested in these kinds of applications because I build databases. I wrote BoltDB, which is a popular embedded K/V store in the Go ecosystem. BoltDB is reliable and, as you'd expect from an in-process database, it performs like a nitro-burning funny car. But BoltDB has limitations: its schema is defined in Go code, and so it's hard to migrate databases. You have to build your own tooling for it; there isn't even a REPL.

If you're careful, using this kind of database can get you a lot of performance. But for general-purpose use, you don't want to run your database off the open headers like a funny car. I thought about the kind of work I'd have to do to make BoltDB viable for more applications, and the conclusion I quickly reached was: that's what SQLite is for.

SQLite, as you are no doubt already typing into the message board comment, is not without its own limitations. The biggest of them is that a single-process application has a single point of failure: if you lose the server, you've lost the database. That's not a flaw in SQLite; it's just inherent to the design.

Enter Litestream

There are two big reasons everyone doesn't default to SQLite. The first is resilience to storage failures, and the second is concurrency at scale. Litestream has something to say about both concerns.

How Litestream works is that it takes control of SQLite's WAL-mode journaling. In WAL mode, write operations append to a log file stored alongside SQLite's main database file. Readers check both the WAL file and the main database to satisfy queries. Normally, SQLite automatically checkpoints pages from the WAL back to the main database. Litestream steps in the middle of this: we open an indefinite read transaction that prevents automatic checkpoints. We then capture WAL updates ourselves, replicate them, and trigger the checkpointing ourselves.
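The WAL mechanics Litestream piggybacks on are easy to see with stock SQLite. Here's a minimal sketch in Python — this is plain SQLite, not Litestream, and the file and table names are invented for illustration:

```python
import sqlite3

# Plain SQLite, not Litestream: a sketch of the WAL mechanics described
# above. In WAL mode, writes append to a "-wal" file next to the main
# database until a checkpoint copies the pages back.
conn = sqlite3.connect("fruits.db")
conn.execute("PRAGMA journal_mode=WAL")  # switch to write-ahead logging
conn.execute("CREATE TABLE IF NOT EXISTS fruits (name TEXT)")
conn.execute("INSERT INTO fruits VALUES ('apple')")
conn.commit()  # this write now lives in fruits.db-wal

# Litestream holds a long-lived read transaction to suppress automatic
# checkpoints, ships the WAL frames to a replica, then checkpoints
# itself. Here we just trigger the checkpoint step manually:
conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
conn.close()
```

Because all of this is standard SQLite behavior, any process (including Litestream, running alongside your app) can participate in the WAL protocol without your application changing at all.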

The most important thing you should understand about Litestream is that it's just SQLite. Your application uses standard SQLite, with whatever your standard SQLite libraries are. We're not parsing your queries or proxying your transactions, or even adding a new library dependency. We're just taking advantage of the journaling and concurrency features SQLite already has, in a tool that runs alongside your application. For the most part, your code can be oblivious to Litestream's existence.

Or, think of it this way: you can build a Remix application backed by Litestream-replicated SQLite, and, while it's running, crack open the database using the standard sqlite3 REPL and make some changes. It'll just work.

You can read more about how this works here.

It sounds complicated, but it's incredibly simple in practice, and if you play with it you'll see that it "just works". You run the Litestream binary on the server your database lives on in "replicate" mode:

litestream replicate fruits.db s3://my-bukkit:9000/fruits.db

And then you can "restore" it to another location:

litestream restore -o fruits-replica.db s3://my-bukkit:9000/fruits.db

Now commit a change to your database; if you restore again then you'll see the change on your new copy.

The ordinary way people use Litestream today is to replicate their SQLite database to S3 (it's remarkably cheap for most SQLite databases to live-replicate to S3). That, by itself, is a huge operational win: your database is as resilient as you ask it to be, and easily moved, migrated, or mucked with.

But you can do more than that with Litestream. The upcoming release of Litestream will let you live-replicate SQLite directly between databases, which means you can set up a write-leader database with distributed read replicas. Read replicas can catch writes and redirect them to the leader; most applications are read-heavy, and this setup gives those applications a globally scalable database.

Litestream, SQLite, Postgres, CockroachDB, or any other database: they all work on Fly.io. We do built-in persistent storage and private networking for painless clustering, so it's easy to try new stuff out.

Try Fly

You Should Take This Option More Seriously

One of my first jobs in tech in the early 2000s was as an Oracle Database Administrator (DBA) for an Oracle9i database. I remember spending hours poring over books and documentation to learn the ins and outs of the Oracle database. And there were a lot. The administration guide was almost a thousand pages—and that was just one of over a hundred documentation guides.

Learning what knobs to turn to optimize queries or to improve writes could make a big difference back then. We had disk drives that could only read tens of megabytes per second, so utilizing a better index could change a 5-minute query into a 30-second query.

But database optimization has become less important for typical applications. If you have a 1 GB database, an NVMe disk can slurp the whole thing into memory in under a second. As much as I love tuning SQL queries, it's becoming a dying art for most application developers. Even poorly tuned queries can execute in under a second for ordinary databases.

Modern Postgres is a miracle. I've learned a ton by reading its code over the years. It includes a slew of features like a genetic query optimizer, row-level security policies, and a half dozen different types of indexes. If you need those features, you need them. But most of you probably don't.

And if you don't need the Postgres features, they're a liability. For example, even if you don't use multiple user accounts, you'll still need to configure and debug host-based authentication. You have to firewall off your Postgres server. And more features mean more documentation, which makes it difficult to understand the software you're running. The documentation for Postgres 14 is nearly 3,000 pages.

SQLite has a subset of the Postgres feature set. But that subset is 99.9% of what I typically need. Great SQL support, windowing, CTEs, full-text search, JSON. And when it lacks a feature, the data is already next to my application. So there's little overhead to pull it in and process it in my code.
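That subset is easy to demo against an in-memory database. A quick sketch — the `orders` schema is invented for illustration, and the window and JSON functions assume a reasonably modern SQLite build:

```python
import sqlite3

# A quick tour of the feature subset in action. The orders table is
# invented for illustration; window functions and json_extract assume
# a reasonably modern SQLite build.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL, meta TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("ann", 10.0, '{"gift": true}'),
     ("ann", 25.0, '{"gift": false}'),
     ("bob", 5.0, '{"gift": false}')],
)

# CTE plus a window function: a running total per customer.
rows = conn.execute("""
    WITH ranked AS (
        SELECT customer, amount,
               SUM(amount) OVER (PARTITION BY customer
                                 ORDER BY amount) AS running
        FROM orders
    )
    SELECT customer, running FROM ranked
""").fetchall()

# Built-in JSON support: filter on a field inside a JSON column.
gifts = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE json_extract(meta, '$.gift')"
).fetchone()[0]
```

No extensions, no server, no connection pool: the same library that ran your unit tests handles the analytics-flavored queries too.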

Meanwhile, the complicated problems I really need to solve aren't really addressed by core database functions. Instead, I want to optimize for just two things: latency & developer experience.

So one reason to take SQLite seriously is that it's operationally much simpler. You spend your time writing application code, not designing intricate database tiers. But then there's the other problem.

The Light Is Too Damn Slow

We're beginning to hit theoretical limits. In a vacuum, light travels about 186 miles in 1 millisecond. That's the distance from Philadelphia to New York City and back. Add in layers of network switches, firewalls, and application protocols and the latency increases further.

The per-query latency overhead for a Postgres query within a single AWS region can be up to a millisecond. That's not Postgres being slow—it's you hitting the limits of how fast data can travel. Now, handle an HTTP request in a modern application. A dozen database queries and you've burned over 10ms before business logic or rendering.

There's a magic number for application latency: responses in 100ms or less feel instantaneous. Snappy applications make happy users. 100ms seems like a lot, but it's easy to carelessly chew it up. The 100ms threshold is so important that people pre-render their pages and post them on CDNs just to reduce latency.

We'd rather just move our data close to our application. How much closer? Really close.

SQLite isn't just on the same machine as your application, but actually built into your application process. When you put your data right next to your application, you can see per-query latency drop to 10-20 microseconds. That's micro, with a μ. A 50-100x improvement over an intra-region Postgres query.

But wait, there's more. We've effectively eliminated per-query latency. Our application is fast, but it's also simpler. We can break up larger queries into many smaller, more manageable queries, and spend the time we've been using to hunt down corner-casey N+1 patterns building new features.
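You can get a feel for that latency scale with a quick, unscientific measurement. The absolute number depends entirely on your machine; the point is microseconds, not milliseconds:

```python
import sqlite3
import time

# An unscientific look at in-process query latency. The absolute figure
# varies by machine; the scale (microseconds, not milliseconds) is the
# point.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(10_000)])

n = 1_000
start = time.perf_counter()
for i in range(n):
    conn.execute("SELECT name FROM users WHERE id = ?", (i,)).fetchone()
per_query_us = (time.perf_counter() - start) / n * 1e6

# With no network round trip per query, the "N+1" loop above is cheap
# enough that you often don't need to contort it into one giant join.
```

That loop is exactly the N+1 pattern that's a firing offense against a networked database; in-process, it's often fine.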

Minimizing latency isn't just for production either. Running integration tests with a traditional client/server database easily grows to take minutes locally and the pain continues once you push to CI. Reducing the feedback loop from code change to test completion doesn't just save time but also preserves our focus while developing. A one-line change to SQLite will let you run it in-memory so you can run integration tests in seconds or less.
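That "one-line change" is literally the connection string. A sketch, using a hypothetical `connect()` helper to show the idea:

```python
import sqlite3

# The "one-line change" in practice: the same hypothetical connect()
# helper serves production (a file path) and tests (":memory:").
def connect(db_path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA foreign_keys=ON")  # identical setup either way
    return conn

# In production you'd pass a real path, e.g. connect("app.db");
# under test, an in-memory database leaves no files to create or clean up:
conn = connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (42)")
```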

Small, Fast, Reliable, Globally Distributed: Choose Any Four

Litestream is distributed and replicated and, most importantly, still easy to get your head around. Seriously, go try it. There's just not much to know.

My claim is this: by building reliable, easy-to-use replication for SQLite, we make it attractive for all kinds of full-stack applications to run entirely on SQLite. It was reasonable to overlook this option 170 years ago, when the Rails Blog Tutorial was first written. But SQLite today can keep up with the write load of most applications, and replicas can scale reads out to as many instances as you choose to load-balance across.

Litestream has limitations. I built it for single-node applications, so it won't work well on ephemeral, serverless platforms or when using rolling deployments. It needs to restore all changes sequentially, which can make database restores take minutes to complete. We're rolling out live replication, but the separate-process model restricts us to coarse-grained control over replication guarantees.

We can do better. For the past year, I've been nailing down the core of Litestream and keeping a focus on correctness. I'm happy with where we've landed. It started as a simple streaming backup tool, but it's slowly evolving into a reliable, distributed database. Now it's time to make it faster and more seamless, which is my whole job at Fly.io. There are improvements coming to Litestream (improvements that aren't at all tied to Fly.io!) that I'm psyched to share.

Litestream has a new home at Fly.io, but it is and always will be an open-source project. My plan for the next several years is to keep making it more useful, no matter where your application runs, and see just how far we can take the SQLite model of how databases can work.


How I Got from Mastodon’t to Mastodon


I finally wrapped my head around Mastodon, a social media platform, this past week. On Monday, April 25, I was beyond annoyed by how confusing I found Mastodon to be — and a similar exasperation was expressed by numerous friends of mine. For a while, I embraced this camaraderie of disinclination. But the more I worked to understand Mastodon, the more my perception changed, and my attitude along with it.

Tuesday was still more of the same. By Wednesday afternoon, however, I was quite active on Mastodon, and I began to run into some of those same friends, as well as familiar avatars from other social media platforms. I also met, in internet terms, new folks — and new-ish folks (one introduced themselves as the person who wrote a bot I interact with on another social media platform). That bot-to-human incident is just one anecdote, but anecdotes can be orienting, even if only as stories. The story here was that I’d traversed from a highly public social network to a relatively more circumspect one, and upon arrival I met not a bot but the person behind the bot.

By Friday, April 28, I had emerged as something resembling a Mastodonian. I’d moved through the three common stages of digital adoption: from annoyed through engaged to engrossed. That evening, when a friend casually asked, via a group email thread, if Mastodon was worth paying attention to, I began to reply — and I only finished after unexpectedly writing a roughly 2,000-word explanation to help my friend, along with the other participants in the thread, understand how Mastodon functions. Or more to the point, how I understand Mastodon to function, and why I think Mastodon might matter.

Grains of Salt
To begin with, I can’t say with assuredness that I’ll be sticking around on Mastodon. My general rule of thumb with online tools is to simply sign up and see if it sticks. I’ve tried so many social media tools, and very few have stuck. I quickly ditched Mastodon twice in the past, but it certainly makes more sense to me now than it did then. And since I found Mastodon difficult to make sense of, I wanted to share here my sense of what Mastodon is, why it can be hard to initially comprehend, and how one might go about both comprehending and engaging with it.

Yes, I know the complaint: if a social media platform requires a 2,000-word explanation (more like 4,500 words, as of this essay, which expands upon my original email), it is doomed to fail. I’m not here to say Mastodon is the future. I’m just here to say Mastodon is very interesting — and that while a lot of the perceived bugs may be bugs, and a lot of the conundrums are just subpar design and inefficient communication, some of those seeming bugs are features (or the residue of features), and much of that subpar communication is because of just how different Mastodon is from the current dominant forms of social media. In other words: Don’t miss the paradigm forest due to the bug trees.

If Mastodon succeeds (define success as you wish), it won’t simply be because the service became popular. It won’t even be because a significant number of people got over the same conceptual hump I did in order to understand Mastodon. It will be because an even more significant number of people won’t ever recognize the conceptual hump, because what right now, at the start of May 2022, seems downright odd about Mastodon actually will have become the new normal. That potential outcome is quite interesting.

And if you want to experience Mastodon before reading my attempt at an explanation, go check it out first.

Reminiscing About the Early Pliocene Era of Computer Communication
Some personal context might help. And you can skip this section entirely. It’s just background on who wrote this thing you’re reading.

I’ve been on enough social media platforms that it feels as if their combined logos could fill a yearbook. My first experience online, broadly defined, was a nascent form of social media: a dial-up BBS, or bulletin board system. This would have been roughly around the time The Empire Strikes Back was released. Back then, I didn’t think much about the “self-enclosed-ness” of the BBS. The notion of dialing into a system and then communicating directly with people on the other end, and only those who had likewise dialed in, mapped easily to the idea of a phone call, even if we were communicating by typing rather than speaking.

The mental mapping from BBS to phone call was all the more easy to comprehend because an actual phone line was required to hook the computer — a RadioShack TRS-80, in my case — up to the world outside one's home. (This wasn't my home. This was a friend's. An extra phone line cost real money, as did the phone call itself. Such expenses were beyond my childhood home's norms for decision-making. My parents were not entirely clear on this BBS concept at first, but they did tell me about the emergence of phones in their own youth. The idea of a "party line" — or "party wire," vis-à-vis the Norman Rockwell illustration of that name — helped all of us understand the BBS more than we might have otherwise.)

Then high school and college happened, and I didn’t log on again until the early 1990s (not counting the limited school network, which was just for programming, when I was an undergraduate flirting with being — and then being flummoxed by the demands of — a computer science major). If I had to put a date on it, I imagine I logged on for the first time in April or May of 1993 — so almost exactly 29 years ago. This would have been the direct result of the debut issue of Wired magazine. If archaic phone systems helped me understand social media, then it was paper that helped me go digital.

Two Steps to Understanding Mastodon
As I said at the opening, I had already tried Mastodon previously, since it launched in 2016. Back then, though, I wasn’t frustrated by it. I was simply unenthusiastic. Mastodon’s interface felt as if a long-running food co-op tried to recreate Twitter or Facebook: it all sorta worked, but was utilitarian at best, and mired in complex systems at worst. You could almost smell the carob brownies. The benefits of Mastodon were unclear to me. At that early phase of my adoption, Mastodon reminded me of so many wannabe SoundCloud replacements whose sole apparent purpose was to replace SoundCloud. “SoundCloud done right” is a self-denuding rallying cry. They brought nothing new to the party, and few if any of them gained steam.

I was also reminded of a certain geek ethos, the one in which a computer-minded individual expresses interest in, say, having a blog, but actually takes far more active interest in creating, from scratch, their own blogging software. They never end up blogging. Mastodon felt, initially, to me like it might have been made by people with more interest in making a micro-blog network platform than in actually micro-blogging themselves.

This past week, however, was quite different. This past week I wasn’t unenthusiastic; this time, actual frustration kicked in. And while frustration is, well, frustrating, it can also be an engine of intrigue. I had not been that confused online for some time. It was sort of intoxicating. I’d like to say I simply put concerted effort into “getting” Mastodon, but that wasn’t quite how it played out. At first, all I did was complain, and the variety of responses to my complaints informed my experience. I’m fortunate to have a lot of patient and informed online friends.

Also helping in the process of getting acclimated: user error on my part. I ended up somehow with two different Mastodon accounts. In part this was a hassle, because their URLs were just similar enough that I took one to be an abbreviation for the other. But having two Mastodon accounts, each with its own unique URL, helped me understand something that had not, to me, been obvious previously: there are numerous Mastodon URLs. There is no single central site for Mastodon the way there is for Twitter or Facebook. The concept of Mastodon doesn't merely contain — as Walt Whitman taught us to verbalize — multitudes, but is founded on them.

The interface can be maddening as you come up to speed. If privacy is a concern, you might find yourself wondering why you can make a public account's individual posts private but not a private account's individual posts public. You might change an account from private to public, and then wonder why your earlier posts remain private. When you try to figure out how your posts show up on some other instances, you may end up looking at a chart, one that a friend has rightly likened to something out of the brain-frying time-travel film Primer (note: I love the movie, and it fried my brain). All these things eventually make sense, but the difference from the widely experienced, carefully designed chutes and ladders of Twitter and Facebook is palpable. I'll get more into this in the next section, but suffice to say: people would confuse Mastodon's posts with Twitter's tweets less often if Mastodon didn't refer to its posts as "toots."

Indeed, Mastodon’s current communications really don’t help matters. As of this writing, when you sign up for a new account on the main Mastodon URL, you are immediately asked to choose one of myriad “servers,” which are broken into “categories.” What is not clear is that all those servers are in effect communities and that they are each separate “instances” of Mastodon. (This is stated on the page, but “stated” is different from “clear,” and clear is different from “apparent,” let alone “self-evident.”) Much of the rest of this article will involve unpacking that single word: “instance.” Once I got that word, that concept, everything about Mastodon that had previously been frustrating began, instead, to make sense. I then deleted my two conflicting Mastodon accounts and I started a new one.

As whenever you make it through a thick conceptual window, this experience of finally “getting” Mastodon was fulfilling. For the first two days, my attitude was: this is the stupidest interface I’ve ever used. And then it made sense. To explain how it came to make sense, I retraced my steps. What felt at the time like an extended process of trial and error could, in fact, be reduced considerably. Partially that is because numerous of my steps were missteps, such as those recounted up above. In the end, I think there are two steps to understanding why Mastodon is special.

Step 1 of 2: Mastodon Looks like Twitter but It’s More Like WordPress
It’s very important to not think of Mastodon as simply a replacement for Twitter. Why? Because Twitter is a single globe-spanning instance of software that every user is inside together. Mastodon, however, is software more along the lines of the way provides software. When you install WordPress’s open-source software at your own URL, it’s its own self-contained instance of WordPress. WordPress is software in a practical sense, whereas Twitter is software only in the sense that it’s a digital service. My own website,, is on WordPress (I am vaguely familiar with the geek ethos mentioned earlier: from 1996, when I founded, until 2007, when I commissioned someone to port the site to WordPress, I published the entire site with hand-coded static files, every single .html page, even the RSS feed). If someone posts a comment on, that’s happening in my specific instance of WordPress, not on “WordPress as a single globe-spanning platform.”

So, let’s break this down: Make sure you get the difference between WordPress and Twitter. Now, imagine Twitter not as a company with a single platform, but as an installable-on-the-internet piece of software like WordPress. That’s a step toward understanding Mastodon. Mastodon lets you set up your own self-contained instance of the software, just like WordPress does, and you can run it on your own (server use costs money, and the more users you have, the more it costs; it’s more expensive than WordPress). No one can join your Mastodon instance whom you don’t want as a member. You can set the rules as you like. You can make it open to anyone who wants to read it or wall it off entirely — and even if you make it open to anyone who wants to read, you can allow each of your instance’s individual users to choose to hide their own posts from anyone but the people they choose to see it. (If you’re handy with code, you can even fork Mastodon and make your own version — so long as you post the source code online, per the open-source licensing agreement.) Also, you don’t need to set up Mastodon yourself. You can just join a pre-existing server/community.

This took days to comprehend, and then even when I got it, it took a while to grok it. My head hurt. I got angry. Then suddenly it clicked. A big reason I got angry is there are a lot of know-it-all Mastodon-heads out there who condescendingly ask regularly, “Why aren’t you just on Mastodon?” when people complain about Twitter and Facebook. The answer to that question, as it turns out, isn’t just “Mastodon isn’t easy to understand.” It isn’t even “Mastodon isn’t as clean and efficient as those heavily funded websites that are literally designed to algorithmically reflect parts of our consciousness we’re not even aware of.” No, the more full answer is, “To really use Mastodon, you have to step through a conceptual window that’s akin, perhaps, to, long ago, someone who’s only ever used AOL then trying to use the Internet. Except even harder to comprehend, unless someone is patient and takes the time to explain it.” I’m trying to explain it, first to myself, and then to anyone who wants to read this.

Step 2 of 2: Mastodon Communities Can Easily (if Currently Clumsily) Connect with Each Other
This is where Mastodon gets interesting — like, really interesting. It’d be enough if Mastodon were just “WordPress for self-contained social media groups.” But before talking about Mastodon’s built-in interconnectedness, let’s return to the concept of blog comments above.

Do you remember a piece of once ubiquitous online software called Disqus? (I’m not sure how broadly utilized it is anymore.) Disqus provided connective commenting between separate blogs and websites. For example, if I went to some experimental-music blog, and someone said something interesting in the comments, I could click on their avatar, and I’d see other stuff they’d commented on all around the internet. So if they had commented on another blog, I could then click through and see what they had commented on. Maybe I’d discover another experimental-music blog, or maybe I’d find out they also like recipes for Estonian cuisine, or maybe I’d come upon the music made by the very person who possesses that avatar.

The phenomenon of Disqus was more than blogs cross-linking through so-called “blogrolls.” Disqus was also more than a portfolio of blogs owned by one company and using a shared platform. This was seemingly truly (but not actually, as I’ll explain in a moment) ad hoc — and it was exciting. Disqus just happened: you show up on one blog, and there’s your avatar — you show up on another, same. (Now, it wasn’t quite as easy as I describe, which is part of the reason it didn’t take off as much as it might have. Which is part of why what I’m getting around to describing about Mastodon is so interesting.)

I once saw one of Disqus’ two founders, Daniel Ha, give a talk, early on in the company’s existence, and he made a comment I think about a lot to this day. He said something along the lines of how comments people made online were just as valid a form of publishing, of self-expression, as was the writing of a post or article. That’s not quite how he put it, but I feel like much of the subsequent explosive growth of social media shows just how accurate his observation was. (If this seems self-evident to you, I will note this was not a widespread perception at the time.)

You may be thinking, “Well, that’s cool, but how is that blog commenting scenario different from Mastodon?” The thing with Disqus was it was centralized. You had all these different blogs, but the only way they connected was through Disqus. You had little to no control as a Disqus commenter. If someone started saying crappy stuff to you or just crappy or inconsequential stuff in general, you couldn’t unfollow them or hide them on blogs where you might stumble on them (at least when I used the service — it may have gained such functionality). There were issues for blog owners, too, but let’s just pause there and move on. The key thing was it was centralized: if Disqus went down, all of Disqus went down. If Disqus made a big change, it immediately impacted the entire network. Had Disqus ever gone under (which it hasn’t), it might well have disappeared.

A cool thing about Mastodon is the software is created so that anyone on any single Mastodon instance (like, say, the one that appears to be the biggest, or post.lurk.org, where I eventually signed up, despite me not totally liking the somewhat creepy tone of the word “lurk”) can still communicate with people on other Mastodon instances. Even as I type this, I can’t quite understand how it works, but it does. (A friend explained to me helpfully that the underlying protocol, ActivityPub, which Mastodon and other online services are built on, can be thought of as “kind of like two-way RSS,” which is to say the protocol most of us know as a way to track a bunch of blogs through one tool, such as Feedly, Inoreader, or the sadly defunct Google Reader. I don’t know much about ActivityPub, but I’ve been reading up. And I put this section in parentheses to emphasize that when you start seeing terms like “RSS” and “ActivityPub,” it’s a bit beyond the technical literacy — even the technical curiosity — I’ve assumed for a reader of this essay.) If I log onto post.lurk.org in the morning, I might see replies from other Mastodon accounts at any number of other places, all real unique Mastodon instances, and I can communicate with individuals who call such places home. I can even, in a subtly signaled way, see who in my feed is part of “my” home instance (i.e., post.lurk.org) and who isn’t: accounts that share my instance appear by their avatar names, whereas accounts from other instances appear with their avatar name appended by the name of their alternate instance (e.g., I appear as @disquiet@post.lurk.org on the feed of someone at any Mastodon instance other than post.lurk.org; for anyone on post.lurk.org, I appear simply as @disquiet).

If these other accounts turn out to be bots or merely inconsequential to what I’m interested in focusing on, I can mute them. If I find that a particular instance of Mastodon (like ihate.ambient — not a real instance) is filled with bots or hateful humans, I can save myself the Whack-a-Mole effort and just mute the whole instance — and, this is another clincher, I can do so as a user. Read the previous clause again: as a user. I don’t need to depend on the Mastodon instance in which I am located to filter whom I communicate with.

Think about it this way: each Mastodon instance can become its own little community without necessarily being cut off from the broader world. (The term for this sort of arrangement is “federated.” The word, which predates Mastodon, is one that the service features repeatedly on its home page, even though the same page offers no definition for curious newcomers.) The managers of a given instance can certainly say, “You can only chat here, and the rest of the internet can’t see in unless they have an account.” However, the real power of Mastodon is how you can have your own little instance for a distributed community of individuals to discuss folk dancing, or living at sea, or modular synthesizers, or vintage sports equipment — likewise, you could have one for your family, or your college class, or your neighborhood volunteer clean-up group — and the participants can connect with each other as well as with users beyond your instance, as each user sees fit.

Witnessing these varied instances of Mastodon communicate with each other is kind of amazing. I do a lot of stuff online, and I love being online. I still think of IMAP, an internet standard protocol that powers a lot of email, as magical. Mastodon is cool on that order of magnitude. It’s science-fiction cool.

The Next Steps
That was really helpful for me to type out, because doing so helped me understand Mastodon more clearly through explaining it to myself. This documents my experience and perception. Like I said, I passed through a conceptual window this week, as far as Mastodon is concerned. And a funny thing happens after you pass through a conceptual window: you can’t always see clearly back through it. It took almost as much effort to retrace my steps as it did to take those steps in the first place, albeit minus any of the frustration. (Fortunately, I have my sequence of tweets from that week, and the trajectory is pretty clearly delineated if you read them in order.)

So, will Mastodon take off? It’s done well during the current Twitter-evacuation, or at least current “Twitter trial separation,” but Mastodon still needs to do a lot of hard work. It needs to work on that interface. It needs to infuse its “federated” underpinning with deeper meaning and purpose so that the term is unifying and clarifying rather than merely vaguely differentiating. And Mastodon needs to do a much better job of explaining to new users how it works. It needs to help newcomers start off. As mentioned earlier, when you show up you have to blindly choose a community — and it doesn’t explain clearly that it’s an arbitrary choice, to some degree, because you can communicate across instances. The whole concept of inherently interconnected instances is not self-evident, or easy to immediately comprehend. To understand the solution, users must first appreciate the problem. “Getting off Twitter and Facebook” is a problem for many, but it’s not really the problem that Mastodon is trying to solve. Per my comment about SoundCloud earlier, it doesn’t do justice to what Mastodon (along with other experiments in federated and decentralized social networks) is pushing toward.

The issues aren’t merely about language. If you’re on one instance and I’m on post.lurk.org, and I “follow” you, this is how it plays out: first, I jump through a few somewhat opaque hoops to follow you, and then on post.lurk.org it shows that I’m following you. However, anytime I happen to find myself back on your page, I’ll still see a big “follow” button, which naturally makes me wonder whether or not I’m following you. This is not a big problem at first, but I don’t know how sustainable it can be in the long run when I and a growing number of people are following a lot of accounts. This sort of disconnect may just become an accepted online norm, or it may provide just the sort of cognitive dissonance that keeps a service from reaching a broader audience.

And that about covers it. As is clear, after these 4,500-ish words, themselves a revision of a nearly 2,000-word email, the qualities of Mastodon hold a lot of promise and appeal to me. I spend a lot of time online, and I don’t do so alone. I joke regularly that Facebook is where I realize how little I have in common with my friends, while Twitter is where I realize how much I have in common with people I don’t know. I’m not sure where Mastodon fits in that formulation, and I’m slowly sorting out that a whole new formulation may be required.

A lot of my online imagination is tied up in the Disquiet Junto, an online community I’ve moderated since 2012, and it was preceded by a half decade spent organizing online collaborations between musicians. The Junto isn’t a “place,” not even a virtual one in the sense we think of virtual places currently. It exists on numerous platforms, key among them SoundCloud, Slack, Twitter, and a forum that is an instance of Discourse, another online discussion platform. (This platform diaspora, so to speak, largely occurred following the suddenness with which SoundCloud, many years ago, removed its “groups” functionality.) Using Mastodon has helped me understand how that current constellation of online Junto locales may not be truly “federated.” Part of me wonders if a Disquiet Junto instance of Mastodon might be worth pursuing, but right now the onboarding process (both practical and conceptual) is too arduous. I want the Junto to be welcoming, and Mastodon isn’t welcoming — at least not enough, and at least not yet.

Both through speculative interest and practical application, online networks are where I spend a lot of time. Six years into its existence, Mastodon registers as a potentially important step forward. Perhaps some service other than Mastodon will have eventual widespread, ubiquity-equivalent success with this “federated” model. Perhaps some even more autonomous identity — closer to an email address or phone number — will arise in the process. (This lengthy post is not in any way comprehensive, but if a lingering question is “Would it help to have more than one Mastodon account?” then the answer may relate to the question “Do you need more than one phone number?” Not everyone does, but there are work and life circumstances when it may be useful, and even necessary.) Perhaps the internet will achieve something even more “decentralized” than a “merely” “federated” model — which is to say, a situation in which no one need “join” a server, and can simply participate (one hedge would be a groundswell, I imagine, of usage such that everyone has their own individual Mastodon instance, but that feels more like a hack than an intentional system).

No matter what comes in this regard, it will have been Mastodon that helped rewire my brain for such things. Rewiring can be a painful procedure, but it was worth the effort.

In any case, if you do join Mastodon, you can find me, at least for the time being, at:

Acknowledgements: Special thanks to Todd Elliott, Kamen Nedev, Matt Nish-Lapidus, C. Reider, and Jason Wehmhoener, among others, who helped me get on Mastodon, helped me sort out Mastodon, and/or read this at some stage of draft form, and to Bart Beaty for having asked the initial question via email. Any broken metaphors or just plain incorrect information is my fault alone.

Read the whole story
13 days ago
Pittsburgh, PA
Share this story

Bad Map Projection: Madagascator

1 Comment and 5 Shares
The projection's north pole is in a small lake on the island of Mahé in the Seychelles, which is off the top of the map and larger than the rest of the Earth's land area combined.
16 days ago
I kinda like it. I wonder if one could use tricks from the Dymaxion projection to reduce shape distortion of East Africa.

The Expanding Job

1 Comment and 3 Shares

If you find yourself regularly opening this newsletter and value the labor that goes into it….have you become a paid subscriber? Think about it. I’m always telling myself that I’m going to start paying for something when I just get to the computer and have my credit card in hand, and then it takes weeks to do it. But maybe today’s your day.


There’s a story that academics like to tell, in some variation, about the time in grad school when they were writing their dissertations. Usually it starts with someone talking about how long it took to write, which somehow leads to an offhand mention of someone else in their program who did it faster, with seemingly more ease, and just generally more support.

The secret of their success? They had wives.

The next beat of the story, regardless of the gender or sexual orientation of the teller, usually goes something like “I wish I had a wife to get me through my dissertation!” The rest of the group nods enthusiastically, floating in reverie. Just imagine, during those thick, heady years of research and writing: someone to go shopping and cook meals! Someone to read drafts and catch typos and even give feedback! A wife would be a buoy, a beacon, a savior. What privilege, to have a wife!

Whenever I’ve been a part of these conversations, they have reliably included feminists of all stripes who’d normally frown at this genre of gender/gender role essentialism. But the yearning was there, all the same, for a way of doing the work that felt sustainable. We understood that to achieve that sort of sustainability, you had to wind back the clock: to a different sort of job market and configuration of academia, sure. But that set-up was also built on the understanding that every academic would be able to off-load the most time-consuming of tasks to their assistants. Some of those assistants took the form of their wives. Some were graduate students and research assistants. And some were actual department secretaries, who oversaw and facilitated the quotidian tasks that now absorb so many academics’ lives.

Look at most any academic book written by a man before, oh, 1990, and you’ll almost certainly see evidence of this relationship in the Acknowledgments. Often, it’s the wife literally typing the book manuscript. Think how much more work was possible, with that labor outsourced! But you see it now, too: consult the books of a particularly prolific male academic, and the labor of his partner will be nowhere to be seen in the actual pages, but parsable, in loose code, in his note of gratitude at the end. Their labor isn’t just what makes the work possible, it makes the entire paradigm of swiftly climbing the career ladder with a family in tow possible.

Do I begrudge these people? Should you? It’s a complicated question, because what they’re doing is actually navigating the system as it was designed. The problem, then, is less the people who’ve excelled within that system, and more the system itself — which, like so many corners of the American labor market, still assumes the support of a partner that does not work full-time outside of the home. If you have that support, you will excel. If you don’t, you either have to make enough money to buy it, or you will (perhaps in slow motion, but inevitably) drown.

The assumption of domestic support is just one part of the equation, though. That support is what makes it possible for people to figure out care for their children and their elders, to ensure that they have nourishing food on the table, to make sure they get a solid, uninterrupted night of sleep, to feel comfortable inviting colleagues and peers to their homes, to feel confident that they’ll have clean clothes when they wake up, and that the problems with their cars or homes or health insurance coverage will be addressed, and their children’s summer camp schedules coordinated, and holiday cards distributed, and that there’ll be something “magical” in place for St. Patrick’s Day morning (yes, this is a thing now, don’t get me started). That support liberates them from the work of balancing the mental load — and, again, allows them to focus the bulk of their attention towards the work they do for pay and glory.

But the “rockstars” of any profession usually have another secret: they’ve found themselves in a situation in which they’re able to still do one job. I don’t mean that they’re not doing several things at once. I mean they’re not trying to do the work of three or four people. In the case of a CEO or president: they have at least one assistant, if not more. In the case of the successful, senior academic: they still have a small army of paid research assistants to do their book research for them and teaching assistants to do their grading and the intense work of interacting directly with students.

These sorts of jobs are increasingly rare. Just two generations ago, magazine journalists would often go into the field, interview a few people, then rely heavily on “researchers” (almost always women) who would flesh out their pieces with historical details and previous reporting. (See Lynn Povich’s The Good Girls’ Revolt for more details on how this worked at Newsweek). Part of the reason an ad man like Don Draper in Mad Men was able to take naps and be creative all day was because he had a secretary doing all the paperwork and meeting-setting and telling him where to be and when. He just thought big thoughts.

Contrast that set-up with today’s TV journalism, where the current expectation is for MMJs (multi-media journalists) to function as one-person bands, producing, filming, appearing on-camera, and editing their own content in the field, generally for a salary barely approaching a living wage. Comms job ads increasingly demand that applicants be well-versed in writing and editing — and graphic design and social media marketing. A secretary used to provide services for one department; now, she provides them for three.

Sometimes, this job “expansion” is the result of layoffs, downsizing, reorgs, or budget shortages, where remaining workers are told to do more with less — or leave. These over-filled jobs are legion in “passion” occupations (non-profits, education, caregiving) where the maxims of vocational awe make doing more with less a badge of honor. In many organizations, particularly anywhere where a consultant has been called in to “trim the fat,” the jobs that ensured that work was performed smoothly and without overload have been eliminated, the essential components of their job descriptions added onto those that remained. To look back at the last forty years of corporate layoffs is to watch so many employers forget — or be convinced to forget — that fat has an essential purpose.

Some job descriptions expanded alongside the ongoing assumption that new technologies (photocopiers, word processors, computers, faxes, email, Slack, Teams, Zoom, online calendars, project managers, or automating shortcuts) dramatically reduce workload, thereby justifying the number of tasks that have been pushed onto the plate of a single worker. In reality, those technologies do simplify an existing task — but they also add a new, complex layer of additional work. As I discussed with Sarah Marshall on this week’s You’re Wrong About, email did not kill the memo; it created a different, wilder, more slippery sort of memo.

(And then there’s the additional workplace labor required of women, and people who are not gender conforming, and people of color in the workplace — whether related to “the grooming gap,” or reflecting and absorbing microaggressions, or having to constantly arrange your face so as not to be mistaken for bitchy, or aggressive, or “not a team player.” To be clear, not being a white cis male in the workplace isn’t a job unto itself. But it’s not not a job, either.)

Whatever the reason for job role expansion, the effects are the same. There’s usually shitty or non-existent management, a general feel of precarity in the air (even if everyone’s jobs are, in fact, pretty secure) and horrible work boundary hygiene, because the only way to actually do all the things expected of you is to allow work to spill outside of “normal” work hours: into the night or weekend, into parental or sick leave, and into actual PTO, if you give yourself permission to take it.

Employees struggle to adequately perform work that, if someone were to look closely and objectively, is so obviously the work of more than one person. But because these are salaried jobs, generally but not always without union protections, that level of work — within the organization, but also within an industry — is just….the way things are. Workers have no choice but to hunker down and do it, because if you don’t, you’re “not a good culture fit.”

This sort of job creep was worsened, in multiple ways, by the pandemic, but the pandemic was not its catalyst. These were already smothering jobs, devouring jobs that combine with the similarly expanding expectations of child and eldercare to become something darker and more volatile. These days, I think of these jobs as not just expanding, but on the verge of exploding. That might sound dramatic, but the connotation feels right. Stay in these jobs long enough, and they’re guaranteed to leave scars.

We’ve reached a point of diminishing returns when it comes to productivity, creativity, concentration, cooperation, plus all of the other skills we try to cultivate alongside the labor we do for pay. It’s easy to feel like you’re actively getting worse at your job when you’re past the point of burnout. People might not be quitting work for good — shit feels too precarious for that — but many are quitting industries. Or, if they’re still in their fields, they’re searching for jobs that offer some other way: a staggering 44% of employees are currently looking for a new job.

What do you do with a stat like that? It’s not that people don’t want to work. It’s that their jobs feel, for whatever reason, unsustainable: unsustainable for their mental and physical health, but also unsustainable for their family, and their longterm survival. Many people actually really like the work that they do — or would, if they were, indeed, allocating the bulk of their time to doing that work. And they could do it so well, for so much longer, with so much more creativity and precision, if they were just doing that one job, instead of the three currently required of them.

In pursuit of growth, we have whittled our systems down to the most lean versions of themselves. In the name of lowering taxes on the rich, we have pushed our public apparatuses to the point of bare functionality. We are living in a moment of untold abundance, yet so many have opted to survive on the gristle of human experience. (Most) salaried workers are so much more financially stable than so many others, but I find us to be far less generous. We are bad, fearful neighbors; we funnel terror of downward mobility into competitive parenting practices; we are fiercely protective of the little time called our own after navigating the demands of our paid and unpaid labor. We’re flaky, we’re bad at showing up, we have little tolerance for even slight inconvenience, we stew in our own dissatisfactions without acting, because who has the time or energy to act, to do something, when you’re working all the time? People who aren’t gaslit by the ever-expanding demands of their jobs, and with far healthier relationships to work — that’s who.

When it’s left to the individual to resist a system like this, these jobs will continue to expand their expectations of what a single worker can provide. And who will survive the gauntlet? The same people who are able to survive it now: men — in particular, white, cis-gender men with wives — but also anyone who can lean in or acquire the grit or girl boss in a way that approximates those men. The bar for acceptable and expectable work loads will just keep moving higher, as everyone else keeps stretching themselves as thin as possible to reach it before collapsing on the ground, convinced the failure was theirs alone.

So what’s the solution? It’s not to somehow procure everyone a wife — or, more practically, an underpaid virtual assistant from the global south, and then another, and another — in order to make a job accomplishable for “one” person. Instead, each organization has to ask itself: what is the work we want, and need, to do? And if you are utterly unwilling to hire more people to do that amount of work, and utterly unwilling to decrease the amount of work itself, then you should be honest with yourself: you’re fine with the human wreckage, you’re fine with moral injury, you’re fine with churn, you’re fine with continually unraveling societal bonds, you’re fine with snow-capped organizations, you’re fine with the enduring wage gap, and you’re fine with the toxicity that pervades your company.

But if you are not, in fact, fine with any of those things, and want to live by your stated beliefs — as a leader, as an employee, as a member of this society trying to make life survivable, not just for yourself, but for others — you need to start over when it comes to the way you arrange work. The wreckage will come, one way or another. It’s up to you to decide whether it’s the status quo that gets blown up — or your actual employees.

And if you’re an employee, what do the leaders at your organization say they believe — and what does what they expect of you, and your co-workers, actually communicate? No job is perfect. But you can sense pretty quickly if a company is actually blind to the ways it’s set up to blow you up, or if that explosion is designed as a test to survive. Sure, maybe you can survive it, and continue surviving it. Humans are resilient. We can, and have, endured so much. But it’s worth asking, particularly when it comes to work: at what cost, and at whose exclusion?

Looking for this week’s Things I Read and Loved? Become a Paid Subscriber!

Subscribe now

Subscribing is also how you’ll access the heart of the Culture Study Community. There’s the weirdly fun/interesting/generative weekly discussion threads plus the Culture Study Discord, where there’s dedicated space for the discussion of this piece (and a whole thread dedicated to Houseplants games), plus equally excellent threads for Career Malaise, Productivity Culture, SNACKS, Job Hunting, Advice, Fat Space, WTF is Crypto, Diet Culture Discourse, Good TikToks, and a lot more.

If you’ve never been part of a Discord: I promise it’s much easier and less intimidating than you imagine. Finally, you’ll also receive free access to the audio version of the newsletter via Curio.

As always, if you are a contingent worker or un- or under-employed, just email and I’ll give you a free subscription, no questions asked. If you’d like to underwrite one of those subscriptions, you can donate one here.

If you’re reading this in your inbox, you can find a shareable version online here. You can follow me on Twitter here, and Instagram here — and you can always reach me at

1 public comment
26 days ago
Part of the reason your job sucks so much is that we decided work that women do/did was so easy anyone could do it in just 10 minutes a day.

A list of new(ish) command line tools

3 Comments and 8 Shares

Hello! Today I asked on twitter about newer command line tools, like ripgrep and fd and fzf and exa and bat.

I got a bunch of replies with tools I hadn’t heard of, so I thought I’d make a list here. A lot of people also pointed at the modern-unix list.

replacements for standard tools

new inventions

Here are some tools that are not exactly replacements for standard tools:

  • z, fasd, autojump, zoxide (tools to make it easier to find files / change directories)
  • broot (a file manager)
  • direnv (load environment variables depending on the current directory)
  • fzf, peco (“fuzzy finder”)
  • croc and magic-wormhole (send files from one computer to another)
  • hyperfine (benchmarking)
  • httpie, curlie, xh (for making HTTP requests)
  • entr (run arbitrary commands when files change)
  • asdf (version manager for multiple languages)
  • tig, lazygit (interactive interfaces for git)
  • lazydocker (interactive interface for docker)
  • choose (the basics of awk/cut)
  • ctop (top for containers)
  • fuck (autocorrect command line errors)
  • pbcopy/pbpaste (for clipboard <> stdin/stdout) maybe aren’t “new” but were mentioned a lot. You can use xclip to do the same thing on Linux.
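Since most of these tools track a classic counterpart, a concrete pairing may help. Below is a minimal sketch for `choose`, whose classic counterparts (`awk` and `cut`) ship everywhere; the `choose` invocation is left commented out, both because it may not be installed and because the 0-based field syntax shown is my paraphrase of its docs, not something from the post:

```shell
# `choose` selects whitespace-separated fields by 0-based index
# (commented out in case it isn't installed on your machine):
#   printf 'a b c\n' | choose 0 2

# The POSIX equivalents use 1-based fields. With awk,
# print the first and third whitespace-separated fields:
printf 'a b c\nd e f\n' | awk '{print $1, $3}'

# With cut, which needs an explicit single-character delimiter,
# print the second colon-separated field:
printf 'a:b:c\n' | cut -d: -f2
```

The appeal of `choose` is that it guesses the delimiter and drops the 1-based bookkeeping, which is exactly the boilerplate the awk and cut lines above carry.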


json things:

  • jq (a great JSON-wrangling tool)
  • jc (convert various tools’ output into JSON)
  • yq (like jq, but for YAML). there’s also another yq
  • fq (like jq, but for binary)
  • fx (interactive json tool)
  • jless (json pager)
  • xsv (a command line tool for csv files, from burntsushi)
  • visidata (“an interactive multitool for tabular data”)

grep things:

some of my favourites

My favourites of these that I use already are entr, ripgrep, git-delta, httpie, plocate, and jq. I’m interested in trying out btm, z, xsv, and duf.

33 days ago
gotta try some of these myself
2 public comments
30 days ago
Amazing list
33 days ago
Ripgrep is life changing

A Fleshy Pink Gradient: Inside Cryptoart's Quest for Aesthetic Definition

1 Comment and 2 Shares
As cryptoart defenses get just a little bit smarter, a new narrative emerges: NFT artists are diverse outsiders, but conveniently are all on the same side! What does the actual range of NFT aesthetics look like, though, and just how aligned are their interests really?

In the face of relentless criticism, skepticism, and derisive jeering, supporters of cryptoart and NFTs have gotten a little bit cannier about their rhetoric. There's a lot more handwringing about making sure that emissions have "offsets", for example, and some hasty and late attempts at situating cryptoart into a wider art history--of the avant garde, of computer art, of outsider art. Unfortunately for me, they've paired these defenses with a lot of self righteousness. Like, they've got a persecution complex now, and it's such a drag. Check out this tweet for example from (artist? art promoter?) Sara Ludy:

With all the cancelling/shame/hate/death threats being projected at artists for minting NFTs, I'm reminded of this passage about the violent history of computer art. We need to do something before this gets even further out of control, we cannot repeat this.

Yeah ma'am, "first they came for the ponzi schemes and I said nothing for I was selling real shit on itchio". The tie-in to "cancel culture", a thing exclusively decried by economically comfortable cis people with large platforms despite it actually predominantly ruining the lives of impoverished queer indie artists, is a nice rhetorical touch of course. But hold up, let's take a look at that passage she mentions:

See now this is pretty interesting actually. I do think it's pretty absurd on its face to compare the reception of computer art to say Byzantine iconoclasm. After all, there's quite a difference between "a fellow artist" calling someone a traitor, and having your eyes gouged out, which the Byzantines were as I remember it pretty fond of doing to their political rivals. But there's something here to actually respond to, an actual academic source that positions cryptoart as part of a long line of computer art practices, an avant garde attacked by traditional artists! There's an actual rhetorical move here, an argument of sorts that I can respond to! I couldn't help myself, dammit. You can take the Zoe out of the academy but no matter how many structurally induced nervous breakdowns you give her you can't take the academic out of the Zoe.

Comrades, I pirated the book.

And the book paints a very different picture of the history of computer art than the simple binary presented here of computer artists united in their persecution by an inflexible art world.

I have to credit the rhetorical use of the book, though, as part of this new wave of more refined defenses of cryptoart. One interesting insight that comes from looking at art as part of a field of social production is that the game of achieving legitimacy happens on multiple boards at once. Like: if you find yourself struggling to break free from the commercial tendencies of your medium, it makes sense sometimes to try and distance yourself from your roots, find theoretical approaches that consecrate your work, possibly at the expense of your peers. So, your "comics" become your "graphic novels": you make a move to another part of the field of social production for your medium.

But another strategy might be to take the medium itself and move its whole place on the wider board of social production for ALL art and entertainment. Rather than just competing with peers on the game board of "comics", you create a "movement" for comics rights and artistic recognition. So, having established that "biff! pow! comics aren't just for kids anymore!", your "graphic novels" not only are an avant garde within your own field, but gain cultural clout, along with ALL your peers, in comparison to movies, paintings, theater, novels, &c.

That's what this conflation of NFTs with all of computer art, and the situating of computer art as a maligned avant garde that deserves critical reassessment, attempts to do. Cryptoart is the avant garde within computer art, and computer art itself deserves a more privileged place within culture. You play the game on multiple boards at once, and create an idea of "comic creators" or "computer artists" in order to develop a mass base of support among an imagined group of peers, a base which will join in a fight that conveniently elevates your own position in the wider marketplace.

The problem with this rhetoric is that not everyone you're dragging with you as you move your game board necessarily wants to be lumped in with cryptoartists, and in fact not everyone involved in computer art wants to be tied to other computer artists they see as hacks or corporate cheerleaders. This rhetorical move paints computer and cryptoartists as being united, but that's far from the case. Plenty of gif artists, 3d modelers, and webcomickers were interested in their media precisely because digital reproduction wasn't bound by the same restrictions as print media. This parallels the history of computer art as a whole: much of When The Machine Made Art is dedicated to the often dramatic conflicts within the computer art movement. In context, Ludy's quote is part of a complex picture of a movement of artists making individual moves--some to promote computer art as a field, yes, but others to break from their peers for commercial, ideological, and aesthetic reasons. Out of context, it conveniently buffs away these complexities. In other words, the rhetorical utility of the quote comes from the virtual certainty that unless an autistic trans woman with years of academic brain poisoning and bitterness happened to see the tweet and actually read the book, those complexities would remain flattened!


Broadly speaking, this seems to be the trend in the academic or pseudo-academic boosterism around cryptoart. First, the claim that cryptoart is at once wildly diverse and yet without any internal conflicts or contradictions, a united field of upstart mavericks. Second, the assumption that none of the pseudoacademic language around cryptoart will face any sort of real scrutiny.

The optimistically entitled "In Search of an Aesthetics of Crypto Art" is a solid example of the genre. The paper as a whole is fairly silly stuff, which I'll freely admit I mostly skimmed. It relies on STEM posturing to boost its claims to authority (there are so many graphs!) but it doesn't say very much at all about the field. For example, rather than trying to critically describe what they see, they opt to find the most popular tags on the website and graph them according to average sale price and... wait does that say "Average Views"? But the X axis is popularity of use. Did they actually mistitle the graph? Hooboy.

This is a great example of the "great diversity that is unified and without contradictions" mode of cryptoart criticism. At its core, this sort of number crunching exercise is not a search or a successful recovery of an aesthetics of crypto art, or exploration of the multiple aesthetics within cryptoart, but a guide to optimizing marketing gimmicks. Imagine if we actually did this to other forms of art. You'd only be able to describe Ad Reinhardt as part of the "#black and sometimes dark blue,#2d" movement. This doesn't identify trends of genre the way an art historian or critic would, but the way an investor would, which makes their faffing about with phrases like "neoliberal flattening" laughably absurd. Equally absurd is their portrayal of the cute party trick of overlaying and averaging 22 thousand NFTs into a vague pink gradient as "suggestive of libidinal melancholy" and having "an unmistakable allure of technostalgia":

I keep running into a fundamental problem with cryptoart defenses: I can't really reply to an argument that doesn't exist as anything more than a vague phantasm or impression. A lot of writing on the subject is just that--impressionistic, like a text generator's idea of what academic texts sound sort of like. Despite a smug denunciation of the old art world and its impenetrable "art speak", the style of the paper is jargon-laden and obscure. When translated into English, the whole thing comes out as word salad:

While therefore useful, aesthetics remains highly problematic, premised as it is on the ‘disinterested’ viewpoint of a supposedly neutral agent who is, in reality, the very definition of an elitist white male spectator. Crypto art’s detractors might therefore argue that this kind of ‘aesthetics’ is no more than NFTs deserve. Certainly, it would take a monumental act of sophistry to claim for crypto art the same kind of ‘autonomy’ from mass culture that Theodor Adorno once tried to claim for art. If anything, crypto art is the most rarefied (or at least the most recent) product of the cultural industries, represented by exactly the community of multidisciplinary artists historically excluded from the art world. Its problem is how to reconcile further environmental damage with the potential of NFTs to redeem a generation of digital creatives from lives of economic precarity.

Let me try to translate. "Aesthetics is useful but assumes the ideal critic is an elite white male. [Teacher's note: all of aesthetics, through all of history, or just certain schools of aesthetics?] Skeptics might say this fits the field of NFTs. [Teacher's note: because NFTs are overwhelmingly minted by elite white men? If this is so, state it directly instead of using a euphemistic sentence construction.] Certainly, cryptoart is undeniably subject to cultural markets. [Teacher's note: wait, hold up, what about the critique about elite white men?] If anything, cryptoart is the current peak of commercial art, made up of artists and art forms not part of non-commercial art. [Teacher's note: ok, that's sort of a tautology, but again how'd we get from the first two sentences to here?] Its problem is how to make a bunch of digital artists a lot of money despite the huge environmental cost." By the end of the paragraph, the problems raised in the beginning have been completely forgotten and switched out, by the sleight of hand of equating commercial producers ("community of multidisciplinary artists") with *diverse* producers (not "elitist white male spectators"). I have no idea where "aesthetics" went in this paragraph, having begun as a central problem and ended as not even worth mentioning. Someone really ought to let cryptobooster academics know that just because they're writing about cryptographically signed art doesn't mean they must make such a hash of their rhetoric.

What we have then is a kind of machine intelligence art criticism. Like the old "colorless green ideas sleep furiously," the criticism follows familiar grammars of both traditional and more data-driven art analysis, but the actual contents are gibberish. It is order without insight.

So like I thought, right, what if we did the opposite and achieved insight through pure bloody chaos:

I'm sharing this diagram I made for myself in part because I think it's funny to be like "oh throw out this scientific assessment, use THIS instead" and then post what visually is an insane woman's conspiracy theory board covered in jokes about the subject matter. But also, why wouldn't I share my artistic and critical process like this? I'm not bashful about it being messy, subjective, and intuitive, and don't need to pretend to a STEM objectivity premised on- oh, what was it again? "A supposedly neutral agent who is, in reality, the very definition of an elitist white male spectator"?

No, I just drifted, like any good situationist, through the top seller streets of superrare and jotted down my impressions of what I saw. I took seriously the idea that there was a diverse field of artists and artforms to uncover on these marketplaces. The diagram is a living thing: I just added some more sections today because when I started this, in the middle of my entire community being evicted, Bored Apes hadn't completely taken over as emblematic of cryptoart. So, belatedly, I added a broad category of "AVATARS" roughly where I think that kind of art finds itself (sort of in the neighborhood of crypto triumphalism, commercial art in revolt, and concept art without a source text). I drew another crazy line between the hashmasklikes up to what I've labeled "Im Basquiat", though, because there's this whole segment of avatars that deliberately takes from "urban" aesthetics, freely pilfers black culture, &c. Do you see how this kind of works? This isn't an authoritative text but rather me trying to map out for myself lines of affinity and aesthetic association. And hey, I'm not the only person to notice some of these trends:

This might be the first piece of cryptoart I actually like. It actually has, like, real jokes! Jokes that work! I just laughed out loud when my brain registered the art used for the "GAN" category! And you know I think the little border around that icon to integrate it better into the emoji aesthetic of the image is pretty clever. Maybe I've just had my standards lowered by looking at so much of this stuff, but this actually feels like it has a sense of humor, self-satire, and effort that almost no other cryptoart possesses, and it's executed in a way that takes advantage of the iconic art style to set up a bunch of reasonably fun visual gags. I mean it's even giving me the itch to go on a rant about why emojis do not constitute "a language". I won't because sheesh gang I have SOME self control, but hey we've finally found some cryptoart that provokes that kind of response in me, good work that man.

One of the things that drew me to this was specifically the identification of "Offbrand Basquiat" and "Pseudo Picasso" as trends. I quickly accumulated a similar array of names on my own chart, named after the tweet in which internet poet laureate dril claimed authorship of the work of enigmatic street and conceptual artist Banksy: "im banky." Not only is there a lot of "im banky" going around in crypto art, and a lot of "im basquiat", there's also quite a bit of, say, "im koons", "im rothko", and so on. If this one artist is taking notice of the trends... why did it escape the authors of the paper ostensibly on the aesthetics of the movement?

Maybe because it exposes a level of contention and self-reference that goes beyond just "ha ha isn't cryptoart so wacky?" to actually take swipes at other creators. Instead of joining hands with the people elevating the whole art form, it suggests some cryptoartists might be a little fed up with their peers. Am I putting words into the mouth of the artist obxium? Maybe... but out of idle curiosity I checked out his twitter and just a few hours ago as of writing this sentence he had tweeted out a really quite solid analysis of why "all NFTs look the same". A way more concise analysis than my own, though I'll try not to take it too personally 😕. To me, this suggests not a unified movement but one, much like computer art historically, with potentially significant ruptures.

I increasingly felt this way as I jotted down notes on the affinities in the field. They weren't all connected, and some sectors felt notably isolated from each other. Like, one of the big sectors is what I've termed "Commercial Art In Revolt". This encompasses everything from Corporate Memphis On The Immutable Blockchain, to, as it shades into countercultural spaces, Banksyesque edgy deconstructions of graphic tropes. For graphic and industrial designers looking to expand their income streams beyond the flakey world of corporate commissions, cryptoart potentially offers a way to move themselves in the field of social production into greater legitimacy, and with it achieve higher earnings and greater stability.

The thing about this is that it also shades into a field of art that happens to be, well, ssstupid? This is the Blockchain Wank field, ethereum and bitcoin logos everywhere, a cultural blighttown encompassing everything from Logan Paul Wank to general tech triumphalism to Elon Supremacy. It's all pretty dire and it's all pretty popular. A LOT of Beeple stuff falls into this general region of the map, including that first dumb Paul vs Mayweather piece I dragged. Where it shades into the Commercial Art In Revolt sector, I think you get a lot of pieces about the way Ethereum will liberate us all: graphic agitprop created not for a state or corporation but for the mass fandom/whale base of crypto enthusiasts. They know they have a ready audience for like a space marine pulling the ethereum logo out of a demon lord's skull or whatever, cause this milieu fucking loves being propagandized to in the corniest way imaginable.

Let me shift to another part of the map though. There's a diverse range of works at the top of the pile worth examining. These are people who work in gifs, videocollage, digital painting, pixels, voxels, 3d renders... immaterial forms of art practice. I'm making a distinction for the sake of convenience and due to the differing nature of production, aims, and finished product, from both procedural art (grammar based generation, GAN art) and pure geometrism (I heard you like cubes so I put some cubes in your cube so you could recapitulate the modernist dream of techno-futurism while--you get it). Rather than these practices which tend to involve programming prowess and a focus on process, here I'm talking about people who are more like sculptors, painters, film directors, print makers, &c. They just have the bad fortune of picking media that the art world has a harder time commodifying. For them, there really IS an argument to be made that they represent an avant garde that the establishment's been slow to consecrate.

Do I seem more sympathetic to this bunch? It helps that they're subjectively what I consider art that doesn't suck shit. And also, I do get it, I get the frustration of being shut out of traditional avenues because your medium is digital and your mindset doesn't line up with the commodity form the traditional art world expects.

Though, you know, on the other hand, the response I and all my friends picked was "fuck the commodity form, try and build something new" not "invent a new commodity form that requires us to drown New Orleans in order to make it run". So actually maybe fuck them after all. But you can perhaps see the dilemma for these people!

And cryptoart has pushed them into another dilemma. Because cryptoart is a unified platform, whose boosters are profoundly committed to selling the world on the marketplace as a whole, these different milieus find themselves crammed together. It maybe doesn't matter right now while whales are gobbling up just about anything they think will be an Investment, but I can't help but wonder how it feels for someone sincerely trying to sell interesting gif art to sit next to a Pepe smoking a bong shaped like the Ethereum logo. Maybe they don't give a shit! I just wonder is all, because the history of digital art has always been contested, and I can't imagine that will just stop because of The Blockchain.

And in particular I wonder what will happen to all the kinds of computer art that don't seem to have much of a place in the cryptoart ecosystem. Next time we'll continue diving into this diagram, the history of computer art, and whose work gets buried when you simply average all this art together into one big neutral pink blob.

This Has Been

A Fleshy Pink Gradient

