
All People Are Created Educable, a Vital Oft-Forgotten Tenet of Modern Democracy


Book cover: Who Owns the News? A History of Copyright, by Will Slauter

(I have one of my more traditional history posts underway, but wanted to post this separate thought first. Felt timely.)

Many shocking, new ideas shaped the American Experiment and related 18th century democratic ventures; as an historian of the period, I often notice that one of the most fundamental of them, and most shocking to a world which had so long assumed the opposite, often goes unmentioned — indeed sometimes denied — in today’s discussions of democracy: the belief that all people are educable.  I think it’s urgent that we bring that principle back into the spotlight if we want to defend democracy from one of its common failure modes: pseudo-populist oligarchy.

Within “all men are created equal” lies the sub-principle that all people, or specifically all enfranchised citizens of a state (which at the time often meant white male adults, though some made it broader, or narrower), are, if given appropriate educational resources, capable of learning, exercising sound judgment, and acting on said judgment, and thus that all people are equally rational and capable of competent self-governance. This thesis does not assume that all people when adults are equally prepared to participate in government, but that all people when born have the capacity to absorb education if given access to it. Rare intellectual disabilities might make the education process challenging for certain individuals, but (the thesis argues) even then the right support and resources make education possible, and such situations are not the default human state. This is the thesis that all people are fundamentally educable.

Many in the 18th c. who thought democracy was absurd rejected it because they disagreed with this thesis, believing that the majority of people (even of white men) were not educable, i.e. that even with educational resources most people were born incapable of being guided by Reason and making sound political judgments. Those who believed this predicted that government by the people would collapse into absurdity, since it would be led by a parliament of fools. We get a taste of what such critics of democracy thought would happen to America in the satirical scenes in Shakespeare’s Henry VI Part 2 in which Jack Cade’s populist rebels happily kill each other and laugh about it, and believe they can end hunger by having everyone eat on the king’s tab at restaurants and making the gutters run with wine (these scenes are also the source of the much-misunderstood “First thing we do is kill all the lawyers”: killing the lawyers is their step 1, and executing everyone who can read is their step 2) — this is what many 18th c. anti-democrats believed would happen if governing was truly done by the people.

Drawing of a mob of peasants brandishing weapons with two severed heads on spears, with Jack Cade waving a sword above them all.

1867 Illustration of Jack Cade and his rebels with the severed heads of Lord Say and his son-in-law, hard-working administrators, killed because Lord Say built a paper mill, supported books, and spoke Latin. Shakespeare is very overt in his depiction of the imagined savagery of a self-governing mob.

Modern people often have trouble wrapping their heads around how sure pre-modern Europeans were that human minds and their capacities (A) varied fundamentally, (B) were locked in at birth and immutable, and (C) were only very rarely rational or educable. This doesn’t mean elite education; it means any education, grasping the basics beyond I’m hungry and I want to eat that fish. Plato and Aristotle (and many transformations thereof over 2,000 years) described a human soul/mind led by three forces: the appetites, the passions, and the intellect, i.e. reason. The appetites were simplest and most bodily: I’m hungry, I’m thirsty, I’m tired and want to rest, I’m bored and want entertainment, I’m horny and want sex, my arms hurt, I don’t want to carry this anymore. The passions we might call mental but worldly: pride, ambition, loyalty, patriotism, I want to be famous, I want to be respected, I want to be well-talked-of in the city, I want to protect my way of life, I want to have power, I want to advance the glory of the state, I want to battle evil, etc. Reason, or the intellect, was the calculating, understanding, and contemplative power, which did math, understood the universe, aspired to the spiritual and eternal (whether Justice or the Pythagorean theorem) and exercised ethical judgment, weighing goods and bads and deciding the best course (Eating this whole jar of pickles would be yummy but then I’ll get a stomachache; electing this demagogue would make me rich but then he would tyrannize the state.) Both Aristotle and Plato say that different souls are dominated by different organs of the soul (i.e. either the appetites, passions, or intellect) and that only a tiny minority of human souls are dominated by the intellect, a larger minority by the passions, and practically all by the base appetites. Plato’s Republic uses an exam/aptitude system to identify these rare souls of gold (as opposed to silver = passions, bronze/iron = appetites) and make them rulers of the city, and proposes a eugenicist breeding program to produce more.

The principle that souls of gold (i.e. souls fully capable of being educated & of wise rule) are a tiny minority, and that most humans are immutably not educable from birth, was very thoroughly absorbed into European belief, and dominated it for 2,000 years. In Dante, we see the entire structure of Hell revolve around the appetites/passions/intellect distinction. Medieval epistemology, psychology, and even ideas about medicine and plants incorporated this principle, and spun elaborate explanations for how and why different souls perceived the heavenly world (Good, Justice, Providence) better than others. Eugen Weber’s powerful history, Peasants into Frenchmen: The Modernization of Rural France, 1870-1914, shows how people in the period wrote about their own French peasants in incredibly insulting, infantilizing, quasi-bestial terms, strikingly similar to the racist language we’re used to seeing the Age of Empires use to demean non-Europeans. Anyone who hasn’t looked at period sources will struggle to believe how ferociously confident the European majority was in the thesis that the majority of people even in their own country could never understand a book, a moral quandary, or a political proposition. In this view, keeping the rare wise elites in charge was the only barrier between order and savagery. The fact that so many people were willing to believe in the totally mythical tragedy of the commons (yes, it’s totally invented; real peasants took great care of their commons) is one relic of how certain people were, for a long time (and some still are), that most people are not capable of making the kinds of prudent, sustainable judgments necessary for custodianship of a polity.

It took a lot to get even a small radical fringe by 1750 to entertain the notion that all people–or even just all men–were created equally educable. A long survey of the causes would get unwieldy, but they include (among other things) contact with indigenous cultures in the Americas and other regions which had functional governments without European-style systems, revolutions in medicine and the understanding of the sense organs which undermined old hierarchy-enforcing ideas about how cognition and sensation functioned, second-order consequences of the rags-to-riches path opened by Renaissance courts employing scholars from any background so long as they had good Latin, and Protestantism’s second-order implication that, if people didn’t need priests as intermediaries between their prayers and God, perhaps they didn’t need aristocrats as intermediaries between them and power. But by 1750 that fringe existed, and had enough momentum to implement its experiment in the new United States, which most people who were considered sensible at the time thought would quickly degenerate into chaos, because they didn’t think most people were capable of understanding the world well enough to vote sensibly, draft legislation, or serve in a congress, and because they expected the tiny wise minority to be drowned out by the majority wanting to vote for dining on the king’s tab and killing all the lawyers.

At this point, if this essay were a twitter thread, one would see the obligatory snarky self-proclaimed cynic pop up with a comment that America did degenerate into foolish populist chaos (look at the Trump voters!), and indeed I know of several Shakespeare companies that have put on Henry VI with Cade as Trump. That is why it’s so important to focus on the distinction between educated and educable, and on the fact that the claim made by America’s early founders and shapers wasn’t that all people are capable of ruling wisely, but that all people are capable of becoming capable of ruling wisely. This is why those who shaped America insisted so fiercely on universal public education; they believed (we have thousands of essays, letters, and documents to this effect!) that citizens would only be capable of being wise voters and rulers if they had access to a good education. Without education, they believed, people would indeed vote for foolish things, so they had to transform their populace, from one where rural peasants were starved for education, to one where everyone was invited to Reason’s classroom. They also believed that a well-informed public was vital, thus that news and newspapers were indispensable for democracy to function, which is why the early US government subsidized the shipping of newspapers and the circulation of knowledge through things like Media Mail–here see Will Slauter’s fantastic history Who Owns the News?

Now, at one point I helped my Ph.D. adviser James Hankins with his research on the history of conservatism.  We (mostly he) looked at many examples over many times, places, and regimes, and observed after innumerable case studies that a consistent defining characteristic of conservative thought over time is the belief that some people are better at ruling than others, thus that the best way to run a government and society is to put those superior people in power.  Whether it’s a hereditary aristocracy, an exam-based meritocracy, an earn-the-franchise-through-military-service timocracy, or a divine right monarchy, many systems posit that some are more capable of rule than others, and that the best system will put them in power.

These days, when I cite this definition of conservatism, invariably someone brings up Frank Wilhoit’s observation that “Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect.” While this is a very powerful summary of trends in 21st century conservatism, useful for thinking about a lot of current politics, it isn’t broad enough when we want to go back 1,000 years or more because (I know this will sound absurd) the idea that law is supposed to bind anyone is actually fairly new. In my period (the Renaissance), for example, law is mainly supposed to provide an Earthly portrait of divine judgment & mercy, and everyone is supposed to break laws all the time but then get the penalties waived, so the process of transgressing, being condemned, and being pardoned or let off with a lesser sentence gives the soul an ethically therapeutic preview of the universality of sin and the hope for being let off with just Purgatory instead of Hell; the idea of law actually binding or protecting anybody is maybe goal #24 in the lawmakers’ minds, with a lot of really weird-to-us-modern ones higher on the list. But in pre-modern and modern conservatism alike, we see the shared conviction that some people are fundamentally better at ruling (or just better) than others, and that one must put the better in power.

The thesis that all people are educable is fundamentally opposed to this.

Democracy can function, says Thomas Paine (to pick a spokesman for the US founders), because human beings are fundamentally educable, and if given a good teacher, a good reading list, and some newspapers, all human beings, or at least the overwhelming majority of them, will become capable of wise judgment and self-rule.  One’s civic duty is not to identify the wise minority and put them in power, but to disseminate the tools of education so the majority can become wise.  This thesis is opposed to aristocracy, to oligarchy, to timocracy, even to most forms of meritocracy, since education isn’t supposed to prepare people to be sorted out by the exam but to demonstrate that human beings are so excellent that everyone can pass it.

Let’s return now to our snarky self-labeled cynic, who points at Trump voters and people who are wrong on the internet to joke that most people are fundamentally too stupid to be educated. Setting aside the fact that the engines of social media currently make fringe and foolish voices far louder than sensible ones, making them seem like a majority, America at present does not live in the world of robust public education and state-strengthened free circulation of journalism which the minds behind the experiment thought were so essential. Today’s America has seen decades of the intentional conservative-led starving and squeezing of public education, efforts to increase the disparity in education quality between public education and private or charter school education, conservative-led homeschool movements which aim to expose people to a narrow range of ideology, the devastation of newspapers and journalism, and a vast misinformation campaign. All this adds up to preventing many who are educable from becoming educated. Thomas Paine, and those I’m using him to represent, would recognize this as a sabotage of their system, one they would say might indeed enable Cade-style populism, which (as in Henry VI) is easy for ambitious elites to then harness to their own ends. Thus, Paine would say: of course the democracy isn’t working well if such an essential precondition is being sabotaged.

In sum, we need to talk more about the vital tie between democracy and the conviction that all people are created educable. It helps make clear how strategic the strangulation of educational resources is, and that one of the less loud but most dangerous threats to our confidence in democracy is the project to make it seem like most people can’t make sensible political judgments, reducing people’s confidence in democracy as a system by seeming to prove true the old conservative principle that there will always be a few who should rule and many who can’t. When I see conservative thinking start to show up in acquaintances (or Silicon Valley leaders) who consider themselves progressive but also consider themselves smart, it often begins with them feeling that most people are stupid and the world would be better off if the smart were in charge. One can often get such people to pause and reflect by bringing up the question of whether they think all people are fundamentally educable, and whether the solution isn’t to put the reins of power into genius hands but to put the Encyclopedia in everyone else’s. Information is key. Those peasants who shared commons maintained them sustainably for centuries because (as we now recognize) they were educated in the ways that mattered: they learned from families and communities to understand what they were doing, using local knowledge of commons, grazing, etc. as they made choices. If one’s democratic state is the commons, people will likewise maintain it well, but not if they’re intentionally deprived of access to basic knowledge of how it works and what can harm or heal it, and drowned instead in deliberate falsehoods.

We all know we need to support education & good journalism, and combat misinformation, but revisiting the principle that all people are created educable is a good way to remember that these are not merely invaluable social goods, like sanitation or public parks.  They were conceived from the start as essential components of modern democracy, in direct opposition to the many-centuries-old conservative principle that some are best to rule and others to be ruled.  Enlightenment-style democracy cannot function without the conviction that all people are created educable.  If we forget that, if we doubt it, if we let it shake our confidence in the experiment which didn’t turn into Jack Cade for more than two centuries (bets were not on America surviving for so long in 1776!), we risk opening the gates to the old failure mode of oligarchy rising when democracy wavers.

P.S. Donate to Wikipedia – both Diderot and Thomas Paine would smile.

Another illustration of Jack Cade’s rebellion. The reality was indeed destructive, but performances of such events, like the myth of the tragedy of the commons, also served to reinforce the old thesis that the people cannot rule. Turns out, we can.


Software engineering practices


Gergely Orosz started a Twitter conversation asking about recommended "software engineering practices" for development teams.

(I really like his rejection of the term "best practices" here: I always feel it's prescriptive and misguiding to announce something as "best".)

I decided to flesh some of my replies out into a longer post.

Documentation in the same repo as the code

The most important characteristic of internal documentation is trust: do people trust that documentation both exists and is up-to-date?

If they don't, they won't read it or contribute to it.

The best trick I know of for improving the trustworthiness of documentation is to put it in the same repository as the code it documents, for a few reasons:

  1. You can enforce documentation updates as part of your code review process. If a PR changes code in a way that requires documentation updates, the reviewer can ask for those updates to be included.
  2. You get versioned documentation. If you're using an older version of a library you can consult the documentation for that version. If you're using the current main branch you can see documentation for that, without confusion over what corresponds to the most recent "stable" release.
  3. You can integrate your documentation with your automated tests! I wrote about this in Documentation unit tests, which describes a pattern for introspecting code and then ensuring that the documentation at least has a section header that matches specific concepts, such as plugin hooks or configuration options.
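
Here's a minimal sketch of what such a documentation unit test can look like, using pytest. The hook names, docs path, and header convention below are hypothetical stand-ins, not any particular project's layout:

from pathlib import Path

import pytest

# Hypothetical: the plugin hooks this codebase exposes (could also be introspected).
PLUGIN_HOOKS = ["render_cell", "extra_js_urls", "register_routes"]

# Hypothetical docs file, kept in the same repository as the code it documents.
DOCS = Path(__file__).parent.joinpath("docs", "plugin_hooks.md").read_text()

@pytest.mark.parametrize("hook", PLUGIN_HOOKS)
def test_plugin_hook_is_documented(hook):
    # Fail CI if a hook has no matching section header in the documentation.
    assert f"### {hook}" in DOCS, f"Missing documentation section for hook: {hook}"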

Mechanisms for creating test data

When you work on large products, your customers will inevitably find surprising ways to stress or break your system. They might create an event with over a hundred different types of ticket for example, or an issue thread with a thousand comments.

These can expose performance issues that don't affect the majority of your users, but can still lead to service outages or other problems.

Your engineers need a way to replicate these situations in their own development environments.

One way to handle this is to provide tooling to import production data into local environments. This has privacy and security implications - what if a developer laptop gets stolen that happens to have a copy of your largest customer's data?

A better approach is to have a robust system in place for generating test data, that covers a variety of different scenarios.

You might have a button somewhere that creates an issue thread with a thousand fake comments, with a note referencing the bug that this helps emulate.

Any time a new edge case shows up, you can add a new recipe to that system. That way engineers can replicate problems locally without needing copies of production data.
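
A minimal sketch of what one such recipe might look like, assuming a Django-style ORM; the Issue and Comment models (and the bug being emulated) are hypothetical:

import random

from myapp.models import Comment, Issue  # hypothetical models

def issue_with_many_comments(count=1000):
    """Recipe: replicate the reported slowdown on threads with huge numbers of comments."""
    issue = Issue.objects.create(title=f"Test data: thread with {count} comments")
    Comment.objects.bulk_create(
        Comment(
            issue=issue,
            author=f"user{random.randint(1, 50)}",
            body=f"Fake comment number {i}",
        )
        for i in range(count)
    )
    return issue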

Rock solid database migrations

The hardest part of large-scale software maintenance is inevitably the bit where you need to change your database schema.

(I'm confident that one of the biggest reasons NoSQL databases became popular over the last decade was the pain people had associated with relational databases due to schema changes. Of course, NoSQL database schema modifications are still necessary, and often they're even more painful!)

So you need to invest in a really good, version-controlled mechanism for managing schema changes. And a way to run them in production without downtime.

If you do not have this your engineers will respond by being fearful of schema changes. Which means they'll come up with increasingly complex hacks to avoid them, which piles on technical debt.

This is a deep topic. I mostly use Django for large database-backed applications, and Django has the best migration system I've ever personally experienced. If I'm working without Django I try to replicate its approach as closely as possible:

  • The database knows which migrations have already been applied. This means when you run the "migrate" command it can run just the ones that are still needed - important for managing multiple databases, e.g. production, staging, test and development environments.
  • A single command that applies pending migrations, and updates the database rows that record which migrations have been run.
  • Optional: rollbacks. Django migrations can be rolled back, which is great for iterating in a development environment, but using that in production is actually quite rare: I'll often ship a new migration that reverses the change rather than using a rollback, partly to keep the record of the mistake in version control.
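
To make the bookkeeping concrete, here is a minimal sketch of that pattern using SQLite. A real project should lean on its framework's migration tool (Django's, in my case); the table and migration names here are hypothetical.

import sqlite3

# Hypothetical migrations, applied strictly in order; each has a stable name.
MIGRATIONS = [
    ("0001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)"),
    ("0002_add_signup_date", "ALTER TABLE users ADD COLUMN signup_date TEXT"),
]

def migrate(conn: sqlite3.Connection) -> None:
    # The database itself records which migrations have already been applied.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS applied_migrations "
        "(name TEXT PRIMARY KEY, applied_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    applied = {row[0] for row in conn.execute("SELECT name FROM applied_migrations")}
    for name, sql in MIGRATIONS:
        if name in applied:
            continue  # already run against this database
        conn.execute(sql)
        conn.execute("INSERT INTO applied_migrations (name) VALUES (?)", (name,))
        conn.commit()
        print(f"Applied {name}")

if __name__ == "__main__":
    migrate(sqlite3.connect("dev.db"))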

Even harder is the challenge of making schema changes without any downtime. I'm always interested in reading about new approaches for this - GitHub's gh-ost is a neat solution for MySQL.

An interesting consideration here is that it's rarely possible to have application code and database schema changes go out at the exact same instant in time. As a result, to avoid downtime you need to design every schema change with this in mind. The process needs to be:

  1. Design a new schema change that can be applied without changing the application code that uses it.
  2. Ship that change to production, upgrading your database while keeping the old code working.
  3. Now ship new application code that uses the new schema.
  4. Ship a new schema change that cleans up any remaining work - dropping columns that are no longer used, for example.
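
Here is a hedged sketch of how those steps might look for renaming a users.fullname column to display_name; the table and column names are hypothetical, and real deployments may also need dual writes or triggers while both columns coexist.

# Each phase ships separately, in its own deploy.
PHASES = [
    # Steps 1-2: a schema change the old application code can safely ignore:
    # add the new column and backfill it while the old one stays the source of truth.
    "ALTER TABLE users ADD COLUMN display_name TEXT",
    "UPDATE users SET display_name = fullname WHERE display_name IS NULL",
    # Step 3 (not SQL): deploy application code that reads and writes display_name.
    # Step 4: clean-up schema change once no running code touches the old column.
    "ALTER TABLE users DROP COLUMN fullname",
]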

This process is a pain. It's difficult to get right. The only way to get good at it is to practice it a lot over time.

My rule is this: schema changes should be boring and common, as opposed to being exciting and rare.

Templates for new projects and components

If you're working with microservices, your team will inevitably need to build new ones.

If you're working in a monorepo, you'll still have elements of your codebase with similar structures - components and feature implementations of some sort.

Be sure to have really good templates in place for creating these "the right way" - with the right directory structure, a README and a test suite with a single, dumb passing test.

I like to use the Python cookiecutter tool for this. I've also used GitHub template repositories, and I even have a neat trick for combining the two.

These templates need to be maintained and kept up-to-date. The best way to do that is to make sure they are being used - every time a new project is created is a chance to revise the template and make sure it still reflects the recommended way to do things.
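
As an illustration, here is a hedged sketch of scripting such a template with cookiecutter's Python API; the template repository URL and the context keys are hypothetical and would need to match your template's cookiecutter.json.

from cookiecutter.main import cookiecutter

# Generate a new service from the team's template repository (hypothetical URL).
cookiecutter(
    "gh:example-org/service-template",
    no_input=True,  # take defaults rather than prompting interactively
    extra_context={
        "service_name": "payments-api",
        "include_test_suite": "yes",
    },
)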

Automated code formatting

This one's easy. Pick a code formatting tool for your language - like Black for Python or Prettier for JavaScript (I'm so jealous of how Go has gofmt built in) - and run its "check" mode in your CI flow.

Don't argue with its defaults, just commit to them.

This saves an incredible amount of time in two places:

  • As an individual, you get back all of that mental energy you used to spend thinking about the best way to format your code and can spend it on something more interesting.
  • As a team, your code reviews can entirely skip the pedantic arguments about code formatting. Huge productivity win!

Tested, automated process for new development environments

The most painful part of any software project is inevitably setting up the initial development environment.

The moment your team grows beyond a couple of people, you should invest in making this work better.

At the very least, you need a documented process for creating a new environment - and it has to be known-to-work, so any time someone is onboarded using it they should be encouraged to fix any problems in the documentation or accompanying scripts as they encounter them.

Much better is an automated process: a single script that gets everything up and running. Tools like Docker have made this a LOT easier over the past decade.
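
A minimal sketch of such a script, assuming the project ships a docker-compose.yml, an .env.example file, and a Django-style manage.py; every name here is hypothetical.

import shutil
import subprocess
import sys
from pathlib import Path

def main() -> None:
    if shutil.which("docker") is None:
        sys.exit("Docker is required: https://docs.docker.com/get-docker/")
    if not Path(".env").exists():
        # Start from the checked-in example settings.
        shutil.copy(".env.example", ".env")
        print("Created .env from .env.example; edit it if needed.")
    # Build images, start services in the background, then apply database migrations.
    subprocess.run(["docker", "compose", "up", "--build", "-d"], check=True)
    subprocess.run(
        ["docker", "compose", "exec", "web", "python", "manage.py", "migrate"],
        check=True,
    )
    print("Development environment is up; check `docker compose logs` if anything fails.")

if __name__ == "__main__":
    main()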

I'm increasingly convinced that the best-in-class solution here is cloud-based development environments. The ability to click a button on a web page and have a fresh, working development environment running a few seconds later is a game-changer for large development teams.

Gitpod and Codespaces are two of the most promising tools I've tried in this space.

I've seen developers lose hours a week to issues with their development environment. Eliminating that across a large team is the equivalent of hiring several new full-time engineers!

Automated preview environments

Reviewing a pull request is a lot easier if you can actually try out the changes.

The best way to do this is with automated preview environments, directly linked to from the PR itself.

These are getting increasingly easy to offer. Vercel, Netlify, Render and Heroku all have features that can do this. Building a custom system on top of something like Google Cloud Run or Fly Machines is also possible with a bit of work.

This is another one of those things which requires some up-front investment but will pay itself off many times over through increased productivity and quality of reviews.

zwol: I have thought for many years that schema validation, versioning, and migration should be built into SQL. Even just adding a command that checks whether a single table has the schema the application expects would be a huge help.

“Hello world” is slower in C++ than in C (Linux)


A simple C program might print ‘hello world’ on screen:

#include <stdio.h>
#include <stdlib.h>

int main() {
    printf("hello world\n");
    return EXIT_SUCCESS;
}

You can write the equivalent in C++:

#include <iostream>
#include <stdlib.h>

int main() {
    std::cout << "hello world" << std::endl;
    return EXIT_SUCCESS;
}

How fast do these programs run? We can check using a benchmarking tool such as hyperfine. Such tools handle various factors such as shell starting time and so forth.

I do not believe that printing ‘hello world’ itself should be slower or faster in C++, at least not significantly. What we are testing by running these programs is the overhead due to the choice of programming language.

Under Linux, when using the standard C++ library (libstdc++), we can ask that the C++ standard library be linked statically with the executable. The result is a much larger binary executable, but it may provide a faster starting time.

Hyperfine tells me that the C executable runs considerably faster:

C 0.5 ms
C++ (dynamic) 1.4 ms
C++ (static) 0.7 ms

My source code and Makefile are available. I get these numbers on Ubuntu 22.04 LTS using an AWS node (Graviton 3).

There might be methodological issues having to do with hyperfine. Yet if these numbers are to be believed, there is a significant penalty due to C++ for tiny program executions, under Linux.

One millisecond of overhead, if it is indeed correct, is a huge penalty.

zwol:
I wonder how much improvement C++ sees if you start it off with std::cin.tie(0); std::ios::sync_with_stdio(0);

But also, benchmarking “hello world” is pointless… there *are* programs (usually in the form of shell scripts) where process startup and short output overhead dominates; benchmark one of those, lest you waste a lot of time optimizing the wrong thing.

jepler:
That's true, but if you have an exec()-heavy workflow, it all adds up. This is super common in compiled language build workflows. Say you need to invoke the compiler 10,000 times. Lemire's .9ms difference becomes 9 seconds. Hopefully, those are spread across multiple cores but it could still add up to 1s or more extra time per build. I checked and "ccache gcc" in the cache-hit case for a simple input file can be under 3ms, so looking at it this way (ccache is a C++ program) the extra overhead of c++ would perhaps become 30% of your total build process. (this on my machine where hello stdio takes 0.4ms so not far off from lemire)

jepler, in a public comment:
OK if you think 0.5ms is not much ... `musl-gcc -static` is 0.2ms. A version which uses `write(2)` instead of stdio is 0.1ms (again using musl). Eliminating all libraries and just putting two syscalls (sys_write and sys_exit) in _start is 0.1ms.

Edited to add: It's too fast for hyperfine's UI to show things properly, it doesn't go smaller than 0.1ms. The json output reveals a median time of just 58us for the very fastest version, enough to round up to 0.1ms. That's crazy fast for starting & exiting a whole UNIX process!

A Walkable Internet


Sometimes I think that popular media’s fascination with counterintuitive propositions is a big contributor to what got us into this mess. I use the word “media” there to mean more than just major publications, but we’ll get to that later. Also, sometimes, I like to think up counterintuitive propositions myself, like software doesn’t mean “code,” it means “a system for consolidating control of the means of production.” Or maybe the Internet can be defined as “that which will promise you what you want.”

Lucy Bellwood presenting a slide. Photo by Stefan Shepherd, from Lucy’s extraordinary September 2016 talk, which I think about at least every other day.

I don’t offer these takes with any intention to defend them. I just think they’re useful mental calisthenics, valuable as alternative modes of thought to the definitions that creep into common idiomatic use: things like the Internet can be defined as “the most active population of large social media platforms.” I certainly use that shorthand myself, often in a scornful tone, despite my own attempts to stretch the popular conception of the Internet around the deconglomerated approaches that people these days call “IndieWeb.” One of the writers I admire, and linked to back in March when talking about this stuff, is Derek Guy of Die, Workwear and Put This On. Sometimes I sneak onto Twitter to see his dorky fashion memes, and today I discovered this, one of his more popular tweets of late. It has, as of this writing, numbers underneath it that far exceed its author’s follower count.

This is a gentle proposition, almost to the point of being anodyne. Maybe you disagree with it. I happen to agree, myself, as someone who has spent a number of years enjoying such a lifestyle; I agree in particular that it is luxurious, which is to say a luxury. One way I define luxury is an ephemeral privilege not to be taken for granted. Many people are systematically deprived of the privilege of walkability by the way that capital and its frequent servant, municipal policy, prioritize car travel and inherited wealth to create housing insecurity and food deserts. To me, that understanding seems built into the way these two sentences are constructed.

Three days after it was posted, I can sit here and watch the retweet and quote numbers on the post tick upward every time I breathe. I don’t think that’s due to positive attention.

I’m not here to write about how Twitter Is Bad. Even Twitter, as a body, agrees that Twitter Is Bad. I’ve written variations on that theme for ten years as of next month, and I can’t declare myself piously abstemious from social media when I’m quoting social media posts in my post about social media. The interests of capital demand that Twitter makes its graph lines go up; the simplest mechanism to make them go up is to incentivize conflict; the capital circulates through organizations until the system’s design iterates toward conflict optimization. Primed for bickering, just as the man says. The story of social media is the story of how billionaires made other people figure out how they could extract money from the late-twentieth-century invention of the real-time flame war.

I feel bad for Guy because I like his work and have a bit of a parasocial relationship with him: he is, more or less, the person who taught me how to enjoy shopping and wearing clothes. (I know many other people are subject to worse for less online, every day. I mean it when I say it’s Bad out there.) If not for Die, Workwear, I don’t think I would ever have chosen to take this series of self-portraits, a couple years back, wearing things I bought and liked just for myself.

Dress-Up

I posted those photos on Flickr, even though I have my own IndieWeb site where I can host as many photos as I want. Flickr is a social media platform. It’s a rarity, not in that it did not generate for its acquiring capitalists the graph numbers they wanted, but in that it was then left to molder in neglect instead of being defenestrated for its failure. I have strong disagreements about some recent choices by its current owners, whatever their best intentions. But at least it’s not Instagram. Flickr has, for many years, retained an interface bent toward the humane and curious, instead of capitulating to the wind-tunnel abrasion of those who value human life less than the ascendance of the line on the graph.

Another thing I posted on Flickr, back in 2018, was the set of photos I took with Kat on our trip to Budapest together. One of the places we visited was Szimpla Kert, a romkocsma or “ruin bar,” built almost twenty years ago in what was once the city’s Jewish quarter by people in its neighborhood who wanted to make something new out of something old. It was once a condemned shell of a building; now it’s a huge draw, with thousands of visitors on a given night, most of whom are tourists like us. Locals might disagree, but I did not find that its charm was diminished by the crowd. It was idiosyncratic, vibrant, complex, and unique. Hungary—like my country, and like the Internet—is a more worrisome place to live than it was a few years ago. But Szimpla seems to be thriving, in large part because it is knit tightly into its local community.

Szimpla Kert

“Szimpla Kert” translates to “simple garden.” I have a little experience with the allure of gardening, and also how much work a garden takes to maintain; I’m sure the people running Szimpla work very hard. But an interesting way of looking at a garden, to me, is a space for growth that you can only attempt to control.

In the middle of drafting this increasingly long post, Kat asked me if I wanted to take a walk up to her garden bed, which is part of a community plot a ways to the north of us. I was glad to agree. I helped water the tomatoes and the kale, and ate a sugar snap pea Kat grew herself right off its vine, and on the way back I picked up dinner from our favorite tiny takeout noodle place. It took over an hour to make the full loop and return home, and I was grateful for every step. An unhurried walk was exactly what my summer evening needed. I luxuriated in its languidness, because I could.

When you put something in a wind tunnel, you’re not doing so because you value the languid. I am far from the first person to say that maybe we could use a little more friction in the paths we take to interact with each other online. Friction can be hindering or even damaging, and certainly annoying; I’m not talking about the way we’ve somehow reinvented popup ads as newsletter bugboxes and notification permission requests. I just want to point out that friction is also how our feet (or assistive devices) interact with the ground. We can’t move ourselves forward without it.

It’s a privilege to have the skills, money, time and wherewithal to garden. You need all those kinds of privilege to run your own website, too. I think social media platforms sold us on the idea that they were making that privilege more equitable—that reducing friction was the same thing as increasing access. I don’t buy that anymore. I don’t want the path between my house and the noodle restaurant to be a conveyor belt or a water slide; I just want an urban design that means it’s not too far from me, with level pavement and curb cuts and some streets closed to cars on the way. I want a neighborhood that values its residents and itself.

This is why I’m just as interested in edifices like Szimpla Kert and Flickr as I am in the tildeverse and social CMS plugins and building the IndieWeb anew. Portland is the most walkable city I’ve lived in, and it ended up that way kind of by accident: the founders optimized for extra corner lots out of capitalist greed, but the emergent effect was a porous grid that leaves more space for walkers and wheelchairs and buses and bikes. The street finds its own uses for things, and people find their own uses for the street. Sometimes people close a street to traffic, at least for a little while. And sometimes people grow things there.

I don’t expect the Internet we know will ever stop pumping out accelerants for flame wars directed at people who just felt like saying something nice about a walk to the grocery store. That paradigm is working for the owners of the means of production, for now, though it’s also unsustainable in a frightening way. (I will never again look at a seething crowd, online or off, without thinking twice about the word “viral.”) But if someone who lives in Chicago can’t entirely ignore what suburban white people get up to in the Loop on St. Patrick’s Day, then one doesn’t have to go out of one’s way to join in, either.

I’m ready to move on from the Information Superhighway. I don’t even like regular superhighways. The Internet where I want to spend my time and attention is one that considers the pedestrian and unscaled, with well-knit links between the old and the new, with exploration and reclamation of disused spaces, and with affordances built to serve our digital neighbors. I’m willing to walk to get there.

A front-end developer and former colleague I admire once said, in a meeting, “I believe my first responsibility is to the network.” It was a striking statement, and one I have thought about often in the years since. That mode of thought has some solid reasoning behind it, including a finite drag-reduction plan I can support: winnowing redundant HTTP requests increases accessibility for people with limited bandwidth. But it’s also a useful mental calisthenic when applied to one’s own community, physical or digital. Each of us is a knot tying others together. The maintenance of those bonds is a job we can use machines to help with, but it is not a job I think we should cede to any platform whose interests are not our own.

The Internet will promise you what you want, and the Internet will not give it to you. Here I am, on the Internet, promising you that people wielding picnics have put a stop to superhighways before.

“Incomplete.” Photo by Diego Jimenez; all rights reserved.

Fifteen years ago this summer, I was exercising a tremendous privilege by living and working in London, in the spare room of an apartment that belonged to friends I met online. They were part of a group that met regularly to walk between subway stations, tracing the tunnel route overground, which they called Tube Walks. There was no purpose to the trips except to get some fresh air, see some things one might not otherwise have seen, and post the photos one took on Flickr.

My five months south of the Thames were my first real experience of a walkable life. I grew up in suburbs, struggled without a car in Louisville, and then, for the first time, discovered a place where I could amble fifteen minutes to the little library, or the great big park, or the neighborhood market, which would sell me just enough groceries for a single dinner. Battersea is not a bourgeois neighborhood, but it’s rich in growth and in history. It changed what I wanted from my life.

London, like Budapest, like Chicago, is a city that has burned down before. People built it back up again, and they didn’t always improve things when they did. But it’s still there, still made up of neighborhoods, still full of old things and new things you could spend a lifetime discovering. And small things, too, growing out of the cracks, just to see how far they can get.

Not sure where this little guy thinks he's going

Daniel Burnham, who bears responsibility for much of the shape of post-fire Chicago, was posthumously accorded the famous imperative to “make no little plans.” But I like little plans, defined as the plans I can see myself actually following.

I didn’t know where this post was going where I started it, and now it’s the longest thing I’ve ever published on this blog. If you read the whole thing, then please take a moment of your day and write me to tell me about a website that you make, or that you like, or that you want to exist. I’ll write back. More than ever, I want to reclaim my friendships from the machinery of media, and acknowledge directly the value that you give to my days.


SPAs: theory versus practice


I’ve been thinking a lot recently about Single-Page Apps (SPAs) and Multi-Page Apps (MPAs). I’ve been thinking about how MPAs have improved over the years, and where SPAs still have an edge. I’ve been thinking about how complexity creeps into software, and why a developer may choose a more complex but powerful technology at the expense of a simpler but less capable technology.

I think this core dilemma – complexity vs simplicity, capability vs maintainability – is at the heart of a lot of the debates about web app architecture. Unfortunately, these debates are so often tied up in other factors (a kind of web dev culture war, Twitter-stoked conflicts, maybe even a generational gap) that it can be hard to see clearly what the debate is even about.

At the risk of grossly oversimplifying things, I propose that the core of the debate can be summed up by these truisms:

  1. The best SPA is better than the best MPA.
  2. The average SPA is worse than the average MPA.

The first statement should be clear to most seasoned web developers. Show me an MPA, and I can show you how to make it better with JavaScript. Added too much JavaScript? I can show you some clever ways to minimize, defer, and multi-thread that JavaScript. Ran into some bugs, because now you’ve deviated from the browser’s built-in behavior? There are always ways to fix it! You’ve got JavaScript.

Whereas with an MPA, you are delegating some responsibility to the browser. Want to animate navigations between pages? You can’t (yet). Want to avoid the flash of white? You can’t, until Chrome fixes it (and it’s not perfect yet). Want to avoid re-rendering the whole page, when there’s only a small subset that actually needs to change? You can’t; it’s a “full page refresh.”

My second truism may be more controversial than the first. But I think time and experience have shown that, whatever the promises of SPAs, the reality has been less convincing. It’s not hard to find examples of poorly-built SPAs that score badly on a variety of metrics (performance, accessibility, reliability), and which could have been built better and more cheaply as a bog-standard MPA.

Example: subsequent navigations

To illustrate, let’s consider one of the main value propositions of an SPA: making subsequent navigations faster.

Rich Harris recently offered an example of using the SvelteKit website (SPA) compared to the Astro website (MPA), showing that page navigations on the Svelte site were faster.

Now, to be clear, this is a bit of an unfair comparison: the Svelte site is preloading content when you hover over links, so there’s no network call by the time you click. (Nice optimization!) Whereas the Astro site is not using a Service Worker or other offlining – if you throttle to 3G, it’s even slower relative to the Svelte site.

But I totally believe Rich is right! Even with a Service Worker, Astro would have a hard time beating SvelteKit. The amount of DOM being updated here is small and static, and doing the minimal updates in JavaScript should be faster than asking the browser to re-render the full HTML. It’s hard to beat element.innerHTML = '...'.

However, in many ways this site represents the ideal conditions for an SPA navigation: it’s small, it’s lightweight, it’s built by the kind of experts who build their own JavaScript framework, and those experts are also keen to get performance right – since this website is, in part, a showcase for the framework they’re offering. What about real-world websites that aren’t built by JavaScript framework authors?

Anthony Ricaud recently gave a talk (in French – apologies to non-Francophones) where he analyzed the performance of real-world SPAs. In the talk, he asks: What if these sites used standard MPA navigations?

To answer this, he built a proxy that strips the site of its first-party JavaScript (leaving the kinds of ads and trackers that, sadly, many teams are not allowed to forgo), as well as another version of the proxy that doesn’t strip any JavaScript. Then, he scripted WebPageTest to click an internal link, measuring the load times for both versions (on throttled 4G).

So which was faster? Well, out of the three sites he tested, on both mobile (Moto G4) and desktop, the MPA was either just as fast or faster, every time. In some cases, the WebPageTest filmstrips even showed that the MPA version was faster by several seconds. (Note again: these are subsequent navigations.)

On top of that, the MPA sites gave immediate feedback to the user when clicking – showing a loading indicator in the browser chrome. Whereas some of the SPAs didn’t even manage to show a “skeleton” screen before the MPA had already finished loading.

Screenshot from Anthony Ricaud’s talk: a WebPageTest filmstrip comparing the two versions of a site. The SPA version is on top (5.5s), and the MPA version is on bottom (2.5s).

Now, I don’t think this experiment is perfect. As Anthony admits, removing inline <script>s removes some third-party JavaScript as well (the kind that injects itself into the DOM). Also, removing first-party JavaScript removes some non-SPA-related JavaScript that you’d need to make the site interactive, and removing any render-blocking inline <script>s would inherently improve the visual completeness time.

Even with a perfect experiment, there are a lot of variables that could change the outcome for other sites:

  • How fast is the SSR?
  • Is the HTML streamed?
  • How much of the DOM needs to be updated?
  • Is a network request required at all?
  • What JavaScript framework is being used?
  • How fast is the client CPU?
  • Etc.

Still, it’s pretty gobsmacking that JavaScript was slowing these sites down, even in the one case (subsequent navigations) where JavaScript should be making things faster.

Exhausted developers and clever developers

Now, let’s return to my truisms from the start of the post:

  1. The best SPA is better than the best MPA.
  2. The average SPA is worse than the average MPA.

The cause of so much debate, I think, is that two groups of developers may look at this situation, agree on the facts on the ground, but come to two different conclusions:

“The average SPA sucks? Well okay, I should stop building SPAs then. Problem solved.” – Exhausted developer

 

“The average SPA sucks? That’s just because people haven’t tried hard enough! I can think of 10 ways to fix it.” – Clever developer

Let’s call these two archetypes the exhausted developer and the clever developer.

The exhausted developer has had enough with managing the complexity of “modern” web sites and web applications. Too many build tools, too many code paths, too much to think about and maintain. They have JavaScript fatigue. Throw it all away and simplify!

The clever developer is similarly frustrated by the state of modern web development. But they also deeply understand how the web works. So when a tool breaks or a framework does something in a sub-optimal way, it irks them, because they can think of a better way. Why can’t a framework or a tool fix this problem? So they set out to find a new tool, or to build it themselves.

The thing is, I think both of these perspectives are right. Clever developers can always improve upon the status quo. Exhausted developers can always save time and effort by simplifying. And one group can even help the other: for instance, maybe Parcel is approachable for those exhausted by Webpack, but a clever developer had to go and build Parcel first.

Conclusion

The disparity between the best and the average SPA has been around since the birth of SPAs. In the mid-2000s, people wanted to build SPAs because they saw how amazing GMail was. What they didn’t consider is that Google had a crack team of experts monitoring every possible problem with SPAs, right down to esoteric topics like memory leaks. (Do you have a team like that?)

Ever since then, JavaScript framework and tooling authors have been trying to democratize SPA tooling, bringing us the kinds of optimizations previously only available to the Googles and the Facebooks of the world. Their intentions have been admirable (I would put my own fuite on that pile), but I think it’s fair to say the results have been mixed.

An expert developer can stand up on a conference stage and show off the amazing scores for their site (perfect performance! perfect accessibility! perfect SEO!), and then an excited conference-goer returns to their team, convinces them to use the same tooling, and two years later they’ve built a monstrosity. When this happens enough times, the same conference-goer may start to distrust the next dazzling demo they see.

And yet… the web dev community marches forward. Today I can grab any number of “starter” app toolkits and build something that comes out-of-the-box with code-splitting, Service Workers, tree-shaking, a thousand different little micro-optimizations that I don’t even have to know the names of, because someone else has already thought of it and gift-wrapped it for me. That is a miracle, and we should be grateful for it.

Given enough innovation in this space, it is possible that, someday, the average SPA could be pretty great. If it came batteries-included with proper scroll, focus, and screen reader announcements, tooling to identify performance problems (including memory leaks), progressive DOM rendering (e.g. Jake Archibald’s hack), and a bunch of other optimizations, it’s possible that developers would fall into the “pit of success” and consistently make SPAs that outclass the equivalent MPA. I remain skeptical that we’ll get there, and even the best SPA would still have problems (complexity, performance on slow clients, etc.), but I can’t fault people for trying.

At the same time, browsers never stop taking the lessons from userland and upstreaming them into the browser itself, giving us more lines of code we can potentially delete. This is why it’s important to periodically re-evaluate the assumptions baked into our tooling.

Today, I think the core dilemma between SPAs and MPAs remains unresolved, and will maybe never be resolved. Both SPAs and MPAs have their strengths and weaknesses, and the right tool for the job will vary with the size and skills of the team and the product they’re trying to build. It will also vary over time, as browsers evolve. The important thing, I think, is to remain open-minded, skeptical, and analytical, and to accept that everything in software development has tradeoffs, and none of those tradeoffs are set in stone.

Video: svelte-astro navigation comparison (https://videos.files.wordpress.com/ONDqa5xs/svelte-astro_mp4_hd.mp4)

zwol:
Semi-counterpoint: I’m going to argue that “delegating function to the browser” is exactly the *strength* of MPAs (or, as we used to call them, “web sites”).

On the Dangers of Cryptocurrencies and the Uselessness of Blockchain


Earlier this month, I and others wrote a letter to Congress, basically saying that cryptocurrencies are a complete and total disaster, and urging them to regulate the space. Nothing in that letter is out of the ordinary, and it is in line with what I wrote about blockchain in 2019. In response, Matthew Green has written—not really a rebuttal—but “a general response to some of the more common spurious objections…people make to public blockchain systems.” In it, he makes several broad points:

  1. Yes, current proof-of-work blockchains like bitcoin are terrible for the environment. But there are other modes like proof-of-stake that are not.
  2. Yes, a blockchain is an immutable ledger making it impossible to undo specific transactions. But that doesn’t mean there can’t be some governance system on top of the blockchain that enables reversals.
  3. Yes, bitcoin doesn’t scale and the fees are too high. But that’s nothing inherent in blockchain technology—that’s just a bunch of bad design choices bitcoin made.
  4. Blockchain systems can have a little or a lot of privacy, depending on how they are designed and implemented.

There’s nothing on that list that I disagree with. (We can argue about whether proof-of-stake is actually an improvement. I am skeptical of systems that enshrine a “they who have the gold make the rules” system of governance. And to the extent any of those scaling solutions work, they undo the decentralization blockchain claims to have.) But I also think that these defenses largely miss the point. To me, the problem isn’t that blockchain systems can be made slightly less awful than they are today. The problem is that they don’t do anything their proponents claim they do. In some very important ways, they’re not secure. They don’t replace trust with code; in fact, in many ways they are far less trustworthy than non-blockchain systems. They’re not decentralized, and their inevitable centralization is harmful because it’s largely emergent and ill-defined. They still have trusted intermediaries, often with more power and less oversight than non-blockchain systems. They still require governance. They still require regulation. (These things are what I wrote about here.) The problem with blockchain is that it’s not an improvement to any system—and often makes things worse.

In our letter, we write: “By its very design, blockchain technology is poorly suited for just about every purpose currently touted as a present or potential source of public benefit. From its inception, this technology has been a solution in search of a problem and has now latched onto concepts such as financial inclusion and data transparency to justify its existence, despite far better solutions to these issues already in use. Despite more than thirteen years of development, it has severe limitations and design flaws that preclude almost all applications that deal with public customer data and regulated financial transactions and are not an improvement on existing non-blockchain solutions.”

Green responds: “‘Public blockchain’ technology enables many stupid things: today’s cryptocurrency schemes can be venal, corrupt, overpromised. But the core technology is absolutely not useless. In fact, I think there are some pretty exciting things happening in the field, even if most of them are further away from reality than their boosters would admit.” I have yet to see one. More specifically, I can’t find a blockchain application whose value has anything to do with the blockchain part, that wouldn’t be made safer, more secure, more reliable, and just plain better by removing the blockchain part. I postulate that no one has ever said “Here is a problem that I have. Oh look, blockchain is a good solution.” In every case, the order has been: “I have a blockchain. Oh look, there is a problem I can apply it to.” And in no cases does it actually help.

Someone, please show me an application where blockchain is essential. That is, a problem that could not have been solved without blockchain that can now be solved with it. (And “ransomware couldn’t exist because criminals are blocked from using the conventional financial networks, and cash payments aren’t feasible” does not count.)

For example, Green complains that “credit card merchant fees are similar, or have actually risen in the United States since the 1990s.” This is true, but has little to do with technological inefficiencies or existing trust relationships in the industry. It’s because pretty much everyone who can and is paying attention gets 1% back on their purchases: in cash, frequent flier miles, or other affinity points. Green is right about how unfair this is. It’s a regressive subsidy, “since these fees are baked into the cost of most retail goods and thus fall heavily on the working poor (who pay them even if they use cash).” But that has nothing to do with the lack of blockchain, and solving it isn’t helped by adding a blockchain. It’s a regulatory problem; with a few exceptions, credit card companies have successfully pressured merchants into charging the same prices, whether someone pays in cash or with a credit card. Peer-to-peer payment systems like PayPal, Venmo, MPesa, and AliPay all get around those high transaction fees, and none of them use blockchain.

This is my basic argument: blockchain does nothing to solve any existing problem with financial (or other) systems. Those problems are inherently economic and political, and have nothing to do with technology. And, more importantly, technology can’t solve economic and political problems. Which is good, because adding blockchain causes a whole slew of new problems and makes all of these systems much, much worse.

Green writes: “I have no problem with the idea of legislators (intelligently) passing laws to regulate cryptocurrency. Indeed, given the level of insanity and the number of outright scams that are happening in this area, it’s pretty obvious that our current regulatory framework is not up to the task.” But when you remove the insanity and the scams, what’s left?

EDITED TO ADD: Nicholas Weaver is also adamant about this. David Rosenthal is good, too.

6 public comments
bronzehedwick (Jersey City, NJ): Crypto is one of the rare cases where if we burn it to the ground it will help our species survive.
pdp68 (Belgium): "This is my basic argument: blockchain does nothing to solve any existing problem with financial (or other) systems. Those problems are inherently economic and political, and have nothing to do with technology. And, more importantly, technology can’t solve economic and political problems. Which is good, because adding blockchain causes a whole slew of new problems and makes all of these systems much, much worse."
chrismo: #tech
ReadLots: If we can just move all of the fraud into the blockchain, maybe then it can have purpose - keeping the scammers busy in crypto and leaving us outside of it alone.
acdha (Washington, DC): Green's “rebuttal” was disappointingly weak — to be honest, I read it expecting the end to be that he'd picked up some lucrative consulting work from a cryptocurrency company.
GaryBIshop: Well said!