security research, software archaeology, geek of all trades

No more Flatpak (by default) in Ubuntu Flavors

2 Comments
The Ubuntu Flavors offerings (Kubuntu and the like) have decided that the way to improve the user experience is to put more emphasis on the Snap package format.

Going forward, the Flatpak package, as well as the packages that integrate Flatpak into each flavor's software center, will no longer be installed by default starting with the next release, Lunar Lobster, due in April 2023. Users who have used Flatpak will not be affected on upgrade, as the flavors include a special migration that takes this into account. Those who haven’t interacted with Flatpak will be presented with software from the Ubuntu repositories and the Snap Store.
zwol
29 days ago
A plague on both their houses, I say, for both stand in defiance of the Highlander Principle of Package Management and the bill will inevitably come due
Pittsburgh, PA
1 public comment
jepler
29 days ago
an honest linux distribution would not raise artificial barriers to software choice. but yeah this is ubuntu forcing the non-free snap infrastructure on people in any way they can. use debian, y'all
Earth, Sol system, Western spiral arm

Build security with the assumption it will be used against your friends

1 Comment and 3 Shares
Working in information security means building controls, developing technologies that ensure that sensitive material can only be accessed by people that you trust. It also means categorising people into "trustworthy" and "untrustworthy", and trying to come up with a reasonable way to apply that such that people can do their jobs without all your secrets being available to just anyone in the company who wants to sell them to a competitor. It means ensuring that accounts that you consider to be threats can't do any damage, because if someone compromises an internal account you need to be able to shut them down quickly.

And like pretty much any security control, this can be used for both good and bad. The technologies you develop to monitor users to identify compromised accounts can also be used to compromise legitimate users who management don't like. The infrastructure you build to push updates to users can also be used to push browser extensions that interfere with labour organisation efforts. In many cases there's no technical barrier between something you've developed to flag compromised accounts and the same technology being used to flag users who are unhappy with certain aspects of management.

If you're asked to build technology that lets you make this sort of decision, think about whether that's what you want to be doing. Think about who can compel you to use it in ways other than how it was intended. Consider whether that's something you want on your conscience. And then think about whether you can meet those requirements in a different way. If they can simply compel one junior engineer to alter configuration, that's very different to an implementation that requires sign-offs from multiple senior developers. Make sure that all such policy changes have to be clearly documented, including not just who signed off on it but who asked them to. Build infrastructure that creates a record of who decided to fuck over your coworkers, rather than just blaming whoever committed the config update. The blame trail should never terminate in the person who was told to do something or get fired - the blame trail should clearly indicate who ordered them to do that.
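
As one concrete illustration of that kind of audit trail, here is a minimal sketch (mine, not from the post) of a record that captures who asked for a sensitive configuration change, not just who approved and applied it. Every field name here is a hypothetical stand-in for whatever your change-management system actually stores.

    # A minimal sketch, not from the post: record the requester as well as the
    # approvers and the engineer who applied a sensitive configuration change.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class ConfigChangeRecord:
        change_id: str
        description: str
        requested_by: str                 # the person who ordered the change
        approved_by: tuple[str, ...]      # sign-offs from multiple senior engineers
        applied_by: str                   # the engineer who committed the change
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

        def __post_init__(self):
            # Refuse to record (and thus apply) a change without enough approvers.
            if len(self.approved_by) < 2:
                raise ValueError("sensitive changes require at least two approvers")

The point of the sketch is only that the blame trail starts at requested_by, not at applied_by.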

But most importantly: build security features as if they'll be used against you.

zwol
56 days ago
Pittsburgh, PA
1 public comment
ReadLots
56 days ago
Exactly, build in secret backdoors only you can access!

A trio of dubious denial-of-service security vulnerability reports which are just style points piled on top of nothing

1 Comment and 2 Shares

A security vulnerability report arrived that showed that if you loaded a specially-crafted file into a file viewer, the file viewer crashed. This was filed as a denial of service against the file viewer.

And indeed, the crafted file did cause the viewer to crash: The file format internally consists of several recursive structures, and in the crafted file, those structures contained loops. The viewer didn’t have any detector for infinite loops, so its recursive parser crashed with a stack overflow.

Let’s run through the usual questions.

Who is the attacker? The attacker is presumably somebody who sends you a crafted file and tricks you into viewing it.

Who is the victim? The victim is the person running the viewer, who is now unable to view the file.

What has the attacker gained? The attacker prevented the victim from viewing the file.

Now, another way to prevent the victim from being able to use the viewer to view a file is to send them a file full of garbage, or even simpler: Don’t send them the file in the first place. But that’s so obviously pointless that it’s presumably not what the finder was reporting.

Maybe what the attacker gained is that the victim’s file viewer program crashed, and the victim therefore lost any unsaved data. But this is a file viewer program. There is no unsaved data at all, since it never modifies the file. All the victim has to do to recover is to relaunch the file viewer program. The crash is not persistent, and it’s certainly not unrecoverable. You recover by just running the program again!

Therefore, all the attacker really accomplished was to annoy the user briefly. They could also have done that by tricking the user into viewing a file that contained the message “Ha ha, I annoyed you!”

A similar security vulnerability report came in that claimed to have identified a denial of service attack against a file editor: Loading a crafted file into the file editor caused the file editor to crash.

What happened is that the file is corrupted in a specific way that causes a helper function to say, “Nope, I can’t parse this,” and return a null pointer to represent the error. But the caller of the helper function didn’t check whether the call succeeded and tried to use the resulting null pointer. Therefore, this is a guaranteed null pointer crash, with no ability to control the read-from pointer. No chance for memory modification or information disclosure. Worst case is data loss and denial of service.

But does it even get you that much?

Before you load a new file, this particular program will prompt you to save your changes to the old file before it closes the old file and opens the new one. So there is no loss of data here: The user was given a chance to save their unsaved data. Any unsaved data would have been lost anyway even if the new file were not corrupted.

That leaves denial of service. The program crashes trying to load the crafted file, but the user can just launch it again. There is no permanent corruption; the relaunch of the program proceeds as usual. This is just a bug in the file editor program. The user will say, “Oh, well. I guess I’m not opening that file any more.” All you managed to do was annoy the user momentarily.

A third security vulnerability report came in that claimed to have identified a denial of service attack against Windows logon. The finder claimed that if a user set up their user profile with a crafted image with a crafted file name, then the image would come out garbled on the logon screen. The finder included a copy of the problematic image, as well as instructions on how to set it as the profile picture.

What the finder didn’t explain was why this was a denial of service, or why there was really any issue at all.

I guess an attacker can set a crafted image as their profile image, thereby preventing their image from displaying correctly on the logon screen. But so what? Just create an image filled with static or other garbage, and set it as your profile image. But why stop at garbage? If you don’t want people to know what you look like, then upload a blank picture, or a picture of a cow. You can even try to impersonate somebody else: Upload a picture of a celebrity, or your boss!

Who is the attacker? The attacker is presumably the person setting the crafted profile picture.

Who is the victim? The victim is, I guess, everybody else who is looking at the profile picture.

What has the attacker gained? Unclear. This doesn’t let them do anything they couldn’t already do by much more conventional means.

Nothing has been gained here. Nobody is even slightly annoyed!¹

In all of these cases, what the finder identified was a bug. Thanks for reporting the bug, and we’ve assigned it to the component owners. But these bugs have no security consequences. You could have gotten the same result by just using a file full of random garbage.

¹ Okay, if you upload a picture of your boss, you might annoy people who mistake you for your boss. And you might annoy your boss.

The post A trio of dubious denial-of-service security vulnerability reports which are just style points piled on top of nothing appeared first on The Old New Thing.

zwol
65 days ago
You would think Microsoft would have accepted by now that *any* memory access crash must be treated as if it were a remote code execution vulnerability; if it isn't today, it will be in a few weeks.
Pittsburgh, PA
LeMadChef
62 days ago
Which one of these three bugs resulted in an out-of-bounds memory access?
zwol
62 days ago
It's possible to convert null pointer dereferences to RCEs (see for instance https://googleprojectzero.blogspot.com/2023/01/exploiting-null-dereferences-in-linux.html). I don't have an example to hand, but I would be very surprised if it wasn't possible to turn a stack overflow into an RCE, particularly on Windows with the in-process stack unwinding on CPU exception.
LeMadChef
62 days ago
"I don't have an example to hand" And neither does Microsoft (or any company for that matter) have time to spend on "this might be a security loophole." from every random passerby - especially ones laid out in this blog. I'm going to go with decades of deep experience over some random RCE submitter (not you, the people in the article).
zwol
62 days ago
And that's exactly the attitude I'm calling out as incorrect. _I_ may not have an example to hand; Microsoft's security team damn well ought to.
zippy72
61 days ago
And then a few items later I get an article from google project zero about how dangerous null pointer derefs are - almost as if they were listening.

Stack Overflow temporarily bans users from sharing responses generated by ChatGPT, pending a final ruling; the site's mods say most ChatGPT answers are wrong (James Vincent/The Verge)

2 Comments

James Vincent / The Verge:
Stack Overflow temporarily bans users from sharing responses generated by ChatGPT, pending a final ruling; the site's mods say most ChatGPT answers are wrong  —  Stack Overflow, the go-to question-and-answer site for coders and programmers, has temporarily banned users from sharing responses generated by AI chatbot ChatGPT.

zwol
108 days ago
This was exactly my experience of futzing around with Copilot for half an hour last year.
Pittsburgh, PA
1 public comment
JayM
108 days ago
Heh. Hopefully Theresa is figuring it out! :)
Atlanta, GA

All People Are Created Educable, a Vital Oft-Forgotten Tenet of Modern Democracy

1 Share

Book cover: Who Owns the News, a History of Copyright, by Will Slauter

(I have one of my more traditional history posts underway, but wanted to post this separate thought first. Felt timely.)

Many shocking, new ideas shaped the American Experiment and related 18th century democratic ventures; as an historian of the period, I often notice that one of the most fundamental of them, and most shocking to a world which had so long assumed the opposite, often goes unmentioned — indeed sometimes denied — in today’s discussions of democracy: the belief that all people are educable.  I think it’s urgent that we bring that principle back into the spotlight if we want to defend democracy from one of its common failure modes: pseudo-populist oligarchy.

Within “all men are created equal” lies the sub-principle that all people, or specifically all enfranchised citizens of a state (which often at the time meant white male adults, though some made it broader, or narrower) that all such people are, if given appropriate educational resources, capable of learning, exercising sound judgment, and acting on said judgment, thus that all people are equally rational and capable of competent self-governance.  This thesis does not assume that all people when adults are equally prepared to participate in government, but that all people when born have the capacity to absorb education if given access to it.  Rare intellectual disabilities might make the education process challenging for certain individuals, but (the thesis argues) even then the right support and resources make education possible, and such situations are not the default human state.  This is the thesis that all people are fundamentally educable.

Many in the 18th c. who thought democracy was absurd rejected it because they disagreed with this thesis, believing that the majority of people (even of white men) were not educable, i.e. that even with educational resources most people were born incapable of being guided by Reason and making sound political judgments. Those who believed this predicted that government by the people would collapse into absurdity, since it would be led by a parliament of fools. We get a taste of what such critics of democracy thought would happen to America in the satirical scenes in Shakespeare’s Henry VI Part 2 in which Jack Cade’s populist rebels happily kill each other and laugh about it, and believe they can end hunger by having everyone eat on the king’s tab at restaurants and making the gutters run with wine (and which is the source of the much-misunderstood “First thing we do is kill all the lawyers” line: killing the lawyers is step 1 of their plan, and executing everyone who can read is step 2) — this is what many 18th c. anti-democrats believed would happen if governing was truly done by the people.

Drawing of a mob of peasants brandishing weapons with two severed heads on spears, with Jack Cade waving a sword above them all.

1867 Illustration of Jack Cade and his rebels with the severed heads of Lord Say and his son-in-law, hard-working administrators, killed because Lord Say built a paper mill, supported books, and spoke Latin. Shakespeare is very overt in his depiction of the imagined savagery of a self-governing mob.

Often modern people have trouble wrapping our heads around how sure pre-modern Europeans were that human minds and their capacities (A) varied fundamentally, (B) were locked in at birth and immutable, and (C) were only very rarely rational or educable.  This doesn’t mean elite education; it means any education, grasping the basics beyond I’m hungry and I want to eat that fish.  Plato and Aristotle (and many transformations thereof over 2,000 years) described a human soul/mind led by three forces: the appetites, the passions, and the intellect i.e. reason.  The appetites were simplest and most bodily: I’m hungry, I’m thirsty, I’m tired and want to rest, I’m bored and want entertainment, I’m horny and want sex, my arms hurt I don’t want to carry this anymore.  The passions we might call mental but worldly (pride, ambition, loyalty, patriotism): I want to be famous, I want to be respected, I want to be well-talked-of in the city, I want to protect my way of life, I want to have power, I want to advance the glory of the state, I want to battle evil, etc.  Reason, or the intellect, was the calculating, understanding, and contemplative power, which did math, understood the universe, aspired to the spiritual and eternal (whether Justice or the Pythagorean theorem) and exercised ethical judgment, weighing goods and bads to decide the best course (Eating this whole jar of pickles would be yummy but then I’ll get a stomachache; electing this demagogue would make me rich but then he would tyrannize the state.)  Both Aristotle and Plato say that different souls are dominated by different organs of the soul (i.e. either the appetites, passions, or intellect) and that only a tiny minority of human souls are dominated by the intellect, a larger minority by the passions, and practically all by the base appetites.  Plato’s Republic uses an exam/aptitude system to identify these rare souls of gold (as opposed to silver = passions, bronze/iron = appetites) and make them rulers of the city, and proposes a eugenicist breeding program to produce more.

The principle that souls of gold (i.e. souls fully capable of being educated & of wise rule) are a tiny minority, and that most humans are immutably not educable from birth, was very thoroughly absorbed into European belief, and dominated it for 2,000 years.  In Dante, we see the entire structure of Hell revolve around the appetites/passions/intellect distinction.  Medieval epistemology, psychology, and even ideas about medicine and plants incorporated this principle, and spun elaborate explanations for how and why different souls perceived the heavenly world (Good, Justice, Providence) better than others.  Eugen Weber’s powerful history, Peasants into Frenchmen: The Modernization of Rural France, 1870-1914, shows how people in the period wrote about their own French peasants in incredibly insulting, infantilizing, quasi-bestial terms, strikingly similar to the racist language we’re used to the Age of Empires using to demean non-Europeans. Anyone who hasn’t looked at period sources will struggle to believe how ferociously confident the European majority was in the thesis that the majority of people even in their own country could never understand a book, a moral quandary, or a political proposition.  Keeping the rare wise elites in charge was the only barrier between order and savagery.  The fact that so many people were willing to believe in the totally mythical tragedy of the commons (yes, it’s totally invented, real peasants took great care of their commons) is one relic of how certain people were for a long time (and some still are) that most people are not capable of making the kinds of prudent, sustainable judgments necessary for custodianship of a polity.

It took a lot to get even a small radical fringe by 1750 to entertain the notion that all people–or even just all men–were created equally educable.  A long survey of the causes would get unwieldy, but they include (among other things) contact with indigenous cultures in the Americas and other regions which had functional governments without European-style systems, revolutions in medicine and the understanding of the sense organs which undermined old hierarchy-enforcing ideas about how cognition and sensation functioned, second-order consequences of the rags-to-riches path opened by Renaissance courts employing scholars from any background so long as they had good Latin, and Protestantism’s second-order implication that, if people didn’t need priests as intermediaries between their prayers and God, perhaps they didn’t need aristocrats as intermediaries between them and power.  But by 1750 that fringe existed, and had enough momentum to implement its experiment in the new United States, which most people who were considered sensible at the time thought would quickly degenerate into chaos, because they didn’t think most people were capable of understanding the world enough to vote sensibly, draft legislation, or serve in a congress, and that the tiny wise minority would be drowned out by the majority wanting to vote for dining on the king’s tab and killing all the lawyers.

At this point, if this essay were a twitter thread, one would see the obligatory snarky self-proclaimed cynic pop up with a comment that America did degenerate into foolish populist chaos, look at the Trump voters, and I know of several Shakespeare companies that put on Henry VI with Cade as Trump. That is why it’s so important to focus on the distinction between educated and educable, and to remember that the claim made by America’s early founders and shapers wasn’t that all people are capable of ruling wisely, but that all people are capable of becoming capable of ruling wisely. This is why those who shaped America insisted so fiercely on universal public education; they believed (we have thousands of essays, letters, and documents to this effect!) that citizens would only be capable of being wise voters and rulers if they had access to a good education. Without education, they believed, people would indeed vote for foolish things, so they had to transform their populace, from one where rural peasants were starved for education, to one where everyone was invited to Reason’s classroom. They also believed that a well-informed public was vital, thus that news and newspapers were indispensable for democracy to function, which is why the early US government subsidized the shipping of newspapers and the circulation of knowledge through things like Media Mail–here see Will Slauter’s fantastic history Who Owns the News?

Now, at one point I helped my Ph.D. adviser James Hankins with his research on the history of conservatism.  We (mostly he) looked at many examples over many times, places, and regimes, and observed after innumerable case studies that a consistent defining characteristic of conservative thought over time is the belief that some people are better at ruling than others, thus that the best way to run a government and society is to put those superior people in power.  Whether it’s a hereditary aristocracy, an exam-based meritocracy, an earn-the-franchise-through-military-service timocracy, or a divine right monarchy, many systems posit that some are more capable of rule than others, and that the best system will put them in power.

These days, when I cite this definition of conservatism, invariably someone brings up Frank Wilhoit’s observation that “Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect.” While this is a very powerful summary of trends in 21st century conservatism, useful for thinking about a lot of current politics, it isn’t broad enough when we want to go back 1,000 years or more because (I know this will sound absurd) the idea that law is supposed to bind anyone is actually fairly new.  In my period (Renaissance) for example, law is mainly supposed to provide an Earthly portrait of divine judgment & mercy, and everyone is supposed to break laws all the time but then get the penalties waived, so the process of transgressing, being condemned, and being pardoned or let off with a lesser sentence gives the soul an ethically therapeutic preview of the universality of sin and the hope for being let off with just Purgatory instead of Hell, and the idea of law actually binding or protecting anybody was maybe goal #24 in the lawmakers’ minds, with a lot of really weird-to-us-modern ones higher on the list.  But in pre-modern and modern conservatism alike, we see the shared conviction that some people are fundamentally better at ruling (or just better) than others, and that one must put the better in power.

The thesis that all people are educable is fundamentally opposed to this.

Democracy can function, says Thomas Paine (to pick a spokesman for the US founders), because human beings are fundamentally educable, and if given a good teacher, a good reading list, and some newspapers, all human beings, or at least the overwhelming majority of them, will become capable of wise judgment and self-rule.  One’s civic duty is not to identify the wise minority and put them in power, but to disseminate the tools of education so the majority can become wise.  This thesis is opposed to aristocracy, to oligarchy, to timocracy, even to most forms of meritocracy, since education isn’t supposed to prepare people to be sorted out by the exam but to demonstrate that human beings are so excellent that everyone can pass it.

Let’s return now to our snarky self-labeled cynic, who points at Trump voters and people who are wrong on the internet to joke that most people are fundamentally too stupid to be educated.  Setting aside the fact that the engines of social media currently make fringe and foolish voices far louder than sensible ones, making them seem like a majority, America at present does not live in the world of robust public education and state-strengthened free circulation of journalism which the minds behind the experiment thought were so essential. Today’s America has seen decades of the intentional conservative-led starving and squeezing of public education, efforts to increase the disparity in education quality between public education and private or charter school education, conservative-led homeschool movements which aim to expose people to a narrow range of ideology, and also the devastation of newspapers and journalism, and a vast misinformation campaign. All this adds up to preventing many who are educable from becoming educated. Thomas Paine, and those I’m using him to represent, would recognize this as a sabotage of their system, one they would say might indeed enable Cade-style populism, which (as in Henry VI) is easy for ambitious elites to then harness to their own ends.  Thus, Paine would say: of course the democracy isn’t working well if such an essential precondition is being sabotaged.

In sum, we need to talk more about the vital tie between democracy and the conviction that all people are created educable.  It helps make clear how strategic the strangulation of educational resources is, and that one of the less loud but most dangerous threats to our confidence in democracy is the project to make it seem like most people can’t make sensible political judgments, reducing people’s confidence in democracy as a system by seeming to prove true the conservative principle that there will always be a few who should rule and many who can’t.  When I see conservative thinking start to show up in acquaintances (or Silicon Valley leaders) who consider themselves progressive but also consider themselves smart, it often begins with them feeling that most people are stupid and the world would be better off if the smart were in charge.  One can often get such people to pause and reflect by bringing up the question of whether they think all people are fundamentally educable, and whether the solution isn’t to put the reins of power into genius hands but to put the Encyclopedia in everyone else’s.  Information is key.  Those peasants who shared commons maintained them sustainably for centuries because (as we now recognize) they were educated in the ways that mattered: they learned from families and communities to understand what they were doing, using local knowledge of commons, grazing etc. as they made choices.  If one’s democratic state is the commons, people will likewise maintain it well, but not if they’re intentionally deprived of access to basic knowledge of how it works and what can harm or heal it, and drowned instead in deliberate falsehoods.

We all know we need to support education & good journalism, and combat misinformation, but revisiting the principle that all people are created educable is a good way to remember that these are not merely invaluable social goods, like sanitation or public parks.  They were conceived from the start as essential components of modern democracy, in direct opposition to the many-centuries-old conservative principle that some are best to rule and others to be ruled.  Enlightenment-style democracy cannot function without the conviction that all people are created educable.  If we forget that, if we doubt it, if we let it shake our confidence in the experiment which didn’t turn into Jack Cade for more than two centuries (bets were not on America surviving for so long in 1776!), we risk opening the gates to the old failure mode of oligarchy rising when democracy wavers.

P.S. Donate to Wikipedia – both Diderot and Thomas Paine would smile.

Another illustration of Jack Cade’s rebellion. The reality was indeed destructive, but performances of such events, like the myth of the tragedy of the commons, also served to reinforce the old thesis that the people cannot rule. Turns out, we can.

zwol
128 days ago
Pittsburgh, PA

Software engineering practices

1 Comment and 2 Shares

Gergely Orosz started a Twitter conversation asking about recommended "software engineering practices" for development teams.

(I really like his rejection of the term "best practices" here: I always feel it's prescriptive and misguiding to announce something as "best".)

I decided to flesh some of my replies out into a longer post.

Documentation in the same repo as the code

The most important characteristic of internal documentation is trust: do people trust that documentation both exists and is up-to-date?

If they don't, they won't read it or contribute to it.

The best trick I know of for improving the trustworthiness of documentation is to put it in the same repository as the code it documents, for a few reasons:

  1. You can enforce documentation updates as part of your code review process. If a PR changes code in a way that requires documentation updates, the reviewer can ask for those updates to be included.
  2. You get versioned documentation. If you're using an older version of a library you can consult the documentation for that version. If you're using the current main branch you can see documentation for that, without confusion over what corresponds to the most recent "stable" release.
  3. You can integrate your documentation with your automated tests! I wrote about this in Documentation unit tests, which describes a pattern for introspecting code and then ensuring that the documentation at least has a section header that matches specific concepts, such as plugin hooks or configuration options (a minimal sketch of the idea follows below).
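
To make item 3 concrete, here is a minimal sketch of that pattern, assuming a hypothetical my_app.hookspecs module that defines plugin hooks and a hypothetical docs/plugins.md file; neither name comes from the post.

    # A minimal documentation-unit-test sketch. The module and docs path are
    # hypothetical examples, not names from the post.
    import pathlib

    import pytest

    import my_app.hookspecs  # hypothetical module where plugin hooks are defined

    DOCS = pathlib.Path("docs/plugins.md")  # hypothetical documentation file


    def hook_names():
        # Introspect the code: collect every public callable on the hookspec module.
        return [
            name
            for name in dir(my_app.hookspecs)
            if not name.startswith("_") and callable(getattr(my_app.hookspecs, name))
        ]


    @pytest.mark.parametrize("hook", hook_names())
    def test_every_hook_has_a_docs_section(hook):
        # Fail the build if a hook exists in code but has no heading in the docs.
        assert f"### {hook}" in DOCS.read_text(), f"{hook} is not documented in {DOCS}"

Because the test lives in the same repository as both the code and the docs, a pull request that adds a hook without documenting it fails CI.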

Mechanisms for creating test data

When you work on large products, your customers will inevitably find surprising ways to stress or break your system. They might create an event with over a hundred different types of ticket for example, or an issue thread with a thousand comments.

These can expose performance issues that don't affect the majority of your users, but can still lead to service outages or other problems.

Your engineers need a way to replicate these situations in their own development environments.

One way to handle this is to provide tooling to import production data into local environments. This has privacy and security implications - what if a developer laptop gets stolen that happens to have a copy of your largest customer's data?

A better approach is to have a robust system in place for generating test data, that covers a variety of different scenarios.

You might have a button somewhere that creates an issue thread with a thousand fake comments, with a note referencing the bug that this helps emulate.

Any time a new edge case shows up, you can add a new recipe to that system. That way engineers can replicate problems locally without needing copies of production data.
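
One way to structure such a system is a registry of named recipes, one per known edge case. The sketch below is hypothetical: create_issue and create_comment are stand-ins for whatever helpers your application provides, but it shows the shape of the idea, with each recipe a small function you can invoke from a button or a CLI.

    # Hypothetical test-data recipes; create_issue/create_comment are stand-ins
    # for your application's own helpers.
    import random
    import string

    RECIPES = {}


    def recipe(name):
        """Register a recipe under a name so a UI or CLI can list and run it."""
        def register(fn):
            RECIPES[name] = fn
            return fn
        return register


    @recipe("issue-with-1000-comments")  # emulates a hypothetical slow-thread bug report
    def issue_with_many_comments(create_issue, create_comment):
        issue = create_issue(title="Load test: huge comment thread")
        for i in range(1000):
            filler = "".join(random.choices(string.ascii_letters + " ", k=200))
            create_comment(issue=issue, body=f"fake comment {i}: {filler}")
        return issue

Each new edge case becomes a new entry in RECIPES, so engineers can reproduce it locally without copies of production data.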

Rock solid database migrations

The hardest part of large-scale software maintenance is inevitably the bit where you need to change your database schema.

(I'm confident that one of the biggest reasons NoSQL databases became popular over the last decade was the pain people had associated with relational databases due to schema changes. Of course, NoSQL database schema modifications are still necessary, and often they're even more painful!)

So you need to invest in a really good, version-controlled mechanism for managing schema changes. And a way to run them in production without downtime.

If you do not have this your engineers will respond by being fearful of schema changes. Which means they'll come up with increasingly complex hacks to avoid them, which piles on technical debt.

This is a deep topic. I mostly use Django for large database-backed applications, and Django has the best migration system I've ever personally experienced. If I'm working without Django I try to replicate its approach as closely as possible:

  • The database knows which migrations have already been applied. This means when you run the "migrate" command it can run just the ones that are still needed - important for managing multiple databases, e.g. production, staging, test and development environments. (A minimal sketch of this bookkeeping follows the list.)
  • A single command that applies pending migrations, and updates the database rows that record which migrations have been run.
  • Optional: rollbacks. Django migrations can be rolled back, which is great for iterating in a development environment, but using that in production is actually quite rare: I'll often ship a new migration that reverses the change rather than using a rollback, partly to keep the record of the mistake in version control.
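
Here is a deliberately tiny sketch of the bookkeeping in the first bullet. It is not Django's implementation: applied migrations are recorded in a table, and the migrate step only runs the ones that are still pending. The migration names and SQL are invented for illustration.

    # A toy migration runner, not Django's implementation: applied migrations
    # are recorded in the database so "migrate" only runs what is still pending.
    import sqlite3

    MIGRATIONS = [
        # (name, SQL) pairs; both are hypothetical examples.
        ("0001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
        ("0002_add_email", "ALTER TABLE users ADD COLUMN email TEXT"),
    ]


    def migrate(db_path):
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS applied_migrations (name TEXT PRIMARY KEY)")
        applied = {row[0] for row in conn.execute("SELECT name FROM applied_migrations")}
        for name, sql in MIGRATIONS:
            if name in applied:
                continue  # already run against this particular database
            conn.execute(sql)
            conn.execute("INSERT INTO applied_migrations (name) VALUES (?)", (name,))
            conn.commit()
            print(f"applied {name}")


    migrate("app.db")  # e.g. run against a development database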

Even harder is the challenge of making schema changes without any downtime. I'm always interested in reading about new approaches for this - GitHub's gh-ost is a neat solution for MySQL.

An interesting consideration here is that it's rarely possible to have application code and database schema changes go out at the exact same instant in time. As a result, to avoid downtime you need to design every schema change with this in mind. The process needs to be:

  1. Design a new schema change that can be applied without changing the application code that uses it.
  2. Ship that change to production, upgrading your database while keeping the old code working.
  3. Now ship new application code that uses the new schema.
  4. Ship a new schema change that cleans up any remaining work - dropping columns that are no longer used, for example.

This process is a pain. It's difficult to get right. The only way to get good at it is to practice it a lot over time.
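
As a worked example of the four steps above, expressed as migrations in the toy runner sketched earlier (table and column names are invented): renaming users.name to users.full_name without downtime might look like this.

    # Hypothetical expand/contract sequence for renaming users.name to
    # users.full_name with zero downtime; names are invented for illustration.
    ZERO_DOWNTIME_RENAME = [
        # Steps 1-2: design and ship an additive change that old code can ignore.
        ("0003_add_full_name", "ALTER TABLE users ADD COLUMN full_name TEXT"),
        # Still step 2: backfill so both columns agree while old and new code coexist.
        ("0004_backfill_full_name", "UPDATE users SET full_name = name WHERE full_name IS NULL"),
        # Step 3 happens outside the database: deploy application code that reads
        # and writes full_name instead of name.
        # Step 4: clean up once no running code references the old column.
        ("0005_drop_name", "ALTER TABLE users DROP COLUMN name"),
    ]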

My rule is this: schema changes should be boring and common, as opposed to being exciting and rare.

Templates for new projects and components

If you're working with microservices, your team will inevitably need to build new ones.

If you're working in a monorepo, you'll still have elements of your codebase with similar structures - components and feature implementations of some sort.

Be sure to have really good templates in place for creating these "the right way" - with the right directory structure, a README and a test suite with a single, dumb passing test.

I like to use the Python cookiecutter tool for this. I've also used GitHub template repositories, and I even have a neat trick for combining the two.

These templates need to be maintained and kept up-to-date. The best way to do that is to make sure they are being used - every time a new project is created is a chance to revise the template and make sure it still reflects the recommended way to do things.
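
For example, with cookiecutter the "create a new service" step can be a single call to its Python API (or the equivalent CLI invocation); the template repository URL and context keys below are hypothetical, not from the post.

    # Stamp out a new component from a project template; the template repository
    # and context keys are hypothetical examples.
    from cookiecutter.main import cookiecutter

    cookiecutter(
        "https://github.com/example-org/service-template",  # hypothetical template
        no_input=True,  # take defaults plus the overrides below, no interactive prompts
        extra_context={
            "service_name": "billing-worker",
            "owner_team": "payments",
        },
    )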

Automated code formatting

This one's easy. Pick a code formatting tool for your language - like Black for Python or Prettier for JavaScript (I'm so jealous of how Go has gofmt built in) - and run its "check" mode in your CI flow.

Don't argue with its defaults, just commit to them.

This saves an incredible amount of time in two places:

  • As an individual, you get back all of that mental energy you used to spend thinking about the best way to format your code and can spend it on something more interesting.
  • As a team, your code reviews can entirely skip the pedantic arguments about code formatting. Huge productivity win!

Tested, automated process for new development environments

The most painful part of any software project is inevitably setting up the initial development environment.

The moment your team grows beyond a couple of people, you should invest in making this work better.

At the very least, you need a documented process for creating a new environment - and it has to be known-to-work, so any time someone is onboarded using it they should be encouraged to fix any problems in the documentation or accompanying scripts as they encounter them.

Much better is an automated process: a single script that gets everything up and running. Tools like Docker have made this a LOT easier over the past decade.

I'm increasingly convinced that the best-in-class solution here is cloud-based development environments. The ability to click a button on a web page and have a fresh, working development environment running a few seconds later is a game-changer for large development teams.

Gitpod and Codespaces are two of the most promising tools I've tried in this space.

I've seen developers lose hours a week to issues with their development environment. Eliminating that across a large team is the equivalent of hiring several new full-time engineers!

Automated preview environments

Reviewing a pull request is a lot easier if you can actually try out the changes.

The best way to do this is with automated preview environments, directly linked to from the PR itself.

These are getting increasingly easy to offer. Vercel, Netlify, Render and Heroku all have features that can do this. Building a custom system on top of something like Google Cloud Run or Fly Machines is also possible with a bit of work.

This is another one of those things which requires some up-front investment but will pay itself off many times over through increased productivity and quality of reviews.

zwol
172 days ago
I have thought for many years that schema validation, versioning, and migration should be built into SQL. Even just adding a command that checks whether a single table has the schema the application expects would be a huge help.
Pittsburgh, PA