
The Deep Sea

New project from Neal Agarwal, a constant source of joyful old-school internet.
twiddletaffy: debatable if these are actual canon scenes or not

Dear anonymous internet user asking for help…


Dear anonymous internet user, dear corporate employee hiding behind an anonymous email address, dear “GitHub account with a single issue”,

Thank you for your interest in my free software, my project or the documentation I wrote for you. I am happy to hear you want to ask a question, have a problem, or perhaps even inform me of a new requirement you have.

But with some small exceptions (do read on), I’m afraid I will not be able to help you.

You see, our community and I have done a lot of work to get these projects to where they are today. But your first step in asking for help was deciding that I should not know who you are or where you intend to use my stuff.

That means we got off to a really bad start.

Some of you go so far as to create a custom email address just for contacting me; others even have the gall to send email from obviously throwaway addresses. A recent trend is the ‘single issue GitHub account’.

Of particular note are employees from large corporations using my open source software, but not wanting anyone to know that. I get email from random gmail accounts asking questions you’d only ask if you operate a fleet of satellites in space.

So why do I care?

First, I just consider it rude. You come at me hiding who you are but still expect me to do free work for you. Try doing that in real life. What were you thinking? Not introducing yourself AND using a fake identity?

Second, I have found that this anonymity also means people feel free to simply walk away with no damage to their reputation. You report a complicated bug, I spend some time investigating and ask about details, and I get no response. Some weeks later a very similar question comes in from a fresh email address, likely the same person, still not wanting to do the work needed to get help.

Third, my software and other products can be used for good or evil. If I don’t know who you are, am I enabling you to build the new Turkish censorship infrastructure, or helping you implement the Roskomnadzor block list more efficiently? These are two examples that actually happened, by the way.

What’s next, send a copy of photo ID?

Of course not. But I do care that the people asking for help have not obviously gone out of their way to hide who they are. I am fine, for example, with GitHub issues coming from accounts that clearly work with many open source projects, even if I don’t know who is behind them: I can see they have a track record of getting issues solved.

Similarly, many internet users are pseudonymous - we may not know exactly who they are, but they have developed a reputation by being part of the community. I love to work with them.

As a case in point, consider @SwiftOnSecurity. We don’t know who they are, but their contribution is such that “Swift” is able to get a CEO phoned out of bed at 2AM with a single tweet. Be like Tay.

“Our corporate policy does not allow us to disclose our use of open source software”

While I have sympathy for the pain this will cause you individually, my open source policy does not allow me to offer free help to corporations who do not even have the decency to admit that they use my software.

I understand it is not easy for (large) corporations to support open source software, with procurement not understanding why you are paying for free software. I really get that.

But one of the few things you CAN do as a corporation is lend a project credibility by admitting that you use it. If your organization decides to even withhold that minimum contribution, please understand I can’t help you.

As an aside, keeping your identity secret can make open source projects overlook the weight of your problem, as happened to Cloudflare in 2014 when they complained anonymously about PowerDNS, and we therefore did not have the context to appreciate the scale of their issue.

“But I found a bug in your software”

While I am grateful for your report, I have no moral obligation to fix your every bug. Life is short, many things need to be done. If you truly want to upset an open source developer, tell them what they “should” be doing - safely behind your anonymous email address or single use GitHub account.

“You write free software so you must provide free support”

I don’t even.

What if I privately tell you who we are, but you keep it secret?

To a certain extent that helps, but not when providing support for open source software.

We wrote about this earlier for PowerDNS. In short, it does not scale to provide free support to the whole world without keeping a public record of it. As noted in Open Source Support: out in the open:

By providing support in the open, other people can learn, search engines pick up our answers, the community can pitch in with solutions or suggestions. Doing free support this way provides a true public benefit.
If you have a domain that does not resolve, we need the actual name of that domain, not a placeholder. If we cannot query your nameservers because you won’t tell us their IP address, we can’t help you.

What about people who really need anonymity?

These exist, and I help them. I have extended family living in oppressive regimes. And you know, I can tell when the need for secrecy derives from worries about personal safety. But the vast majority of anonymous users have no such worries: not sharing who they are is mere convenience for them, allowing them to avoid the risk of looking stupid under their real name, while making my life harder.


If you contact me for help while taking efforts to stay anonymous, and your anonymous identity has no visible track record, please know that in general there is little I can do for you.


Eowyn Kills the Witch King

Description: The Witch King, from The Lord of the Rings, at the Battle of the Pelennor Fields.

Quoting from memory a snarky conversation about this on rec.arts.sf.written lo these many years ago:

WITCH-KING: No man can kill me.
EOWYN: I am no man!
WITCH-KING: Bah, this Westron is so imprecise. I did not mean _vir_, I meant _homo_.
EOWYN: In that case, permit me to point out that Meriadoc, who is not _homo_ but _pygmaeus_, has just introduced a blade of Gondolin to your knee.
Why can't both be true?

Local-first software: you own your data, in spite of the cloud


Local-first software: you own your data, in spite of the cloud Kleppmann et al., Onward! ’19

Watch out! If you start reading this paper you could be lost for hours following all the interesting links and ideas, and end up even more dissatisfied than you already are with the state of software today. You might also be inspired to help work towards a better future. I’m all in :).

The rock or the hard place?

On the one hand we have ‘cloud apps’ which make it easy to access our work from multiple devices and to collaborate online with others (e.g. Google Docs, Trello, …). On the other hand we have good old-fashioned native apps that you install on your operating system (a dying breed? See e.g. Brendan Burns’ recent tweet). Somewhere in the middle, but not quite perfect, are online (browser-based) apps with offline support.

The primary issue with cloud apps (the SaaS model) is ownership of the data.

Unfortunately, cloud apps are problematic in this regard. Although they let you access your data anywhere, all data access must go via the server, and you can only do the things that the server will let you do. In a sense, you don’t have full ownership of that data: the cloud provider does.

Services do get shut down [1], or pricing may change to your disadvantage, or the features evolve in a way you don’t like and there’s no way to keep using an older version.

With a traditional OS app [2] you have much more control over the data (the files on your file system at least, which if you’re lucky might even be in an open format). But you have other problems, such as easy access across all of your devices, and the ability to collaborate with others.

Local-first software ideals

The authors coin the phrase “local-first software” to describe software that retains the ownership properties of old-fashioned applications, with the sharing and collaboration properties of cloud applications.

In local-first applications… we treat the copy of the data on your local device — your laptop, tablet, or phone — as the primary copy. Servers still exist, but they hold secondary copies of your data in order to assist with access from multiple devices. As we shall see, this change in perspective has profound implications…

Great local-first software should have seven key properties.

  1. It should be fast. We don’t want to make round-trips to a server to interact with the application. Operations can be handled by reading and writing to the local file system, with data synchronisation happening in the background.
  2. It should work across multiple devices. Local-first apps keep their data in local storage on each device, but the data is also synchronised across all the devices on which a user works.
  3. It should work without a network. This follows from reading and writing to the local file system, with data synchronisation happening in the background when a connection is available. That connection could be peer-to-peer across devices, and doesn’t have to be over the Internet.
  4. It should support collaboration. In local-first apps, our ideal is to support real-time collaboration that is on par with the best cloud apps today, or better. Achieving this goal is one of the biggest challenges in realizing local-first software, but we believe it is possible.
  5. It should support data access for all time. On one level you get this if you retain a copy of the original application (and an environment capable of executing it). Even better is if the local app uses open, long-lasting file formats. See e.g. the Library of Congress recommended archival formats.
  6. It should be secure and private by default. Local-first apps can use end-to-end encryption so that any servers that store a copy of your files only hold encrypted data they cannot read.
  7. It should give the user full ownership and control of their data. …we mean ownership in the sense of user agency, autonomy, and control over data. You should be able to copy and modify data in any way, write down any thought, and no company should restrict what you are allowed to do.
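The write path implied by the first three properties can be sketched in a few lines. This is a minimal illustration with hypothetical names (`localStore`, `pendingSync`, `applyEdit`), not code from the paper: every edit lands in on-device storage first, so it is fast and works offline, while a background loop pushes queued changes whenever a connection is available.

```javascript
// Local-first write path sketch (hypothetical names, illustration only).
const localStore = new Map();   // stands in for the on-device database
const pendingSync = [];         // changes not yet pushed to a server or peer

function applyEdit(key, value) {
  localStore.set(key, value);                       // local write, no round-trip
  pendingSync.push({ key, value, at: Date.now() }); // queued for background sync
}

async function syncLoop(pushChange, online) {
  // Drain the queue only while a connection exists; a real app would
  // re-queue failed pushes and merge incoming remote changes too.
  while (pendingSync.length > 0 && online()) {
    await pushChange(pendingSync.shift());
  }
}

applyEdit("title", "Local-first notes");
console.log(localStore.get("title")); // read served entirely from local state
```

The key design point is that the user-facing operation (`applyEdit`) never waits on the network; synchronisation is a separate, deferrable concern.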

How close can we get today?

Section 3 in the paper shows how a variety of different apps/technologies stack up against the local-first ideals.

The combination of Git and GitHub gets closest, but nothing meets the bar across the board.

… we speculate that web apps will never be able to provide all the local-first properties we are looking for, due to the fundamental thin-client nature of the platform. By choosing to build a web app, you are choosing the path of data belonging to you and your company, not to your users.

Mobile apps that use local storage combined with a backend service such as Firebase and its Cloud Firestore take us closer to the local-first ideal, depending on the way the local data is treated by the application. CouchDB also gets an honourable mention in this part of the paper, only being let down by the difficulty of getting application-level conflict resolution right.

CRDTs to the rescue?

We have found some technologies that appear to be promising foundations for local-first ideals. Most notably the family of distributed systems algorithms called Conflict-free Replicated Data Types (CRDTs)… the special thing about them is that they are multi-user from the ground up… CRDTs have some similarity to version control systems like Git, except that they operate on richer data types than text files.
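The core idea, that concurrent edits merge deterministically without coordination, can be illustrated with the simplest CRDT: a grow-only counter (G-Counter). This is a toy sketch for illustration only; a real application would use a library such as Automerge rather than hand-rolling this. Each replica keeps one slot per node ID, and merging takes the per-slot maximum, which makes merges commutative, associative, and idempotent.

```javascript
// Toy G-Counter CRDT: each replica only increments its own slot,
// and merge() takes the element-wise maximum across slots.
class GCounter {
  constructor(nodeId) {
    this.nodeId = nodeId;
    this.counts = {}; // nodeId -> count contributed by that node
  }
  increment(by = 1) {
    this.counts[this.nodeId] = (this.counts[this.nodeId] || 0) + by;
  }
  value() {
    // Total is the sum of every node's contribution.
    return Object.values(this.counts).reduce((a, b) => a + b, 0);
  }
  merge(other) {
    for (const [id, n] of Object.entries(other.counts)) {
      this.counts[id] = Math.max(this.counts[id] || 0, n);
    }
  }
}

// Two replicas diverge offline, then sync in either order.
const a = new GCounter("laptop");
const b = new GCounter("phone");
a.increment(3);
b.increment(2);
a.merge(b);
b.merge(a);
console.log(a.value(), b.value()); // both replicas converge to 5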

While most industrial usage of CRDTs has been in server-centric computing, the Ink &amp; Switch research lab has been exploring how to build collaborative local-first client applications on top of CRDTs. One of the fruits of this work is an open-source JavaScript CRDT implementation called Automerge, which brings CRDT-style merge operations to JSON documents. Used in conjunction with the dat:// networking stack, the result is Hypermerge.

Just as packet switching was an enabling technology for the Internet and the web, or as capacitive touchscreens were an enabling technology for smart phones, so we think CRDTs may be the foundation for collaborative software that gives users full ownership of their data.

The brave new world

The authors built three (fairly advanced) prototypes using this CRDT stack: a Trello clone called Trellis, a collaborative drawing program, and a ‘mixed-media workspace’ called PushPin (Evernote meets Pinterest…).

If you have 2 minutes and 10 seconds available, it’s well worth watching this short video showing Trellis in action. It really brings the vision to life.

In section 4.2.4 of the paper the authors share a number of their learnings from building these systems:

  • CRDT technology works – the Automerge library did a great job and was easy to use.
  • The user experience with offline work is splendid.
  • CRDTs combine well with reactive programming to give a good developer experience. “The result of [this combination] was that all of our prototypes realized real-time collaboration and full offline capability with little effort from the application developer.”
  • In practice, conflicts are not as significant a problem as we feared. Conflicts are mitigated on two levels: first, Automerge tracks changes at a fine-grained level, and second, “users have an intuitive sense of human collaboration and avoid creating conflicts with their collaborators.”
  • Visualising document history is important (see the Trellis video!).
  • URLs are a good mechanism for sharing.
  • Cloud servers still have their place for discovery, backup, and burst compute.

Some challenges:

  • It can be hard to reason about how data moves between peers.
  • CRDTs accumulate a large change history, which creates performance problems. (This is an issue with state-based CRDTs, as opposed to operation-based CRDTs).

Performance and memory/disk usage quickly became a problem because CRDTs store all history, including character-by-character text edits. These pile up, but can’t be easily truncated because it’s impossible to know when someone might reconnect to your shared document after six months away and need to merge changes from that point forward.

It feels like some kind of log compaction with a history watermark (e.g., after n months you might not be able to merge in old changes any more and will have to do a full resync to the latest state) could help here?

  • P2P technologies aren’t production ready yet (but “feel like magic” when they do work).

What can you do today?

You can take incremental steps towards a local-first future by following these guidelines:

  • Use aggressive caching to improve responsiveness
  • Use syncing infrastructure to enable multi-device access
  • Embrace offline web application features (Progressive Web Apps)
  • Consider Operational Transformation as the more mature alternative to CRDTs for collaborative editing
  • Support data export to standard formats
  • Make it clear what data is stored on device and what is transmitted to the server
  • Enable users to back-up, duplicate, and delete some or all of their documents (outside of your application?)

I’ll leave you with a quote from section 4.3.4:

If you are an entrepreneur interested in building developer infrastructure, all of the above suggests an interesting market opportunity: “Firebase for CRDTs.”

  1. This link to ‘Our Incredible Journey’ handily provides a good example: it will take you first to a page announcing that Tumblr has been acquired by Automattic, on which you can agree to the new terms of service should you wish. ↩
  2. Not the new breed of OS apps that are really just wrapped browsers over an online service ↩

The caveats are significant and mean practical designs are all either hybrid or awaiting not yet existing research.

Tesla wants to reinvent the pickup with the $39,900 Cybertruck


On Thursday night, Tesla CEO Elon Musk revealed his company's take on that most quintessentially American of automobiles, the pickup truck. "Trucks have been basically the same for 100 years. We want to do something different," Musk told a rapturous audience. He wasn't underselling things. Tesla's design is called the Cybertruck, and it looks like a cross between the Aston Martin Bulldog—a wedge-shaped concept from the early 1980s—and that cool APC you remember from Aliens.

"We moved the mass to the outside," Musk said, referring to the fact that the Cybertruck has a stainless steel monocoque construction, like the Model 3. Criticizing the body-on-frame construction technique used for most heavy trucks on sale today, Musk told attendees that "the body and the bed don't do anything useful," before launching into a lengthy demonstration of people hitting or shooting body panels and glass from the Cybertruck to prove the toughness of the exterior.

The shape is highly unconventional, but the size could have been picked by a focus group—almost exactly as wide and tall as a Ford F-150 and about as long as some four-seat versions of America's favorite pickup. At the rear, the 6.5-foot (2m) bed—called the Cybertruck Vault here—has a lockable aerodynamic cover that gives the vehicle 100 cubic feet (2,831L) of protected cargo storage. The Vault will also support loads of up to 3,500lbs (1,588kg).

Some of the Cybertruck's other features suggest that Musk might be paying attention to Bollinger, which is working on a very un-Tesla-like range of boutique battery EV off-roaders. A Bollinger will have 15 inches of ground clearance via its air suspension, so the Cybertruck will have 16 inches, Musk revealed. Like the Bollinger, the Cybertruck will also offer 110V and 220V AC outlets, so the vehicle can act as a power source on remote job sites.

There will be three versions of the Cybertruck. The single (rear) motor configuration will have a range of 250 miles (400km) with a towing capacity of 7,500lbs (3,402kg) for $39,900. For an extra $10,000, there's a dual motor (all-wheel drive) variant, which ups the towing capacity to 10,000lbs (4,536kg) and drops the 0-60mph time by two seconds. A trimotor Cybertruck—presumably with one front motor and two rear motors—will cost $69,900 and is tow-rated for 14,000lbs (6,350kg), but you get 500 miles (800km) of range.

Tesla is now accepting $100 refundable deposits for the Cybertruck, which the order page says will go into production in late 2021, with the three-motor version following a year later.


“almost exactly as wide and tall as a Ford F-150 and about as long as some four-seat versions of America's favorite pickup” so, much too large? A golden opportunity to set a trend back towards sensible pickup truck sizes, and you completely blew it.

Please make this a 4 door passenger vehicle.