security research, software archaeology, geek of all trades

Photo



zwol (2 days ago): My id is curious what horrific contortions it takes to combine anarchism with fascism, but my superego is sure I would regret doing the research.

Proposal for turning off standard I/O buffering


Some Unix commands, such as grep, have a command-line flag to say that you want to turn off the buffering that is normally done in the standard I/O library. Some just try to guess what you probably want. Every command is a little different, and if the command you want doesn't have the flag you need, you are basically out of luck.

Maybe I should explain the putative use case here. You have some command (or pipeline) X that will produce dribbles of data at uncertain intervals. If you run it at the terminal, you see each dribble timely, as it appears. But if you put X into a pipeline, say with

    X | tee ...

or

    X | grep ...

then the dribbles are buffered and only come out of X when an entire block is ready to be written, and the dribbles could be very old before the downstream part of the pipeline, including yourself, sees them. Because this is happening in user space inside of X, there is not a damn thing anyone farther downstream can do about it. The only escape is if X has some mode in which it turns off standard I/O buffering. Since standard I/O buffering is on by default, there is a good chance that the author of X did not think to affirmatively add this feature.

Note that adding the --unbuffered flag to the downstream grep does not solve the problem; grep will produce its own output timely, but it's still getting its input from X after a long delay.
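
To see the problem concretely, here is a minimal stand-in for X (the program and its name are mine, not from the original): it emits one line per second. Run it at a terminal and the lines appear one at a time; run ./dribble | cat and nothing appears for ten seconds, because stdout is now a pipe, stdio block-buffers it, and the whole output arrives in one lump when the program exits.

    /* dribble.c: a stand-in for X that produces dribbles of data.
       stdio line-buffers stdout at a terminal but block-buffers it
       when it is a pipe, so "./dribble | cat" shows nothing until
       the program exits and the buffer is flushed. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        for (int i = 0; i < 10; i++) {
            printf("dribble %d\n", i);
            sleep(1);
        }
        return 0;
    }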

One could imagine a program which would interpose a pseudo-tty, and make X think it is writing to a terminal, and then the standard I/O library would stay in line-buffered mode by default. Instead of running

    X | tee some-file | ...

or whatever, one would do

    pseudo-tty-pipe -c X | tee some-file | ...

which allocates a pseudo-tty device, attaches standard output to it, and forks. The child runs X, which dribbles timely into the pseudo-tty, while the parent runs a read loop to remove dribbles from the master end of the TTY and copy them timely into the pipe. This would work, although tee itself also has no --unbuffered flag, so you might even have to:

    pseudo-tty-pipe -c X | pseudo-tty-pipe -c 'tee some-file' | ...

I don't think such a program exists, and anyway, this is all ridiculous, a ridiculous abuse of the standard I/O library's buffering behavior: we want line buffering, the library will only give it to us if the process is attached to a TTY device, so we fake up a TTY just to fool stdio into giving us what we want. And why? Simply because stdio has no way to explicitly say what we want.
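
For what it's worth, the interposer is not much code. Here is a minimal sketch, assuming Linux's forkpty (link with -lutil); error handling is minimal, and a fuller version would leave the command's stdin and stderr alone and propagate its exit status. The unbuffer script that ships with the expect package plays essentially this same pseudo-tty trick.

    /* pseudo-tty-pipe.c: sketch of the interposer described above.
       Usage: pseudo-tty-pipe -c 'command'.  Linux; link with -lutil. */
    #include <pty.h>        /* forkpty */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 3 || strcmp(argv[1], "-c") != 0) {
            fprintf(stderr, "usage: %s -c command\n", argv[0]);
            return 2;
        }

        int master;
        pid_t pid = forkpty(&master, NULL, NULL, NULL);
        if (pid < 0) {
            perror("forkpty");
            return 1;
        }
        if (pid == 0) {
            /* Child: stdin/stdout/stderr are now the pty slave, so
               stdio inside the command sees a terminal and defaults
               to line buffering. */
            execl("/bin/sh", "sh", "-c", argv[2], (char *)NULL);
            perror("execl");
            _exit(127);
        }

        /* Parent: pull dribbles off the master side as they appear
           and push them downstream with unbuffered write(2).  On
           Linux, read fails with EIO once the child exits. */
        char buf[4096];
        ssize_t n;
        while ((n = read(master, buf, sizeof buf)) > 0)
            if (write(STDOUT_FILENO, buf, (size_t)n) < 0)
                break;
        return 0;
    }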

But it could easily expose this behavior as a controllable feature. Currently there is a branch in the library that says how to set up a buffering mode when a stream is opened for the first time:

  • if the stream is for writing, and is attached to descriptor 2, it should be unbuffered; otherwise …

  • if the stream is for writing, and connects descriptor 1 to a terminal device, it should be line-buffered; otherwise …

  • if the moon is waxing …

  • otherwise, the stream should be block-buffered

To this, I propose a simple change, to be inserted right at the beginning:

If the environment variable STDIO_BUF is set to "line", streams default to line buffering. If it's set to "none", streams default to no buffering. If it's set to "block", streams default to block buffering. If it's anything else, or unset, it is ignored.
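
To give a feel for the size of the change, here is a sketch of the hook the library could run the first time a stream's buffering mode is chosen, before any of the rules above. The function name and placement are invented; glibc's real stream-setup code is organized differently.

    /* Hypothetical hook, consulted when a stream's buffering mode is
       first chosen.  Returns nonzero if STDIO_BUF supplied the mode,
       zero to fall through to the existing default rules. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int stdio_buf_override(FILE *fp)
    {
        const char *mode = getenv("STDIO_BUF");
        if (mode == NULL)
            return 0;                              /* unset: ignored */
        if (strcmp(mode, "line") == 0)
            return setvbuf(fp, NULL, _IOLBF, BUFSIZ) == 0;
        if (strcmp(mode, "none") == 0)
            return setvbuf(fp, NULL, _IONBF, 0) == 0;
        if (strcmp(mode, "block") == 0)
            return setvbuf(fp, NULL, _IOFBF, BUFSIZ) == 0;
        return 0;                       /* anything else: ignored */
    }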

Now instead of this:

    pseudo-tty-pipe -c X | tee some-file | ...

you write this:

    STDIO_BUF=line X | tee some-file | ...

Problem solved.

Or maybe you would like to do this:

    export STDIO_BUF=line

which then affects every program in every pipeline for the rest of the session:

    X | tee some-file | ...

Control is global if you want it, and per-process if you want it.

This feature would cost around 20 lines of C code in the standard I/O library and would impose only an insignificant run-time cost. It would effectively add an --unbuffered flag to every program in the universe, retroactively, and the flag would be the same for every program. You would not have to remember that in mysql the magic option is -n, that in GNU grep it is --line-buffered, that for jq it is --unbuffered, that Python scripts can be unbuffered by supplying the -u flag, and that with tee you are just SOL, etc. Setting STDIO_BUF=line would Just Work.

Programming languages would all get this for free also. Python already has PYTHONUNBUFFERED but in other languages you have to do something or other; in Perl you use some horrible Perl-4-ism like

    { my $ofh = select OUTPUT; $|++; select $ofh }  # enable autoflush ($|) on OUTPUT

This proposal would fix every programming language everywhere. The Perl code would become:

    $ENV{STDIO_BUF} = 'line';

and every other language would be similarly simple:

    /* In C */
    putenv("STDIO_BUF=line");  /* before any output, and inherited by children */

It's easy to think of elaborations on this: STDIO_BUF=1:line might mean that only standard output gets line-buffering by default, everything else is up to the library.
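
Parsing the elaborated form costs only a few more lines. A sketch, again with invented names: split an optional FD: prefix off the mode, and apply the mode only to the stream whose descriptor matches.

    /* Hypothetical parse of STDIO_BUF=FD:MODE.  Returns the mode that
       applies to descriptor fd, or NULL for "no override". */
    #include <stdlib.h>
    #include <string.h>

    const char *stdio_buf_mode_for(int fd)
    {
        const char *spec = getenv("STDIO_BUF");
        if (spec == NULL)
            return NULL;
        const char *colon = strchr(spec, ':');
        if (colon == NULL)
            return spec;        /* plain "line" applies to every stream */
        if (atoi(spec) == fd)   /* "1:line" applies only to descriptor 1 */
            return colon + 1;
        return NULL;
    }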

This is an easy thing to do. I have wanted this for twenty years. How is it possible that it hasn't been in the GNU/Linux standard library for that long?

zwol (3 days ago): Hm, I may have to make this happen.

Does Anyone Know What These Knobbly Orbs Are For?


At first glance, these intricately carved knobbly orbs might appear to be ornaments, or perhaps ancient toys. It’s unclear, because little is known about these roughly 4,000- to 5,000-year-old artifacts, except for a few details.

Over 400 of the relics have been found since the 19th century, most recently in 2013, the majority in Aberdeenshire in northeastern Scotland and a few scattered across northern Ireland and England. Measuring between 2.75 and 4.5 inches in diameter, the orbs are made from materials such as granite, serpentinite, and sandstone, and can fit in the palm of your hand. The number of knobs on each stone ranges from six to 160, and they are engraved with horizontal lines, spirals, or circles.


Since the 1800s, museums and historical societies have catalogued these stone balls. The more they gathered in their collections, the more questions and theories arose. The earliest theories argued that the stones resembled the geometric stone symbols of the Picts, a group of people who lived in northern Scotland during the late Iron Age (circa 800 B.C. to A.D. 43). However, most of the balls date back to well before this era.

One 1876 theory, from Dr. John Alexander Smith, then vice president of the Society of Antiquaries of Scotland, posits that the balls served as weapons, possibly attached to wooden sticks and swung around. At a 1914 meeting of the Society, archaeologist Ludovic McLellan Mann proposed instead that “they were used as movable pieces of weights on weighing beams” derived from Roman statera balances. Yet another theory holds that oracles rolled the balls like dice to foresee the future.


Archaeologist Dorothy Marshall disputed those theories in her 1976-77 report in the Proceedings of the Society of Antiquaries of Scotland, and surmised that the balls' use would likely remain unknown.

The seemingly unsolvable puzzle doesn’t deter the speculators. University of Oxford mathematician Marcus du Sautoy theorizes in an Ashmolean Museum video that the people using the balls "were starting to be mathematicians" and that the artifacts are early examples of abstract thought and of humans exploring symmetry. In his article “The Carved Stone Balls of Scotland: Who made them, and why?”, amateur historian Jeff Nisbet proposed that the carved stones functioned like “the résumés and portfolios of today’s workforce” and “verified a mason’s ability to work stone” as their makers traveled from site to site.

Maybe the balls are indeed portfolios or weapons or nothing at all, but it’s always fun to guess. What do you think these balls represent? We want to hear your most creative ideas, so tell us, and we’ll share some of our favorites.


Was the Efail disclosure horribly screwed up?


TL;DR. No. Or keep reading if you want.

On Monday a team of researchers from Münster, RUB and NXP disclosed serious cryptographic vulnerabilities in a number of encrypted email clients. The flaws, which go by the cute vulnerability name of “Efail”, potentially allow an attacker to decrypt S/MIME or PGP-encrypted email with only minimal user interaction.

By the standards of cryptographic vulnerabilities, this is about as bad as things get. In short: if an attacker can intercept and alter an encrypted email — say, by sending you a new (altered) copy, or modifying a copy stored on your mail server — they can cause many GUI-based email clients to send the full plaintext of the email to an attacker-controlled server. (In the most direct variant, the attacker wraps the captured ciphertext in HTML so that the decrypted plaintext ends up inside a URL, for example in an image tag, which the client then helpfully fetches from the attacker's server.) Even worse, most of the basic problems that cause this flaw have been known for years, and yet remain in clients.

[Table from the Efail paper summarizing how the attacks affect S/MIME clients]

The big (and largely under-reported) story of Efail is the way it affects S/MIME. That “corporate” email protocol is simultaneously (1) hated by the general crypto community because it’s awful and has a slash in its name, and yet (2) probably the most widely-used email encryption protocol in the corporate world. The table above — excerpted from the paper — gives you a flavor of how Efail affects S/MIME clients. TL;DR it affects them very badly.

Efail also happens to affect a smaller, but non-trivial number of OpenPGP-compatible clients. As one might expect (if one has spent time around PGP-loving folks) the disclosure of these vulnerabilities has created something of a backlash on HN, and among people who make and love OpenPGP clients. Mostly for reasons that aren’t very defensible.

So rather than write about fun things — like the creation of CFB and CBC gadgets — today, I’m going to write about something much less exciting: the problem of vulnerability disclosure in ecosystems like PGP. And how bad reactions to disclosure can hurt us all.

How Efail was disclosed to the PGP community

Putting together a comprehensive timeline of the Efail disclosure process would probably be a boring, time-intensive project. Fortunately Thomas Ptacek loves boring and time-intensive projects, and has already done this for us.

Briefly, the first Efail disclosures to vendors began last October, more than 200 days prior to the agreed publication date. The authors notified a large number of vulnerable PGP GUI clients, and also notified the GnuPG project (on which many of these projects depend) by February at the latest. From what I can tell every major vendor agreed to make some kind of patch. GnuPG decided that it wasn’t their fault, and basically stopped corresponding.

All parties agreed not to publicly discuss the vulnerability until an agreed date in April, which was later pushed back to May 15. The researchers also notified the EFF and some journalists under embargo, but none of them leaked anything. On May 14 someone dumped the bug onto a mailing list. So the EFF posted a notice about the vulnerability (which we’ll discuss a bit more below), and the researchers put up a website. That’s pretty much the whole story.

There are three basic accusations going around about the Efail disclosure. They can be summarized as (1) maintaining embargoes in coordinated disclosures is really hard, (2) the EFF disclosure “unfairly” made this sound like a serious vulnerability “when it isn’t”, and (3) everything was already patched anyway so what’s the big deal.

Disclosures are hard; particularly coordinated ones

I’ve been involved in two disclosures of flaws in open encryption protocols. (Both were TLS issues.) Each one poses an impossible dilemma. You need to simultaneously (a) make sure every vendor has as much advance notice as possible, so they can patch their software. But at the same time (b) you need to avoid telling literally anyone, because nothing on the Internet stays secret. At some point you’ll notify some FOSS project that uses an open development mailing list or ticket server, and the whole problem will leak out into the open.

Disclosing bugs that affect PGP is particularly fraught. That’s because there’s no such thing as “PGP”. What we have instead is a large and distributed community that revolves around the OpenPGP protocol. The pillar of this community is the GnuPG project, which maintains the core GnuPG tool and libraries that many clients rely on. Then there are a variety of niche GUI-based clients and email plugin projects. Finally, there are commercial vendors like Apple and Microsoft. (Who are mostly involved in the S/MIME side of things, and may reluctantly allow PGP plugins.)

Then, of course there are thousands of end-users, who will generally fail to update their software unless something really bad and newsworthy happens.

The obvious solution to the disclosure problem is to use a staged disclosure. You notify the big commercial vendors first, since that’s where most of the affected users are. Then you work your way down the “long tail” of open source projects, knowing that inevitably the embargo could break and everyone will have to patch in a hurry. And you keep in mind that no matter what happens, everyone will blame you for screwing up the disclosure.

For the PGP issues in Efail, the big client vendors are Mozilla (Thunderbird), Microsoft (Outlook) and maybe Apple (Mail). The very next obvious choice would be to patch the GnuPG tool so that it no longer spits out unauthenticated plaintext, which is the root of many of the problems in Efail.

The Efail team appears to have pursued exactly this approach for the client-side vulnerabilities. Sadly, the GnuPG team made the decision that it’s not their job to pre-emptively address problems that they view as ‘clients misusing the GnuPG API’ (my paraphrase), even when that misuse appears to be rampant across many of the clients that use their tool. And so the most obvious fix for one part of the problem was not available.

This is probably the most unfortunate part of the Efail story, because in this case GnuPG is very much at fault. Their API does something that directly violates cryptographic best practices — namely, releasing unauthenticated plaintext prior to producing an error message. And while this might have seemed a reasonable choice when the API was designed, continuing to support it even as clients routinely misuse it has now led to flaws across the ecosystem. The refusal of GnuPG to take a leadership role in preemptively addressing these vulnerabilities both increases the difficulty of disclosing these flaws and increases the probability of future issues.
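
To make the API point concrete, here is a toy model of the two shapes such an API can take. Every name below is invented, and the "integrity tag" is a crude stand-in for OpenPGP's MDC; the point is only the ordering of release and verification.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy integrity check standing in for a real MDC/MAC. */
    static bool tag_ok(const char *msg)
    {
        size_t n = strlen(msg);
        return n > 0 && msg[n - 1] == '!';
    }

    /* Unsafe shape: hand plaintext to the caller as it is produced,
       and report an integrity failure only at the end.  By then the
       caller (an email client, say) may already have rendered it. */
    static void decrypt_streaming(const char *msg)
    {
        fputs(msg, stdout);               /* plaintext already escaped */
        if (!tag_ok(msg))
            fputs("\n[integrity error, reported too late]\n", stderr);
    }

    /* Safe shape: verify first; on failure, release nothing at all. */
    static void decrypt_then_release(const char *msg)
    {
        if (!tag_ok(msg)) {
            fputs("[integrity error, no plaintext released]\n", stderr);
            return;
        }
        fputs(msg, stdout);
    }

    int main(void)
    {
        decrypt_streaming("tampered message");     /* leaks, then errors */
        decrypt_then_release("tampered message");  /* errors, leaks nothing */
        return 0;
    }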

So what went wrong with the Efail disclosure?

Despite what you may have heard, given the complexity of this disclosure, very little went wrong. The main issues people have raised seem to have to do with the contents of an EFF post. And with some really bad communications from Robert J. Hansen at the Enigmail (and GnuPG) project.

The EFF post. The Efail researchers chose to use the Electronic Frontier Foundation as their main source for announcing the existence of the vulnerability to the privacy community. This hardly seems unreasonable, because the EFF is generally considered a trusted broker, and speaks to the right community (at least here in the US).

The EFF post doesn’t give many details, nor does it give a list of affected (or patched) clients. It does give two pretty mild recommendations:

  1. Temporarily disable or uninstall your existing clients until you’ve checked that they’re patched.
  2. Maybe consider using a more modern cryptosystem like Signal, at least until you know that your PGP client is safe again.

This naturally led to a huge freakout by many in the PGP community. Some folks, including vendors, have misrepresented the EFF post as essentially pushing people to “permanently” uninstall PGP, which will “put lives at risk” because presumably these users (whose lives are at risk, remember) will immediately fall back to sending incriminating information via plaintext emails — rather than temporarily switching their communications to one of several modern, well-studied secure messengers, or just not emailing for a few hours.

In case you think I’m exaggerating about this, here’s one reaction from ProtonMail:

[Screenshot of ProtonMail's reaction on Twitter]

The most reasonable criticism I’ve heard of the EFF post is that it doesn’t give many details about which clients are patched, and which are vulnerable. This could presumably give someone the impression that this vulnerability is still present in their email client, and thus would cause them to feel less than secure in using it.

I have to be honest that to me that sounds like a really good outcome. The problem with Efail is that it doesn’t matter if your client is secure. The Efail vulnerability could affect you if even a single one of your communication partners is using an insecure client.

So needless to say I’m not very sympathetic to the reaction around the EFF post. If you can’t be sure whether your client is secure, you probably should feel insecure.

Bad communications from GnuPG and Enigmail. On the date of the disclosure, anyone looking for accurate information about security from two major projects — GnuPG and Enigmail — would not have been able to find it.

They wouldn’t have found it because developers from both Enigmail and GnuPG were on mailing lists and Twitter claiming that they had never heard of Efail, and hadn’t been notified by the researchers. Needless to say, these allegations took off around the Internet, sometimes in place of real information that could have helped users (like, whether either project had patched.)

It goes without saying that neither allegation was actually true. In fact, both project members soon checked with their fellow developers (and their memories) and found out that they’d both been given months of notice by the researchers, and that Enigmail had even developed a patch. (However, it turned out that even this patch may not perfectly address the issue, and the community is still working to figure out exactly what still needs to be done.)

This is an understandable mistake, perhaps. But it sure is a bad one.

PGP is bad technology and it’s making a bad community

Now that I’ve made it clear that neither the researchers nor the EFF is out to get the PGP community, let me put on my mask and horns and tell you why someone should be.

I’ve written extensively about PGP on this blog, but in the past I’ve written mostly from a technical point of view about the problems with PGP. But what’s really problematic about PGP is not just the cryptography; it’s the story it tells about path dependence and how software communities work.

The fact of the matter is that OpenPGP is not really a cryptography project. That is, it’s not held together by cryptography.  It’s held together by backwards-compatibility and (increasingly) a kind of an obsession with the idea of PGP as an end in and of itself, rather than as a means to actually make end-users more secure.

Let’s face it, as a protocol, PGP/OpenPGP is just not what we’d develop if we started over today. It was formed over the years out of mostly experimental parts, which were in turn replaced, bandaged and repaired — and then worked into numerous implementations, which all had to be insanely flexible and yet compatible with one another. The result is bad, and most of the software implementing it is worse. It’s the equivalent of a beloved antique sports car, where the electrical system is totally shot, but it still drives. You know, the kind of car where the owner has to install a hand-switch so he can turn the reverse lights on manually whenever he wants to pull out of a parking space.

If PGP went away, I estimate it would take the security community less than a year to entirely replace (the key bits of) the standard with something much better and modern. It would have modern crypto and authentication, and maybe even extensions for future post-quantum security. It would be simple. Many bright new people would get involved to help write the inevitable Rust, Go and Javascript clients and libraries.

Unfortunately for us all, (Open)PGP does exist. And that means that even fancy greenfield email projects feel like they need to support OpenPGP, or at least some subset of it. This in turn perpetuates the PGP myth, and causes other clients to use it. And as a direct result, even if some clients re-implement OpenPGP from scratch, other clients will end up using tools like GnuPG which will support unauthenticated encryption with bad APIs. And the cycle will go round and around, like a spaceship stuck near the event horizon of a black hole.

And as the standard perpetuates itself, largely for the sake of being a standard, it will fail to attract new security people. It will turn away exactly the type of people who should be working on these tools. Those people will go off and build encryption systems in a totally different area, or they’ll get into cryptocurrency. And — with some exceptions — the people who work in the community will increasingly work in that community because they’re supporting PGP, and not because they’re trying to seek out the best security technologies for their users. And the serious (email) users of PGP will be using it because they like the idea of using PGP better than they like using an actual, secure email standard.

And as things get worse, and fail to develop, people who work on it will become more dogmatic about its importance, because it’s something threatened and not a real security protocol that anyone’s using. To me that’s where PGP is going today, and that is why the community has such a hard time motivating itself to take these vulnerabilities seriously, and instead reacts defensively.

Maybe that’s a random, depressing way to end a post. But that’s the story I see in OpenPGP. And it makes me really sad.

jepler (5 days ago): gnupg totally commits crimes against security by "releasing unauthenticated plaintext prior to producing an error message" by default.

On the other hand, mail clients that access remote resources with no user interaction are also committing crimes against security as well as simply the privacy of their users.

Both should be excoriated in public and at length by security researchers and journalists until everyone sees the light.

Episode 25: There Are No Shortcuts


Nancy by Olivia Jaimes for May 12, 2018

zwol (11 days ago): big mood
klohrenz (11 days ago): Ouch. (I LOVE NEW NANCY.)