Untangling the Issues in the “Transgender in the Military” Litigation

A few weeks ago, the Department of Justice made something of a splash by filing petitions for certiorari “before judgment” in three of the pending cases challenging then-Secretary of Defense Mattis’s new policy regarding transgender service-members.  In each of the cases a district court preliminarily enjoined the policy and DOJ appealed.  One of those cases, Karnoski v. Trump, was argued before the U.S. Court of Appeals for the Ninth Circuit on October 10.  A second, Doe v. Mattis, was argued before the U.S. Court of Appeals for the D.C. Circuit on December 10.  And in the third case, Stockman v. Trump, briefing has not yet commenced and the court of appeals has ordered that the case be “held in abeyance pending issuance of the court’s mandate in Karnoski,” which will presumably control the outcome in Stockman, as well.

On Friday, the D.C. Circuit panel in Doe (consisting of Judges Griffith, Wilkins and Williams) held that the district court should have dissolved its injunction, issued in 2017, because of a subsequent change in circumstances—namely, Secretary Mattis’s revised policy, which he promulgated in February 2018.  The court of appeals concluded that the District Court’s refusal to reconsider its injunction was based upon “an erroneous finding that the [2018] Mattis Plan was the equivalent of [the earlier] blanket ban on transgender service.”  The panel explained:  “Although the Mattis Plan continues to bar many transgender persons from joining or serving in the military, the record indicates that the Plan allows some transgender persons barred under the military’s standards prior to the Carter Policy to join and serve in the military.”  The court of appeals also strongly hinted, without conclusively holding, that the new Mattis plan is likely to survive Fifth Amendment scrutiny in light of the deference that courts ordinarily afford to military judgments.

Presumably, then, the Doe case will now proceed to the merits in the district court and the predicate for the government’s petition to the Supreme Court in that case no longer exists.  The “universal” injunctions in the two Ninth Circuit cases remain operative, however, and the Court is scheduled to discuss the petitions in those cases at its conference this Friday.  In the wake of the Jane Doe decision, the government has “respectfully request[ed]” the Supreme Court to “grant the government’s petitions in Karnoski and Stockman and hold the government’s petition and stay application in Doe to account for the possibility that the Doe respondents may seek en banc review in the D.C. Circuit.  In the alternative, the Court should stay the injunctions in Karnoski and Stockman in their entirety. At a minimum, the Court should stay the nationwide scope of those injunctions, such that each injunction bars the implementation of the Mattis policy only as to the individual respondents in each case.”

The government is asking the Supreme Court to intercede in the two Ninth Circuit cases now so that the Court can resolve the merits of the DOD policy this Term.  Why the rush?  DOJ argues that the Obama-era transgender policy that the trial court injunctions have left in place, which former Secretary Ash Carter promulgated in 2016, poses a grave risk to “military effectiveness and lethality”—that the armed services must be permitted to exclude more transgender service-members now in order to be “in the strongest position to protect the American people, to fight and win America’s wars, and to ensure the survival and success of our Service members around the world.”  This is therefore the sort of rare case of high exigency, the petitions insist, that warrants the Court taking the extraordinary step of circumventing the ordinary course of litigation—and acting even before the court of appeals has reviewed the injunction.  By way of analogy the government cites the landmark precedents of (I kid you not) the Steel Seizure case, the Nixon tapes case, and the Dames & Moore case challenging President Carter’s freeze of Iranian assets during the hostage crisis.

I’d be surprised if the Supreme Court grants the petitions before judgment—in part because the Chief Justice appears committed to making this a relatively low-drama Term; and in part because Friday’s D.C. Circuit decision demonstrates that the fate of the Mattis policy in the lower courts is anything but certain; but more importantly because it’s simply implausible that the immediate exclusion of a handful of transitioned transgender service-members from entering the military, and/or preventing a small number of current service-members from beginning transition, is necessary to enable the armed forces to “fight and win America’s wars, and to ensure the survival and success of our Service members around the world.”  The Court might (or might not) ultimately defer to Secretary Mattis’s judgment when it adjudicates the merits, but I doubt it will be eager to credit—to give credence to—such hyperbole.

It’s more likely the Court will simply grant cert. in the regular course, either if and when the Ninth Circuit affirms the preliminary injunction in Karnoski or when one or more courts of appeals affirms a permanent injunction, and then hear the case next Term.  The government probably realizes as much.  I suspect, therefore, that the government filed these unusual petitions primarily to set the stage for its “alternative” efforts to alter the status quo between now and the time (e.g., early 2020) when the Court finally resolves the merits of the cases.  Those efforts are reflected in stay motions that the government filed in the Supreme Court in December (here, for example, is the motion for a stay in Karnoski). As DOJ wrote in footnote 6 of its Karnoski petition:

Should the Court decline to grant certiorari before judgment, such stays would at least allow the military to implement the Mattis policy in whole or in part while litigation proceeds through the Court’s 2019 Term.  Either way, whether through certiorari before judgment or stays of the injunctions, what is of paramount importance is permitting the Secretary of Defense to implement the policy that, in his judgment after consultation with experts, best serves the military’s interests.

For the reasons stated above, I’d be somewhat surprised if the Court issues such a stay of the injunctions:  It’s fairly evident that implementation of the Carter policy has not caused the sky to fall or grievously impacted military readiness.  Indeed, as explained below, because the Mattis policy by its terms would not affect transgender people already in the military who have been diagnosed with gender dysphoria, the principal immediate impact of the injunctions is simply to allow a handful of people who have already successfully transitioned to the gender with which they identify to “access” into the armed forces.  The idea that the addition of this small number of transitioned individuals–a small percentage of the transgender persons in the armed forces–would profoundly affect military readiness and effectiveness simply isn’t plausible, even if the Court pays great deference to the Secretary of Defense.

Whether I’m right about that or not, however, the impending stay motions, rather than the petitions before judgment, are probably where the real action is for now.

* * * *

My primary purpose in writing this post, however, is not to predict what the Court will do with the pending petitions and motions for stays, but instead to highlight some interesting ways in which DOJ has recently tried to frame the merits of the case, and, more broadly, to unpack just what’s at stake in these challenges, i.e., to clarify the differences between the Mattis and Carter policies.

Two things are especially striking about the government’s recent filings.

First, although of course the Solicitor General emphasizes what he describes as the profound differences between the Carter and Mattis policies—he is requesting extraordinary relief, after all—he stresses that in one important respect the Carter and Mattis policies are similar to one another:  under both policies, the petitions note, current service-members diagnosed with gender dysphoria, as well as transgender service-members without such a diagnosis, must continue to “serve in their biological sex” rather than “in their preferred sex” (these are the government’s infelicitous terms) as long as they have not completed a transition to the other sex (a prospect that’s possible under the Carter policy but not the Mattis policy).

Second, DOJ argues that therefore both policies, Carter’s and Mattis’s, discriminate primarily on the basis of whether an individual suffers from gender dysphoria or has transitioned rather than on whether the person is transgender.  Here’s the key, striking passage from page 7 of the Karnoski petition:

Like the Carter policy, the Mattis policy holds that “transgender persons should not be disqualified from service solely on account of their transgender status” [citing the Mattis policy at page 149a of the petition].  And like the Carter policy, the Mattis policy draws distinctions on the basis of a medical condition (gender dysphoria) and related treatment (gender transition).  Id. at 207a-208a.  Under the Mattis policy—as under the Carter policy—transgender individuals without a history of gender dysphoria would be required to serve in their biological sex, whereas individuals with a history of gender dysphoria would be presumptively disqualified from service.  Ibid.  The two policies differ in their exceptions to that disqualification.

The D.C. Circuit panel decision on Friday in effect agreed with this latter contention:  “Although the Mattis Plan continues to bar many transgender persons from joining or serving in the military,” the panel explained, “the record indicates that the Plan allows some transgender persons barred under the military’s standards prior to the Carter Policy to join and serve in the military.”

The idea that the Trump/Mattis policy does not discriminate on the basis of transgender status might be a bit startling to those who haven’t been carefully following the developments in the case.  After all, in his initial memorandum (see pp. 99a-100a of the Karnoski petition), President Trump directed Secretary Mattis “to return to the longstanding policy and practice on military service by transgender individuals that was in place prior to June 2016.”  Yet it’s true, at least as a formal matter, that the Mattis policy (see pp. 207a-208a of the Karnoski petition) does not make distinctions based upon transgender status, as such—and after receiving Secretary Mattis’s proposal President Trump revoked his previous order that would have required such discrimination (see pp. 210a-211a).

That (nominal) about-face in the government’s formal ground of distinction is no accident.  The principal reason DOD and DOJ made the move—in effect, to argue that Secretary Mattis has not in fact drawn distinctions along the “transgender” line that President Trump directed—is not merely to try to get some mileage out of the notion that “Obama did it, too,” but also to argue that if the existing Carter policy is not subject to heightened scrutiny under the so-called equal protection component of the Fifth Amendment (which the plaintiffs concede), then the Mattis policy shouldn’t be subject to such heightened scrutiny, either, given that it’s predicated on similar grounds of discrimination (albeit resulting in far harsher consequences).

In this post I’ll try to pull apart these claims, with hopes that I might explain exactly what the Carter and Mattis policies do, and what their similarities and differences are—something I suspect many observers, and judges, might not yet fully understand.

A cautionary note at the outset:  Some of the details of the policies remain somewhat oblique or ambiguous, and I’m not 100% certain I’ve gotten it all right.  I welcome corrections and suggestions, and I’ll amend the post if and when I think it’d be helpful.

* * * *

In order to understand how the two policies (Carter’s and Mattis’s) operate, it’s necessary to clarify two sets of distinctions: (i) between military “accession” and “retention,” and (ii) between transgender status, gender dysphoria, and gender “transition.”


Accession/Retention

This one’s fairly easy, at least in terms of identifying the categories.  Both of the DOD policies—Carter’s and Mattis’s—have different rules for “accession” and “retention.”  The accession rules are those that establish certain grounds for being disqualified from entering (“accessing”) the armed services.  By contrast, the “retention” rules prescribe grounds for discharging persons who are already serving in the military–conditions they must meet in order to remain in service.

Transgender status/Gender Dysphoria/Gender transition

This is a bit trickier, in part because there’s less consensus on some of the parameters or meanings of the categories.

1. A person is not “transgender” merely because he or she defies or rejects traditional sex stereotypes or roles in any way, or because the person has any particular sexual orientation. Transgender people may identify as straight, gay, lesbian, bisexual, etc.  The most common use of “transgender” is, instead, and in the words of the American Psychiatric Association, to describe “individuals whose gender identity (inner sense of gender)”—such as a deeply felt, inherent sense of being a boy, a man, or male; a girl, a woman, or female; or an alternative gender (e.g., genderqueer, gender nonconforming, gender neutral)—“differs from the sex or gender to which they were assigned at birth.”[1]  For what it’s worth, this appears to be what even the Trump DOD and DOJ mean by the term, too—transgender individuals are those “who identify with a gender different from their biological sex” is the way they (unfortunately and somewhat imprecisely) put it in their briefs.[2]

2. “Gender dysphoria” is a medical term the APA adopted in 2013 in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (“DSM-5”). According to DSM-5, gender dysphoria in adolescents and adults is “[a] marked incongruence between one’s experienced/expressed gender and assigned gender, of at least 6 months’ duration, as manifested by at least two of the following” six things:

1. A marked incongruence between one’s experienced/expressed gender and primary and/or secondary sex characteristics (or in young adolescents, the anticipated secondary sex characteristics).

2. A strong desire to be rid of one’s primary and/or secondary sex characteristics.

3. A strong desire for the primary and/or secondary sex characteristics of the other gender.

4. A strong desire to be of the other gender (or some alternative gender different from one’s assigned gender).

5. A strong desire to be treated as the other gender (or some alternative gender different from one’s assigned gender).

6. A strong conviction that one has the typical feelings and reactions of the other gender (or some alternative gender different from one’s assigned gender).

Now, you might look at that list of six things and assume that all transgender persons must also have gender dysphoria.  After all, surely anyone with a longstanding, deeply felt, inherent sense of being a gender different from the sex to which they were assigned at birth will manifest at least two, and probably more, of the six listed characteristics.

Not so fast, say DOD and DOJ.  Persons with gender dysphoria, they insist, are merely a “subset” of all persons who are transgender (see p.204a)—and, because the Mattis policy turns (at least in part) on dysphoria, not transgender status, it means that “transgender persons [would] not [be] … disqualified from service solely on account of their transgender status” under the Mattis policy (p.149a).

DOD’s purported distinction between the two terms derives from the sentence in the DSM-5 that immediately follows the definition listed above.  The DSM-5 observes that gender dysphoria “is associated with clinically significant distress or impairment in social, occupational, or other important areas of functioning” (emphasis added).  The government reads this sentence to imply that “clinically significant distress or impairment” is a necessary component of what it means to be diagnosed with gender dysphoria, and DOD further assumes that not all persons who have “[a] marked incongruence between one’s experienced/expressed gender and assigned gender, of at least 6 months’ duration, as manifested by at least two of the [six listed characteristics]” also suffer such “significant distress or impairment.”  The Mattis view, in other words, is that some transgender persons don’t suffer “significant distress or impairment” and therefore don’t have gender dysphoria.

Surely that’s right in at least a limited sense because some transgender persons who have successfully transitioned to their identified gender no longer suffer significant distress or impairment.  That would explain why the APA itself insists that “not all transgender people suffer from gender dysphoria and that distinction is important to keep in mind.”

As I explain below, however, the Mattis policy would categorically exclude from military service all such persons who have successfully transitioned to their experienced gender.  Therefore that can’t be the basis of the distinction (i.e., between transgender and dysphoria) upon which the government purports to place so much weight.  The DOD/DOJ view, instead, is that certain transgender persons who haven’t transitioned do not suffer significant distress or impairment and therefore don’t suffer dysphoria—and thus also aren’t categorically excluded from service under the Mattis policy, which is based on distinctions involving dysphoria and transition, not transgender status as such.

Many observers, including some of the plaintiffs and their counsel, might well be dubious about this purported distinction:  They might reasonably think that transgender persons who otherwise satisfy two or more of the six DSM-5 “dysphoria” criteria listed above must surely suffer significant distress or impairment if they haven’t transitioned or, at the very least, that the difference between the two categories must be vanishingly small.  On this view, even if there are some exceptional souls who don’t suffer significant distress or impairment when they’re effectively forced to live “as” the sex to which they were assigned at birth despite “a strong desire to be rid of one’s primary and/or secondary sex characteristics because of a marked incongruence with one’s experienced/expressed gender,” they must comprise, at most, a tiny percentage of transgender individuals.  And if the Mattis policy would not categorically exclude this (hypothetical) tiny “subset” of transgender persons from military service, well, then, it would still be the closest possible thing to a pure “transgender ban” as one could conjure without technically crossing that line.

DOD disagrees:  It insists that a meaningful number of transgender persons do not suffer from gender dysphoria—and the DSM-5 appears to support that view, treating the distress that accompanies the incongruence as the “core component” of the gender dysphoria diagnosis.  In addition to a “marked incongruence” between experienced and birth-assigned gender, says the DSM-5, “[t]here must also be evidence of distress about this incongruence.”

Perhaps more importantly, even if there is, in fact, a strong correspondence between transgender persons and persons who suffer the distress necessary to constitute gender dysphoria–in other words, even if virtually all non-transitioned transgender persons do suffer from dysphoria–DOD points to the fact that most transgender persons serving in the military do not obtain a diagnosis of gender dysphoria, and therefore do not, even under the existing Obama/Carter policy (see below), seek to transition while they serve in the armed forces.  In that case they cannot serve “as” their experienced gender even under the Carter policy (i.e., they must abide by the grooming, uniform and facilities rules for the sex they were assigned at birth).  According to DOD, of the approximately 8,980 current service-members who identify as transgender, only 937 active duty members, i.e., roughly 10 percent, received a medical diagnosis of gender dysphoria between June 30, 2016 (when Carter issued his directive) and February 2018.

DOD insists that this distinction makes a difference because the Mattis policy allegedly “only” imposes restraints that differ from the Carter policy with respect to transgender persons with diagnoses of dysphoria (or who have already transitioned—see below), and not with respect to transgender persons who have neither transitioned nor suffer from dysphoria.  Transgender persons in the latter category, DOD insists, are not categorically banned from serving under the 2018 Mattis policy, as they would have been if Mattis had simply implemented Trump’s tweet.

3. OK, but even if transgender status is materially different from gender dysphoria, parts of the Mattis policy turn not on whether a person suffers from (or has been diagnosed with) dysphoria but instead on whether he or she has transitioned or is engaged in the process of transition.

Everyone agrees that that is certainly a different criterion than transgender status itself—that those who have transitioned or are transitioning are a subset of all transgender persons.

Gender transition is the process of alleviating the significant distress or impairment of gender dysphoria by taking steps to align a person’s body and/or social behavior with the person’s gender identity.  There are at least three principal forms of transitioning:  (i) “social” transition, i.e., helping the person live and work “as” his or her identified gender without medical or surgical treatment; (ii) “medical” transition, which typically consists of aligning secondary sex characteristics with the person’s identified gender using hormone therapy and hair removal or addition; and (iii) “surgical” transition, or gender confirmation surgery, including genital reconstruction surgery, which seeks to make the person’s primary and secondary sex characteristics resemble as closely as possible those commonly associated with the person’s identified gender.

* * * *

With those distinctions now established, let’s take a look at the specifics of how the current Carter policy (see Karnoski petition at 86a-95a and DOD Instruction 1300.28) operates and how the enjoined Mattis policy (see Karnoski petition at 207a-208a) would operate differently.  I’ll review the accession and retention rules in turn.


Accession

The currently governing Carter policy identifies three relevant grounds for disqualification from entering military service, each of which is subject to an exception:

(1) An applicant’s history of gender dysphoria is disqualifying, unless, as certified by a licensed medical provider, the applicant has been stable without clinically significant distress or impairment in social, occupational, or other important areas of functioning for 18 months.

(2) An applicant’s history of medical treatment associated with gender transition is also disqualifying, unless, as certified by a licensed medical provider:

(a) the applicant has completed all medical treatment associated with the applicant’s gender transition; and

(b) the applicant has been stable in the preferred gender for 18 months; and

(c) if the applicant is presently receiving cross-sex hormone therapy post-gender transition, the individual has been stable on such hormones for 18 months.

(3) An applicant’s history of sex reassignment or genital reconstruction surgery is also disqualifying, unless, as certified by a licensed medical provider:

(a) a period of 18 months has elapsed since the date of the most recent of any such surgery; and

(b) no functional limitations or complications persist, and no additional surgery is required.

Under the Mattis accession standards, by contrast, individuals are categorically ineligible to join the armed forces if any or all of the following three things are true:

(i) they’ve had a history or diagnosis of gender dysphoria anytime in the past three years; or 

(ii) they’ve ever undergone gender transition; or 

(iii) they’re unwilling or unable to serve “in their biological sex.”  (More below on what this last term means.)

There are thus clear differences between the two policies when it comes to accession.  For example, under the Mattis policy, all persons who are suffering from gender dysphoria or who have suffered from it within the preceding three years would be barred from joining the military.  Under the Carter policy, a person who’s had dysphoria could join if he or she “has been stable without clinically significant distress or impairment in social, occupational, or other important areas of functioning for 18 months.”

The most dramatic difference, however, is that under the current Carter policy persons who have completed gender transition can access into the armed forces if (to simplify a bit) the transition has been successful and the applicant has been “stable in the preferred gender” for at least 18 months.  In sharp contrast, the Mattis policy would categorically prohibit accession of anyone who’s ever undergone gender transition, full stop, no matter how successful that transition has been or how long ago the person transitioned.


Retention

Turning now to the retention policies—those that govern persons already enlisted in the armed forces—there’s some uncertainty right at the outset because it’s not clear who, exactly, would be subject to the Mattis rules.  For the sake of clarity, I’ll distinguish how those rules appear to apply to three different categories of enlisted service-members.

1. Let’s start with service-members who have already received a diagnosis of gender dysphoria from a military medical provider before the date the Mattis policy goes into effect (i.e., the date, if ever, when all of the injunctions are stayed or lifted) and who have continued to serve and receive treatment pursuant to the Carter policy.  Implementation of the Mattis policy shouldn’t affect this group of enlistees at all, at least in theory, because Mattis has created a “grandfathering,” or “reliance,” exception for such service-members, one that would allow them to continue to benefit from the Carter policy rules, described below.  According to DOD, “[t]he reasonable expectation of these Service members that the Department would honor their service on the terms that then existed [when they entered or continued in service] cannot be dismissed.”

2. Next, what about future enlistees—transgender persons who join the armed forces after the Mattis policy takes effect?  Well, as explained above, this might be a very limited set of individuals:  Under the Mattis policy, persons who have transitioned, or who have gender dysphoria, or who have had gender dysphoria anytime in the preceding three years, will not be permitted to access into the military at all—and that access prohibition presumably covers a significant percentage of all prospective transgender applicants.

Some transgender people, however, might join the military under the Mattis policy—those who haven’t suffered dysphoria for three years and who haven’t transitioned, as well as those who don’t realize they’re transgender until after they join.  These persons would be subject to the same retention rules as the third group, described below.

3. The third category of persons subject to the Mattis retention policy would be transgender service-members who joined the armed forces before the Mattis policy went into effect but who hadn’t yet received a military medical diagnosis of gender dysphoria as of that date (including, perhaps, some persons who transitioned successfully before joining the military).  This describes, for example, one of the plaintiffs in the Karnoski case in the Ninth Circuit (Jane Doe) and one of the plaintiffs in the Doe case in the D.C. Circuit (Jane Doe No. 6).  I do not have a good sense of how large this category might be.  I doubt it would be very large, however.  It’s fair to assume, I think, that the vast majority of current service-members who wish to transition while in service–and to act in accord with the uniform, grooming and facilities rules applicable to their experienced gender–will have already obtained a diagnosis of gender dysphoria (thereby placing them in the “exempted” category 1, above), because they know that their opportunity to transition might be cut off at any moment if the injunctions are stayed and the Mattis policy goes into effect before they’ve obtained such a diagnosis.  (Of course this doesn’t mean that the number would be zero: Presumably some current service-members would only make the significant decision to transition after the Mattis policy goes into effect.) 

* * * *

Now that we’ve identified the transgender service-members who might be subject to the Mattis retention policy—those in the second and third categories described above—we can compare how the Carter and Mattis policies, respectively, would treat persons in this discrete category of enlistees.

There’s one enormous, fundamental difference.  Under the Carter policy, if and when such service-members, as well as all other transgender service-members, receive “a diagnosis from a military medical provider indicating that gender transition is medically necessary,” they may take steps (e.g., by hormone treatments and/or surgery) to transition to their preferred gender while serving—indeed, the Department of Defense subsidizes the medical care and treatment for their diagnosed medical condition.

Under the Mattis policy, by contrast, a service-member may not, while serving, seek to undergo gender transition at all—social, medical or surgical.  If she does, she must leave the military.

In light of such a fundamental difference, what’s the similarity between the two policies that DOJ is now trying to emphasize?  It’s simply this:  Under both policies, the default rule is that if an enlisted transgender service-member has not yet fully transitioned (or has no plans to do so), the military will continue to assign that person a gender “marker” in the Defense Enrollment Eligibility Reporting System (DEERS) corresponding to his or her birth-assigned sex (what DOD calls the “biological” sex), and the member therefore must, at least presumptively, continue to conform to “standards for uniforms and grooming” for enlistees of that sex, and to use “berthing, bathroom, and shower facilities, associated with that gender,” at least during periods while he or she is stationed within the armed forces.

The point DOJ and DOD wish to exploit, in other words, is that both policies, Carter and Mattis, require at least some transgender service-members to comply with what the Mattis policy calls (p.199a) “standards associated with their biological sex” with respect to uniforms and grooming, and the use of sex-designated facilities, at least some of the time.  In a recent filing, DOJ now also emphasizes that under both policies, Carter and Mattis, “transgender individuals may serve openly, so long as they meet applicable standards [for grooming, uniforms and facilities], including standards associated with their biological sex.”

I believe DOJ is right that the two retention policies do have these discrete characteristics in common.  Even with respect to those similarities, however, the policies differ in at least two very significant respects.

First, even during the period in which a service-member is engaged in transitioning, the Carter policy affords the service-member’s commander discretion to “accommodate” the individual’s standards for uniforms and grooming, and to make adjustments respecting the use of berthing, bathroom, and shower facilities, as well (see Subsections 3.2(d)(1)(d)-(e)); see also Carter policy “Implementation Handbook” at 28 (“Exceptions for uniform and grooming standards may be considered per your Service’s policy.  You may consider current and preferred gender uniforms, form, fit and/or function, the Service member’s professional military image, as well as impact on unit cohesion and good order and discipline.”).  Under the Mattis policy, by contrast, the service-member would never be permitted to use the uniforms and grooming of his or her chosen sex, or to use or live in corresponding facilities.  (Therefore although such persons could serve “openly” as transgender in the limited sense that they’d be permitted to acknowledge that they’re transgender, they would permanently be precluded from adhering to the uniform, grooming and facilities rules applicable to their identified gender.)

Second, and more fundamentally, those prohibitions in the Mattis policy would, of course, be permanent, for the entire tenure of the person’s service in the military, whereas under the current Carter policy, by contrast, a service-member can work toward transition and will be assigned a new gender “marker” upon completion of that transition, at the latest—along with the corresponding changes in terms of uniforms, grooming, and use of facilities.[3]

With these distinctions in mind, we can now assess what appears to be an important disagreement between the parties in their recent Supreme Court filings.  In their response to the government’s cert. petition, the Doe plaintiffs write (p.11) that the Carter Policy “extends … to all transgender servicemembers” the permission “to serve in accord with their ‘preferred gender.’”  In the reply brief he filed on Friday, the Solicitor General takes sharp issue with that claim:  “Contrary to respondents’ assertion,” he contends (p.10 n.1), “the Carter policy does not permit ‘all transgender servicemembers’ to serve in their preferred gender.  Rather, the Carter policy permits only individuals with gender dysphoria who have undergone gender transition to do so; by contrast, transgender servicemembers without gender dysphoria or who have not transitioned must serve in their biological sex.”

The government is correct insofar as the Carter policy does not authorize all transgender service-members to serve “in” their preferred gender at all times they are serving.  Nevertheless, the whole purpose of the Carter policy retention rules is to facilitate the ability of any and all transgender service-members to transition, and to serve in accord with their gender identity, if and when they feel it is necessary to do so in order to prevent distress associated with not living in accord with that identity.  To be sure, the Carter policy sets out an orderly process for accomplishing that end—and in particular, it requires a diagnosis of gender dysphoria before any accommodations begin.  My understanding, however, is that virtually any person who has made the very difficult and consequential decision to transition (especially while serving in the military) will only do so when they suffer the sort of distress that supports such a diagnosis.  In such a case, the Carter policy—but not the Mattis policy—allows (indeed, facilitates) that person’s transition, so that they are able to live (and serve) in accord with their gender identity.  And, as noted above, the Carter policy encourages commanders to accommodate transgender service-members even during the transition process with respect to uniforms, grooming and use of facilities.

* * * *

I hope this discussion has helped clarify at least some of the confusion surrounding these cases.  Before signing off, I thought I would add a few brief reflections on the merits of the constitutional challenges.

Transition.  Let’s begin with the stark difference between the two policies when it comes to gender transition:  The Mattis accession policy would categorically exclude all persons who have successfully transitioned from entering the military, and the Mattis retention policy would prohibit current service-members from transitioning while serving.

This disparate treatment on the basis of “transitioning” is a form of discrimination on the basis of sex.  To see why that’s so, assume, for instance, two persons, each of whom identifies as a man, but only one of whom has transitioned to that condition—the other is a cisgender person who’s identified as male from birth.  The government would afford radically different treatment to these two individuals—one could join the armed forces; the other couldn’t—even though they are in all other respects similarly situated, based entirely on the fact that one of them, but not the other, was born with certain physical characteristics not commonly associated with a man—that is to say, based upon the external sexual anatomy with which each of them was born.

Biology, in other words, would determine destiny within the military.

As Sam Bagenstos, Mike Dorf, Leah Litman and I explained in an amicus brief in the Gloucester County v. G.G. case in 2017, that is a classic form of sex discrimination—discrimination on the basis of physical sex characteristics—that should be subject to heightened scrutiny because of the risk that it will perpetuate stereotypes that correlate physiological sex characteristics with other qualities and abilities that are not determined by such characteristics.  (Gloucester concerned Title IX, but a similar analysis should be applicable to such sex discrimination under the equal protection “component” of the Fifth Amendment.)[4]

And, largely for the reasons we offered in that amicus brief, it ought to be very difficult for such discrimination to withstand such scrutiny, even though some forms of segregation on the basis of sexual anatomy (such as providing separate men’s and women’s restrooms) generally satisfy such scrutiny as applied to cisgender persons, at least in certain contexts.

DOD offers two primary reasons for excluding “transitioned” individuals from the armed forces.

The first is a purported fairness concern with respect to persons who have transitioned to present themselves as female.  Although DOD briefly alludes to an alleged (but obviously pretextual) concern with other men’s perceptions of unfairness if a so-called “biological male” is required only to “meet the female physical fitness and body fat standards,” its principal argument in this respect (pp. 174a-175a) is that if such women are permitted “to compete against females in gender-specific physical training and athletic competition, it undermines fairness (or perceptions of fairness) because males [sic] competing as females will likely score higher on the female test than on the male test and possibly compromise safety.”

This strikes me as a manifest makeweight.  As far as I’m aware there’s no evidence of any such “fairness” problems, real or perceived, in the military, even though service-members have transitioned, and are transitioning, under the Carter policy (and others would do so even under the Mattis policy if they were diagnosed with gender dysphoria before that policy commenced—presumably without material effects on actual or perceived “fairness” within the unit).  Moreover, as a recent Palm Center Report on the Mattis policy explains:

The [DOD] Report [on which Mattis relied] assumes incorrectly that “biologically-based standards will be applied uniformly to all Service members of the same biological sex,” contrary to current practice in which gender-based presumptions are adjustable based on circumstances.  At the U.S. Military Academy, for example, the [Mattis] Implementation Report observes that “Matching men and women according to weight may not adequately account for gender differences regarding striking force.”  But the Report ignores that Cadets’ skill level and aggression, not just weight, are factored into safety decisions, and West Point allows men and women to box each other during training [citing Alex Bedard, Robert Peterson, and Ray Barone, “Punching through Barriers: Female Cadets Integrated into Mandatory Boxing at West Point,” Association of the United States Army, Nov. 16, 2017].

While sex-based standards are used in concert with other factors to promote fairness and safety, male-female segregation is not absolute—and it is not sufficient.  Ensuring fairness and safety in combative training is always a command concern because of the wide variation in body size and weight within gender even when gender is defined by birth.  Commanders at all levels are able to make judgments about how to conduct training in ways that adequately protect the participants, and they are able to do the same thing for transgender service members when and if needed.  This hypothetical scenario does not lend any credence to the contention that inclusive policy has compromised or could compromise cohesion, privacy, fairness, or safety.

Moreover, even if some minuscule number of service-members did have some concerns about perceived unfairness in, e.g., boxing competitions, the response to this alleged “fairness” concern—categorical exclusion of very valuable and skilled persons from the armed forces—would be grossly disproportionate to the problem.

And so that brings us to DOD’s other rationale, which is almost surely the driving force behind the decision to exclude transitioned individuals—namely, the commonly heard “privacy in restrooms and showers” concern, particularly as applied to transgender women.  DOD argues (p.188a) that to allow transgender persons who have not undergone a full sex reassignment—persons who “retain at least some of the anatomy of their biological sex”—to use the facilities of their identified gender “would invade the expectations of privacy that the strict male-female demarcation in berthing, bathroom, and shower facilities is meant to serve.”

Of course, this rationale does not apply, even on its face, to transgender persons who have completed successful gender-confirmation surgery, as DOD acknowledges (see p.175a:  “These problems could perhaps be alleviated if a person’s preferred gender were recognized only after the person underwent a biological transition.”).[5]  But even as applied to transitioned persons who continue to have the external genitals with which they were born, this rationale leaves a lot to be desired.

To be sure, DOD has at least a snippet of evidence it can invoke in support of its rationale:  The DoD report to Mattis cited one instance (see p.37) under the Carter policy in which a commander received a complaint “from biological females in the unit who believed that granting a biological male, even one who identified as a female, access to their showers violated their privacy.”  As my co-amici and I explained in our brief in Gloucester County, however (see pp. 33-36), such expected yet relatively infrequent “privacy” complaints are hardly sufficient grounds to justify imposing a rule excluding transgender students from high school restrooms associated with their identified gender, let alone to categorically exclude transitioned persons from the U.S. military.  What’s more, as the Palm Center Report explains, DoD guidance for the Carter policy offers commanders tools that ought to be sufficient to resolve such matters in the rare cases they arise:

The situation closely matches scenarios 11 and 15 in the Commander’s Handbook, which emphasize that all members of the command should be treated with dignity and respect:  “In every case, you may employ reasonable accommodations to respect the privacy interests of Service members.”  Commanders are given the following guidance on reasonable accommodations:  “If concerns are raised by Service members about their privacy in showers, bathrooms, or other shared spaces, you may employ reasonable accommodations, such as installing shower curtains and placing towel and clothing hooks inside individual shower stalls, to respect the privacy interests of Service members.  In cases where accommodations are not practicable, you may authorize alternative measures to respect personal privacy, such as adjustments to timing of the use of shower or changing facilities.”

As that passage suggests, the most salient point here is that even if it would be reasonable for DOD to take some steps, such as “adjustments to timing of the use of shower or changing facilities,” to address concerns about privacy in shower rooms and the like, it’s gross overkill to address the problem by prohibiting transgender persons from being in the military at all.

For these reasons, the most difficult part of the Mattis policy for DOD to defend, even if the courts do afford extensive deference to military judgments, is its categorical exclusion of transitioned individuals from the military.

Dysphoria.  As explained above, when it comes to service-members with gender dysphoria who have not yet transitioned, the principal difference between the two retention policies is that whereas the Carter policy allows service-members with a dysphoria diagnosis to take medically indicated steps (e.g., hormone treatments and/or surgery) to transition to their preferred gender, and DOD pays for that transition, a service-member under the Mattis policy would be categorically barred from engaging in gender transition at all—social, medical or surgical—if he or she wishes to remain in the military.

The plaintiffs’ constitutional challenge to this aspect of the Mattis policy depends upon showing, at a minimum, that whereas the Carter Policy treats service-members with dysphoria equally with service-members who suffer from other medical conditions that are unrelated to gender identity or gender transition but that have an equivalent or greater impact on military readiness and cohesion, the Mattis policy would, by contrast, treat gender dysphoria more harshly than DOD treats other medical conditions that require equivalent or more extensive treatment and that have an equal or greater impact on military readiness and cohesion.

Assuming the plaintiffs can show that the Mattis policy, but not the Carter policy, treats gender dysphoria more harshly than the military treats such medical conditions unrelated to gender identity that have analogous impacts on the military,[6] the constitutionality of such disparate treatment would then depend upon why DOD does so—on whether DOD can establish, at the very least, plausible bases for such differences apart from (i) simple hostility toward transgender individuals or (ii) an objective to prevent service-members from becoming transitioned, which (as discussed above) would likely be a form of unjustifiable sex discrimination against transitioned individuals on the basis of their “biological” sex characteristics.

As the D.C. Circuit panel suggested on Friday, the courts are likely to afford a great deal of deference to military judgments in this regard.  Even so, however, it’s not clear DOD will be able to provide a legitimate explanation why it’s necessary to deviate from its ordinary policies and standards when it comes to this particular medical condition.  The recent Palm Center Report on the Mattis policy suggests that this could be a difficult showing for DOD to make.

I don’t know enough about the facts, or the records in the cases, to make any confident assessment about the likely outcome on the challenge to the retention conditions.  Nevertheless, this much seems true:  If, as appears to be the case, DOD’s principal rationale here—as with the Mattis policy ban on accession of transitioned individuals—is based upon an alleged concern about allowing transgender women to share certain facilities with other women, and even if it would be justifiable to impose certain limited restrictions on such facility access, that would not begin to explain why it’d be reasonable for DOD also to prohibit transgender service-members from adhering to the uniform and grooming standards of their experienced gender, or to prohibit those same valuable service-members from engaging in the process of, e.g., social and medical transitioning.  As DOJ emphasizes in its latest filings, even the Mattis policy would permit such persons to serve “openly” as transgender.  If that’s the case, then what would possibly explain why the Pentagon would prevent those same persons from dressing and grooming themselves in accord with their experienced, and self-proclaimed, gender?  Such a limitation would appear to be nothing more than a form of simple, gratuitous cruelty.  If that’s right, then the Mattis retention limitations ought to be constitutionally dubious no matter what degree of scrutiny the Court ultimately applies, and regardless of the degree of deference it affords to reasonable military judgments.


[1] The APA site in question also refers to individuals whose “gender expression (outward performance of gender) differs from the sex or gender to which they were assigned at birth,” but I’m not sure that’s what’s at issue in these cases:  My understanding is that some persons may choose to outwardly “perform” as a particular gender without necessarily having a deeply felt sense of being of that sex or gender.  If I’m not mistaken, such persons are not the subject of the disputes in these cases, nor are they always or often referred to as “transgender.”  To be sure, the transgender persons at issue in these cases may often have such a gender expression or outward performance of particular characteristics traditionally associated with one sex, but it’s their “inner sense” of that gender that makes them “transgender,” at least for these discrete purposes.

[2] As I and my fellow amici noted in our brief in the recent G.G. case, in many “biological” respects a transgender person might have sex characteristics different from those assigned at birth.  Because of hormone treatment or surgery, for instance, their voice and physical appearance may correspond to their identified gender; and even at birth, a person’s chromosomal, anatomical, hormonal, and/or reproductive characteristics could be ambiguous or in conflict.  See Radtke v. Misc. Drivers & Helpers Union Local No. 638 Health, Welfare, Eye & Dental Fund, 867 F. Supp. 2d 1023, 1032 (D. Minn. 2012).  What DOD undoubtedly has in mind, then, is the sex that a hospital assigned to the person at birth, which is ordinarily a function of the newborn’s external genitalia.

[3] There’s a possible third distinction, as well.  By its terms, the Mattis policy provides (p.200a) that “service members who are diagnosed with gender dysphoria after entering military service may be retained without waiver, provided that[, inter alia,] … the Service member does not require gender transition.”  The phrase “does not require” there is ambiguous.  If it is intended to mean only that a service-member cannot serve if he or she “requires” gender transition in order to adequately perform his or her military functions, it would not differ from the Carter policy.  If, however, the phrase is construed to mean that persons who “require” transition in order to treat their gender dysphoria must be removed from service altogether, then of course it would amount to a virtually categorical ban on service-members with dysphoria (other than those who are subject to the “reliance” exception), which is virtually the opposite of what the Carter policy prescribes for such persons.

[4] DOJ asserts (see p.11) that making distinctions on the basis of whether a person has transitioned is a form of discrimination on the basis of “treatment,” and thus should be “subject only to rational-basis review.”  Obviously, however, the Mattis policy would not exclude transitioned individuals from the armed services because they’ve previously undergone a particular form of “treatment”—it would instead exclude them because the external sexual anatomy with which they were born does not correspond to the gender to which they have successfully transitioned, which is literally a form of discrimination on the basis of (biological) sex.

[5]  In response, DOD asserts (id.) that “[t]he concept of gender transition is so nebulous . . . that drawing any line—except perhaps at a full sex reassignment surgery—would be arbitrary, not to mention at odds with current medical practice, which allows for a wide range of individualized treatment,” and, “[i]n any event, rates for genital surgery are exceedingly low—2% of transgender men and 10% of transgender women.”  These factors purportedly “weigh in favor of maintaining a bright line based on biological sex” (id. at 176a).

[6] DOD asserts (see Pet. App. 206a) that the Carter policy actually treats service-members with a history or diagnosis of gender dysphoria more favorably than DOD treats similarly situated service-members with other medical conditions—that it “exempt[s] such persons from well-established mental health, physical health, and sex-based standards, which apply to all Service members.”  As far as I can tell, however, the only such purported “exemption” that DOJ identifies (see p.4) is that the Carter policy permits individuals with gender dysphoria who have undergone gender transition to adhere to the grooming, uniform and facilities rules for persons of their identified gender, thereby “exempt[ing] [them] from the uniform, biologically based standards applicable to their biological sex.”  But that “exemption,” of course, is precisely the way in which the medical condition in question is alleviated, thereby allowing the member to better serve the armed forces.  Moreover, as I discuss below, once service-members have successfully transitioned there is unlikely to be any significant military need to require them to continue to abide by the grooming, uniform and facilities standards “applicable to their biological sex.”



vintagegeekculture: The single greatest and most fascinating...



The single greatest and most fascinating “futurist” architecture movement in the world right now is happening in Bolivia, where national prosperity and a dedication to works for the poor and public housing led to an explosion of colorful styles inspired by Aymara Indian art. There should be more articles about this, the interiors are just as amazing. Incidentally, most of these buildings are not for the rich or in trendy neighborhoods, but are public housing. I’ve heard this style referred to as “Neo-Andean” but like most currently thriving styles it doesn’t have a universally agreed on name yet.


The Strange, Uplifting Tale of “Joy of Cooking” Versus the Food Scientist


The very first edition of “The Joy of Cooking” was self-published by the St. Louis hostess and housewife Irma Rombauer in the first years of the Great Depression. A relatively modest volume, it collected some four hundred and fifty recipes gathered from family and friends, garlanded throughout with chatty headnotes and digressions regarding the finer points of entertaining, nutrition, menu planning, and provisioning. Since that original edition, the book has become one of the best-selling cookbooks of all time. It also has undergone eight significant revisions: Rombauer’s list of recipes exploded into the thousands; entire chapters were added (frozen desserts) and dropped (wartime rationing). (“The” was dropped from the title in the mid-sixties.) The 1997 edition was a particular departure, replete with contributions from superstar chefs and celebrity food writers. “Joy” purists considered it something of a heresy (the Times memorably called it “the New Coke of cookbooks”), and were relieved when the 2006 edition returned to classic form.

Aside from that hiccup, “Joy” has been subject to very little criticism in its eighty-seven-year life. Smart, bossy, funny, a little bit cornball, the book has been a staple in countless American kitchens, a go-to gift for newlyweds and recent grads, its adherents spreading the gospel to their own children. (When my parents’ ragged copy of the 1964 edition succumbed to water damage a few years ago, my mother delivered the news as if a relative had died.) About the worst that’s been said of the book is that it’s more useful as a general-reference volume than as a recipe go-to, which—given the cooking world’s overabundance of recipes and its shortage of genuinely useful reference books—is actually sort of a compliment.

So it came as a shock, in 2009, when the prestigious scholarly journal Annals of Internal Medicine published a study under the pointed headline “The Joy of Cooking Too Much.” The study’s lead author, Brian Wansink, who runs Cornell University’s Food and Brand Lab, had made his reputation with a series of splashy studies on eating behavior—in 2005, for instance, his famous “Bottomless Bowls” study concluded that people will eat soup indefinitely if their supply is constantly replenished. For “The Joy of Cooking Too Much,” Wansink and his frequent collaborator, the New Mexico State University professor Collin R. Payne, had examined the cookbook’s recipes in multiple “Joy” editions, beginning with the 1936 version, and determined that their calorie counts had increased over time by an average of forty-four per cent. “Classic recipes need to be downsized to counteract growing waistlines,” they concluded. In an interview with the L.A. Times, Wansink said that he’d decided to analyze “Joy” because he was looking for culprits in the obesity epidemic beyond fast food and other unhealthy restaurant cooking. “That raised the thought in my mind: Is that really the source of things? . . . What has happened in what we’ve been doing in our own homes over the years?”

John Becker, the great-grandson of Irma Rombauer, lives with his wife, Megan Scott, in Portland, Oregon, and they are the current keepers of the “Joy” legacy. When the results of Wansink’s research were released, they and their publishers were blindsided. With the help of Rombauer’s biographer, they posted a response on the “Joy” Web site criticizing some of Wansink’s methods and calling attention to his sample size—out of the approximately forty-five hundred recipes that appear in later editions, he’d chosen eighteen, a mere 0.004 per cent of the book’s content. But they stopped short of rejecting Wansink’s conclusions outright. “Joy” had always been an idiosyncratic operation, written and rewritten, over the years, by strong personalities who held forceful and often conflicting opinions. (Becker’s grandmother, Marion Rombauer Becker, and father, Ethan Becker, were each eventually added as co-authors.) “We assumed that he was probably correct, and that the recipes probably had increased in calories per serving,” Scott told me recently by phone. “If we had wanted to impugn the reputation of a sitting Cornell department head, I think we would’ve found a really tough row to hoe.”

But the study turned up again and again over the years, becoming part of the conventional wisdom on obesity—a “stand-in,” as Becker puts it, for the “Sad American Diet.” A cartoon that was commissioned by Cornell’s Food and Brand Lab and published with the original study depicts a beefy newer edition of the book haranguing an older edition, jeering at its brother, “I have 44% more calories per serving than you do!” Wansink’s tiny sample set, especially, gnawed at the couple. In his study report, Wansink explained the size as a methodological necessity, writing that “since the first edition in 1936, only 18 recipes have been continuously published in each subsequent edition.” But, in researching the cookbook’s ninth edition (scheduled for 2019), Becker and Scott had created an encyclopedic catalogue of thousands of legacy “Joy” recipes, and they counted several hundred recipes that had remained comparable from one edition to the next. When, in 2015, Wansink’s cartoon landed in Becker’s in-box yet again, he decided to conduct his own research. Becker started his analysis cautiously, hoping to find a few counterexamples in “Joy of Cooking” with which to push back against Wansink’s findings. Instead, he told me, “I was, like, ‘Oh, my God, there’s a lot more.’ I mean, the numbers are turning up in our favor, and they’re definitely not determining what Wansink’s got.”

Then, last month, the BuzzFeed reporter Stephanie Lee published a sweeping exposé of Wansink’s research. Academic standards call for researchers to articulate a hypothesis ahead of time, and then to conduct an experiment that produces data that will either prove or disprove the hypothesis. Lee’s article—which was based on interviews with Cornell Food and Brand Lab employees, and also private e-mails from within the lab, which were obtained through a public-records request—showed that Wansink regularly urged his staff to work the other way around: to manipulate sets of data in order to find patterns (a practice known as “p-hacking”) and then reverse-engineer hypotheses based on those conclusions. “Think of all the different ways you can cut the data,” he wrote to a researcher, in an e-mail from 2013; for other studies, he pressed his staff to “squeeze some blood out of this rock.” One of Wansink’s lab assistants told Lee, in regard to data from a weight-loss study she had been assigned to analyze, “He was trying to make the paper say something that wasn’t true.”

Lee’s report wasn’t the first time that doubt had been cast on Wansink’s work: in 2016, he published a blog post (which he later deleted) revealing that he had encouraged graduate students to do this sort of data fishing; the post resulted in a flurry of critical coverage of his methods. But Lee’s was the most comprehensive and damning account. “Year after year,” she concluded, “Wansink and his collaborators at the Cornell Food and Brand Lab have turned shoddy data into headline-friendly eating lessons that they could feed to the masses.” Two days after Lee’s story was published, John Becker posted on the official “Joy of Cooking” Twitter account, “We have the dubious honor of being a victim of @BrianWansink and Collin R. Payne’s early work.”

Around the same time, Becker sent his own vast archive of material related to Wansink’s study—including a Microsoft Excel spreadsheet tracking the calorie count of hundreds of “Joy” recipes over time—to several academics, including to James Heathers, a behavioral scientist at Northeastern University. Heathers is one of a platoon of swashbuckling statisticians who devote time outside of their regular work to re-analyzing too-good-to-be-true studies published by media-friendly researchers—and loudly calling public attention to any inaccuracies they find. Heathers’s own work—particularly his development of a modelling tool called S.P.R.I.T.E., which allows likely data sets to be reconstructed from published results—has led directly to the amendment or retraction of a dozen academic papers in the past few years, including several authored by Wansink.

Brian Wansink, who runs the Food and Brand Lab at Cornell University.

Photograph by Ben Stechschulte / Redux

Heathers told me that the problems he’s found in Wansink’s studies are generally within the numbers themselves: faulty arithmetic, sloppy recording, subsets of data that disappear at times and then “magically reappear” later, and conclusions that reverse-engineer improbable samples. (Working backward from the results of a study about vegetable-eating habits in children, Heathers determined that Wansink’s conclusion was only valid if one child had devoured sixty carrots at once. Wansink published a lengthy correction, clarifying that the experiment was conducted with “matchstick carrots.” He later retracted the study altogether.) The methodological flaws Heathers found in “The Joy of Cooking Too Much” are of a different sort: because the recipes in question are fixed information, the actual data—ingredients, quantities, nutritional information—aren’t subject to manipulation. Instead, Heathers found issues with the study itself. “The problem is not that it was added up wrong,” he said of the data. “It’s that there’s no real way to add it up right.”

The recipes were compared on the basis of serving size, for instance, but ten of the eighteen recipes that were studied do not specify what counts as a serving. (“Joy” ’s chocolate-cake recipe yields simply “1 cake.”) The small sample size was especially problematic, Heathers explained, because the calorie changes in the eighteen recipes that were studied varied drastically, from a hundred and thirty-four per cent increase in the goulash to a thirty per cent decrease in the rice pudding. “That’s not a reliable pattern!” Heathers said. Wansink also insisted on only comparing recipes that bore identical names in different “Joy” editions, regardless of the accompanying recipes, which sometimes led him to compare two entirely different dishes. He liked to point to gumbo as one of the most egregious calorie gainers, but the recipe from 1936, a clear soup of chicken and sliced vegetables simmered in water, has almost nothing in common with the sausage-studded, roux-thickened chicken variety featured in the 2006 book. “It’s like comparing a Chateaubriand to a whole roast steer,” Heathers said, “and saying they’re both roast beef.”

When I reached Wansink this week by phone, in his office at Cornell, he told me that he stands by the analysis in “The Joy of Cooking Too Much.” “This is a really nice methodology that’s set out,” he said of the study. But he acknowledged that his team had faced challenges, saying, “You’ve got to be very careful that the recipes are comparable, and that the sample frame is one that can be acceptable to a journal, and be seen as fair.” When I suggested that Becker and Heathers found fault with his study on those very grounds, he said that the published data was an abbreviated version of a paper that was “really, really, really quite long,” and that ultimately had, at the request of the journal’s editors, several key elements removed, though he couldn’t recall what those elements were. (He had no comment on the findings of Lee’s BuzzFeed report.)

Studying what and how people eat is a messy science, in large part because it’s extremely difficult to control human behavior: in 2016, the British epidemiologist Ben Goldacre, discussing obstacles to his ideal experiment, noted, of its hypothetical subjects, “I would have to imprison them all, because there’s no way I would be able to force 500 people to eat fruits and vegetables for a life.” Even factors we assume to be absolute can fluctuate; the calorie content of a particular ingredient can change depending on the preparation method, and even on how well it’s chewed. The result is an academic literature full of often contradictory advice—Eating animal fats causes massive weight gain, avoid it! Eating animal fats is the only way to lose weight and keep it off, add it to your morning coffee!—that can amplify consumer anxiety toward how and what to eat.

One point that remains consistent across virtually every nutrition and health recommendation is that eating home-cooked meals prepared with fresh ingredients correlates with better health. This is something that Irma Rombauer seemed to understand instinctively. In an edition of “Joy of Cooking” from the early sixties, she and Marion Becker advised that “well-grown minimally processed foods are usually our best sources for complete nourishment.” With uncanny foresight, on the book’s very first page, they also issued readers a warning: “The sensational press releases which follow the discovery of fascinating fresh bits and pieces about human nutrition confuse the layman,” they wrote. “And the oversimplified and frequently ill-founded dicta of food faddists can lure us into downright harm.”

A previous version of this article incorrectly transcribed a line in an e-mail from Wansink.


STAMPing on event-stream


The goal of a STAMP-based analysis is to determine why the events occurred… and to identify the changes that could prevent them and similar events in the future. 1

One of my big heroes is Nancy Leveson, who did a bunch of stuff like the Therac-25 investigation and debunking N-version programming. She studies what makes software unsafe and what we can do about that. More recently she’s advocated the “STAMP model” for understanding systems. STAMP, she says, provides a much richer understanding of the problems and solutions than simple root-cause analysis. I really like the idea and wanted to try it out and have been looking for a good software accident to try applying STAMP to.

Back in November I got my wish. Some js engineers discovered that the npm package event-stream was stealing people’s bitcoin wallets. On investigation, they found the original maintainer had passed it over to an anonymous person because “he said he wanted to maintain it”. Naturally the internet erupted in a big argument about who was really at fault: the maintainer for giving it to a rando, society for not paying open source maintainers, or NPM for not preventing one extremely specific part of the attack.

I thought this would be a good exercise to try STAMP on, so I did. Then I got carried away and ended up writing way too much on it. You, uh, might want to grab a sandwich or something before reading.

You back? Good. Let’s do this.

Disclaimer: I’m not involved in the npm/js world and learned most of this stuff through research. I’m also not a security person. I’m presenting this as an example of what a STAMP analysis looks like. I do not have access to internal discussions or decisions by either Copay or NPM, which are the source of a lot of important analysis insights. In cases where I wasn’t certain whether a vulnerability was real or not, I erred on the side of including it: even if it had already been fixed, I want to show that following STAMP will discover it.

Intro: the Attack

event-stream is a js library that provides utilities for working with streams of events. Almost 4,000 packages used it or a dependent package.2 While very popular, it was abandoned by its creator, Dominic Tarr, who had lost interest and moved on to other things. His last significant contribution was in 2014. Past that, he just merged other people's PRs.

In September 2018, Dominic Tarr was contacted by "right9ctrl" who offered to take over maintenance of the package. Once Tarr signed over the access rights, right9ctrl added a malicious dependency to event-stream that, when included as a dependency of the Copay wallet, would steal the user's private keys. This was only discovered when it made a deprecated crypto call and a different person noticed.

The accident is an example of a dependency attack, where the security of a system is compromised through its chain of dependencies. Other dependency attacks were the leftpad incident and the AndroidAudioRecorder incident (notable for the core repo not being compromised).

You can read more details about the event-stream debacle here and here.

Finding Fault

The belief that there is a root cause, sometimes called root cause seduction [32], is powerful because it provides an illusion of control.

At first, people blamed Tarr: he gave access to the repo over to an unknown, anonymous person who asked nicely. If he was more diligent, this never would have happened. Clearly, the root cause is that maintainers are lazy.

People quickly leapt to his defense. Open-source is a tiring, thankless job and everything is provided without warranty. He hadn't touched the repo for two years, somebody else wanted to maintain it, he said sure. He did not make the repo expecting to have to maintain it for multiple people. Clearly it's all Copay's fault, who used a no-warranty package without auditing it.

But wait, the attack was hidden really well! The attacker put the actual malicious code in the minified js, not the regular js, so somebody looking at it wouldn’t have seen it. Clearly it’s all NPM’s fault for not minifying everybody’s code themselves.3

Most people blamed one of these three things, but only one of them. One group has to be the root cause, and the others are irrelevant.


The biggest problem with hindsight bias in accident reports is not that it is unfair (which it usually is), but that an opportunity to learn from the accident and prevent future occurrences is lost.

“Who did this” is the wrong question. “How did this happen” is the wrong question. A better question is “why was this possible in the first place?”

An accident isn’t something that just happens. Accidents aren’t isolated failures. Accidents aren’t human error. Accidents aren’t simple. Accidents are complicated. Accidents are symptomatic of much deeper, more insidious problems across the entire system.

This is the core insight of Leveson. Instead of thinking about accidents as things with root causes, we think of them as failures of the entire system. The system had a safety constraint, something that was supposed to be prevented. Its controls, or means of maintaining the constraints, were in some way inadequate.4

The purpose of a postmortem should be to prevent future accidents. We don't just stop the analysis once we find a scapegoat. Sure, we can say "Tarr transferred it over", but why did that lead to an accident? Why did he want to abandon it? Why was he able to transfer it over? Why did nobody notice he transferred it? Why was a single dependency able to affect Copay? Why was a random internet dev so critical in the first place?

Leveson aggregated all of her safety approaches under the umbrella term STAMP.5 We’re going to analyse the attack via STAMP and see if we can get better findings than “Tarr don’t software good.”6

The Analysis


The goal of STAMP is to assist in understanding why accidents occur and to use that understanding to create new and better ways to prevent losses.

Doing a STAMP Accident Analysis is a super comprehensive task which I’m going to simplify for this post. Here’s what we’ll do:

  1. Identify the system constraints and how they are enforced.
  2. Identify the “proximal chain” in painful detail.
  3. Talk about which low-level controls failed and why.
  4. Talk about why inadequate controls were used in the first place.
  5. Keep repeating (3) and (4) at higher and higher levels.

The constant zooming-out is key here: it’s not enough to find out why things broke, but find out why “why things broke”. In theory you’re supposed to keep doing it: if someone skips a step because of managerial pressure, you ask why the manager was pressuring them in the first place. If the manager was worried about production quotas, find out how the quotas were decided. You just keep going and going and going.

For the sake of my sanity (and because I don’t have access to Copay’s secret diary) I’ll just stop at one zoom-out.

Base Constraints

Without understanding the purpose, goals, and decision criteria used to construct and operate systems, it is not possible to completely understand and most effectively prevent accidents.

We’ll start by identifying some constraints of the system:

  1. Package maintainers should be trustworthy.
  2. Packages should not be made malicious.
  3. Malicious packages should not be inadvertently used by users.

This is all enforced by "best practice": it's your responsibility as the user to only include packages you think are safe. You only include packages from maintainers you think are safe, and so they will never make their package malicious. If they modify the package, you are supposed to audit it, or check the changes to make sure they're compatible with your code. npm helps by providing a few bits of tooling: your library dependencies can be locked, package updates follow a convention (SemVer), and npm audit can identify known security problems. As we'll see, they're all inadequate for enforcing safety.

If a malicious package does get included, there are several more constraints we'd expect:

  1. Malicious packages should be identified and removed quickly.
  2. Malicious packages that are used should not reach production.
  3. Malicious code should not be able to steal private information.

It was several months between event-stream going bad and anybody noticing, by which point Copay had made several releases. It was about a week between somebody reporting the issue and NPM removing the package.

Proximal Chain

While the event chain does not provide the most important causality information, the basic events … do need to be identified so that the physical process involved in the loss can be understood.

The proximal chain is the accident timeline in as exacting detail as possible. The purpose here is not to pin blame, but to understand all the system controls involved. That gives us the launching point to start our initial investigation.

We’ve already talked about the beginning: right9ctrl contacts Tarr about taking over event-stream, Tarr gives him update rights in npm and tries to transfer the Github project to his control. However, r9c had already forked the repo, meaning Github prevented signover. Instead Tarr gave r9c admin permissions on his own repo.

r9c starts by adding a few minor bugfixes to event-stream. Then, on September 9, he adds a “patch”. Here he adds a dependency, flatmap-stream. Three days later, he makes a new “major” version which inlines the dependency code into event-stream and removes the dependency. The malicious code is in flatmap-stream’s minified source code. The code is encrypted with the name of the using package as the key. For everything but Copay, it would do nothing, but would run malicious code if Copay (or a fork) is running in release mode. It would check if the user had 100+ Bitcoin in their wallet and, if so, upload the wallet private key to a remote server.

On September 25th, as part of trying to upgrade cordova-plugin-fcm, Copay accidentally updated all of their npm dependencies too. Based on the dependency chain, event-stream was bumped to the malicious patch version, not the clean major version, meaning Copay now had the malicious code. Copay then released the new version on October 1st.

People first started noticing event-stream was throwing deprecation warnings around October 28. It was using crypto.createDecipher, which is not something a stream utility should be doing. Eventually “FallingSnow” investigated and, on November 20, discovered the malicious code. They immediately raised an issue on the event-stream Github repository and someone else emailed NPM support to get it removed. On the 26th NPM added it to npm audit and removed the package from npm. The internets promptly lost their minds.

NPM released an official statement but did not recommend any actions. Copay also released a statement, saying they would (1) implement a Content Security Policy and (2) only upgrade packages for new major versions. This, they believe, would be enough to fix the error.

The first controls

Analysis starts with the physical process, identifying the physical and operational controls and any potential physical failures, dysfunctional interactions and communication, or unhandled external disturbances that contributed to the events. The goal is to determine why the physical controls in place were ineffective in preventing the hazard.

Transfer of Rights

NPM and Github both do a lot to prevent non-maintainers from modifying packages. This is good. The controls, however, were simply not relevant here: Tarr made r9c an official maintainer. There are no controls in place to ensure that a new maintainer is trustworthy.

This is what most people focused on, despite being the most superficial bit. Problem: Tarr gave access rights to an internet rando. Solution: tell people to vet internet randos. This would presumably be enforced by demanding maintainers have better discipline.

This places additional responsibilities on the open source maintainer. One law we see time and time again is “you cannot fix things with discipline.” First of all, discipline-based fixes simply don’t work: see all the data breaches at professional, “responsible” companies. Second, discipline approaches do not scale. This problem happened because a single contributor for a single package made an error. At the time of the attack, Copay had thousands of package dependencies. That means that thousands of maintainers cannot make any mistakes or else the system is in trouble. And even if they all had perfect discipline, this still wouldn’t prevent dependency attacks. A malicious actor could seed a package and use it later, or steal someone else’s account.

Breaking Changers

There is one thing npm could have done here: it could have alerted people that the maintainer had changed. Then people could decide for themselves if they wanted to trust the new maintainer or not, or if they should pin the dependency. I have no idea how much signal vs noise this would produce, so people might not pay attention to this. More on that later. Also, I have no idea how many people would have acted on it, as r9c made several good changes before the bad one.

There is also something Github could have done: made it easier for Tarr to transfer event-stream into r9c’s namespace. While this wouldn’t have affected the attack, it would mean that, on discovery, people wouldn’t have wasted time going after Tarr, who had already given up his rights to the project.

There’s really not much else to examine here. We’ll see many more system faults by looking at why this transfer was so effective instead of why the transfer happened at all. So let’s move on to the next phase of the attack.

Getting to Copay

The “obvious problem” is this: event-stream turned malicious. Copay included it anyway. Either Copay should have audited the change to make sure it was safe, or Copay should have pinned their packages. As with the “Maintainers should be more careful” argument, it’s tempting because it places the blame squarely on one party. Problem: Copay did not audit all of their dependencies. Solution: tell people to audit all their packages. This would presumably be enforced by demanding developers have better discipline.

And just as before, this approach keeps us from having to dig into the details of why the controls, like audits and pinned packages, failed them.

Why did Copay use event-stream?

When there are multiple controllers (human and/or automated), control actions may be inadequately coordinated, including unexpected side effects of decisions or actions or conflicting control actions. Communication flaws play an important role here.

Copay didn’t really depend on event-stream. Copay depended on npm-run-all, which depended on ps-tree, which depended on event-stream, which hid the malicious code in flatmap-stream. Is the problem with ps-tree for not auditing event-stream, or with Copay for not auditing the entire chain of dependencies? Even if they checked every single line in npm-run-all and ps-tree and event-stream, they still wouldn’t have caught the error.

Leveson calls this multiple controllers, or boundary error: there are multiple different groups that could be responsible for auditing, but no single group that is responsible. Each one might independently assume that someone else took care of it. Or it could lead to someone inadequately trained auditing and deciding it was safe, and everybody else believing them.

It’s tempting to make this hierarchical: if A depends on B depends on C, then A audits B and B audits C. This fails for two reasons. The first is that there isn’t actually a hierarchy here: A does not have any authority to make B audit C, so cannot guarantee that they will do so properly. Second, B can successfully identify C as compromised and A can still include it. This is because of how npm does updates.

A note on npm versioning

In complex systems, accidents often result from interactions among components that are all satisfying their individual requirements, that is, they have not failed.

All packages on npm follow Semantic Versioning, or SemVer. SemVer is a format for versioning packages to make it easier to upgrade dependencies. Packages have major versions, minor versions, and patches, represented as Major.Minor.Patch. Major versions mean breaking changes, minor versions are significant nonbreaking changes, patches are as you’d expect. So if a package is on 3.4.2, you should be able to upgrade to 3.4.3 or 3.5.2 without any changes to your own code, while all bets are off for 4.0.0. This helps us keep dependencies manageable and upgrades less painful.

NPM only allows publishing a given name/version combination once. If you want to tweak something after publishing, you have to bump the version. This prevents somebody from replacing a good version with a malicious one.

Depending on your needs, you can express all sorts of version requirements. You can pin a package to a specific version, such as 1.2.1 only. You can express a range of packages, like 1.0.0 - 2.7.1. You can pin the major or minor versions while letting minor/patch version float. If you write ~1.2.3, then you’re saying you can use 1.2.5 or 1.2.19 but not 1.3.0.

Once you install a package, it’s added to your package-lock. From then on npm install will not upgrade it if a newer compatible version is out. If you run npm update, you will upgrade to the latest compatible version for all your packages. This includes transitive dependencies. If you have dependency A -> B -> C and C bumps a patch, you’d upgrade C even if B is unchanged. The exception to this is if B is “shrinkwrapped”, which is explicitly discouraged for libraries.
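As a concrete sketch (the package names here are made up), those range styles look like this in a package.json:

```json
{
  "dependencies": {
    "exact-pin-example": "1.2.1",
    "patch-float-example": "~1.2.3",
    "range-example": "1.0.0 - 2.7.1"
  }
}
```

The first entry never moves; the second accepts 1.2.5 or 1.2.19 but never 1.3.0; the third accepts anything in the stated range.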

Why did Copay upgrade?

ps-tree had a floating dependency on event-stream for version ~3.3.0. This means that they would not upgrade the package except for bugfixes. As mentioned before, this is normally good practice. The attacker exploited this by doing the following:

  1. Add the exploit to patch 3.3.6.
  2. Publish 4.0.0 without the exploit.

Everybody who transitively depends on ps-tree would, on upgrading, get the malicious version. However, people directly depending on it would presumably update it directly to 4.0.0. This means that the people most likely to miss it would be people assuming that ps-tree properly audited the package.
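The effect of that two-step publish can be shown with a toy resolver (real npm resolution is much more involved, and the version list here is made up to mirror the incident): a tilde range on the 3.3 line happily picks up the poisoned patch, while anyone who explicitly moves to 4.0.0 gets only clean code.

```javascript
// Toy patch-level semver resolution -- real npm is more involved.
const versions = ["3.3.4", "3.3.5", "3.3.6" /* malicious */, "4.0.0" /* clean */];

// Highest available version matching a tilde range like ~3.3.0
// (same major.minor, highest patch).
function resolveTilde(range, available) {
  const [maj, min] = range.replace("~", "").split(".").map(Number);
  return available
    .filter((v) => {
      const [M, m] = v.split(".").map(Number);
      return M === maj && m === min;
    })
    .sort((a, b) => Number(a.split(".")[2]) - Number(b.split(".")[2]))
    .pop();
}

console.log(resolveTilde("~3.3.0", versions)); // "3.3.6" -- the poisoned patch
```

Note that every dependent following "good practice" (float patches, pin majors) is exactly the population that gets the malicious version automatically.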

However, the ps-tree team might not have even realized there was a new version at all! The last package update before the incident was in March, several months before the attacker took over event-stream. If the maintainer didn’t specifically upgrade the dependencies on their local version of ps-tree, they wouldn’t have seen there was a new patch for event-stream. And remember that the actual attack was in a package under a different username, so ps-tree could argue that they expected flatmap to do due diligence.

The only comprehensive solution here is to audit every package that changes, no matter how deep it is. This is what Copay now claims to be doing. This is 1) extremely resource-intensive and unviable for the majority of projects, and 2) means that you could have security vulnerabilities that would have been fixed in patches.

The attack itself

Why didn’t Copay notice?

Even in the best of industries, there is rampant attribution of accidents to operator error, to the neglect of errors by designers or managers.

The script only made HTTP requests in production. However, the package still threw deprecation warnings. Why didn’t they notice that?

Copay runs in Electron, a self-contained node environment. We’ll talk a bit more about Electron later, but the important thing here is that it distinguishes “client-facing” code from “main process” code. In particular, “client” code can be debugged fairly easily with Chrome Devtools, but to debug “main process” code you have to run Electron in a special mode and use an external debugger. Module imports are done as part of the main process, so Copay would not see the warnings if they weren’t specifically looking for them.

You could argue that “running a main process debugger” should be part of the normal release process. But Electron seems to discourage that.

Why could the package steal data?

Why was a single dependency, four layers deep, able to steal everybody’s bitcoin wallets?

The Principle of Least Privilege says every part of the system should have just enough privileges to perform its role and nothing else. A stream processing library, for example, should not be able to make HTTP requests or access files. This is a fundamental constraint of security: nothing should be able to do things it is not supposed to be doing.

In JavaScript, PoLP is entirely by convention. All functions have access to XmlHttpRequest, any script can dynamically load any module, anything can write to an existing object’s prototype. JavaScript can read files and do POST requests, so the malicious script can do that, too.
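As a contrived sketch of what "no PoLP" means in practice (hypothetical code, not the actual exploit): any module loaded anywhere in the dependency tree shares globals with the whole application, so a "utility" can quietly wrap a built-in and observe everything that passes through it.

```javascript
// Contrived sketch: any loaded module can patch shared globals.
const seen = [];
const realStringify = JSON.stringify;
JSON.stringify = function (value, ...rest) {
  seen.push(value);                    // the "utility" records every serialized value
  return realStringify(value, ...rest); // behavior is otherwise unchanged
};

// Elsewhere, unrelated application code serializes something sensitive:
JSON.stringify({ walletKey: "xprv-example" });

console.log(seen.length); // 1 -- the wrapper saw it
```

Nothing in the language stops a deeply nested dependency from doing this to the top-level app.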

Content Security Policies

One of the fixes Copay is making is adding a Content Security Policy, which restricts http requests to a whitelist. This happens at the browser/Electron level so JS can’t subvert it. This would have prevented this particular attack but not dependency attacks in general. The malicious code has access to everything the primary code does, too. If, for example, Copay was using JavaScript to generate Bitcoin wallets, the attacker could maliciously reduce the key space to 20 billion keys.
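For readers who haven't seen one, a CSP is just a declarative whitelist enforced outside of JavaScript. A minimal sketch (the API host is hypothetical):

```html
<!-- Hypothetical policy: scripts only from the app itself, outbound
     requests only to one known API host. Any other fetch/XHR is blocked
     by the browser/Electron before script code can see the response. -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; connect-src 'self' https://api.example-wallet.com">
```

Because the policy is enforced by the runtime rather than by convention, a malicious dependency can't simply monkey-patch its way around it.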

Why did it take a week for people to react?

FallingSnow first alerted everybody about the exploit on November 20. It was only November 26 that packages started to mass remove event-stream. That was when NPM published a security advisory saying that event-stream was malicious and pulled it from the registry.

However, people informed NPM by Nov 22 at the latest, and likely by Nov 20. So NPM took 4-6 days to actually publish the advisory. Some of this is probably due to Thanksgiving as most of NPM is US-based. Nonetheless, it’s a pretty long delay for such a critical issue.

Zooming Out

Fully understanding the behavior at any level of the sociotechnical safety control structure requires understanding how and why the control at the next higher level allowed or contributed to the inadequate control at the current level.

We now have a very immediate set of control failures:

  1. Dominic Tarr gave rights to another person
  2. SemVer did not prevent r9c from including evil stuff
  3. Floating pins do not prevent a malicious patch
  4. It’s not clear whose responsibility it is to audit packages
  5. Copay didn’t audit the packages
  6. No PoLP in JavaScript
  7. Copay didn’t debug the main process
  8. CSP was off by default
  9. NPM responded slowly

Now it’s time to zoom out. We need to ask why Tarr was in a position to do so much damage, why nobody audits packages, why NPM responded so slowly. We need to understand why we’re in the situation we’re in. Saying “pin your packages” is completely useless if we don’t know why people use floating dependencies in the first place!

As we go higher, control failures become less and less about specific operational processes and more and more about cultural, organizational, or economic forces. The problems stop being things like “Copay was using an unsafe language” and start becoming “Cross-OS development is difficult without using an unsafe language” or “Copay’s existing workforce was almost entirely Node developers.”

Why don’t people audit?

As safety efforts are successfully employed, the feeling grows that accidents cannot occur, leading to reduction in the safety efforts, an accident, and then increased controls for a while until the system drifts back to an unsafe state and complacency again increases… This complacency factor is so common that any system safety effort must include ways to deal with it.

Auditing is a waste of time.

Most packages aren’t going to be malicious. Copay had 2700 dependencies. After the 200th time auditing an update and going “yup, checks out”, are you really going to be as diligent with the 201st? Remember, “diligent” here means knowing the code well enough to find security holes. This is all on top of maintaining your code, as in the code that’s your actual job.

In theory you could have heuristics, like “only audit packages who changed owners.” But heuristics, if known, are circumventable. r9c made several “good” commits both before and after the bad commit. How long would you be suspicious of r9c until you stopped paying attention?

Also, you probably won’t find the attack even if you were auditing it. It was pretty well hidden! Maybe a professional could find it, but not the average fullstack dev.

In order to make auditing not a waste of time, we’d need to reduce the number of packages we need to audit and make auditing actually likely to turn up bugs. The explanation for why “people don’t audit” is actually threefold:

  1. There are too many dependencies to audit them all
  2. Almost all of them are safe anyway
  3. Of the ones that aren’t safe, it’s extremely hard to discover they’re evil

(1) has two parts to it: there is a very high number of absolute dependencies, and a high percentage of them are risky. (2) is a good thing, but makes it easy to get complacent. For now let’s focus on (3).

Why was the attack so hard to find?

If you looked at the code for event-stream, you wouldn’t see anything malicious. If you instead looked at flatmap-stream, you still wouldn’t find anything malicious. Instead, you would have to look at the minified version hosted on Github. This is different from what you’d get if you minified it yourself. Minified code is difficult to reverse-engineer, but if you did it, it would definitely look suspicious enough to raise concerns.

While it’s in general hard to tell if code is malicious or not, it’s a lot easier to tell if code is suspicious or not. Presumably we could focus our audits on suspicious code, which will produce some false positives but that’s much better than the alternative. Then this problem reduces to “the sketchy code was in the minified version”, and it’s impossible to tell anything really about minified code.

Many people have said this is the core issue: that npm doesn’t verify your minified code matches the regular code. npm should either check your minification or minify the code for you. Then this attack couldn’t have happened!

There is a minor and a major problem with this approach. The minor is that there is no one minification tool, so you’d have to provide npm with the steps to minify, which kind of defeats the point. I don’t even know if all of the minification tools are deterministic.

The bigger problem is that npm doesn’t do any validation anyway.

npm/Github mismatch

npm lets you specify a corresponding Github page for the project. It does not, however, validate that they actually match. It’s perfectly fine to upload one version of the file to Github and another to npm. So instead of being a “you have to look at the minification of the dependency of a dependency” attack, it could have been a “you have to look at the minification of the dependency of a dependency, but in npm and not github” attack.

This seems to be by design, as .npmignore overrides .gitignore. It seems that npm expects it to be common for the two versions to be different. This was raised as an issue before, but as far as I can tell there are no plans to change this, nor do I know what the relative tradeoffs are. However, it does mean that “force the minified and main versions to sync” would be insufficient at plugging this specific style of attack.7

Let’s ask a different question: why are people including minified files in the first place?

Why Minify?

Accidents, particularly component interaction accidents, most often result from inconsistencies between the models of the process used by the controllers (both human and automated) and the actual process state.

People unfamiliar with JS might ask why there was a minified file in the first place. JavaScript is primarily used for web clients, which means the client needs to download it from a server first. Minification reduces the size of the file, for example by removing indentation and replacing var foobarbaz with var a. Minified react is about 500 kb smaller than the unminified version, meaning faster downloads and script starts. Most people use the full version in development and compile the minified versions for use in production.
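A tiny hand-minified example (any real minifier is far more aggressive, but the idea is the same): the behavior is identical, the readability is not.

```javascript
// Readable source:
function totalSize(files) {
  let sum = 0;
  for (const file of files) {
    sum += file.size;
  }
  return sum;
}

// Roughly what a minifier emits (hand-minified for illustration):
function t(a){let b=0;for(const c of a)b+=c.size;return b}

const files = [{ size: 120 }, { size: 380 }];
console.log(totalSize(files) === t(files)); // true: same behavior, far less readable
```

That unreadability is exactly what made the minified build a good hiding spot for the payload.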

Recently we’ve seen a lot of interest in Electron, a framework for running JavaScript as “native” “apps”. Each Electron app must come with a copy of Chromium and Node.js, meaning Electron apps are dozens or even hundreds of megabytes large. All the js scripts are downloaded at once as part of the app. Copay was an Electron app, meaning anybody using it would have already downloaded all of the necessary JavaScript. There was no benefit to using the minified package over the regular one. They just used it because that was what everybody did, and everybody did it because it used to be a good idea. Now, though, following best practice opened a security hole.8

Leveson calls this model drift: The existing rules were ideal for the system in the past, but the system itself has changed. Copay was doing something that made sense in the original context of JavaScript. In the Electron context, though, blindly using minified dependencies is a performance hit and security vulnerability.

Client apps are still vulnerable to dependency attacks, and minification is still a way to obfuscate malicious dependencies, but there’s absolutely no reason it should be so effective against a standalone app.

Why was Tarr’s library so critical?

If the analysis determines that the person was truly incompetent (not usually the case), then the focus shifts to ask why an incompetent person was hired to do this job and why they were retained in their position.

One thing we didn’t talk about yet is why Tarr had such an influential package. He had no intention of having so much responsibility when he originally published it in 2011. So why did Copay rely on it?

Node.js came out in 2009. Its growth has been exponential, though, starting in 2014. One npm philosophy is “don’t reinvent the wheel.” Since an event stream package already existed, don’t write your own; use it instead. So people added event-stream as a dependency, and continued doing so for seven years. Things are further compounded by transitive dependencies: if A depends on B and B depends on C, then A also depends on C. The number of users of your package can grow exponentially.
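
To make the compounding concrete, here's a small sketch of computing the transitive dependency set. The package names below are real, but the edges are a simplified, illustrative subset:

```javascript
// Walk a dependency graph and collect everything reachable from pkg.
function transitiveDeps(graph, pkg, seen = new Set()) {
  for (const dep of graph[pkg] || []) {
    if (!seen.has(dep)) {
      seen.add(dep);
      transitiveDeps(graph, dep, seen); // recurse into the dependency
    }
  }
  return seen;
}

// copay declares one dependency here but transitively pulls in four.
const graph = {
  copay: ["event-stream"],
  "event-stream": ["map-stream", "split"],
  "map-stream": ["through"],
  split: ["through"],
};
console.log([...transitiveDeps(graph, "copay")]);
// → [ 'event-stream', 'map-stream', 'through', 'split' ]
```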

Tarr didn’t do any advertising. He is not a famous person. He just happened to be using Node a little before everybody else, needed to write a utility, and decided to publicly release it. Suddenly it was critical to thousands of projects.

We’ve seen something similar happen with left-pad. Left-padding wasn’t part of JS at the time, so somebody made a package for it in 2014, and everybody used it. Even now, after padStart was added to the JS core library, over 400 packages still directly depend on left-pad and it is downloaded more than 2 million times a week.9
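
The core-language replacement is a one-liner, which makes the continued download numbers all the more striking (the function below is my own sketch of what the package provides, not its actual source):

```javascript
// What left-pad does, expressed with the built-in padStart
// (part of the language since ES2017):
function leftPad(str, len, ch = " ") {
  return String(str).padStart(len, ch);
}

leftPad(5, 3, "0"); // "005"
leftPad("hi", 4);   // "  hi"
```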

This all seems to be an intentional part of the system. One consequence, as we’ve seen, is that people who have neither the resources, abilities, nor inclination are suddenly responsible for the security of thousands of projects they’ve never heard of. Any dependency of this form is extremely susceptible to hijacking and must be treated with suspicion.

This wouldn’t be as big a problem as it is if only a few packages were of this style: maintained by a single person who never wanted or expected a ton of responsibility. Unfortunately, almost all small packages are like this. And there are a lot of small packages.

Why so many dependencies?

Each local decision may be “correct” in the limited context in which it was made but lead to an accident when the independent decisions and organizational behaviors interact in dysfunctional ways. Safety is a system property, not a component property, and must be controlled at the system level, not the component level.

Copay has approximately 2700 dependencies. Most of these are single-purpose dependencies, or included because another package needed something in them. For example, ps-tree used event-stream because it provided a tidy interface for building pipelines with map-stream. In total, event-stream had 7 dependencies… all of which were made by Tarr. The largest of these event-stream dependencies had more direct users than event-stream itself, while the smallest had only 30 other users. The npm community encourages making packages as small and isolated as possible, so a single piece of functionality is often split into several unit packages and an integration package. This is good for quality, reusability, and file sizes: if you only need a small part of the package, you can include the corresponding micropackage instead, and clients need to download less.

This is a good example of what Leveson considers local optimization: library authors have pressing business needs (increase the rate of feature development, reduce the bugginess of their packages, and reduce the size of client scripts). Trying to meet these needs locally leads to a greater client attack surface and so less system safety.

It is very difficult to get more information on the dependency tree. For example, there’s no easy way to get the authors of all of the dependencies. This makes even basic data analysis of dependencies extremely tedious. Very roughly 1700 people managed Copay’s various dependencies. Presumably any of these would be equally vulnerable to a dependency attack, whether by transferring permissions, having their keys stolen, etc.
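
Even the basic question "how many distinct packages am I pulling in?" means walking the lock file yourself. A minimal sketch, assuming the older nested package-lock.json shape (the lock data here is illustrative, not Copay's real tree):

```javascript
// Recursively collect every package name in a v1-style package-lock tree.
function collectPackages(node, names = new Set()) {
  for (const [name, entry] of Object.entries(node.dependencies || {})) {
    names.add(name);
    collectPackages(entry, names); // a dependency can nest its own deps
  }
  return names;
}

const exampleLock = {
  dependencies: {
    "event-stream": {
      version: "3.3.6",
      dependencies: { "flatmap-stream": { version: "0.1.1" } },
    },
    "ps-tree": { version: "1.1.0" },
  },
};
console.log(collectPackages(exampleLock).size); // 3
```

Mapping those names to their maintainers would still require a registry query per package, which is why the ~1700-people figure is so tedious to compute.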

One way to reduce the attack surface would be to reduce the number of dependencies. This can either happen at the library side, by reducing the number of packages people need, or at the user side, by reducing the number of packages people decide to use.

User-centric reductions

Not happening.

Library-centric reductions

Most of Copay’s requirements are small utility functions, deep down in the dependency chain. Part of the reason there are so many is that JavaScript doesn’t have a standard library. For example, the canonical i18n package for js has 224 dependencies, while the python one has 0. While they likely have different features, the difference is still an indication of how much a standard library reduces dependencies.10 If I wanted to dependency attack i18n, there are 224 maintainers I could compromise. If I wanted to attack the Python package, there’s only one.

One way to reduce the number of maintainers, then, is to use more centralized packages: large packages which provide a diverse array of utilities, somewhat akin to a standard library. This both reduces the number of packages and the relative number of untrusted packages: the standard libraries could be maintained by people with explicit responsibility. Presumably these fewer maintainers would also have a change bureaucracy and get paid for their open source contributions. Paying one organization is a lot easier than paying 42 organizations.

This is in contrast with the current culture of many small modules, of course, and npm devs have said that there were deep problems with large packages. However, we have also seen that this can work for js. lodash is downloaded over 15 million times a week. It’s also part of the JavaScript foundation, implying a degree of security, responsibility, and auditing.

By contrast, something like glob-parent, a utility downloaded 10 million times a week, is maintained by one person and has a dependency maintained by two other people. Both of these would be prime candidates for a dependency attack, and so are prime candidates for combining into an aggregated utilities package.

Why don’t people pin packages?

Not only do safety constraints sometimes conflict with mission goals, but the safety requirements may even conflict among themselves.

There are many reasons why people have floating dependencies. Here are three of the more relevant ones:

  1. Multiple separate packages might use the same dependency. If all of them pin to a specific version, you will need multiple copies of the same dependency, adding bloat and making things harder to audit. But if they all had floating versioning, then you could install a version that satisfies all of them.
  2. Version requirements and transitive dependencies can interact in really strange and unintuitive ways.
  3. Dependencies can have bugs and security vulnerabilities, and you should be updating them as soon as these are fixed. By floating your requirement you can automatically include patches whenever you upgrade.
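
Reason (3) hinges on floating version ranges. As a rough sketch of the caret semantics npm uses by default (real semver has more rules, e.g. for 0.x versions, so treat this as illustrative only):

```javascript
// Does `version` satisfy a caret range like "^3.3.4"?
// Caret means: same major version, at or above the stated minor.patch.
function satisfiesCaret(range, version) {
  const [maj, min, pat] = range.slice(1).split(".").map(Number);
  const [vmaj, vmin, vpat] = version.split(".").map(Number);
  if (vmaj !== maj) return false;       // major bumps never float in
  if (vmin !== min) return vmin > min;  // higher minors are fine
  return vpat >= pat;                   // same minor: need >= patch
}

satisfiesCaret("^3.3.4", "3.3.6"); // true  — how a hijacked release flows in
satisfiesCaret("^3.3.4", "4.0.0"); // false
```

This floating behavior is why users who had depended on event-stream for years automatically received the compromised release without taking any action.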

(3) is especially interesting. It doesn’t only encourage people to automatically include patches. It can also encourage people to automatically update major and minor versions. Maintainers often patch the latest version of the package as well as some older versions. Eventually, older versions can be “end-of-lifed”, meaning they no longer get even critical security patches. If you are using an EOLed dependency and need to patch it, you will have to upgrade to a supported version first, which may break the public API. So people regularly upgrade to new versions, even if they don’t need to, just to make sure they can painlessly add in security patches.

This puts two safety constraints in conflict. On one hand, you want to audit package updates to make sure they are safe, which means slow, infrequent upgrades. On the other hand, you want to include critical security patches ASAP, which means fast, regular upgrades. Leveson considers these conflicts a sign that you need to think very carefully about your system design before building it. Is there a way to design npm package management to satisfy both constraints?

No idea. One thing I think might help is if maintainers could put information beyond “major, minor, patch” in their package versioning. Then users could pin packages but quickly identify which ones need to be updated for security reasons. I don’t think npm currently provides this.

In general, npm provides very little help with analyzing packages. There’s no way to distinguish high-risk vs low-risk packages in your setup, or say “upgrade this package unless it added a new dependency” or anything like that. This makes auditing much harder than it already is, as there’s pretty much no official tooling designed to help you.

Why is JavaScript insecure?

Usability and safety, in particular, are often conflicting; an interface that is easy to use may not necessarily be safe.

There are some attempts to make JavaScript more secure, like object.preventExtensions or strict mode. These assume one of two things:

  1. The developer is trying to prevent unintentional encapsulation mistakes by users.
  2. The developer is trying to prevent security holes via code submitted by clients.

In a dependency attack, it’s neither: the malicious code is directly included as part of the final package. This means it has the same privilege as everything else and can subvert any attempts to enforce script security.
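
As a hypothetical illustration (this is not the actual event-stream payload), a dependency that runs in-process can silently wrap a global the host application trusts:

```javascript
// Malicious "dependency" code: wrap JSON.parse to skim interesting fields.
const realParse = JSON.parse;
const skimmed = [];
JSON.parse = function (text, reviver) {
  const value = realParse(text, reviver);
  if (value && value.privateKey) skimmed.push(value.privateKey); // exfiltrate
  return value; // behavior is unchanged from the host's point of view
};

// Host application code: everything looks normal.
const wallet = JSON.parse('{"privateKey":"abc123","balance":5}');
console.log(wallet.balance); // 5
console.log(skimmed);        // [ 'abc123' ]
```

Strict mode, preventExtensions, and similar features do nothing here, because the wrapper runs with exactly the same privileges as the application itself.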

This is another case of model drift. JavaScript was originally designed under the assumption that scripts would be small with few dependencies under the constraint of “never break browser backwards compatibility.” As the use case changes (to server use and eventually native apps) and the style changes (using thousands of small packages), JavaScript requires new security constraints. However, the compatibility constraint is even more important, limiting what changes we can make. The mission constraints directly conflict with the security constraints.

As with the minification, backwards compatibility is not a major constraint for Electron apps. They all run in the same browser and do not need to support IE10. Electron could conceivably run a more locked-down version of JavaScript. In fact, Electron does support this, but almost all of these features are disabled by default. Opt-in security is much less effective than opt-out security.
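
As a concrete sketch, these are some of the hardening knobs Electron exposes through BrowserWindow's webPreferences (the option names are real, but their defaults have varied across Electron versions, so verify against the version you ship):

```javascript
// Opt-in hardening for an Electron renderer. Secure-by-default would
// mean never having to write this object at all.
const hardenedWebPreferences = {
  nodeIntegration: false, // renderer pages can't require() Node modules
  contextIsolation: true, // preload scripts isolated from page JS
  sandbox: true,          // renderer runs in an OS-level sandbox
  webSecurity: true,      // keep same-origin policy enforced
};
// Used as: new BrowserWindow({ webPreferences: hardenedWebPreferences })
```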

It’s important to point out that this problem isn’t unique to JavaScript. The runtime itself has to enforce the principle of least privilege (PoLP) too: if, for example, it doesn’t restrict which modules can make http requests, you can get something like the AndroidAudioRecorder attack even though Java has good modular encapsulation. But I get the impression that it’s impossible to enforce runtime PoLP if the language has powerful runtime metaprogramming. This potentially means that any interpreted dynamically-typed language (Ruby, Python, etc.) can’t completely prevent this kind of attack.11

Why was the NPM response so slow?

Safety starts with management leadership and commitment. Without these, the efforts of others in the organization are almost doomed to failure.

People in the thread immediately emailed NPM, and a few others tweeted at them. Neither of these is an official security channel. According to the NPM Security Policy, the appropriate channel is a dedicated security email address, which is “the best and fastest way to contact npm about any security-related matter.” Matters are triaged in one business day.

This is also the wrong channel: that address is only for security issues related to NPM software. Third-party package vulnerabilities are handled by the Security Working Group, which is very explicitly not responsible for that address. Rather, you are supposed to either submit a report at their HackerOne page or email the working group directly. Neither of these channels is documented anywhere on the official NPM page.

To make matters even more confusing, as of 01/01/19 the HackerOne page isn’t accepting new reports. The WG Github page also links to the private Node HackerOne, which recommends people report security vulnerabilities to the package maintainer. Finally, the NPM security advisory page suggests you report vulnerabilities by email. The instructions on what to do are inconsistent and contradictory.

Responses can take a long time, and updating the security advisories can take even longer. For one “critical” vulnerability, the issue was submitted in August, acknowledged as a critical issue in September, and submitted as an advisory in November – a time lag of several months. event-stream was actually on the fast end of things.

If I were doing a proper STAMP analysis, I’d have to zoom out here and investigate why NPM places so little emphasis on package security. Is it a manpower issue? A priority conflict? Something they just didn’t think about? But I’d need NPM internals to figure this out, which I don’t have, so we’ll have to stop here.


(Embedded tweet: the NPM CTO saying another Leftpad would be impossible.)

When accidents happen, we often try to find “the root cause”, the one thing that can be fixed to prevent the problem happening again. In the case of Leftpad, it was that people could freely unpublish their packages and break dependencies. Here, the “root cause” is usually either “no maintainer responsibility” or “no user audits”. Fixing either of these (if they are fixable at all) may prevent this specific attack from happening. But it would not prevent any variations on the attack, just as fixing “the root cause” of Leftpad didn’t prevent the event-stream attack. We need to examine the entire system to find what made it unsafe.12

The attack started when Tarr transferred control of the package and succeeded because Copay didn’t audit the change. But there were many, many system properties that made it unsafe. Here are just a few:

  1. The node ecosystem favors lots of small packages with one or two maintainers
  2. Most maintainers are random folk who did not expect or want the responsibility
  3. Heavy dependence on legacy, often-obsolete packages
  4. Dependencies are transitive
  5. Very difficult to audit packages, or get additional information on them
  6. No way to distinguish high-risk from low-risk packages
  7. What’s uploaded to npm doesn’t need to match what’s uploaded to Github
  8. Most bundlers default to including minified code, even in standalone applications
  9. People use preminified code instead of globally minifying code
  10. Electron used configuration assumptions from legacy JS that were inapplicable in the new context
  11. Electron does not enable most security features by default
  12. Users are encouraged to regularly and automatically patch
  13. No way to restrict JavaScript module privileges
  14. Inconsistent information on how to report security vulnerabilities
  15. NPM doesn’t prioritize addressing security issues in third-party packages

Fixing the “root cause” is fast and cheap. Changing the fundamental system properties is slow, expensive, and risky. It may conflict with the system’s goals, such as ease-of-use and backwards compatibility, and it may require a lot more money thrown at open source. But we should at least acknowledge that these properties exist, and that they influence how easy and common these attacks are. The system cannot be made safe by root cause fixes alone.

So that was my first STAMP analysis! I don’t know how good it is: I think I ended up focusing too much on specific components and not enough on the social forces. I also think in a couple places I got hung up on auditing and/or let my biases shine through. But I think this identified a lot of interesting system issues. That tells me that the STAMP process is useful: if an outsider to npm and security can find interesting stuff this way, then it’d probably be super useful for actual domain experts, too! Or maybe I just identified boring surface-level stuff. I have no way of knowing!

Oh, and this barely scratched the surface of STAMP. This was just the easiest part of accident analysis. Leveson has a lot more to say about both accidents and the broader safety system in her book. In the lower-left sidebar of the book’s page there’s an option to download it for free. Most people miss that. You can also see her homepage here and learn more about all the cool stuff she did.

Anyway, if you got this far, might as well plug my business. I teach companies how to use formal methods to build complex systems more quickly, cheaply, and safely. It probably wouldn’t have helped at all here but it’s still pretty useful! Feel free to email me if you’re interested in learning more.

Thanks to Richard Whaling , Richard Feldman , and Marianne Bellotti for feedback.

  1. All quotes, unless otherwise noted, are from Engineering a Safer World. You can get it for free here. [return]
  2. For this analysis I’ll say “dependent” to mean a dependent package, “user” to mean a library that depends on the package, and “client” to mean a person using the final product/app. [return]
  3. NPM is in allcaps when it refers to NPM Inc, the company that develops npm (node package manager). [return]
  4. Regular readers of this blog might notice that this is very similar to the formal methods I’m so fond of. I think my love of FM is a reason why STAMP is so interesting to me, but I’m probably presenting it in a way that’s more sympathetic to that interpretation. [return]
  5. STAMP is short for System-Theoretic Accident Model & Processes. It’s an umbrella term for an array of different techniques. The one we’re applying here is STAMP accident analysis, which Leveson calls CAST, for Causal Analysis Based on STAMP. Could you tell that Leveson did a lot of work for the government? [return]
  6. One caveat: in STAMP Leveson assumes the system is hierarchal: even if the organization is distributed, there is at least one group that everybody indirectly reports to. event-stream involves a few different independent actors. I tried to adapt the ideas as best I could. [return]
  7. Why didn’t the attacker exploit the npm/github mismatch? They actually did: the minified file loaded more code from ./test/data.js, which was uploaded to npm and not Github. I have no idea why they didn’t do the same thing with the minified file. [return]
  8. Okay this isn’t totally accurate. Electron stores the source code for all JavaScript in memory. So there’s a good reason to minify Electron code, too: it reduces the memory footprint. But if that’s an issue, you’d be better off globally minifying your code as opposed to including minified packages, so using minified dependencies is an even worse idea than it already is. [return]
  9. There’s also an npm package for a padstart shim, which has been completely unnecessary for two years now. It’s still downloaded 500k times a week. [return]
  10. Earlier, though, I said that many transitive dependencies might be owned by the same person. This might be the case with i18n too. We can also compare the direct dependencies: 9 developer and 6 user dependencies for js, 2 developer and 0 user dependencies for python. [return]
  11. I’m generally super skeptical of the idea that static typing or pure FP reduce software bugs. With that in mind, I’m going to bite the bullet and say that this particular attack would not have been possible in a pure typed FP language, like Elm. Adding a side effect would mean changing the type signature, so users of the module would get a type error. (Also, Elm forces you to bump the major version if you change the types, you couldn’t hide it in a patch.) [return]
  12. I don’t think the NPM team was acting in bad faith, and I don’t think they were incompetent or anything. I think that properly analyzing an accident takes a lot of skill and most engineers (including me) don’t have that skill. Which is why I’m practicing STAMP. [return]

What, No Python in RHEL 8 Beta?


TL;DR Of course we have Python! You just need to specify if you want Python 3 or 2 as we didn’t want to set a default. Give yum install python3 and/or yum install python2 a try. Or, if you want to see what we recommend you install yum install @python36 or yum install @python27. Read on for why:

For prior versions of Red Hat Enterprise Linux, and most Linux distributions, users have been locked to the system version of Python unless they moved away from the system’s package manager. While this can be true for a lot of tools (Ruby, Node, Perl, PHP), the Python use case is more complicated because so many Linux tools (like yum) rely on Python. In order to improve the experience for RHEL 8 users, we have moved the Python used by the system “off to the side”, and we introduced the concept of Application Streams based on Modularity.

Through Application Streams, in combination with Python’s ability to be parallel installed, we can now make multiple versions of Python available and easily installable, from the standard repositories, into the standard locations. No extra things to learn or manage. Now, users can choose what version of Python they want to run in any given userspace and it simply works. For more info, see my article, Introducing Application Streams in RHEL 8.

To be honest, the system maintainers also get some advantages of not being locked to an aging version of Python for our system tools. With users not relying on a particular version of Python coming with the system installation, we have the freedom to take advantage of new language features, performance improvements, and all the other goodness a developer gets when tracking near the upstream version.

However, this has resulted in a dilemma. When a user sits down at a fresh installation of RHEL 8, they will naturally expect that /usr/bin/python will run some version of Python. If you follow the recommendation in Python Enhancement Proposal (PEP) 394, that will be Python 2. However, at some point a new PEP will likely change that recommendation to Python 3, probably during the (typically 10 year) life of RHEL 8! To put this in perspective, consider that RHEL 7 was released in 2014 and will be supported until 2024!

So, what do we do? Well, if we follow the current recommendation, we make some present day users happy. However, when the Python Community shifts to recommending Python 3 as the default, we will make new users unhappy.

As a result, we came to the tough conclusion: don’t provide a default, unversioned Python at all. Ideally, people will get used to explicitly typing python3 or python2. However, for those that want an unversioned command, let them choose from the beginning which version of Python they actually want. So, yum install python results in a 404.

However, we do try to make it as easy as possible to get Python 2 or 3 (or both) on to your system. We recommend using yum install @python36 or yum install @python27 to take advantage of the recommended set of packages to install. If all you really need is *just* the Python binaries, you can use yum install python3 or yum install python2.

We have also setup the alternatives infrastructure so that when you install either (or both) you can easily make /usr/bin/python point to the right place using alternatives --config python. However, as we explained above, and aligned with the Python PEP, we don’t recommend relying on /usr/bin/python being the correct python for your application.

Note: the same issue arises for Python wrapper scripts like pip. Installing Python 3 will put pip3 in your path, but not unversioned pip. With Python modules like pip, venv, and virtualenv, you can avoid confusion and get the right version by running those as a module: python3 -m pip and avoiding the wrapper scripts. Using Python virtual environments is a best practice that also avoids the issues with version ambiguity, see How to install Python 3 on Red Hat Enterprise Linux 7 for virtual environment details and advice.

To conclude, yes, Python is included in RHEL 8! And, it will be even better than in the past! If you want more details on anything in this post, please see the How To Guide on Red Hat Developers.

Oh, and if you haven’t downloaded RHEL 8 yet, go get it now.

Additional Information


Take advantage of your Red Hat Developers membership and download RHEL today at no cost.
