When Did Western Cuisine Come to Japan?

(The following is a post by Eiichi Ito, Reference Specialist for the Japanese Collection, Asian Division.)

An American friend once told me how, on the first day of his visit to Japan, he received an invitation to dine at the home of a Japanese family. While he enjoyed the meal, he was surprised to see that the dishes that his hosts served did not resemble those in a typical Japanese restaurant in the United States. There was no sushi, sashimi (sliced raw fish) or soba (buckwheat) noodles. Instead, he found dishes of korokke (croquette), tori no kara’age (deep-fried chicken), and tonkatsu (pork cutlet) arranged beautifully on the table with ohashi (chopsticks) and a small bottle of shoyu (soy sauce). This scene of seemingly familiar, but also different, foods made him curious about what defined home style cooking in Japan as opposed to America or Europe. He wondered when Western cuisine was first introduced to Japan and came to me for some answers.

The introduction of European food to Japan is closely linked to the history of ports designated for foreign trade. Portuguese merchants were the first Europeans to visit Japan, landing on the small island of Tanegashima in 1543. Just six years later, the Jesuit priest Francis Xavier arrived at the southwestern port city of Kagoshima. Local authorities opened the port of Nagasaki for trade to the Portuguese in 1571. In addition to introducing goods like matchlock rifles and tobacco, the Portuguese also brought new foods, such as tempura (fresh fish, shellfish, or vegetables dipped in a batter of flour mixed with egg and water and deep-fried) and kasutera (castella or sponge cake). Tempura grew in popularity over the following centuries, but it’s not exactly clear that Japanese viewed it as a “Western” dish. While it may have been the first example of European cooking in Japan, it wouldn’t be accurate to describe this as the introduction of Western cuisine more generally.

Figure 1: Dejima, shown at bottom center of an etching in “Vue et description de la ville de Meaco capitale du Japon avec d’autres particularitez du pays,” [between 1714 and 1720]. From: Chatelain, Henri Abraham. Atlas Historique. Amsterdam: L’Honoré & Châtelain, 1714-1720, vol. V, no. 156, p. 192. Library of Congress Prints and Photographs Division.

Concerned about the potentially subversive influence of the Portuguese merchants and Jesuit priests on the local Japanese people, the Tokugawa shogunate (military government) first isolated them in Dejima, an artificial island constructed in Nagasaki Harbor in 1636. Then, after the Shimabara Rebellion in 1637, which involved a number of Kirishitan, or Christian converts, the shogunate banned Christianity and expelled the Portuguese from Japan. Only the Dutch and the Chinese, who were solely interested in trading rather than proselytizing, were allowed to enter Japan. Even so, the Dutch were also confined to Dejima. This small island remained the only channel for commercial and cultural exchange between Japan and the West until Japan gave up its isolation policy in 1854 by signing the Treaty of Peace and Amity or “Kanagawa Treaty.” Under mounting Western pressure, the Japanese authorities opened a few ports for trade including the ports of Hakodate and Yokohama in 1856.

One testament to Japan’s new openness to the West after the Kanagawa Treaty is a plaque engraved with the phrase “Birthplace of Western cuisine in Japan.” It stands on the grounds of the Thomas Glover Mansion in Nagasaki, the oldest surviving Western-style house in Japan built by Japanese carpenters. Thomas Blake Glover was a Scottish businessman who founded Glover and Company in Nagasaki in 1859. Given that the Glover Mansion was completed in 1863, was this the year in which Western cuisine was officially introduced to Japan? Probably not, as Glover’s dinner table was not open to the public.

While some types of Portuguese food, like tempura and castella (sponge cake), clearly made their way into the Japanese diet a few centuries ago, when did the idea of Western cuisine enter into popular thinking? Were any Japanese books published to introduce Western cuisine or cooking to the general public? One particular book from the over 1.2 million items in the Library’s Japanese collection offers some more concrete answers to this question.

Figure 2: Illustration on the wrapper of “Seiyo ryori shinan = Cookery” (1872). Library of Congress Asian Division.

“Seiyo ryori shinan” (“西洋料理指南,” “A guide to Western cooking”) by Keigakudo Shujin was published in 1872. In two volumes, this book not only described Western cuisine and recipes but also featured illustrations of a dining scene, cutlery, and kitchen utensils. Several of the dishes it introduced were completely foreign to the Japanese palate, calling for ingredients such as veal, pork, and cow’s milk. The author also wrote about the importance of establishing the habit of eating three meals a day instead of two, a routine the Japanese easily adapted to. “Seiyo ryori shinan” appeared at the beginning of the new Meiji government’s “civilization and enlightenment” (bunmei kaika) campaign, when the government adopted a policy of modernization that introduced Western civilization to Japan. During this time the government encouraged changes to many aspects of Japanese daily life, including the adoption of Western manners, clothing, and food. Authorities promoted the idea that Western cuisine, based on nutritious ingredients and detailed recipes, would strengthen and improve the health of the Japanese and, in turn, enable them to build a modern state that could catch up with the West. Like many other Western customs, Western cooking was considered by statesmen and intellectuals to be superior to traditional Japanese cooking. It was just one example of the then commonly held idea that Western culture was superior and Japanese culture lagged behind.

Figure 3: Japanese in Western style outfits dining at a table. From “Seiyo ryori shinan = Cookery” (1872).

Nowadays, many Japanese may not even consider korokke and tonkatsu as Western dishes. Those originally foreign dishes, introduced more than a century ago, have been fused and blended into the Japanese diet. Since then, Japanese cooking has become a hybrid cuisine that includes elements of both Western and Japanese cultures. When one sees that Americans crave “real” Japanese food, like sushi and sashimi, and that these foods can easily be found in supermarkets around the world, it is clear that something of a global fusion of various food cultures has been taking place for quite some time. The publication of “Seiyo ryori shinan” in 1872 is an important milestone in recording the changes taking place in this global fusion of food cultures.

Further reading:

Bestor, Theodore C. 2004. “Tsukiji: The Fish Market at the Center of the World.” Berkeley: University of California Press.
Cwiertka, Katarzyna Joanna. 2014. “Modern Japanese Cuisine: Food, Power and National Identity.” London: Reaktion Books.
Rath, Eric C, and Stephanie Assmann. 2010. “Japanese Foodways, Past and Present.” Urbana: University of Illinois Press.

Figure 4: Kitchen utensils and cooking pans. From “Seiyo ryori shinan = Cookery” (1872).

Figure 5: Instructions on how to place a fork and knife on a plate to indicate when a dish is finished or not. From “Seiyo ryori shinan = Cookery” (1872).

zwol (Mountain View, CA): TIL tempura is not aboriginally Japanese
acdha (Washington, DC): That was an automatic share once I read that

Over 50 and Looking for a Job? We Want to Hear From You

How do Americans live the last third of their lives? What we hear, especially when it comes to working, is that this usually is a time of stability, increased flexibility and widening opportunity. The kinds of work that people 50 and older do are often gamely called “encores,” “re-careers” or “third acts.”

But “encore” doesn’t exactly fit my own experience. My aim at ProPublica is to find out whether it fails to fit others’ experiences as well, and to learn how people entering their later careers are faring.

I was laid off at 63. It took me 15 months to find a new job. In the interim, my twins, then 18, headed for college. The money was (and still is) flying out the door.

Getting laid off may be the price of a dynamic economy. Getting stuck out of work wasn’t part of the deal, especially if, like me, you depend on wages to pay your daily expenses. And to add to the pot for when you no longer work.

Building that reserve isn’t getting any easier. American employers are ratcheting back on their contributions. Rules aimed at protecting retirement funds are under attack by the new administration.

I’ve already done stories on court battles over age discrimination and want to delve deeper into the issue. If you know of a company or organization that has made major cuts of older workers, I’d like to hear about it.

I want to do stories about people moving through their 50s and 60s who are hit with demotions, layoffs or business closings. I want to find out what was behind the blows and how everyone coped. If you or someone you know has had one of these experiences, I’d like to talk.

I want to hear from people who’ve received a buyout or other parting package they thought would set them up for life only to discover it wasn’t enough, and then had trouble getting new work.

In short, I want your views on these issues — and others that you’d like addressed. We can build a community around what we learn together.

Do you have a story about age discrimination in the workplace? Help us with our reporting by answering some questions here.

Of course, I’m not your person if you’re looking for help with your particular job hunt. Or what to wear after 50. But with your help I can provide a realistic report about the challenges, setbacks and victories that real people face living out the rest of their lives. Please contact me at: peter.gosselin@propublica.org, or by leaving a message at 917-512-0258.

I’m also dusting off my Facebook account, where I’ll post my stories and anything useful I find along the way. So please don’t be shy.

Announcing JSON Feed

samuel shared this story from Daring Fireball:
They emailed me about it. I'm happy to support it but I'd prefer if support came from the Python feedparser library. If that happens, I'll integrate it.

Brent Simmons and Manton Reece:

We — Manton Reece and Brent Simmons — have noticed that JSON has become the developers’ choice for APIs, and that developers will often go out of their way to avoid XML. JSON is simpler to read and write, and it’s less prone to bugs.

So we developed JSON Feed, a format similar to RSS and Atom but in JSON. It reflects the lessons learned from our years of work reading and publishing feeds.

I think this is a great idea, and a good spec. I even like the style in which the spec is written: for real humans (much like the RSS spec). If you want to see a real-life example, Daring Fireball has a JSON Feed. I’ve got a good feeling about this project — the same sort of feeling I had about Markdown back in the day.
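
To make the format concrete, here is a minimal sketch of a JSON Feed assembled and serialized in Python. The field names ("version", "title", "items", "content_text", and so on) follow the version 1 spec published at jsonfeed.org; the site URLs, titles, dates, and item content are placeholders, not anything from a real feed.

```python
import json

# A minimal JSON Feed version 1 document, assembled as a plain dict.
# Field names follow the spec at jsonfeed.org; the URLs, titles, dates,
# and item bodies below are placeholder data.
feed = {
    "version": "https://jsonfeed.org/version/1",
    "title": "Example Blog",
    "home_page_url": "https://example.org/",
    "feed_url": "https://example.org/feed.json",
    "items": [
        {
            "id": "https://example.org/posts/2",
            "url": "https://example.org/posts/2",
            "content_text": "This is the second post.",
            "date_published": "2017-05-18T09:00:00-07:00",
        },
        {
            "id": "https://example.org/posts/1",
            "url": "https://example.org/posts/1",
            "content_html": "<p>Hello, world!</p>",
            "date_published": "2017-05-17T09:00:00-07:00",
        },
    ],
}

# ensure_ascii=False keeps Unicode in titles and content readable,
# which is something RSS/Atom producers already expect to handle.
print(json.dumps(feed, indent=2, ensure_ascii=False))
```

Much of the format's appeal is visible in that sketch: the whole feed is just a data structure that any JSON library can emit or parse, with no XML escaping or namespace handling in sight.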

zwol (Mountain View, CA): It is good to see people working on syndication protocols again
jepler (Earth, Sol system, Western spiral arm): 'I’ve got a good feeling about this project — the same sort of feeling I had about Markdown back in the day.' while I'm worried that in 20 years somebody will actually try to standardize json feed and somebody else will be a big crybaby
JayM (Atlanta, GA): Good deal.
lukeburrage: As someone who just had to custom generate an RSS feed for a podcast archive... me like!

“If only there were a vast empirical literature…”

Paul Krugman blogged on that, with initial impetus from Noah Smith.  Here is Noah:

If you and your buddies have a political argument, a vast literature can help you defend your argument even if it’s filled with vague theory, sloppy empirics, arguments from authority, and other crap. If someone smart comes along and tries to tell you you’re wrong about something, just demand huffily that she go read the vast literature before she presumes to get involved in the debate. Chances are she’ll just quit the argument and go home, unwilling to pay the effort cost of wading through dozens of crappy papers. And if she persists in the argument without reading the vast literature, you can just denounce her as uninformed and willfully ignorant. Even if she does decide to pay the cost and read the crappy vast literature, you have extra time to make your arguments while she’s so occupied. And you can also bog her down in arguments over the minute details of this or that crappy paper while you continue to advance your overall thesis to the masses.

…My solution to this problem is what I call the Two Paper Rule. If you want me to read the vast literature, cite me two papers that are exemplars and paragons of that literature. Foundational papers, key recent innovations – whatever you like (but no review papers or summaries). Just two. I will read them.

If these two papers are full of mistakes and bad reasoning, I will feel free to skip the rest of the vast literature. Because if that’s the best you can do, I’ve seen enough.

Those are both interesting posts, but my perspective is different, probably more as a matter of temperament than thinking they are objectively wrong.  Here are a few comments:

1. The best two papers on ethics are not very convincing.  Nonetheless people who have worked their way through a good amount of that literature are much better at ethical reasoning than those who have not.

2. The best two papers on global warming are not very convincing.  What is convincing is how many different perspectives and how many different branches of science point toward broadly similar conclusions.  In fact the aggregate effect here is quite overwhelming (don’t debate gw in the comments, not today; I’ll delete).  It is a question of many moats, not all of them being entirely muddy.

3. I see the Smith-Krugman standard as fairly economistic, and fairly MIT-late 20th century at that.  It is one vision of what a good literature looks like, and a fairly narrow one.  It will elevate simple answers in status, whether or not that is deserved.  It discriminates against dialogic knowledge, book-based knowledge, historical knowledge, and knowledge when the answers and methods are not very exact.  There is the risk of ending up too certain about one’s knowledge.

That all said, I do understand that specialized top researchers, including Nobel Laureates, often may do better holding relatively narrow methodological visions.  Look at all the Nobel Prizes that have been awarded to Chicago.  It might be entirely correct to insist that Becker’s treatise on the family pay more attention to anthropology, but that doesn’t mean he should have followed that advice.

4. The standard seems to discourage reading, and I would not want to teach it to my students.  I teach something more like “always read more, unless you are writing or doing relevant quantitative work.  And one reason you write is to improve the quality of your reading.  Read more and write more, all the time.”  I still think that is better advice for most (not all) people.

5. Isn’t there a lot to be said for deferring to the opinions of those who have read through the “muddy moat”?  By no means are they all partisans, and the non-partisan ones care most of all about the truth.  After all, they did all that reading!  Defer, rather than trust so much in your ability to pick out the right two papers, or have someone pick them out for you.  I have a much more positive view of survey articles than does Noah, while understanding they do often leave you fairly agnostic on major issues.

6. If the truth of the matter is in fact muddy, you may need to dip into the muddy moat to learn that.

7. The difference between total value and marginal value may be relevant.  You might conclude a field literature has low total value, but the marginal value of learning more about that area still could be quite high.  That is in part because muddy fields and results don’t spread so readily, and so dipping into the muck can yield some revelations.  That is another reason why I would not offer the “two paper standard” as practical advice.

8. If anything, I would put the reading pressure on the other side, namely more rather than less.  Rather than encouraging readers to dismiss or downgrade fields, I would urge them to consult different disciplines altogether, including political science, sociology, and anthropology, others too.  This is much easier to do if you take a more positive attitude toward survey articles.

9. This is quite a subjective impression, but I worry that the dogmatic will use the two paper standard to dismiss or downgrade particular lines of investigation.

10. I don’t know if Noah and Paul were referring to my colleague Garett Jones, who frequently tweets “…if only there were a vast empirical literature” when he sees claims that he regards as empirically false.  Now, I am not the Garett Jones oracle, but I always took his use of the word “vast” to be slightly sarcastic.  Usually these are cases where even a fairly cursory knowledge of the literature in question would indicate something is wrong with the claim at hand.  In my view, Garett is not demanding “vastness” of effort, rather he is criticizing those who don’t grasp what the effort space looks like in the first place.

The post “If only there were a vast empirical literature…” appeared first on Marginal REVOLUTION.

zwol (Mountain View, CA): I sympathize with both sides of this argument, and I wonder if it would work better to demand a textbook recommendation.
dmierkin: surprisingly relevant conversation. do you demand "completely thought out" proposal or should a manager look for a dialogue

Jane Jacobs, the tyranny of experts and Brexit

Last night I watched Citizen Jane, a recent documentary about Jane Jacobs and her long fight against Robert Moses’s plans for New York. Of course, Jacobs was largely correct: Moses’s grand utopian schemes wrecked the ecologies of street and community and eventually produced neighbourhoods worse than the ones they replaced, whilst failing to solve even the problems, like traffic congestion, they seemed best suited to. But being already familiar with the substance of the dispute, and with Jacobs’s great work, The Death and Life of Great American Cities, what struck me most forcefully was the rhetoric. On the one hand, there were the self-proclaimed “experts”, on the other, ordinary people with their lived experience, sceptical about whether the “experts” had their best interests at heart (or if they did, whether they shared the same conception of their interests). A great irony of the Jacobs case is that though she was right about Moses and his plans, the net result of her activism has not been, in the end, to preserve those neighbourhoods for the kinds of people who lived there then, but rather to give them an afterlife to be enjoyed by the people who can now afford to live in them.

The film made somewhat uncomfortable viewing, because the rhetoric around “experts”, the post-war urban planners, was so similar to that around Brexit. A year on, I’m more convinced than I ever was that Leave was the wrong decision, and more distressed about the loss of vital freedoms and the political fallout than I was then. Still, parallels are parallels. The European Union is a high modernist scheme often administered by “experts” who know better than the people (or peoples) what is in their best interest. The promise of the European Union, for real improvements in people’s lives, has been tarnished, to say the least, by the experience of millions of people both in the deindustrialized North (northern England, Wallonia, Picardy …) and in the southern periphery. And as Peter Mair demonstrated in Ruling the Void, the last few decades have seen a disengagement of people from political parties and political life, reinforced by the sense that the experts (perhaps in Brussels) would take the decisions for them anyway, so what would the point be?

Scepticism about “experts”, memorably voiced by Michael Gove during the Brexit referendum campaign, has been much mocked by Remainers (including me). But that unwillingness to believe the experts, even when they’re right, isn’t based on nothing, but rather in the repeated overpromising of those who know best together with the failure of anything like the radiant future to arrive. The Euro is perhaps the worst example of a plan promoted by “experts” on the basis of their vision of the future which has had real costs for everyone outside the strongest economies such as Germany. (One of the reasons I passionately opposed Maastricht and ERM was reading a passage in another, slightly odd book by Jacobs, Cities and the Wealth of Nations.) One hopes that experts might learn the lessons of this, and recognize that Brexit is also the result of a loss of confidence for which they (we?) bear responsibility too, and that continuing to push for utopian schemes without making a connection with ordinary people will only promote the populist reaction. Things to think about for “progressive” politics both in the EU and in post-Brexit Britain too: we won’t get anywhere unless we can talk to, and eventually mobilize, many of the people currently tempted by the populist right (and now fleeing to the comforting embrace of Mother Theresa, freak lovechild of Stanley Baldwin and Eva Peron).

Don't tell people to turn off Windows Update, just don't

You know what really surprised me about this whole WannaCry ransomware problem? No, not how quickly it spread. Not the breadth of organisations it took offline either and no, not even that so many of them hadn't applied a critical patch that landed a couple of months earlier. It was the reactions to this tweet that really surprised me:

When you position this article from a year ago next to the hundreds of thousands of machines that have just had their files encrypted, it's hard to conclude that it in any way constitutes good advice. I had the author of this post ping me and suggest that people should just manually update their things if they disabled Windows Update. That's fine in, say, a managed desktop environment such as many organisations run and let's be clear - disabling Windows Update isn't the issue in that situation because there are professionals managing the rollout of patches (with the obvious exception of the organisations that just got hit by WannaCry). But your average person is simply not going to keep on top of these things which is why auto-updaters are built into so many software products these days. Obviously they're in Windows, same with Mac OS and iOS, same with browsers like Chrome and Firefox and same again with the apps themselves on a device like your iPhone by virtue of the App Store automatically keeping them current.

Often, the updates these products deliver patch some pretty nasty security flaws. If you had any version of Windows since Vista running the default Windows Update, you would have had the critical Microsoft Security Bulletin known as "MS17-010" pushed down to your PC and automatically installed. Without doing a thing, when WannaCry came along almost 2 months later, the machine was protected because the exploit it targeted had already been patched. It's because of this essential protection provided by automatic updates that those advocating for disabling the process are being labelled the IT equivalents of anti-vaxxers and whilst I don't fully agree with real world analogies like this, you can certainly see where they're coming from. As with vaccinations, patches protect the host from nasty things that the vast majority of people simply don't understand.
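As an aside, one rough way to verify rather than trust is to ask PowerShell's Get-HotFix for the list of installed updates and look for the MS17-010-related KB numbers. The sketch below does that from Python; it is an illustration, not something from the original post, and the KB numbers vary by Windows version, so the two shown (the March 2017 Windows 7 security-only update and monthly rollup) are examples only. A later cumulative update can also supersede them, so their absence is not proof the machine is unpatched.

```python
import subprocess

# MS17-010 shipped under different KB numbers for different Windows versions.
# The two below are examples (Windows 7 March 2017 security-only update and
# monthly rollup); check the MS17-010 bulletin for the IDs that apply to
# your own OS build.
CANDIDATE_KBS = {"KB4012212", "KB4012215"}


def installed_hotfixes():
    """Return the set of installed hotfix IDs reported by Get-HotFix."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Get-HotFix | Select-Object -ExpandProperty HotFixID"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in result.stdout.splitlines() if line.strip()}


if __name__ == "__main__":
    found = CANDIDATE_KBS & installed_hotfixes()
    if found:
        print("MS17-010-related updates installed:", ", ".join(sorted(found)))
    else:
        print("None of the listed KBs were found; a later cumulative update "
              "may still cover MS17-010, so check Windows Update.")
```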

This is how consumer software these days should be: self-updating with zero input required from the user. As soon as they're required to do something, it'll be neglected which is why Windows Update is so critical. Let's start there:

Leave your automatic updates on

The frustrating part of the debate that ensued after that tweet is not that people weren't proactive in protecting themselves, rather that they were proactively putting themselves at risk by disabling security features. Windows Update is the default position; you install the operating system (or receive it pre-installed from your hardware vendor of choice) and it looks like this:

[Screenshot: Windows Update enabled by default on a fresh install]

And then you go about your business. By pure coincidence, I rebuilt my desktop machine over the weekend and left Windows Update to do its thing which consequently meant getting a bunch of patches:

[Screenshot: a batch of pending patches after rebuilding the machine]

I start work early in the morning and often finish late which means I don't want things restarting on me while I'm busy so I customised my active hours:

[Screenshot: the active hours setting in Windows Update]

And there's a bunch of other configurability as well which I won't go into here. Point is that straight out of the box, updates are being applied and it's easy to minimise the adverse impact by virtue of defining those active hours. But this is not perfect - far from it - and believe me, I've felt the pain of Windows Update on many occasions so let's acknowledge that:

Sometimes, updates will annoy you

I've had Windows Update make me lose unsaved work. I've had it sitting there pending while waiting to rush out the door. I've had it install drivers that caused all manner of problems. I've had it change features so that they work differently and left me confused. I've had it consume bandwidth, eat up storage capacity and do any number of unexplainable things to my machines.

Those of us who've felt Windows Update-inflicted pain will all agree on this:

Microsoft needs to make Windows Update better.

Because let's face it, all of these things are fundamentally annoying. Some of them are even costly and if you haven't yet had an occasion where you've sworn at an update, you either haven't spent enough time with Windows or you're much more even-tempered than me. Microsoft needs to reduce the frequency of updates, reduce the occasions where it breaks things and reduce the times where updates happen at the most inconvenient times.

Now that has been improving over recent versions and certainly Win 10 has made many positive steps forward in this regard. But it's not iOS-slick yet; I'd love it if my Windows boxes updated as smoothly as my iThings. I can see why, of course: we're talking about Apple managing updates in an ecosystem where they control both hardware and software and have a very limited number of combinations of the two to worry about. The number of mixes and matches of Windows hardware and drivers is unfathomably large, and that's before you throw in all the various software packages that are distributed in all sorts of strange ways. It's a very complex ecosystem they're dealing with.

Regardless, it's important to acknowledge how frustrating the experience can be when stuff goes wrong. But right now, you need to decide what to do about it:

This is how it is, now make a decision

We agreed that Microsoft needs to make this better, right? Cool, now what? I mean it's all well and good for some people to be unhappy with the way updates run today but what are you actually going to do about it? A number of people engaged in discussion after that tweet which was very, well, "enthusiastic" and that's fine, but what would they actually tell people to do?

Last year US-CERT wrote about ransomware and one of their recommendations was as follows:

Keep your operating system and software up-to-date with the latest patches. Vulnerable applications and operating systems are the target of most attacks. Ensuring these are patched with the latest updates greatly reduces the number of exploitable entry points available to an attacker.

And in the wake of WannaCry, Microsoft's President and Chief Legal Officer wrote about the need for urgent collective action:

This attack demonstrates the degree to which cybersecurity has become a shared responsibility between tech companies and customers. The fact that so many computers remained vulnerable two months after the release of a patch illustrates this aspect. As cybercriminals become more sophisticated, there is simply no way for customers to protect themselves against threats unless they update their systems. Otherwise they’re literally fighting the problems of the present with tools from the past.

I doubt that even the most ardent opponents of my earlier tweet will disagree with either of the quotes above, so how are you going to achieve it if not by Windows Update? A portion of them will monitor the various patches and apply them as required, for example organisations with managed desktop environments (although again, as WannaCry demonstrated, there are some serious shortcomings in many orgs). Another portion will disable Windows Update and do nothing after which they might be ok. Maybe. For a while. They may also be WannaCry'd or Locky'd or whatever else but that's their prerogative and so long as they know the risk they were taking, I'm kinda ok with that. But your average Windows user doesn't know the risks and nor should they need to if they don't go turning fundamental security defences off!

So here's what I'll leave you with: there's no point chiming in about how you had a bad experience with an update once (or regularly), because other than vocalising those experiences to Microsoft and configuring updates within the scope of what Windows provides, you and I have no control over that (and no, I don't work for Microsoft). We've also just had a stark demonstration of what goes wrong when people don't patch so there's no point arguing about whether it may or may not be necessary. So what will you do? And perhaps more importantly, what will you advise other people to do?

zwol (Mountain View, CA): While I agree that people should leave the automatic updates on, it would be nice if the author at least acknowledged a major reason people turn them off: because neither Microsoft nor any other big industry player has shown itself trustworthy about pushing updates that _only_ fix bugs and don't introduce new weird things they didn't ask for (sometimes known as "features"). Most people are not neophiles.
acdha (Washington, DC): Agreed – I think this really should be seen as a call for the industry growing up on how we handle updates. It's ridiculous that I've had my Debian systems set to auto-update for two decades with no issues but no consumer OS has that kind of stability or unobtrusiveness
acdha (Washington, DC): The IT equivalent of saying a good driver doesn't need a seatbelt