Paying To Be Good

Who knew that raising the price of a given product would dissuade people from buying it?

Well, surprise or not, New York’s Metropolitan Transportation Authority announced last week that its experiment to levy a one-dollar surcharge on every new MetroCard has resulted in a precipitous decline in the number of new MetroCards purchased by riders of New York public transit.

Who’d a-thunk?

Here’s the scheme. A single ride on the New York subway costs $2.50, charged to the aforementioned plastic MetroCard, which can be reloaded and reused forever (or at least until global warming submerges Manhattan in a sea of sludge and kelp).

However, most subway patrons are either unaware of, or uninterested in, the card’s temporal durability and opt instead to deposit it in the trash (or, by the looks of it, everywhere other than the trash) after one trip.

In response to the resulting mess, the MTA in March 2013 imposed a $1 fee for the card itself, meaning that every subway ride would suddenly cost $3.50 instead of $2.50—that is, unless customers took the radical step of actually hanging on to the damn thing and recycling it for all subsequent journeys.

As it turns out, they have done exactly that. In the surcharge’s first 14 months, the number of new cards in circulation dropped 71 percent, generating $24 million in new revenue for the MTA along the way.  Presto.

To be sure, New York is hardly the first municipality to think of this. The Metro in Washington, D.C., charges $2 for a SmarTrip card, which, like a MetroCard, can be recycled ad infinitum and which, unlike a MetroCard, lowers the per-trip fee by a dollar compared to a disposable one-time-only ticket. In my hometown of Boston, the procurement of a CharlieCard, itself free of charge, grants a 50-cent discount on all subways and buses, plus free transfers.

In all these cases—and plenty more besides—the premise is one and the same: The most surefire way to make people behave in a certain way is to make it economically propitious for them to do so.

We don’t want people to smoke, so we raise the cigarette tax. We don’t want people to drive so much, so we raise the gas tax and the highway tolls. We don’t want people to soil the subway station floor with expired fare cards, so we charge them a dollar every time they do so. And so forth.  (Raising money for the government is, of course, a concurrent motivation.)

To be precise, the above are all instances of using monetary incentives to limit bad behavior, not to encourage good behavior. But then again, most such price-based government policies operate in exactly this way, leading to the common gripe about the so-called “nanny state,” which presumes to know what is best for us and compels us to act accordingly. (See: Michael Bloomberg, entire mayoralty of.)

But what if our elected overlords took a more, shall we say, wholesome approach to influencing the public’s behavior, rather than merely punishing or restricting it? If the state is going to be a nanny, why not be a fair and rewarding one?

There is often talk about super-taxing sugary drinks and snacks to dissuade kids (and grown-ups) from over-consuming them. Why do we so seldom raise the prospect of subsidizing healthy foods instead? It may be true that a bag of grapes will never be as enticing as a Snickers bar, but let us not pretend that the relative costliness of the former does not have a commanding role in wafting our shopping carts toward the latter.

Given how much we, the people, currently pay farmers to transform soil into dinner, don’t tell me we couldn’t coax them into, say, growing more carrots and less corn, thereby tweaking the market prices of both and astronomically improving the nation’s overall health—itself the single greatest means of solving the puzzle of unaffordable healthcare.

Likewise, rather than merely punishing companies that emit too much pollution (as necessary as that is), how about actually rewarding those that make the greatest effort to emit the least—much in the manner of the Obama administration’s “Race to the Top” program to raise public education standards?

Am I being naïve? You bet I am. I probably couldn’t know less about the root mechanics of these subjects if I tried.

What I do know is a sentiment expressed by historian David McCullough in defending government funding of the arts: “People say we don’t have the money. Of course we have the money. It all depends on what we want to spend our money on. You can tell an awful lot about a society by how it spends its money.”

What kind of a society do we want to be? One that actively strives for good, or one that merely avoids doing bad?

Mind you, these are not mutually exclusive ambitions. Just ask the MTA.

We Don’t Do That Already?

This past week at City Hall in Newton, Massachusetts, there was a flag-raising ceremony on the front lawn to commemorate ten years of gay marriage here in the commonwealth.  Speakers included Ellen Wade and Maureen Brodoff, who were among the handful of same-sex couples who wed on the very day it became legal in May of 2004.  With June being Gay Pride Month, the flag hoisted at the conclusion of the event was striped not in red and white, but in the six colors of the rainbow.

Another speaker, noting that this year marks the first such appearance of the gay pride flag on the City Hall grounds, dryly recalled the moment a few months back when the idea was proposed, and someone else in the room piped up, “You mean we don’t do that already?”

Yup, even in über-liberal Newton—the highly affluent city immediately west of Boston—no one had thought to fly the unofficial symbol of all things gay until now.  Even though City Hall itself was the site of Wade and Brodoff’s wedding.  Even though neighboring Boston has done so for years.  Even though Newton, flag or not, has long been about as gay-friendly as a medium-sized metropolis can possibly be, even by New England standards.

For all this, the notion of formally flaunting the city’s inclusiveness with a rainbow banner had somehow gotten lost in the shuffle—even though such an act, as this speaker put it, is “something you’d just expect us to do.”

Obviously, on the menu of demands by the gay rights movement, the securing of a rainbow flag on government grounds is extremely small potatoes.  The flag is but a symbol, and its presence is thereby strictly symbolic.

However, we would do well to take the point about what is “expected” of the government by the people, and how things that come to be taken for granted did not magically appear overnight.

Quite to the contrary.  The gap between what our representatives ought to accomplish and what they actually accomplish is often so great as to warrant outrage and despair, and the realization that even the would-be no-brainers of public policy require real effort and perseverance is enough to eviscerate one’s faith in the whole concept of working for the public good.

That Alabama took until 2000 to officially remove its anti-miscegenation laws from the books.  That the U.S. Congress did not truly enforce the fundamental right to vote, as guaranteed by the 15th Amendment, until 1965.  That the Department of Veterans Affairs has been so slow to computerize its files that a floor of one VA office nearly buckled under the weight of all the paperwork.

These are not shining examples of American ingenuity at work.  Indeed, they are embarrassments.  And we are entitled to ask, how could these problems possibly have gone unfixed for as long as they did?  What was everyone thinking?  Indeed, were they thinking anything at all?

But that’s all in the past, and thus only confronts half of the question.  The other half concerns the present and the future, and all the supposedly obvious things that we, the people, have yet to get done on our own behalf.

As observed by yet another speaker at the shindig in Newton, for all the legal protections now afforded gays and lesbians, the rights of transgender people are miles behind.  As of now, 16 states and the District of Columbia explicitly prohibit employment discrimination on the basis of gender identity.  This means that if you live in one of the remaining 34 states, you can lose your job for no reason except that you were born male and now consider yourself female (or vice versa), and there’s nothing you can do about it.

Soon enough this will change, either through passage of the proposed Employment Non-Discrimination Act or, like gay marriage, through continued and exhaustive advocacy on a state-by-state basis.  In any case, these basic legal protections, when they come, will elicit nothing so much as a shrug and the query, “Those laws didn’t exist already?”

The takeaway message in all of this—particularly unfortunate for preternaturally lazy folks like me—is never to assume that someone else will come up with all the good ideas.  It just may be up to you.

This, in turn, underlines both the power and importance of imagination when doing the people’s business.  The need, in the words of George Bernard Shaw, to “dream things that never were and […] say, ‘Why not?’”

Not everything is as easy as flying a flag.

The Great Un-leveler

For a politician—or, really, for anyone—all the most interesting and revealing questions are the ones that come not from your enemies, but from your friends.  It’s easy enough for someone who doesn’t like you to attempt to catch you in a contradiction or make you look like a fool.  But when someone in basic agreement and sympathy with your views manages to trip you up and put you in a defensive crouch—well, now we’re getting somewhere.

That is essentially what occurred recently when Hillary Clinton faced Terry Gross in an interview on National Public Radio, and the subject turned to same-sex marriage.  On this, Gross asked Clinton what can only be characterized as a straightforward and obvious question:  Did Clinton’s present support for gay marriage, which she announced in March 2013, come about organically or as a consequence of political calculations?  Did her views “evolve” gradually or did she, in fact, privately support equal marriage rights long before saying so publicly?

Clinton’s history on the subject is as follows:  In 1996, her husband, President Bill Clinton, signed into law the Defense of Marriage Act, which prevented same-sex couples from being recognized as legal spouses by the federal government.  Hillary, for her part, publicly supported DOMA for at least the next seven years, both as first lady and as a New York senator, even as she spoke in favor of “domestic partnership measures” for gay people and introduced legislation to that end.

Her view, in effect, was that same-sex unions should be equal to opposite-sex unions in everything but name.  (Never mind that DOMA made even this compromise impossible.)

Clinton still opposed gay marriage rights during the 2008 presidential race, although she did repudiate Section 3 of DOMA in a 2007 questionnaire.  Her overall position—anti-marriage, pro-civil unions—remained more or less consistent until March of last year, when she joined Team Gay, fully and unequivocally, at last.

A reasonable conclusion to draw from this narrative, whether considered piecemeal or in full, is that Hillary Clinton has always regarded homosexuals as morally equal to heterosexuals, and therefore would probably have openly supported same-sex marriage—the concept and the word itself—many years earlier than she did, had the issue not been made such a radioactive “wedge” from the mid-2000s onward.  (Only 42 percent of Americans favored gay marriage in 2004; support didn’t reach 50 percent until 2011.)

Because the subject has been so politically fraught, the theory goes, Clinton made the strategic decision to postpone a formal endorsement of marriage rights until the opinion polls made it politically safe to do so—even though this meant suppressing deeply held convictions that, after all, placed her firmly on the right side of history.

This, in so many words, is what Terry Gross was attempting to get Clinton to acknowledge.  That even the defense of basic civil liberties is not immune to political calculations, and that Clinton, of all people, understands that there is a political component to everything, and has learned to act accordingly.

Realistically, Clinton had two possible ways to respond.  First, by affirming the charge with some version of, “Yes, I was in favor of gay marriage before 2013, but didn’t think it prudent to get involved in a domestic debate while being secretary of state.”  Or second, by rejecting the whole premise and insisting her private and public views are, and have always been, one and the same.

How did she actually respond?  With good old option number three:  By becoming paranoid and evasive, and accusing the interviewer of the lowest possible motives.  In this case, by accusing Gross of “playing with my words” and “trying to say that […] I used to be opposed and now I’m in favor and I did it for political reasons.”  (Gross did no such thing.  There is a big difference between timing your public views to suit political realities and inventing public views from whole cloth.)

And so I humbly ask:  Presented with a direct question, to which the only possible answers are A and B, what are we to make of someone who responds with anything other than A or B?  If the true answer to, “Are your stated views on gay marriage genuine?” is “Yes,” then what exactly is the disadvantage to simply saying so and moving on?  Why does the probe require an indignant straw man speech and an assumption of bad faith?

If you’re being defensive about something that (according to you) you have no reason to be defensive about, are we not duty-bound to infer a guilty conscience of one kind or another?

The issue here, finally, is not gay marriage, as such, but rather Hillary Clinton’s behavior when faced with a not-so-challenging line of inquiry, along with her apparent inability to level with the American public about any number of things, and her tendency to make enemies when there is no earthly reason to do so.

By no means should these troubling qualities prevent her from becoming president, as many of the previous 43 officeholders would affirm.  But nor should they prevent us from wondering if they would impinge upon her ability to be a good one.

Being led by a paranoid, calculating liar can, on occasion, have a downside.

Primary Schooled

I’m sorry, but I still don’t care about Eric Cantor.

I know I’m supposed to. I know that the House majority leader’s defeat in a primary election is a politically seismic event—something, in fact, that had not happened in the 115 years that the position of “House majority leader” has existed. I know that it supposedly demonstrates the continued pull of the Tea Party in American public life. And I know that, as a consequence, this alters the entire narrative of the 2014 midterms.

All three of these assertions may be true, and Cantor’s loss may well come to embody the final death twitches of the Republican Party establishment. In the fullness of time, we may be able to confirm or refute these assumptions one way or the other.

Until that happens, however, I reserve the right to remain agnostic and slightly indifferent about what the electoral demise of Eric Cantor portends for the future of the republic, and I recommend the same for everyone else.

I say this not merely out of annoyance and exhaustion with the political-media-industrial complex—although I feel plenty of both. I would be very happy, indeed, if all TV and Internet coverage of the 2014 races were to be unceremoniously sucked into a black hole until, say, the day before the polls open in November.

As well, it is possible that I have become so conditioned to expect our political system to descend ever-further into lunacy and chaos that whenever it happens, I simply forget to be surprised.

Whatever the secondary considerations, my skepticism toward the significance of the anti-Cantor vote is finally reducible to one figure: 14 percent. That was the share of eligible voters in Virginia’s 7th congressional district who actually turned out last Tuesday. (Virginia allows registrants of any party to vote in primaries.) So when we speak of the people who tossed out a House majority leader in a primary for the first time in history, we are talking about one-seventh of those who had the opportunity to register an opinion. In other words: Practically no one.

By no means is this to suggest the election was illegitimate or anomalous. To the contrary, just about every non-presidential election year features pathetic turnout in every pocket of the country—doubly so in party primaries—meaning that all such showdowns are subject to small sample sizes and are therefore poor (or at least unreliable) representations of what the broader American public actually thinks.

As such, primary votes, like off-year elections in general, are largely determined by the political factions with the most passion, rather than the most people. The continued success of the Tea Party, such as it is, comes not from its size, per se, but from the fact that its members make a concerted effort to actually get off the couch and vote, particularly in elections (such as Cantor’s) in which hardly anyone else bothers to do so.

That, in so many words, is my point. Statistically speaking, House primary votes are inherently volatile. They are liable to change on a dime for seemingly no reason at all and can be rationalized in any number of ways, most of them laughably wrongheaded.

As such, it is foolhardy to draw any grand, nation-spanning conclusions from the results of any one race. Heck, if last week’s opinion polls in Virginia are any indication (Cantor had been miles ahead in all of them), you’d be pushing your luck trying to derive any such meaning about the district in which the vote occurred. Sometimes, there is less than meets the eye.

In this case, all we know for sure is that 55 percent of one-seventh of the eligible voters in Virginia’s 7th district decided that the man representing them in Congress did not deserve another term. We are free to infer any of a million theories as to why this occurred and what it means, but everything must begin with the basic fact that this big story was created by an incredibly small pool of citizens, thereby increasing the possibility that there is no story at all.

And if we absolutely insist on banging on about this “historic” result, let us reflect on how pitiful it makes us voters look.

After all, in a Congress with an approval rating of 16 percent and in which only 7 percent of Americans profess a “great deal” or “quite a lot” of confidence, why should it come as a shock when any incumbent, no matter how high-ranking, finds himself suddenly out of a job? Shouldn’t this be the rule, rather than the exception?

While it is historically true that being a sitting congressperson carries an enormously high level of job security from one election to the next—a fact that, by rights, shouldn’t make any sense at all—perhaps the final lesson in the fall of Eric Cantor is that this tendency is finally starting to change, and that our elected representatives can no longer take their posts and their constituents for granted.

It’s a long shot, perhaps. But then again, so was David Brat.

The Vineyard Paradox

The sign of a good annual vacation destination is the strong tug of melancholy one feels upon leaving, knowing that one will not have the pleasure of taking in the locale’s unique and magical charms for a whole ’nother year.

The sign of an exceptional vacation spot is when this feeling kicks in shortly after you arrive, as the overwhelming sense of joy in having landed at your own personal Heaven on Earth gives way to the realization that it’s only a matter of time before you’ll have to leave it all behind and return to your boring Real Life.

That is roughly what happened last week when I set foot on Martha’s Vineyard, where my family has set up camp for a week of every summer since I was born.  (Or rather, since before I was born.)  It’s our proverbial home-away-from-home, and none of us can envision a summer without it.

Why do we feel this way?  What is it about this wealthy tourist magnet seven miles off the coast of Cape Cod that draws us back year after year?

In part, it’s the Vineyard’s sheer predictability.  The way the ferry from the mainland always casts off at precisely 10:45.  The way Menemsha Harbor and the Edgartown docks look exactly as they did when Steven Spielberg filmed Jaws there in 1974.  The way a scoop of Mad Martha’s ice cream and the fresh fruit at Among the Flowers Café hit the spot like nothing else—not least because of the picturesque setting in which they are enjoyed.  It’s been years since I’ve ridden the Flying Horses Carousel—America’s oldest—but I’m pleased as punch to see that it’s still there.

The Vineyard is also notable—and, in my view, laudable—for its stubborn refusal to adapt to many aspects of contemporary society.  Despite a midsummer crowd of some 100,000 souls (the year-round population is 20,000 at most), the island contains not a single traffic light.  Nor does it play host to any Starbucks or Dunkin’ Donuts.  Most of its famed beaches don’t contain even one porta-potty, and most businesses don’t bother posting their hours, perhaps because they tend to change on a near-daily basis.  (This phenomenon is known as “island time.”)

Not that Martha’s Vineyard is the land that time forgot.  There is, for instance, a pair of Stop & Shops and, for some reason, a Dairy Queen.  All the main thoroughfares are paved (although many side roads are not).  There are ritzy shops and restaurants out the wazoo, and a state-of-the-art film center to boot.  In recent years, the condo in which my family stays has introduced free Wi-Fi, which runs just as well as the service back home.

In our view, then, the Vineyard is the best of all worlds:  Exotic and remote, yet accessible and modern.  An island paradise with electricity and a gift shop.  (Don’t even get me started on the hiking trails, seafood and sunsets.)

For that reason, we have long entertained the possibility that we would eventually move there for good—or at least stay for a full month each summer, rather than a week.  After all, if we love being on the Vineyard so very much, how could we possibly tire of it?

Which brings us to the most terrifying prospect of all:  The likelihood that, in fact, we would.

Is it not reasonable to surmise that sticking around in our idealized Vineyard haven would gradually come to be slightly less than ideal?  Wouldn’t the magic inevitably wear off?  Isn’t our unconditional infatuation with the island’s beauties and quirks a function of the fleeting nature of our encounters with them?

Everyone knows the cliché, “Familiarity breeds contempt.”  In the present context, we should be equally concerned with the notion that familiarity breeds boredom.

Case in point:  As we popped into a bakery on our last day on the island, expressing our sadness about leaving and our (fanciful) wish to stay forever, the young woman behind the counter—a year-rounder who has lived on the Vineyard her whole life—dryly quipped, “Want to trade places?”

My suspicion—which I take no pleasure in formulating—is that our favorite summer getaways are so pleasurable precisely because they are so novel and rare.  We may yearn to live in paradise all the days of our lives, but the truth is that if we actually did, it would cease being paradise in a depressingly short period of time.  The key is to savor the blissfulness while it lasts, understanding that the ephemeral nature of bliss is what lends it its power in the first place.

I am well aware of the old Mae West line, “Too much of a good thing can be wonderful.”  As well, I have no particular desire to abandon my hometown of Boston—a “destination” city in its own right—where I have resided for some eight years and whose offerings I still enjoy immensely.

And yet I am as certain as I can be that were I to up and leave and not return for a long time, my love for the city of beans would only increase.  Just like the Vineyard.

Everybody Might Get Stoned

I must confess I’ve never ingested marijuana in chocolate form.  From what I’ve been reading lately, I’m not sure I’d ever want to.

It turns out the cannabis baked into pot brownies and other such “edibles” is far more concentrated and potent than we inexperienced noobs had previously grasped.  One need hardly take more than a small nibble to become buoyantly blazed for the better part of the evening.

With my relationship with sugar being what it is—I take a few bites, everything goes dark, and suddenly the whole box of Tagalongs is empty—I would be liable to inadvertently gorge myself into a stoned oblivion from which I might never completely return.

A sugar high is disorienting enough.  One need not pile an actual high on top of it.  (To say nothing of pot’s well-known ability to direct one’s hand deep into the cookie jar.)

But of course many people do exactly that, and with Colorado having become the first state to allow recreational marijuana sales, the buying and selling of cannabis-infused baked goods has entered the legal free market for the first time.  As such, the country has been compelled to think more critically and carefully than ever before about precisely how this new industry should operate.

While this question has myriad angles—many of which mirror those about the regulation of legal marijuana overall—perhaps the most essential involves the wide dissemination of basic scientific facts.  Namely, how much pot does one need to eat in order to achieve the desired effect?

It’s a rather important piece of information to possess if one has even the pretense of wanting to make intelligent consumption decisions.  And yet, an alarmingly high number of potential marijuana users are completely clueless.

They can’t be blamed too much:  When a substance is totally banned, discussions about proper dosage tend not to pop up all that frequently.  (Much like how abstinence-only sex education doesn’t bother teaching how to operate a condom.)  And so, when it then becomes legit, there is a lot of catching up to do.

(We should also not fail to note that, thanks to the Controlled Substances Act of 1970, the prohibition of a particular drug effectively prevents the scientific community from conducting all kinds of research into how the drug works.)

In any case, the pot edibles debate barged into the mainstream press in the last week after New York Times columnist Maureen Dowd recounted a recent evening in a Denver hotel room, during which she unwittingly ate several times more chocolate weed than necessary (at least for a first-timer like her) and proceeded to experience something resembling Leonardo DiCaprio’s Quaaludes adventure in The Wolf of Wall Street.  (Thankfully, she did not get behind the wheel of a car.)

Dowd’s column went viral, thereby alerting the masses to what is an entirely legitimate critique of the pro-pot push:  Widespread legal marijuana will necessarily invite its use by a sizable pool of new customers—perfectly intelligent in all other respects—who have no idea what they’re getting themselves into, and who will very predictably make highly regrettable decisions that will not be completely their fault.

That is, unless the marijuana-smoking-and-eating community makes a concerted effort to educate the public about exactly what its product does.  Do pot proponents not have an obligation—moral, if not legal—not simply to assume that everyone else is as informed about the powers of weed as they are?

This wouldn’t seem to be an especially arduous challenge.  If the wrapper of a regular candy bar is capable of quantifying a “serving size,” then why can’t a weed-laced version of the same bar?  It may be true that marijuana, like alcohol, affects everyone in a slightly different way, but surely it is possible for a label to explain, “If you eat this whole bar at once, terrible things will happen.”

I speak from relative ignorance on this subject, owing to my aforementioned lack of interest in patronizing the “edibles” industry myself.  (Honestly, can’t y’all just smoke it from a pipe like everyone else?)

But then I might change my mind one day, and I would rather the relevant dosage information be planted directly in front of my nose—not to mention the noses of my countrymen, some of whom are not nearly as cautious or clever as I.

Plus, it would clearly be in the interests of the marijuana industry overlords to see that this happens.  It would, after all, relieve them of most of the culpability for when their customers ignore the warnings and eat the whole brownie anyway.

Stupid people can always be counted upon to do stupid things.  But when smart people start doing them, too—well, that’s quite a high risk to take.

Stupidity Kills

Five-plus years into the Obama administration, America’s foreign policy analysts have lately been in search of an “Obama doctrine”—that is, an overarching philosophy of how the United States conducts itself abroad under the current commander-in-chief.

According to these experts, there isn’t one.

Instead, they conclude, President Obama has taken a purely pragmatic approach on the world stage, summarized by a piece of advice the president himself has occasionally given, “Don’t do stupid stuff.”  (The last word isn’t always “stuff.”)

I don’t know about you, but that sounds like a pretty solid doctrine to me.  Compared to the most recent alternative—“Invade countries we don’t like using money we don’t have, because freedom”—the notion of using our brain instead of our gut seems just what the doctor ordered.

But of course, I say this with my tongue planted at least partially in my cheek.  In truth, using “don’t be stupid” as one’s modus operandi carries a fundamental flaw, which is that stupidity is in the eye of the beholder.  To evoke the word as if we all agree on what constitutes a good or bad idea reeks of arrogance and condescension—much like the incessant pleas to apply “common sense” in legislating this or that (as in “common sense gun laws” or “common sense tax rates”).

I underline the insufferable, patronizing nature of reducing humanity’s most complicated and intractable conflicts to mere rank idiocy on other people’s parts because, in recent times, I have increasingly found myself doing exactly that.  And I worry, at least some of the time, that I may just be right.

Case in point:  There was a horrible shooting in California two weeks ago, apparently fueled by the assailant’s violent hatred of women.  This event amplified a national conversation, already long underway, about the fact that nearly every woman in America has suffered one form or another of sexual harassment by men, and that this trend will not abate until our culture does a much better job of making such behavior utterly unacceptable.

I have no trouble accepting both of these premises as true.  And yet I cannot help but view the epidemic of sexism against women, finally, as nothing more than the result of certain men not grasping the concept of treating people as individuals rather than objects—something I understood when I was five, when Princess Jasmine explained to Prince Ali, Jafar and the sultan, “I am not a prize to be won!”

In other words, this is a crisis—however real and immediate—that would very nearly vanish from the Earth if all of my fellow males would merely summon the emotional maturity of, say, me.  If only these men weren’t so—sorry, but there’s no other word—stupid.

Because the antidote to misogyny is so readily available—particularly when compared, say, to the correctives for genocide, poverty or cancer—it is all the more frustrating that the plague should continue, and that the more enlightened among us—that is to say, nearly everyone—should have to go on arguing about something that, for us, is all too obvious.

Sexism is not the only arena in which abject, preventable idiocy presents itself not merely as an annoyance and a hindrance to social progress, but as a lethal toxin in its own right.

There is also, for instance, the small but passionate group of American parents who would rather their children die from measles than develop autism.

That’s not how they would put it, of course.  They take the view that vaccinations against disease cause autism, and so they decline to have such vaccines administered to their kids and hope that nothing bad will happen.

There are at least two holes in this line of reasoning.  First, there is no evidence whatsoever that vaccines cause autism in the first place.  Period, full stop.  And second, there is abundant evidence that America’s current spike in measles cases—this year’s numbers mark a 20-year high—is a direct consequence of more people not having been vaccinated against it.

In short, hundreds of children are now sick and at risk of dying for absolutely no reason except that their parents made a conscious, stupid decision—namely, to ignore the entire medical profession in favor of a few crackpots—about something that should have been (and, for everyone else, is) a no-brainer.

I could continue plucking exhibits from the headlines—the denial of basic facts about climate change would have been next—but if I have not made an impression by now, the effort is probably futile.

“Don’t do stupid stuff” will not go down in history as one of America’s great doctrines.  Nonetheless, stupidity itself can, in fact, be objectively measured in some circumstances, and one must never underestimate the power of willful ignorance, particularly on a large scale, to inflict immediate and devastating harm upon the general public.

So be smart.