Secretary on Defense

Chuck Hagel was confirmed this week by the U.S. Senate to be our next secretary of defense.

For the last month and a half, a group of Republicans and others in Washington, D.C., mounted an unprecedented effort to ensure that sentence would never be written.  They ultimately failed, but they sure gave it the old college try.

The final tally for Hagel’s confirmation in the Senate was 58-41.  It was the closest vote for any Cabinet nominee since George W. Bush’s third attorney general, Michael Mukasey, was confirmed by a score of 53-40 in 2007.

The reasons for the Hagel holdup ranged from the legitimate (his views about Iran) to the ridiculous (a “connection” to a pro-Hamas group that, it turned out, does not actually exist).

At all points, the question that underlay the proceedings more than any other concerned the nature of Senate confirmations themselves.  Namely, in grilling the unholy heck out of the former Republican senator, did the Senate abuse its authority to review a president’s Cabinet nominee before giving its ultimate approval?

There is a school of thought—and a large one at that—that would answer rather thunderously in the affirmative.  For the past several decades in politics, the prevailing view has been that once someone is elected president, he is entitled to appoint pretty much anyone he wants to key high-ranking jobs in the executive branch, and that the Senate’s role in “confirming” such appointees is a mere formality.

The U.S. Constitution is of limited help on this point.  The relevant clause is Article II, Section 2, paragraph 2, which stipulates that the president “shall nominate, and by and with the Advice and Consent of the Senate, shall appoint Ambassadors, other public Ministers and Consuls, Judges of the supreme Court, and all other Officers of the United States.”

The contention relates to “Advice and Consent,” which is one of those phrases that can mean whatever you want it to mean.

Depending on one’s reading, the clause could empower U.S. Senators to assume a boldly assertive role in determining an appointee’s aptitude for a particular job.  However, it could just as plausibly be a mere acknowledgment of the president’s appointment power, with the mention of Congress nothing more than a nod to the “checks and balances” principle in the separation of powers.

Indeed, it was in a spirit of compromise that “Advice and Consent” was plugged into the Constitution in the first place—a means of placating both sides in the great argument at America’s founding over the relative powers of the executive and legislative branches.

As with other controversial Constitutional assertions—the Second Amendment leaps to mind—we might allot ourselves the right to reevaluate the clause based on more than two centuries of putting it into practice.

We can safely conclude, for instance, that one purpose for congressional crosschecks on Cabinet nominees is to prevent the appointment of folks who are plainly incompetent—the stooges and political hacks the president might try to sneak in as part of some quid pro quo.

To be sure, the Hagel case is a trifle more complex than that.  It should surprise no one that the loudest objections were ideological rather than practical—a dynamic reminiscent of recent hearings for prospective Supreme Court justices.

Just as members of the Judiciary Committee devote much of their probing to how a judge might rule on a rematch of Roe v. Wade, the Armed Services Committee’s questions to Hagel focused less on his overall experience in government than on what he allegedly thinks about Israel, Iran and possible cuts to the defense budget.

Are these not valid concerns?  If they are, are members of the legislature—the branch responsible for declaring war—not entitled to sufficient responses to them?  And if so, are they then entitled to vote “nay” should such responses fail to alleviate such concerns?

The lament is that confirmation hearings have become overwhelmingly partisan affairs.  More and more, Senators will vote against any nominee of a president of the opposing party, almost as a reflex.  Call it confirmation bias.

The solution, then, is not for the Senate to become less involved in rendering judgment on presidential nominees for high office, but simply to become more principled in the manner thereof.

With great power comes great responsibility.  So long as our Senate is cursed with one, it might as well exercise the other.

Too Soon?

“Comedy is tragedy plus time,” says Alan Alda in Crimes and Misdemeanors.  “See, the night when Lincoln was shot, you couldn’t joke about it—you just couldn’t do it.  Now, time has gone by and now it’s fair game.”

As Seth MacFarlane found out last Sunday, apparently not.

At one point during his Oscar hosting gig, MacFarlane ran off a list of the men who, prior to “Best Actor” winner Daniel Day-Lewis, had portrayed America’s 16th president on the silver screen, culminating in the punch line, “The actor who really got inside Lincoln’s head was John Wilkes Booth.”

To this, the audience at the Dolby Theatre emitted a collective groan, in turn leading MacFarlane to remark, “A hundred fifty years and it’s still too soon?”

Of course, it was only a few weeks earlier, at the Screen Actors Guild Awards, when Day-Lewis himself deadpanned that the practice of actors recreating Lincoln is perhaps compensation for the fact that “it was an actor that murdered [him].”  Rather than too edgy, MacFarlane’s joke could just as easily be dismissed as too stale.

Regardless, “tragedy plus time equals comedy” is a formula that has long been with us, and about which it is always worth asking certain questions.

For instance:  Is the equation even true?  Is it ever really “too soon” to joke about anything?

Lincoln assassination jokes are funny for the same reason most funny things are funny.  They are subversive; they defy political correctness and good taste; and, crucially, they conjure a sense of danger in the mind of the audience, as if merely hearing the joke could get you into trouble.

None of these considerations would seem to require any great temporal distance.  Au contraire:  If anything, they suggest immediacy is the key to a particularly cutting quip.

Following the unholy carnage of September 11, 2001, there was a great debate about when it might become appropriate to reintroduce humor into American life.  Officially, the fateful moment arrived on Saturday Night Live on September 29, when New York Mayor Rudolph Giuliani responded to Lorne Michaels’ inquiry, “Can we be funny?” by asking, “Why start now?”

Unofficially, however, there never was any such comedy embargo in the first place.  The Onion, America’s satirical pamphlet of record, waited all of a week before beginning work on its 9/11 issue, which would feature headlines such as “American Life Turns Into Bad Jerry Bruckheimer Movie” and “God Angrily Clarifies ‘Don’t Kill’ Rule.”

As it turned out, the only real restriction on 9/11-related humor was that it not be at the expense of the victims themselves.  On this point, one could argue such a constraint is not a function of time so much as a general principle of comedy.  Some years back, when Don Imus got himself into a bother over an ill-considered joke about the Rutgers women’s basketball team, Bill Maher helpfully explained, “[Imus] broke two rules of comedy.  It wasn’t true, and he picked on not the powerful but the weak.”

In other words, some things are simply not funny, no matter how long you wait.

Whether or not the “tragedy plus time” formula is genuinely true, there are certainly cultural consequences to the mere perception that it is, often manifest in excessive and ridiculous ways.

Last summer, for instance, a movie called Neighborhood Watch was compelled to change its title to, simply, The Watch, in order to avoid being associated with the then-recent killing of a teenager named Trayvon Martin by “neighborhood watch” vigilante George Zimmerman.  Never mind that the movie was a sci-fi comedy about an alien invasion; apparently the term “neighborhood watch” carried such cultural weight that audiences would have been unable to tell the difference.

More recently, in light of the elementary school shooting in Newtown, Connecticut, Judd Apatow faced pressure to remove a scene from his film This Is 40 in which Albert Brooks pretends to “murder” his children with a water hose.  (Apatow expressed regret about the timing, but did not cut the scene.)

If I may assume the risk of reaching a neat conclusion to the “too soon” quandary, I would raise the possibility that some people simply will not allow themselves to be amused by jokes about tragic subjects, regardless of the temporal proximity to the tragedy itself.

The notion of a particular event being comedy-proof on the basis of time, while not completely false, is tremendously overblown, and not a useful or proper way to judge the value of a particular joke.

Tragedy does not require time to become comedy.  It merely requires a decent comedian and a game audience.  Unfortunately, last Sunday we were given neither.

Culinary Merchants of Death

I remember Lunchables, and the memories are very fond, indeed.  As a kid, I’m sure I tried all the original varieties, but my favorite was always their pizza:  The cracker-sized crusts and little vacuum-sealed packets of sauce and cheese that you assembled yourself.  For an unfussy fourth grader, it was the perfect lunch.

It never occurred to me that the people behind it were evil.

But that is the essence of a positively spellbinding article in this week’s New York Times Magazine, titled, “The Extraordinary Science of Addictive Junk Food.”  Excerpted from a forthcoming book by Michael Moss called Salt Sugar Fat: How the Food Giants Hooked Us, the article surveys some three decades’ worth of efforts by the packaged food industry to sell horribly unhealthy products to an unwitting public.

What makes the story so compelling is the prevalence of the word “addiction” in the context of food marketing, as used both by the author and by the marketing magicians themselves.  Moss draws a parallel with Big Tobacco, but he hardly needs to—the connection is unmistakable.

Recall the scene in Thank You For Smoking in which representatives for the tobacco, liquor and gun lobbies—“merchants of death,” they call themselves—meet for dinner and boast about the number of fatalities their respective products cause?

Moss’s thesis, more or less, is that the snack food trade operates under a similarly callous ethos, viewing every consumer as a useful dolt, potential meat for slaughter.

Of course, the industry operatives themselves frame their business a bit more diplomatically than that.

One key term of theirs is “bliss point.”  As described by Howard Moskowitz, holder of a Ph.D. in experimental psychology and maestro of food “optimization,” this is the concept of engineering a food product to its greatest potential for satisfaction, as derived from taking a pile of considerations—taste, smell, texture and so forth—and running them through a focus group until a magic formula is attained.

At this point you may fairly ask:  Well, what’s wrong with that?

Indeed, it seems reasonable enough for a food company to invest its resources in figuring out how best to gratify its potential customers.

That is, until you wade into deeper waters, as Moss does, and realize the underlying object of finding this apex of culinary pleasure.

What do the seekers of this “bliss point” mean by calling it “optimal”?  What is their overriding consideration?

It is, in short, “How can we make this product as addictive as humanly possible?”

In one passage, Moss offers a précis about the alchemy of creating the perfect potato chip (hint: it involves salt) and quotes a food scientist who pinpoints Frito-Lay’s Cheeto as “one of the most marvelously constructed foods on the planet, in terms of pure pleasure.”  He cites a phenomenon called “vanishing caloric density,” whereby the tendency for Cheetos to melt in your mouth fools you into thinking they contain practically no calories and, therefore, “you can just keep eating [them] forever.”

The result, of course, is a country that is as fat and unhealthy as ever it has been.  The difference is that certain food companies—like tobacco companies in years past—are now suddenly being called to account, to assume responsibility for knowingly perpetuating a culture of destructive consumption.

The point at which Big Snack Foods becomes a mirror image of Big Tobacco—the “tell,” as it were—is the endless refrain by higher-ups that they are simply giving the public what it wants.  That if Americans have a hankering for crunchy cheese puffs made of sugar, salt and fat, then by God the crunchy cheese puff industry will provide them!  Is that not what capitalism is all about?

As we learned the hard way during the great showdown with the cigarette companies in the 1990s, it depends on precisely when “want” becomes “need”—on when a purchase is less an act of free will and more the expression of an uncontrollable impulse.

When someone pops into a 7-Eleven to grab his fourth pack of Marlboro Lights since breakfast, can he truly be said to be making a free spending decision in pursuit of his own happiness?  If not, does the entity that produced the addictive product bear any moral responsibility for the product’s impact on its customers?  Finally, and in any case, have we reached a point in which we ought to view eating habits in the same way?

We might agree that each of us is responsible for our actions.  But what happens when those actions are no longer truly in our control?

Faith, À La Carte

A common trope of atheism is the assertion that all the best aspects of religion—the bits that are truly worth saving—do not require religion in the first place.

Christopher Hitchens phrased it as a challenge:  Can you name a moral statement made, or a moral action performed, by a believer that could not have been made or performed by a nonbeliever?

Surely things such as giving to charity and treating others with respect are not the sole province of any one faith, or of faith in general.  They are virtues common to all upstanding persons and, dare I say, would have come about (or did) in organized religion’s absence.

To the extent that this is true—no one has ever convincingly argued to the contrary—it is equally true that religion has given the world certain worthwhile concepts that might not ever have materialized from any other source.

One such creation is Lent, the Christian bridge between Ash Wednesday and Easter Sunday that began last week.  For the last several years, I have tried my best to “keep” Lent, choosing a facet of my daily life to surrender for the six-and-a-half weeks the holy period lasts, as a means of self-discipline and recognizing that some things are more important than my own comfort.

I am not always successful in my Lenten sacrifices.  But then again, I am not even Christian.  Technically speaking, I am under no obligation to even participate in the ritual, let alone endure it in its entirety.

But I try it anyway, because at some point I decided the idea of abstaining from a certain behavior or temptation for an extended period was a good one.  That the practice is otherwise engaged in by members of a church to which I do not belong has never much bothered me.  On the contrary, it licenses me to devise my own rules and provisos without fear of incurring the wrath of a humorless deity.

Of course, what I am describing is essentially “cafeteria Catholicism” by another name.

A “cafeteria Catholic” is defined broadly as a member of the Catholic Church who disagrees with and/or ignores certain bits of Catholic doctrine—in effect, someone who takes religion into his own hands and shapes it to his own purposes.

The term is often used derisively.  I don’t see why it should be.

The charge is that à la carte religion is not religion—that if one is to sign on with a particular church, one necessarily assumes the entirety of the church’s teachings and preachings, and that any wholesale disagreements should be kept duly under wraps.

This has always been a fascinating standard, insomuch as it is impossible to meet—first because the injunctions are often so challenging in the context of the modern world, and second because of the many ways in which they contradict each other.

If we are to be honest with ourselves, we would acknowledge that all of us are guilty of a cafeteria-style exercise of religion all the time, and we might then further deduce—if only for sanity’s sake—that this is not such a bad thing for our species.

To pick and choose which pieces of one’s religion one takes seriously is to maximize its utility to one’s life, and is that not (in so many words) the very point of religion in the first place?  To what possible end, and for what possible good, does one defer to doctrine one does not truly believe in one’s heart?

Should we accept the validity of this argument up to this point, it stands to reason that one is not transgressing all that much in adopting choice practices of other religions, provided that they don’t clash with those of one’s own that one also takes to heart.

Picturing it as a literal cafeteria:  If you descend from a long line of meat eaters, but you happen also to enjoy peas and carrots, who is everyone else to prevent you from tossing a salad alongside your burger?

That is, unless you have decided to give up beef for Lent.

Not Going Quietly

“They say the No. 1 killer of old people is retirement,” says Budd in Kill Bill: Vol. 2.  “People got a job to do, they tend to live a little longer so they can do it.”

Might this explain the apparent indestructibility of Dick Cheney?

One would think that four decades in politics and five heart attacks would constitute enough excitement for one career, at which point a person might opt to take it easy for the balance of his natural life.

(Hillary Clinton, for her part, was only half-joking when she recently said, “I am looking forward to finishing up my tenure as Secretary of State and then catching up on about 20 years of sleep deprivation.”)

Yet there was Cheney, speaking with Charlie Rose last week as if not a week had elapsed since he departed the Naval Observatory and the halls of power, offering his views on everything from President Obama’s Cabinet appointments to the legacy of the Iraq War.

At all points, the former vice president made it plain that his official departure from Washington, D.C., in 2009 did not mean he was done discussing the business therein.  “Retirement” is a word with which he has yet to establish relations.

For all sorts of reasons, such is the case for an increasing number of Americans.

His Holiness Pope Benedict XVI has drawn uncommon praise for his recent announcement that he will relinquish the keys of St. Peter before the Angel of Death removes them by force, becoming the first head of the Catholic Church in some six centuries to do so.

The notion of a high-ranking official hanging it up when he feels his job is done used to be regarded as the highest of virtues, exemplified by George Washington and, before that, Cincinnatus.

The practice has very nearly gone extinct in the meantime, particularly in the United States, where true retirement of high office holders has progressively gone out of style.

In contrast to the papacy (or judgeship on the U.S. Supreme Court), the presidency is not a lifetime gig.  Before Franklin Roosevelt, U.S. presidents limited themselves to two terms by tradition; after Franklin Roosevelt, by way of the Twenty-second Amendment, it became the law.

Accordingly, for all but the eight chief executives who happened to die in office (four of natural causes; four of unnatural causes), the question has always presented itself:  What does the most powerful man in the world do with his time once his power is relinquished?

America’s living ex-presidents constitute what is sometimes called the “most exclusive club in the world.”  There are currently four such persons—Jimmy Carter, Bill Clinton and the Georges Bush—and collectively they exemplify the myriad approaches to the post-presidency that one might take.

Unlike his deputy, our most recent retiree-in-chief, George Bush 43, has all but vanished from the scene, writing an obligatory memoir and promptly hauling himself away into a genuinely private daily existence.  His father, Bush 41, has kept a similarly low profile, devoted largely to jumping out of the occasional airplane and fishing in Kennebunkport.

Bill Clinton, meanwhile, has proved as irrepressible as ever, remaining in the political sphere by way of his wife, as well as accruing international goodwill through his self-titled foundation and support for various causes and disaster relief efforts in the last decade.

Then there is Jimmy Carter, now with the longest post-presidency in history, who has hardly shut up since being booted from the White House in 1981, writing 21 books and becoming a spokesperson for everything from Habitat for Humanity to the eradication of pancreatic cancer.

However, it is in his post-presidential political activities that Carter has generated the most controversy—and from which Dick Cheney seems to have drawn the most inspiration—by regularly offering critiques of U.S. foreign and domestic policy, solicited or not, and not always appreciated by the public at large.

Is such behavior by such a distinguished figure right and proper?  Or is it, rather, inappropriate and undignified?  Do Carter’s and Cheney’s unique insights into the executive branch necessarily license them to hurl tomatoes at those who follow in their footsteps?  Or do the awesome responsibilities of high office make such criticisms especially petty and beneath the stature of those who utter them?

One thing of which we can be sure, as demonstrated by George Washington and all his successors, is that a public figure can be judged by history as much for his behavior out of office as for his actions in office.  A president’s (or vice president’s) final legacy is a matter that is settled long after retirement, sometimes not until after he has shuffled off this mortal coil, and sometimes not at all.

Walled Off

When I was in Israel last December, my tour group made a stop at the Western Wall.  After we passed through security, we were left to roam the plaza and approach the Wall itself, dividing into two groups:  Men to the left, women to the right.

I had not been aware such a system existed, but indeed it does:  The Western Wall Plaza is partitioned so that men and women pray in separate quarters.

Can you guess which area is bigger?

As we face a changing of the guard in the Vatican with the pending retirement of Pope Benedict XVI, it is worth reflecting that the Catholic Church is hardly alone among the world’s monotheisms in treating its womenfolk like dirt.

Since 1988, the Western Wall Plaza has fallen under the jurisdiction of the Western Wall Heritage Foundation, itself a wing of the Israeli government.  In addition to its policy of physically (and inequitably) dividing the sexes, the foundation maintains a dress code within the plaza’s perimeter whereby women are forbidden from wearing the traditionally male prayer shawl known as a “tallit.”

As reported in the New York Times this week, a group that calls itself “Women of the Wall” is seeking to ensure that this is no longer so.

Last Monday, ten members of this renegade group were detained by Israeli police after praying at the Wall decked in the aforementioned illicit garb, as the organization has done regularly since its formation in 1988.

The battle for gender equality is decidedly uphill.  In 2003, Israel’s Supreme Court upheld the government’s right to prohibit women from enjoying the praying privileges extended to men.

The court’s rationale, interestingly enough, was one of keeping the peace.  In past incidents, “Women of the Wall” representatives were met with physical intimidation and howls of protest from ultra-Orthodox men who were praying nearby.  Suppressing women’s dress, the argument goes, would prevent such outbursts in the future.

You heard right:  The high court of the Middle East’s only stable democracy ruled that the unregulated presence of women at the Western Wall was a provocation and, in effect, an infringement of the men’s right to not have to pray alongside women.

Indeed, this line of reasoning is perfectly consistent with the traditions of Orthodox Judaism.  Most Orthodox synagogues—in Israel, the United States and everywhere else—contain some form of mechitza, or division, to separate the sexes during services.  Some mechitzot place women in the back of the sanctuary while others simply split the room into left and right halves, but the principle is the same:  Men cannot be made to catch women’s cooties.

One is reminded, for instance, of the way various organized religions attempt to frame themselves as the oppressed party whenever the threat of gay equality pops up.  This week, when the Illinois State Senate voted to legalize same-sex marriage, it included the proviso that, should the bill clear the state House and become law, Illinois houses of worship would retain the right to deny such unions under their roofs.

Most pro-gay marriage bills have included such a provision as a way to neutralize a clash with clergy who view gay equality as an infringement upon their right to practice and preach gay inequality.

Natan Sharansky, a government official tasked by Prime Minister Benjamin Netanyahu to try to resolve the “Women of the Wall” conundrum, expressed genuine ambivalence as to which side—the women or the Orthodox men—presents the stronger argument.  Sharansky insisted that, in any case, “We do have to find a solution in which nobody will feel discriminated against.”

In my own experience, I have found the most effective way to ensure nobody feels discriminated against is not to discriminate against anybody.  The ultra-Orthodox community can rationalize from here to kingdom come, but prohibiting women from wearing prayer shawls that are freely worn by men is discrimination in its very design.

If avoiding discrimination is truly the goal in this case—“if” is indeed the key word—there is only one possible resolution, and that is for the Israeli Supreme Court to reverse its 2003 decision and acknowledge that a democratic state cannot favor one gender over the other so far as the law is concerned.

Would such an eventuality annoy the ultra-Orthodox powers that be, leaving them feeling their way of life is being trampled?  I suppose it would.  In 1960, the white folks in Greensboro, North Carolina could not have been terribly pleased to learn they would henceforth need to share Woolworth’s lunch counter with patrons who were black.

In a free society, some things are more important than tradition.

Cultural Coercion

It was in the first scene of the first film directed by Quentin Tarantino that Steve Buscemi famously explained why he will never tip a waitress.

“I don’t believe in it,” Buscemi, aka Mr. Pink, proclaims to his fellow “reservoir dogs” around a coffee shop table.  Challenged on this—how cheap can one possibly be?—he clarifies, “I don’t tip because society says I have to.  I’ll tip if somebody really deserves it—if they really put forth the effort.  But this tipping automatically, it’s for the birds.”

It’s not paying the extra 15 percent itself that so peeves Mr. Pink, you see, but rather the notion that he is somehow obligated to do so.  That American society—without ever asking his opinion—deemed the wait staff at a diner or restaurant “tip-worthy” while not extending such an honor to, say, a cashier at McDonald’s.

Why should Mr. Pink be pressured into going along with this seemingly arbitrary social custom?  Who is everyone else to so push him around?

That brings us to St. Valentine’s Day.

The popular assumption is that the holiday we celebrate every February 14 is the creation of the American greeting card industry.  While the history of the festival is a bit more interesting and complicated than that—at minimum, the relevant chronology stretches back to Geoffrey Chaucer’s Parlement of Foules, written in 1382—in the holiday’s present form, this view is essentially correct.

Or at least the jaded sentiment behind it is.

What is Valentine’s Day if not the American culture telling you when you are required to express your love for your boyfriend or girlfriend, whether you want to or not?

Never mind that every relationship is different and operates on its own timetable, at its own pace.  The fourteenth of February—that’s the day you must formally observe that what you and your sweetheart have is special!

As a person who is presently single, I can speak with relative objectivity about our national day of celebrating couples.  I recognize, however, that many of my friends and acquaintances are not so lucky.

Conventional wisdom says that most men secretly hate Valentine’s Day, with the remaining men hating it openly.  In recent years there has been a mild shift, with many women now hating it as well, but it remains a particularly male bugaboo.

And why shouldn’t it be?  Men are enjoined to produce trinkets for their womenfolk, and should they fail to do so, the girlfriends are entitled to inflict bottomless torment upon them.

Yes, there are some couples who agree to forgo the usual traditions of dinner, chocolate and roses and do Valentine’s in their own way, but even this is a tacit acknowledgment that the holiday is a thundering cultural force that cannot be ignored.

That is what makes the whole business so unnerving and so fascinating.

Like many other markers on the American cultural calendar—the entire Christmas season springs to mind—St. Valentine’s Day is an attempt to collectivize an otherwise profoundly personal concept.

We are faced, then, with two great American values in conflict.  Individualism vs. community.  The private vs. the public.  The specific vs. the universal.

Like Christmas, Valentine’s Day is a sterling expression of commercialism run amok.  But one cannot help seeing something more sinister at work:  Popular culture telling us the true and proper meaning of love, as if such a concept could possibly be universalized.

We can draw some measure of comfort from the fact that most of us have long ceased to take St. Valentine’s Day all that seriously.  My consternation and annoyance remain, however, at the fact that all of us are compelled to acknowledge it at all.  That for the duly shackled-up, this day, for all its overt silliness, is one that must be regarded with a reverent deference, overlooked at one’s extreme peril.

How terribly unfair this all is.  One should not be made to feel obligated to follow a minor social custom with any major effort.  You are free to do so of your own accord, of course, if that’s your thing, and to tailor it to your and your significant other’s own purposes.

But celebrating Valentine’s Day automatically?  For the birds.

Civil Disobedience, Inc.

Last April, the residents of Concord, Massachusetts voted to deny themselves the right to buy bottled water.

The town wrote and enacted a relevant bylaw that went into effect on the first of this year, and retailers within Concord’s town limits have ceased selling plastic containers of H2O ever since.

As you will no doubt be shocked to learn, controversy endures.

With that fateful town meeting vote, Concord became the first municipality in the United States to enact such a prohibition, and the initiative’s leaders hope it will spark a national trend.  No other towns or cities have yet followed suit, although dozens of college campuses have.

The motivation for the ban is ecological.  Water bottles are made from material that harms the environment, and so prohibiting their sale will diminish their use and lighten Concord’s carbon footprint.  Specifically, the ban applies to “single-serving polyethylene terephthalate (PET) bottles of 1 liter (34 ounces) or less,” including cases of the same.

The idea, proponents explain, is to nudge consumers into more environmentally-friendly behavior.  In this case, it means drinking water from the tap, carrying around your own bottle or, if you insist, buying it in containers made from more agreeable material.

Since the bylaw has been in effect for a scant six weeks, it is a bit early to judge its impact (a point of protocol with which the ban’s critics do not seem to agree).

On the other hand, the Boston Globe ran a report at the end of January that is just too amusing to ignore.

As stipulated in the bylaw’s text, enforcement of the ban has fallen to a “designee” in the person of the town’s public health director, Susan Rask, who spent January checking in on Concord’s various beverage-peddling establishments to ensure they had removed bottled water from their shelves.  At press time, all of them had complied except for one:  The convenience store chain Cumberland Farms.

As Rask soon discovered, this was no oversight.  Unlike a handful of shops that initially misunderstood which types of containers were verboten and then promptly fell into line, Cumberland continued stocking and selling the forbidden fruit on purpose.

The first point to observe here is one of irony.  Here in Concord—the land of Henry David Thoreau, author of “Civil Disobedience”—we find the act of willfully defying local laws to be alive and kicking.

The irony?  Well, when Thoreau refused to pay poll taxes in 1846, it was from principled opposition to slavery and the Mexican-American War.  It was an instance of an individual rebelling against the mighty forces of government.

By contrast, the present water ban was affirmed by popular vote at a town meeting—a gathering of individuals—and is now being resisted by a mighty corporation.

Roles reversed.  Fancy that.

Of course, as surely as the nature of the civilly disobedient has changed since the innocent antebellum days, so too has the principle being defended.  Thoreau was defending the right of every man to shape his own destiny.  Cumberland Farms is defending its right to turn a profit.

The dynamic is as follows:  The first time a business is found to violate the bottle ban, it is issued a warning.  For every subsequent violation, the business is fined $50.

Given the facts on the ground, we can see the sneaky calculation at play:  If demand for bottled water remains at its pre-ban levels and Cumberland Farms is the only place supplying it, would it not be worth the occasional $50 fee to keep its monopoly going?
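If it helps to make that calculation concrete, here is a rough back-of-envelope sketch in Python.  The sales volume, per-bottle margin and citation rate below are purely hypothetical illustrations for the sake of the arithmetic, not figures from the Globe report or the bylaw.

```python
# Back-of-envelope sketch: is paying the fine cheaper than pulling bottled water?
# All inputs are hypothetical illustrations, not reported figures.

bottles_per_day = 200      # assumed daily bottled-water sales at one store
margin_per_bottle = 0.50   # assumed profit per bottle, in dollars
fines_per_week = 1         # assumed enforcement rate: one citation per week
fine_amount = 50.00        # per-violation fine set by the Concord bylaw

weekly_profit = bottles_per_day * margin_per_bottle * 7
weekly_fines = fines_per_week * fine_amount

print(f"Weekly bottled-water profit: ${weekly_profit:,.2f}")
print(f"Weekly fines:                ${weekly_fines:,.2f}")
print("Keep selling" if weekly_profit > weekly_fines else "Pull the bottles")
```

Under even modest assumptions like these, the weekly fines amount to pocket change next to the profit from staying the course.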

Unless and until the town decides to take further, harsher action against plucky Cumby’s, what reason does the chain have to cease flouting the law?  It has managed to turn civil disobedience into a smart business decision.  Score one for capitalism.

This is a morally hazardous lesson, to say the least, but such is the nature of many such acts of rebellion.  It is a fascinating story to follow, because of the various expressions of human behavior at work and in conflict with one another—most of which I have utterly failed to mention, but which deserve (and have elsewhere received) our full consideration.

The debate surrounding Concord has only just begun, and water is but the tip of the iceberg.

What’s In a Name?

Indeed, the rumors are true:  The Northeastern United States has just gotten hammered by a massive winter storm forever to be known as Nemo.

The Northeast Corridor brought to its knees by a meteorological force named for an adorable animated clownfish.

OK, so the Weather Channel insists the moniker is not principally inspired by America’s favorite aquatic Pixar protagonist, but rather by some combination of its Latin origin, meaning “no one,” and the gruff sea captain from Jules Verne’s Twenty Thousand Leagues Under the Sea.

This explanation is plausible enough, as a glance at the full list of past and future names for this year’s large-scale winter storms finds a distinctly classical and literary tinge.

Noticed or not, we have already experienced snowy extravaganzas with names such as Athena, Caesar, Helen and Jove.  Still to come are Winter Storms Plato, Virgil and—presumably as the grand finale—Zeus.  (Wouldn’t that last one be more appropriate for a lightning storm?)

This is the first year the Weather Channel has employed a predetermined roster of designations for wintertime events, although the practice has existed for tropical storms and hurricanes since the 1940s.  The new addition seemed slow to catch on at first, but has suddenly become ubiquitous—undoubtedly due both to the current storm’s size and to the aforementioned Pixar connection.

Officially, the purpose of extending the practice of personifying weather systems is ease of identification.

“Naming winter storms will raise the awareness of the public, which will lead to more pro-active efforts to plan ahead, resulting in less impact and inconvenience overall,” writes the Weather Channel’s Tom Niziol.  “Coordination and information sharing should improve between government organizations as well as the media, leading to less ambiguity and confusion when assessing big storms that affect multiple states.”

Some have accused the whole business of being an elaborate marketing ploy by the meteorological media behemoth.  In the uber-capitalist society we inhabit, one can only respond, “Why shouldn’t it be?”

The move can be both a savvy PR tactic and a smart social innovation.  The pertinent question is whether the latter is in fact true.  Jeer as we might, I dare say that it is.

To be sure, it is rather alarming to consider that coordination and allocation of resources in a major snowstorm would be affected by whether the storm is given artificial human characteristics.  One would very much hope to the contrary—that such essential services would take care of themselves based on conditions on the ground.

Yet one can nonetheless understand the logic underpinning Niziol’s justification for the system, which has thus far proved reasonably accurate, at least in the realm of social networking.

The deeper explanation for why this might be relates to the greater power of names in general.

The existence of a name engenders a closer and more personal relationship between two or more entities than might otherwise come about.  It generates empathy and understanding.  It is an identity that is, by definition, relatable.

It is why the savvier anti-abortion advocates attach the pronouns “he” and “she” to descriptions of a fetus, or why the parents of a kidnapped child will appeal to the kidnapper on TV using the child’s name as often as possible.  The title of David Pelzer’s memoir A Child Called “It” is instinctively chilling, whether or not one knows what the book is about.

Of course, in the case of a nor’easter the identification is a negative one, but is no less personal:  The idea is to demystify and equalize, making a grand act of nature seem more manageable and less alien to those who will need to deal with it.  “Nemo” is somehow less threatening than “blizzard” or “snowpocalypse.”

Whether this naming initiative will truly make a difference beyond the psychological, we have yet to determine.  Indeed, it is possible such a thing can never be assessed with any real certainty, and we reserve the right to remain skeptical.  The more successful aspects of the management of Hurricane Sandy, for instance, were much more the result of dogged personalities such as Governor Chris Christie than of the personality of the storm itself.

Even if the appellation is shown to have had a negligible impact on the Great Blizzard of 2013, we can expect the new practice to have staying power, if for no other reason than our amusement.  After all, linguistic comic relief could hardly be more germane than when an entire region of the country is buried under several feet of snow.  It will serve, if only figuratively, to lighten the load.

The Fallacy of Good Taste

The Grammy Awards are this Sunday, when the National Academy of Recording Arts and Sciences will reveal which of the past year’s musical compositions we should have listened to.

After a year of waiting, it will be such a relief to finally find out which music from 2012 was good and which was bad.  I’ve been stumbling around in the dark this whole time, spinning my iPod wheel like Russian Roulette, hoping it lands on something decent.

But no more after Sunday.  As everyone knows, the word of the Grammy music gods is final.

This being February, as per tradition, we are being inundated by awards shows of every size and shape.  ‘Tis the season when America engages in one of its greatest mass conspiracies:  Pretending that every popular art form contains a group of philosopher kings whose tastes reign supreme, and whose judgment carries far greater weight than that of us mere mortals.

One of the crucial lessons I gleaned from college film classes is that, when it comes to popular culture, no one’s opinions are any better than anyone else’s.  Everything is a matter of taste, and taste, by definition, cannot be measured in any objective way.

Everybody knows this to be true at one level or another, yet we continue to invest ourselves in this season of golden statuettes as if they mean something.

They don’t.

Later this month, when and if a plurality of the Academy of Motion Picture Arts and Sciences proclaims Lincoln the best movie of 2012, it will signal precisely one thing:  That a plurality of the Academy of Motion Picture Arts and Sciences marked Lincoln for “best movie” on its Oscar ballots.  That’s about it.

All awards shows are meaningless, but the Grammys really take the cake.

I don’t know about you, but my taste in music changes by the hour.  I have an “official” favorite song, but after that it’s about a 200-way tie for second place.

It all depends on the mood.  What hits the spot as I’m careening down the highway might not necessarily work as I’m sitting quietly at my computer.  Some days I prefer hard rock; other days I surrender to Top 40.

My musical preferences are protean, entirely a function of how I feel at the moment.  Is there anyone for whom this is not the case?

If not, how could we possibly presume to pick “the best” that the recording industry has to offer?  For an art form that is so personal, based on the ever-changing emotions of its listeners—indeed, whose very purpose is either to complement or counteract those emotions—what exactly does it mean to be “the best” anyway?

To partially answer my own question, I think the enduring popularity of awards shows can be traced to the American tendency toward consensus, even on subjects that neither require nor would necessarily be enhanced by such a thing.

To wit:  The most useful article I have found about Beyoncé’s performance at last Sunday’s Super Bowl is from Jay Caspian Kang of the blog Grantland, who writes, “[Beyoncé] is popular because she’s easy to like and she’s something everyone has decided to agree upon across race, class, and creed.”

I like Beyoncé, but the point is taken and worth pondering in broader terms.  Facile is the artist who is accessible to all audiences at all times.  Where is the edge?  Where is the danger?  On the subject of movies, Roger Ebert makes a related critique, writing, “What does it say about you if you only want to see what everybody else is seeing?”

None of this is to say the Grammys cannot be enjoyed as an entertaining television event.  The Academy of Recording Arts and Sciences has certainly taken greater steps than the Motion Picture Academy in making its program watchable.

But Grammy voters should not be mistaken for objective arbiters of musical quality.  They are not, for no such persons exist.

There is no reason there should be, for such a notion misses the whole point of music.

You like what you like, and that’s just how I like it.