Secretary on Defense

Chuck Hagel was confirmed this week by the U.S. Senate to be our next secretary of defense.

For the last month and a half, a group of Republicans and others in Washington, D.C., mounted an unprecedented effort to ensure that sentence would never be written.  They ultimately failed, but they sure gave it the old college try.

The final tally for Hagel’s confirmation in the Senate was 58-41.  It was the closest vote for any Cabinet nominee since George W. Bush’s third attorney general, Michael Mukasey, was confirmed by a score of 53-40 in 2007.

The reasons for the Hagel holdup ranged from the legitimate (his views about Iran) to the ridiculous (a “connection” to a pro-Hamas group that, it turned out, does not actually exist).

At all points, the question that underlay the proceedings more than any other concerned the nature of Senate confirmations themselves.  Namely, in grilling the unholy heck out of the former Republican senator, did the Senate abuse its authority to review a president’s Cabinet nominee before giving its ultimate approval?

There is a school of thought—and a large one at that—that would answer rather thunderously in the affirmative.  For the past several decades in politics, the prevailing view has been that once someone is elected president, he is entitled to appoint pretty much anyone he wants to key high-ranking jobs in the executive branch, and that the Senate’s role in “confirming” such appointees is a mere formality.

The U.S. Constitution is of limited help on this point.  The relevant clause is Article II, Section 2, paragraph 2, which stipulates that the president “shall nominate, and by and with the Advice and Consent of the Senate, shall appoint Ambassadors, other public Ministers and Consuls, Judges of the supreme Court, and all other Officers of the United States.”

The contention relates to “Advice and Consent,” which is one of those phrases that can mean whatever you want it to mean.

Depending on one’s reading, the clause could empower U.S. Senators to assume a boldly assertive role in determining an appointee’s aptitude for a particular job.  However, it could just as plausibly be a mere acknowledgment of the president’s appointment power, with the mention of Congress nothing more than a nod to the “checks and balances” principle in the separation of powers.

Indeed, it was in a spirit of compromise that “Advice and Consent” was plugged into the Constitution in the first place—a means of placating both sides in the great argument at America’s founding over the relative powers of the executive and legislative branches.

As with other controversial Constitutional assertions—the Second Amendment leaps to mind—we might allot ourselves the right to reevaluate the clause based on more than two centuries of putting it into practice.

We can safely conclude, for instance, that one purpose for congressional crosschecks on Cabinet nominees is to prevent the appointment of folks who are plainly incompetent—the stooges and political hacks the president might try to sneak in as part of some quid pro quo.

To be sure, the Hagel case is a trifle more complex than that.  It should surprise no one that the loudest objections were ideological rather than practical—a dynamic reminiscent of recent hearings for prospective Supreme Court justices.

Just as members of the Judiciary Committee devote much of their probing to how a judge might rule on a rematch of Roe v. Wade, questions to Hagel by the Armed Services Committee, instead of assessing his overall experience in government, focused on what he allegedly thinks about Israel, Iran and possible cuts to the defense budget.

Are these not valid concerns?  If they are, are members of the legislature—the branch responsible for declaring war—not entitled to sufficient responses to them?  And if so, are they then entitled to vote “nay” should such responses fail to alleviate such concerns?

The lament is that confirmation hearings have become overwhelmingly partisan affairs.  More and more, Senators will vote against any nominee of a president of the opposing party, almost as a reflex.  Call it confirmation bias.

The solution, then, is not for the Senate to become less involved in rendering judgment on presidential nominees for high office, but simply to become more principled in how it goes about it.

With great power comes great responsibility.  So long as our Senate is cursed with one, it might as well exercise the other.

Too Soon?

“Comedy is tragedy plus time,” says Alan Alda in Crimes and Misdemeanors.  “See, the night when Lincoln was shot, you couldn’t joke about it—you just couldn’t do it.  Now, time has gone by and now it’s fair game.”

As Seth MacFarlane found out last Sunday, apparently not.

At one point during his Oscar hosting gig, MacFarlane ran off a list of the men who, before “Best Actor” Daniel Day-Lewis, had portrayed America’s 16th president on the silver screen, culminating in the punch line, “The actor who really got inside Lincoln’s head was John Wilkes Booth.”

To this, the audience at the Dolby Theatre emitted a collective groan, in turn leading MacFarlane to remark, “A hundred fifty years and it’s still too soon?”

Of course, it was only a few weeks earlier, at the Screen Actors Guild Awards, when Day-Lewis himself deadpanned that the practice of actors recreating Lincoln is perhaps compensation for the fact that “it was an actor that murdered [him].”  Rather than too edgy, MacFarlane’s joke could just as easily be dismissed as too stale.

Regardless, “tragedy plus time equals comedy” is a formula that has long been with us, and about which it is always worth asking certain questions.

For instance:  Is the equation even true?  Is it ever really “too soon” to joke about anything?

Lincoln assassination jokes are funny for the same reason most funny things are funny.  They are subversive; they defy political correctness and good taste; and, crucially, they conjure a sense of danger in the mind of the audience, as if merely hearing the joke could get you into trouble.

None of these considerations would seem to require any great temporal distance.  Au contraire:  If anything, they suggest immediacy is the key to a particularly cutting quip.

Following the unholy carnage of September 11, 2001, there was a great debate about when it might become appropriate to reintroduce humor into American life.  Officially, the fateful moment arrived on Saturday Night Live on September 29, when New York Mayor Rudolph Giuliani responded to Lorne Michaels’ inquiry, “Can we be funny?” by asking, “Why start now?”

Unofficially, however, there never was any such comedy embargo in the first place.  The Onion, America’s satirical pamphlet of record, waited all of a week before beginning work on its 9/11 issue, which would feature headlines such as “American Life Turns Into Bad Jerry Bruckheimer Movie” and “God Angrily Clarifies ‘Don’t Kill’ Rule.”

As it turned out, the only real restriction on 9/11-related humor was that it not be at the expense of the victims themselves.  On this point, one could argue such a constraint is not a function of time so much as a general principle of comedy.  Some years back, when Don Imus got himself into a bother over an ill-considered joke about the Rutgers women’s basketball team, Bill Maher helpfully explained, “[Imus] broke two rules of comedy.  It wasn’t true, and he picked on not the powerful but the weak.”

In other words, some things are simply not funny, no matter how long you wait.

Whether or not the “tragedy plus time” formula is genuinely true, there are certainly cultural consequences to the mere perception that it is, often manifest in excessive and ridiculous ways.

Last summer, for instance, a movie called Neighborhood Watch was compelled to change its title to, simply, The Watch, in order to avoid being associated with the then-recent killing of a teenager named Trayvon Martin by “neighborhood watch” vigilante George Zimmerman.  Never mind that the movie was a sci-fi comedy about an alien invasion; apparently the term “neighborhood watch” carried such cultural weight that audiences would have been unable to tell the difference.

More recently, in light of the elementary school shooting in Newtown, Connecticut, Judd Apatow faced pressure to remove a scene from his film This Is 40 in which Albert Brooks pretends to “murder” his children with a water hose.  (Apatow expressed regret about the timing, but did not cut the scene.)

If I may assume the risk of reaching a neat conclusion to the “too soon” quandary, I would raise the possibility that some people simply will not allow themselves to be amused by jokes about tragic subjects, regardless of the temporal proximity to the tragedy itself.

The notion of a particular event being comedy-proof on the basis of time, while not completely false, is tremendously overblown, and not a useful or proper way to judge the value of any given joke.

Tragedy does not require time to become comedy.  It merely requires a decent comedian and a game audience.  Unfortunately, last Sunday we were given neither.

Culinary Merchants of Death

I remember Lunchables, and the memories are very fond, indeed.  As a kid, I’m sure I tried all the original varieties, but my favorite was always their pizza:  The cracker-sized crusts and little vacuum-sealed packets of sauce and cheese that you assembled yourself.  For an unfussy fourth grader, it was the perfect lunch.

It never occurred to me that the people behind it were evil.

But that is the essence of a positively spellbinding article in this week’s New York Times Magazine, titled, “The Extraordinary Science of Addictive Junk Food.”  Excerpted from a forthcoming book by Michael Moss called Salt Sugar Fat: How the Food Giants Hooked Us, the article surveys some three decades’ worth of efforts by the packaged food industry to sell horribly unhealthy products to an unwitting public.

What makes the story so compelling is the prevalence of the word “addiction” in the context of food marketing, as used both by the author and by the marketing magicians themselves.  Moss draws a parallel with Big Tobacco, but he hardly needs to—the connection is unmistakable.

Recall the scene in Thank You For Smoking in which representatives for the tobacco, liquor and gun lobbies—“merchants of death,” they call themselves—meet for dinner and boast about the number of fatalities their respective products are responsible for causing?

Moss’s thesis, more or less, is that the snack food trade operates under a similarly callous ethos, viewing every consumer as a useful dolt, potential meat for slaughter.

Of course, the industry operatives themselves frame their business a bit more diplomatically than that.

One key term of theirs is “bliss point.”  As described by Howard Moskowitz, holder of a Ph.D. in experimental psychology and maestro of food “optimization,” this is the concept of engineering a food product to its greatest potential for satisfaction, as derived from taking a pile of considerations—taste, smell, texture and so forth—and running them through a focus group until a magic formula is attained.

At this point you may fairly ask:  Well, what’s wrong with that?

Indeed, it seems reasonable enough for a food company to invest its resources in figuring out how best to gratify its potential customers.

That is, until you wade into deeper waters, as Moss does, and realize the underlying object of finding this apex of culinary pleasure.

What do the seekers of this “bliss point” mean by calling it “optimal”?  What is their overriding consideration?

It is, in short, “How can we make this product as addictive as humanly possible?”

In one passage, Moss offers a précis about the alchemy of creating the perfect potato chip (hint: it involves salt) and quotes a food scientist who pinpoints Frito-Lay’s Cheeto as “one of the most marvelously constructed foods on the planet, in terms of pure pleasure.”  He cites a phenomenon called “vanishing caloric density,” whereby the tendency for Cheetos to melt in your mouth fools you into thinking they contain practically no calories and, therefore, “you can just keep eating [them] forever.”

The result, of course, is a country that is as fat and unhealthy as ever it has been.  The difference is that certain food companies—like tobacco companies in years past—are now suddenly being called to account, to assume responsibility for knowingly perpetuating a culture of destructive consumption.

The point at which Big Snack Foods becomes a mirror image of Big Tobacco—the “tell,” as it were—is the endless refrain by higher-ups that they are simply giving the public what it wants.  That if Americans have a hankering for crunchy cheese puffs made of sugar, salt and fat, then by God the crunchy cheese puff industry will provide them!  Is that not what capitalism is all about?

As we learned the hard way during the great showdown with the cigarette companies in the 1990s, it depends on precisely when “want” becomes “need”—on when a purchase is less an act of free will and more the expression of an uncontrollable impulse.

When someone pops into a 7-Eleven to grab his fourth pack of Marlboro Lights since breakfast, can he truly be said to be making a free spending decision in pursuit of his own happiness?  If not, does the entity that produced the addictive product bear any moral responsibility for the product’s impact on its customers?  Finally, and in any case, have we reached a point at which we ought to view eating habits in the same way?

We might agree that each of us is responsible for our actions.  But what happens when those actions are no longer truly in our control?

Faith, À La Carte

A common trope of atheism is the assertion that all the best aspects of religion—the bits that are truly worth saving—do not require religion in the first place.

Christopher Hitchens phrased it as a challenge:  Can you name a moral action performed, or a moral statement made, by a believer that could not have been performed or made by a nonbeliever?

Surely things such as giving to charity and treating others with respect are not the sole province of any one faith, or of faith in general.  They are virtues common to all upstanding persons and, dare I say, would have come about (and did come about) in organized religion’s absence.

To the extent that this is true—no one has ever convincingly argued to the contrary—it is equally true that religion has given the world certain worthwhile concepts that might not ever have materialized from any other source.

One such creation is Lent, the Christian bridge between Ash Wednesday and Easter Sunday that began last week.  For the last several years, I have tried my best to “keep” Lent, choosing a facet of my daily life to surrender for the six-and-a-half weeks the holy period lasts, as an exercise in self-discipline and a recognition that some things are more important than my own comfort.

I am not always successful in my Lenten sacrifices.  But then again, I am not even Christian.  Technically speaking, I am under no obligation to even participate in the ritual, let alone endure it in its entirety.

But I try it anyway, because at some point I decided the idea of abstaining from a certain behavior or temptation for an extended period was a good one.  That the practice is otherwise engaged in by members of a church to which I do not belong has never much bothered me.  On the contrary, it licenses me to devise my own rules and provisos without fear of incurring the wrath of a humorless deity.

Of course, what I am describing is essentially “cafeteria Catholicism” by another name.

A “cafeteria Catholic” is defined broadly as a member of the Catholic Church who disagrees with and/or ignores certain bits of Catholic doctrine—in effect, someone who takes religion into his own hands and shapes it to his own purposes.

The term is often used derisively.  I don’t see why it should be.

The charge is that à la carte religion is not religion—that if one is to sign on with a particular church, one necessarily assumes the entirety of the church’s teachings and preachings, and that any wholesale disagreements should be kept duly under wraps.

This has always been a fascinating standard, inasmuch as it is impossible to meet—first because the injunctions are often so challenging in the context of the modern world, and second because of the many ways in which they contradict each other.

If we are to be honest with ourselves, we would acknowledge that all of us are guilty of a cafeteria-style exercise of religion all the time, and we might then further deduce—if only for sanity’s sake—that this is not such a bad thing for our species.

To pick and choose which pieces of one’s religion one takes seriously is to maximize its utility to one’s life, and is that not (in so many words) the very point of religion in the first place?  To what possible end, and for what possible good, does one defer to doctrine that one does not truly believe in one’s heart?

Should we accept the validity of this argument up to this point, it stands to reason that one is not transgressing all that much in adopting choice practices of other religions, provided they don’t clash with the practices of one’s own faith that one also takes to heart.

Picturing it as a literal cafeteria:  If you descend from a long line of meat eaters, but you happen also to enjoy peas and carrots, who is everyone else to prevent you from tossing a salad alongside your burger?

That is, unless you have decided to give up beef for Lent.

Not Going Quietly

“They say the No. 1 killer of old people is retirement,” says Budd in Kill Bill: Vol. 2.  “People got a job to do, they tend to live a little longer so they can do it.”

Might this explain the apparent indestructibility of Dick Cheney?

One would think that four decades in politics and five heart attacks would constitute enough excitement for one career, at which point a person might opt to take it easy for the balance of his natural life.

(Hillary Clinton, for her part, was only half-joking when she recently said, “I am looking forward to finishing up my tenure as Secretary of State and then catching up on about 20 years of sleep deprivation.”)

Yet there was Cheney, speaking with Charlie Rose last week as if not a week had elapsed since he departed the Naval Observatory and the halls of power, offering his views on everything from President Obama’s Cabinet appointments to the legacy of the Iraq War.

At all points, the former vice president made it plain that his official departure from Washington, D.C., in 2009 did not mean he was done discussing the business therein.  “Retirement” is a word with which he has yet to establish relations.

For all sorts of reasons, such is the case for an increasing number of Americans.

His Holiness Pope Benedict XVI has drawn uncommon praise for his recent announcement that he will relinquish the keys to St. Peter before the Angel of Death removes them by force, becoming the first head of the Catholic Church in some six centuries to do so.

The notion of a high-ranking official hanging it up when he feels his job is done used to be regarded as the highest of virtues, exemplified by George Washington and, before that, Cincinnatus.

The practice has very nearly gone extinct in the meantime, particularly in the United States, where true retirement by holders of high office has progressively gone out of style.

In contrast to the papacy (or judgeship on the U.S. Supreme Court), the presidency is not a lifetime gig.  Before Franklin Roosevelt, U.S. presidents limited themselves to two terms by tradition; after Franklin Roosevelt, by way of the Twenty-second Amendment, it became the law.

Accordingly, for all but the eight chief executives who happened to die in office (four of natural causes; four of unnatural causes), the question has always presented itself:  What does the most powerful man in the world do with his time once his power is relinquished?

America’s living ex-presidents constitute what is sometimes called the “most exclusive club in the world.”  There are currently four such persons—Jimmy Carter, Bill Clinton and the Georges Bush—and collectively they exemplify the myriad approaches to the post-presidency that one might take.

Unlike his deputy, our most recent retiree-in-chief, George Bush 43, has all but vanished from the scene, writing an obligatory memoir and promptly hauling himself away into a genuinely private daily existence.  His father, Bush 41, has kept a similarly low profile, devoted largely to jumping out of the occasional airplane and fishing in Kennebunkport.

Bill Clinton, meanwhile, has proved as irrepressible as ever, remaining in the political sphere by way of his wife, as well as accruing international goodwill through his self-titled foundation and support for various causes and disaster relief efforts in the last decade.

Then there is Jimmy Carter, now with the longest post-presidency in history, who has hardly shut up since being booted from the White House in 1981, writing 21 books and becoming a spokesperson for everything from Habitat for Humanity to the eradication of pancreatic cancer.

However, it is in his post-presidential political activities that Carter has generated the most controversy—and from which Dick Cheney seems to have drawn the most inspiration—by regularly offering critiques of U.S. foreign and domestic policy, solicited or not, and not always appreciated by the public at large.

Is such behavior by such a distinguished figure right and proper?  Or is it, rather, inappropriate and undignified?  Do Carter’s and Cheney’s unique insights into the executive branch necessarily license them to hurl tomatoes at those who follow in their footsteps?  Or do the awesome responsibilities of high office make such criticisms especially petty and beneath the stature of those who utter them?

One thing of which we can be sure, as demonstrated by George Washington and all his successors, is that a public figure can be judged by history as much for his behavior out of office as for his actions in office.  A president’s (or vice president’s) final legacy is a matter that is settled long after retirement, sometimes not until after he has shuffled off this mortal coil, and sometimes not at all.

Walled Off

When I was in Israel last December, my tour group made a stop at the Western Wall.  After we passed through security, we were left to roam the plaza and approach the Wall itself, dividing into two groups:  Men to the left, women to the right.

I had not been aware such a system existed, but indeed it does:  The Western Wall Plaza is partitioned so that men and women pray in separate quarters.

Can you guess which area is bigger?

As we face a changing of the guard in the Vatican with the pending retirement of Pope Benedict XVI, it is worth reflecting that the Catholic Church is hardly alone among the world’s monotheisms in treating its womenfolk like dirt.

Since 1988, the Western Wall Plaza has fallen under the jurisdiction of the Western Wall Heritage Foundation, itself a wing of the Israeli government.  In addition to its policy of physically (and inequitably) dividing the sexes, the foundation maintains a dress code within the plaza’s perimeter whereby women are forbidden from wearing the traditionally male prayer shawl known as a “tallit.”

As reported in the New York Times this week, a group that calls itself “Women of the Wall” is seeking to ensure that this is no longer so.

Last Monday, ten members of this renegade group were detained by Israeli police after praying at the Wall decked in the aforementioned illicit garb, as the organization has done regularly since its formation in 1988.

The battle for gender equality is decidedly uphill.  In 2003, Israel’s Supreme Court upheld the government’s right to prohibit women from enjoying the praying privileges extended to men.

The court’s rationale, interestingly enough, was one of keeping the peace.  In past incidents, “Women of the Wall” representatives were met with physical intimidation and howls of protest from ultra-Orthodox men who were praying nearby.  Suppressing women’s dress, the argument goes, would prevent such outbursts in the future.

You heard right:  The high court of the Middle East’s only stable democracy ruled that the unregulated presence of women at the Western Wall was a provocation and, in effect, an infringement of the men’s right to not have to pray alongside women.

Indeed, this line of reasoning is perfectly consistent with the traditions of Orthodox Judaism.  Most Orthodox synagogues—in Israel, the United States and everywhere else—contain some form of mechitza, or division, to separate the sexes during services.  Some mechitzot place women in the back of the sanctuary while others simply split the room into left and right halves, but the principle is the same:  Men cannot be made to catch women’s cooties.

One is reminded, for instance, of the way various organized religions attempt to frame themselves as the oppressed party whenever the threat of gay equality pops up.  This week, when the Illinois State Senate voted to legalize same-sex marriage, it included the proviso that, should the bill pass the State House of Representatives and become law, Illinois houses of worship would retain the right to deny such unions under their roofs.

Most pro-gay marriage bills have included such a provision as a way to neutralize a clash with clergy who view gay equality as an infringement upon their right to practice and preach gay inequality.

Natan Sharansky, a government official tasked by Prime Minister Benjamin Netanyahu with resolving the “Women of the Wall” conundrum, expressed genuine ambivalence as to which side—the women or the Orthodox men—presents the stronger argument.  Sharansky insisted that, in any case, “We do have to find a solution in which nobody will feel discriminated against.”

In my own experience, I have found the most effective way to ensure nobody feels discriminated against is not to discriminate against anybody.  The ultra-Orthodox community can rationalize from here to kingdom come, but prohibiting women from wearing prayer shawls that are freely worn by men is discrimination in its very design.

If avoiding discrimination is truly the goal in this case—“if” is indeed the key word—there is only one possible resolution, and that is for the Israeli Supreme Court to reverse its 2003 decision and acknowledge that a democratic state cannot favor one gender over the other so far as the law is concerned.

Would such an eventuality annoy the ultra-Orthodox powers that be, leaving them feeling their way of life is being trampled?  I suppose it would.  In 1960, the white folks in Greensboro, North Carolina could not have been terribly pleased to learn they would henceforth need to share Woolworth’s lunch counter with patrons who were black.

In a free society, some things are more important than tradition.

Cultural Coercion

It was in the first scene of the first film directed by Quentin Tarantino that Steve Buscemi famously explained why he never tips waitresses.

“I don’t believe in it,” Buscemi, aka Mr. Pink, proclaims to his fellow “reservoir dogs” around a coffee shop table.  Challenged on this—how cheap can one possibly be?—he clarifies, “I don’t tip because society says I have to.  I’ll tip if somebody really deserves it—if they really put forth the effort.  But this tipping automatically, it’s for the birds.”

It’s not paying the extra 15 percent itself that so peeves Mr. Pink, you see, but rather the notion that he is somehow obligated to do so.  That American society—without ever asking his opinion—deemed the wait staff at a diner or restaurant “tip-worthy” while not extending such an honor to, say, a cashier at McDonald’s.

Why should Mr. Pink be pressured into going along with this seemingly arbitrary social custom?  Who is everyone else to so push him around?

That brings us to St. Valentine’s Day.

The popular assumption is that the holiday we celebrate every February 14 is the creation of the American greeting card industry.  While the history of the festival is a bit more interesting and complicated than that—at minimum, the relevant chronology stretches back to Geoffrey Chaucer’s Parlement of Foules, written in 1382—in the holiday’s present form, this view is essentially correct.

Or at least the jaded sentiment behind it is.

What is Valentine’s Day if not the American culture telling you when you are required to express your love for your boyfriend or girlfriend, whether you want to or not?

Never mind that every relationship is different and operates on its own timetable, at its own pace.  The fourteenth of February—that’s the day you must formally observe that what you and your sweetheart have is special!

As a person who is presently single, I can speak with relative objectivity about our national day of celebrating couples.  I recognize, however, that many of my friends and acquaintances are not so lucky.

Conventional wisdom says that most men secretly hate Valentine’s Day, with the remaining men hating it openly.  In recent years there has been a mild shift, with many women now hating it as well, but it remains a particularly male bugaboo.

And why shouldn’t it be?  Men are enjoined to produce trinkets for their womenfolk, and should they fail to do so, the girlfriends are entitled to inflict bottomless torment upon them.

Yes, there are some couples who agree to forgo the usual traditions of dinner, chocolate and roses and do Valentine’s in their own way, but even this is a tacit acknowledgment that the holiday is a thundering cultural force that cannot be ignored.

That is what makes the whole business so unnerving and so fascinating.

Like many other markers on the American cultural calendar—the entire Christmas season springs to mind—St. Valentine’s Day is an attempt to collectivize an otherwise profoundly personal concept.

We are faced, then, with two great American values in conflict.  Individualism vs. community.  The private vs. the public.  The specific vs. the universal.

Like Christmas, Valentine’s Day is a sterling expression of commercialism run amok.  But one cannot help seeing something more sinister at work:  Popular culture telling us the true and proper meaning of love, as if such a concept could possibly be universalized.

We can draw some measure of comfort from the fact that most of us have long since ceased to take St. Valentine’s Day all that seriously.  My consternation and annoyance remain, however, at the fact that all of us are compelled to reckon with it at all.  That for the duly shackled-up, this day, for all its overt silliness, is one that must be regarded with reverent deference, overlooked at one’s extreme peril.

How terribly unfair this all is.  One should not be made to feel obligated to follow a minor social custom with any major effort.  You are free to do so of your own accord, of course, if that’s your thing and you can suit it to your and your significant other’s purposes.

But celebrating Valentine’s Day automatically?  For the birds.