A comment on Glen McGregor's "Toward a Dogme95 of political reporting"


I spent the first 17 years of my adult life in academia. I've spent the last five years in journalism. And if there's one thing that has struck me about the career switch, it is that the only people on earth whose sense of self-importance rivals that of humanities professors are journalists. Which is why it was so disheartening that, when a journalist sat down to write something self-critical about his profession's use of academic sources, the people quickest to take offense were professors.

At issue is a blogpost by Glen McGregor, a parliamentary reporter for the Ottawa Citizen. His post, entitled "Toward a Dogme95 of political reporting," is a trim little call for a return to journalism's basics: pick up the phone, work sources, get stories. It asks journalists to stop filing easy stories skimmed from the froth of partisan posturing, social media, and self-styled rent-a-quote "experts." Fine advice, and, in my opinion, largely non-controversial. (Note: While I'm Glen's editor at the Ottawa Citizen, I had no input into his blogpost.)

But it's the first bullet point of Glen's post that seems to have got the most attention:

* No more quoting political scientists:  It’s lazy and signals the reporter couldn’t find any other apparently neutral or objective source to talk. These people work in academics, not politics, so I’m not interested in their opinions on anything but their own research.

This caused quite a ruckus in the cosy Canadian politics neighbourhood of the twittersphere. A number of academics -- most of whom are well known to journalists and to readers for their comments, op-eds, blogs and in some cases even their actual research -- took this as a raised middle finger to their presence in Canadian journalism. I'm not going to bother going over the he-said/she-said of it all; my view is that this comment of Glen's is entirely critical of journalists, not academics, and is less about telling professors to stay out of journalism than it is about telling reporters to stop relying on professors to pad out their stories and launder their political views. But like most serious misunderstandings, this one does a useful job of shedding some light on the relationship between journalism and academic work, and on how technology-driven shifts in our conception of status, influence, and research itself have called that relationship into question.


The key thing to understand about journalists is that they are the lowest rank of intellectuals. That is to say: they are members of the intellectual class, but in the status hierarchy of intellectuals, journalists are at the bottom. That is why journalists have traditionally adopted the status cues of the working-class: the drinking and the swearing, the anti-establishment values, and the commitment to the non-professionalization of journalism.

The key thing to understand about academics is that they are the highest rank of intellectuals. That is why they have traditionally adopted the status symbols of the 19th century British leisured class -- the tweeds and the sherry and the learning of obscure languages -- while shunning the sorts of things that are necessary for people for whom status is something to be fought for through interaction with the normal members of society (e.g. proper clothing, minimal standards of hygiene, basic manners.)

Despite inhabiting opposite ends of the intellectual status hierarchy, some journalists always saw some appeal in looking up towards academia (instead of down on the working classes) and some academics saw the appeal of journalism. Professors, after all, have the cachet of smarts. Journalists, on the other hand, can become folk heroes. And so within journalism there was a natural alliance to be found between journalists who wanted to give their stories some intellectual heft by quoting a serious researcher on the story at hand, and researchers who wanted an audience for their ideas beyond the faculty lounge and the conference circuit.

So far so good. In the pre-internet world of publishing, journalism served as a useful instrument for brokering academic research to the masses. Academic publishing is slow and research is hard to grasp even for PhDs, while a newspaper comes out every day and the language of the broadsheet is educated but relatively straightforward. The reporter who could become an "instant expert" in a difficult field of research, or the researcher who had a gift for explaining difficult research in straightforward language, played a valuable role in the realm of public debate.

There is a downside to this though. Journalists work under tight deadlines, and -- like everyone else on Earth -- they will take the easy path over the difficult, when given the choice. Meanwhile, it is tough for the lay reporter to know which experts are the ones to trust, and even then, academics can be difficult to reach (the better ones always seem to be on research leave somewhere other than at their home university.) And so there has always been an interest amongst journalists in academics who are easy to reach and are willing to talk about a very broad range of topics, including those outside their areas of research expertise. This -- and I think this alone -- is the combination of lazy journalism and dial-a-quote academic punditry that Glen McGregor suggests we can do without.


It is hard to see how any journalist, or any academic, could object to this. No serious journalist wants to be seen as lazy, and no serious academic wants to be considered a lightweight. So why, then, did so many people take offense at McGregor's proposal?

I think the problem stems from the shifting place of academics in the popular discussion over the past decade. One of the great benefits of the rise of Web 2.0 was the way blogs gave professors a platform, independent of both mass media and niche publishing, to promote their work and to critically discuss the work of their peers in a forum that was free, public, dynamic, and immediate. And while it had the effect of making it easier for journalists to identify and reach useful sources, the more serious consequence (for journalists) was that it threatened to make them obsolete, by eliminating their role as intellectual middlemen.

The rise of the social web, Facebook and most especially Twitter, has only accelerated this process. The 2011 federal election in Canada was widely referred to as the first "Twitter election," but as I wrote in a blogpost for Canadian Business magazine, it was more accurate to call it the first "economists' election." It was the first election in which a large number of Canadian economists made direct, unmediated, real-time interventions into the debates over policy and the various party platforms.


My suspicion is that many professors interpreted Glen McGregor's manifesto as an attempt at pushing them out of this newly-carved niche in our popular debate. Nothing could be further from the truth. Yet as it progresses, this disintermediation of academic expertise will have a profound impact on how politics and public policy gets debated in this country. It should also have a profound impact on how both journalists and academics do their jobs.

For journalists, it should change their approach to political reporting pretty much along the lines suggested by McGregor.  Thanks to technology, journalists no longer have to play the role of ideas broker between academia and the public. At the same time, there is very little status to be gained by quoting the same stale academic sources in story after story, when more insight can be found coursing through a well-cultivated twitter stream. Finally, it means that reporters should stop trying to launder their political biases through a convenient academic who will say the things the reporter wants to say, but can't, given the conventions of unbiased reporting.

But it should change the way academics work as well. One of the more poorly-kept secrets of the academic world is that humanities professors and social scientists are the most ideologically committed members of society. People like to complain about journalistic bias, but journalists are in fact far less politically biased than most professors. A great deal of what passes as academic political commentary is little more than partisan opinion-mongering (I reviewed a particularly egregious example for the LRC a few years ago). And so if academics are smart, they'll take Glen McGregor's no-academics pledge as a challenge: to offer comment to a reporter only when their research puts them in a unique position to inform or clarify the public debate, and serves the needs of the story the reporter is trying to tell.

If there is a big takeaway from "McDogme95" (as Stephen Maher calls it) it is this: It is an opportunity for political journalists to retrench and concentrate their energies on what they are best positioned and best qualified to do: work sources, file ATIP requests, comb through public databases, and break stories that are in the public interest. That in turn creates a space for academics to insert themselves directly into the conversation through their own devices (Twitter, blogs, etc), or through more traditional means such as op-eds or essays. (I can't think of a better example of this than Peter Loewen's recent essay for the Citizen looking at what Stephen Harper is up to.) 

Canadian politics is in need of both better reporting and better contributions by academics. Glen McGregor's manifesto is an excellent first step at articulating the proper division of labour that will take us in that direction.


Gun violence: the economics of abolition

There's lots of talk about America needing to step up on gun control. I suspect that for a lot of people, this is a disguised way of talking about abolition -- that is, the elimination of the private ownership of guns of any sort.

If so, that's fine. It is certainly worth putting that option on the table and airing it. I doubt it would go anywhere, not in Canada, and not in the USA. But imagine for the sake of argument that a government passed a bill outlawing private ownership of guns: it would still be faced with the problem that there are a lot of guns in the country. Stats I've seen vary, with estimates between 250 million and 310 million private weapons in the USA.

If you wanted to take these off the street through a buyback, what would it cost? Again, buyback programs vary. $200 for a handgun is common, with some programs offering as low as $20 for a rifle. Some programs I've seen have offered $100 gift cards to places like Target. But these are voluntary buybacks, taken advantage of by people who either want to go clean, or have guns they no longer want. A forced buyback program would be far more expensive. 

Assume you wanted to take 280 million guns off the street, at an average buyback price of $200. Total cost would be $56 billion. (That's probably low, but it's a ballpark.)

Would it be worth it? 

[Note: I fixed the math in the next graph thanks to Andrew Coyne's heads up]

Last year in the US, there were 11,500 homicides caused by guns. The actuarial value of a human life is $7.4 million. Multiply those, and you have a savings of $85.1 billion in one year. Subtract the $56 billion buyback cost and you are left with a net savings of $29.1 billion, call it $30 billion, in the first year. But that's not a one-off -- that's $85.1 billion a year after that, every year, compounded. (I think. I forget how to calculate these sorts of things.)

There would be other benefits: lots of people are wounded by guns, so their health care costs and associated other costs would be eliminated as well. But there would be costs as well: it would be foolish to suppose that private gun ownership is a 100% deadweight loss to the economy. 

At any rate, the upshot is that the American government could, if it wanted, easily afford to pay for gun abolition, and it would more than pay for itself in about 8 months. 
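For anyone who wants to check the back-of-envelope arithmetic above, it can be sketched in a few lines of Python. The figures are the post's own rough estimates (280 million guns, $200 average buyback, 11,500 gun homicides, $7.4 million per life), not authoritative data:

```python
# Rough estimates from the post above -- not authoritative data.
guns = 280_000_000            # guns to buy back
price_per_gun = 200           # average buyback price, USD
gun_homicides = 11_500        # US gun homicides in one year
value_of_life = 7_400_000     # actuarial value of a human life, USD

buyback_cost = guns * price_per_gun              # one-time cost
annual_savings = gun_homicides * value_of_life   # recurring benefit

net_first_year = annual_savings - buyback_cost
payback_months = buyback_cost / annual_savings * 12

print(f"Buyback cost:   ${buyback_cost / 1e9:.1f} billion")
print(f"Annual savings: ${annual_savings / 1e9:.1f} billion")
print(f"Net first year: ${net_first_year / 1e9:.1f} billion")
print(f"Pays for itself in about {payback_months:.0f} months")
```

Running it reproduces the $56 billion cost, the $85.1 billion in annual savings, and the roughly eight-month payback period.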



Politics: The Naïve and Cynical


1. Here is a naïve view of how politics works.

Politics is about policy. Groups of like-minded people coalesce around a set of ideas about how the world should work. This group is called a party. The party puts forth a platform of policies that will put those ideas into action. The role of the party then is to serve as the interface, or point of friction, where ideas become policies. To gain power, the party promotes and sells these policies to the public as better than those of their opponents.

Thus, the adversarial nature of politics is essentially a debate over which policies are objectively superior. An election campaign is when the marketplace of ideas is open for business. It is like a graduate seminar in philosophy, where ideas are freely debated, the principle of charity is in full operation, and the best ideas win, whatever their source.

The goal of this public debate is truth: truth regarding the demands of justice, the requirements of redistribution, and the scope and character of the public goods that the state should offer. The more people have input into the process, the closer we will be to the truth.

When the party with the best ideas wins, and the better policies are thereby implemented, the country as a whole is better off. As John Stuart Mill taught us, truth is non-rival -- that is, everyone can share in the truth without it being diminished or depleted.

The crucial trait of a successful politician is that he or she be intelligent. Political leaders should be smart people. Better: they should be policy wonks, charismatic academics, philosopher kings who will rule in the better interest of all. The model naïve politician is someone like Pierre Trudeau, or Jack Layton.

2. Here is a cynical view of how politics works.

Politics has nothing to do with policy; it is about power. Joining a political party is not like joining a faculty club; it is more like joining a tribe or a gang. A party's overriding function is to gain power and relative status for its group at the expense of other tribes and gangs.

Therefore, a party platform is not a list of policies seen as being in the objective interest of the country. Rather, it is a statement of brand affiliation, or, more simply, identity. The function of the party is to sell its brand or identity as more appealing than that of their opponents. Policies are implemented because of how they appeal to the group and buttress its identity.

Elections are basically popularity contests, not much different from the process of voting for class presidents (read Robin Hanson on this point.) So the point of an election is to make one tribe’s leader seem more appealing than that of the other tribe. The ultimate goal of the exercise is to win power for one tribe. If that requires demonizing the other parties as bad patriots, or bad people, so be it.

For cynics, to govern is to choose between competing interests. There will be winners and losers, with some groups inevitably rising and dropping in status. This is because power is indivisible and rival. One group can only hold it at the expense of others. 

The best politicians are charismatic figures, or gang leaders. They are polarizing figures, ruthless at pursuing the interests of their tribe at the expense of others. Loved, or at least greatly admired by their followers, they are loathed by their opponents.

The successful cynical politician is not necessarily intelligent. What matters is that he is authentic. The relevant question is not “does he have good ideas” but rather “is he a proper representative of my tribe?” The model cynical politicians are men like Jean Chrétien, or George W. Bush.


As used here, the terms "naïve" and "cynical" are not intended invidiously. Instead, they are intended to describe the two extremes of a continuum. Different countries might have different political cultures: some might tend to be more naïve in practice, while others might be more cynical. Citizens of different countries might prefer to be at different points on the spectrum. Some institutions might be more conducive to one form over another.

Yet there is an obvious normative quality to this continuum. Not only can it be used to describe how politics does work, it can also be used as a language in support of reform (or in support of the status quo): we may think that politics ought to be more cynical, or ought to be more naïve.

In fact, the most significant political divide in Canada, and perhaps other polities, is not between left and right, but between those who are cynical and those who are naïve about politics. It informs almost all other opinions about how our political machinery -- including Parliament, the courts, the party system, the electoral system, the media -- should function.

Some examples:

  • The naïve will be in favour of coalition or minority governments and proportional representation. The cynical will prefer majority governments and first past the post.
  • The naïve will have faith in a deliberative approach to democracy. The cynical will rest content with more Schumpeterian forms.
  • The naïve will desire more power for individual MPs or representatives, calling for more free votes in particular. The cynic sees the party as paramount, with party discipline the basis of all political engagement.
  • The naïve will curse the growing reliance on negative advertising as antithetical to the truth-seeking essence of politics. The cynical will see such framing, and the resulting culture of "truthiness," as useful to the in-group/out-group definition that is at the core of political engagement.   

Most arguments between pundits and academics consist of disguised disagreements over which mode of politics is better, the naïve or the cynical. Indeed, most apparently partisan disagreements are, if you scratch the surface, differences of opinion between cynics and naïfs.

To decide whether one is cynical or naïve is the most important meta-political decision one has to make. It is unfortunate that we spend so much time arguing about our partisan biases, and pay so little attention to our meta-political commitments. Whether that itself suggests that we are all, deep down, cynics (or perhaps meta-cynics) is an important question.



Why the truth squads can't beat truthiness

My latest column for the Citizen looks at the entirely salutary development, during the last U.S. election cycle, of media getting back to their old role as fact-checkers. The problem, though, is that fact checking is only effective when truth is seen as a necessary element of political success.

In the age of truthiness, the "problem with the effort to truth-squad our way back to fact-based politics is it misunderstands the way political persuasion works. Successful politicians don’t win over the electorate by giving them a set of plausible facts that in turn motivate a set of policies, they sell them on an attractive narrative. The best politicians, from Reagan to Bill Clinton to Barack Obama, are storytellers."



There is no Muslim Tide

This isn’t to say that there aren’t problems with Muslim immigrant populations in parts of Europe, especially France, Germany and Holland. But in every case, the troubles can be traced to one of three causes: fallout from past colonial relationships; domestic policies that hinder the ability of immigrants to work, to worship and to naturalize; or the particular character of the immigrant community and how it interacts with the host country. So, Bangladeshis and Pakistanis in London are not the same as North Africans in Paris or Somalis in Ottawa. But regardless of how these isolated problems are (or are not) resolved, the key point is that they have virtually nothing to do with a grand Islamic takeover project.

That's from my review of Doug Saunders' new book, The Myth of the Muslim Tide. 


Is Mormonism crazier than other religions?

(The planet Kobol, as imagined on Battlestar Galactica)

Mormons have some pretty wacky ideas. For example, they believe that some of the native peoples of North America were followers of Jesus Christ hundreds of years before he was actually born. Mormon scripture refers to a planet called Kolob that is, or is near, the physical throne of God -- a belief that was the inspiration for the planet "Kobol" in the sci-fi show Battlestar Galactica (the show's creator was a Mormon). Craziest of all: Mormons refuse to consume alcohol, caffeine, or tobacco.

But is Mormonism wackier than other religions? Is all religious belief equally plausible, or implausible? Or, does plausibility fall on a continuum – a line running from the completely absurd to the thoroughly reasonable?

As a rule, most of us -- believers and atheists alike -- seem instinctively to accept that there is a continuum. For example, consider the evolutionary biologist Richard Dawkins, who in recent years has made a name for himself as the leader of a new group of aggressive atheists, a group that also includes Sam Harris, the philosopher Daniel Dennett, and the late Christopher Hitchens. Last month, Dawkins went on a long twitter rant accusing Mitt Romney (who in addition to being the Republican nominee for president was also a Mormon bishop) of being a “massively gullible fool.”

The focus of Dawkins’ attack was Romney’s adherence to the teachings of the Book of Mormon, which is the sacred text of the Latter Day Saints religion. While the book was published in 1830 by Joseph Smith, Mormons believe it contains the writings of prophets who lived in North America between 2200 BC and AD 421.

“Bible & Koran genuinely old, written in the language of their time. Book of Mormon written by 19thC charlatan. Romney too stupid to see it,” Dawkins tweeted. When he was challenged by readers who pointed out that president Barack Obama was also a believing Christian, Dawkins responded: “Christianity, even fundamentalist Christianity, is substantially less ridiculous than Mormonism (and Obama, if he is Christian at all, is certainly not fundamentalist),” he explained. “The idea that Jesus visited America is preposterous, and the idea th[at] Adam and Eve did too is even worse (it is at least arguable that Jesus existed).”

Another example: many Canadians will remember when Stockwell Day, an evangelical Christian who believes that the Earth is somewhere between 6,000 and 10,000 years old, was leader of the Canadian Alliance. During the 2000 federal election, Liberal operator Warren Kinsella mocked Day’s beliefs by brandishing a Barney the purple dinosaur doll on television, claiming "this was the only dinosaur ever to be on Earth with humans."

What makes this interesting is that Kinsella himself is a self-declared practicing Catholic. Yet as Kinsella's mockery of Day, and the glee with which the "Flintstones" theme of his campaign was picked up by the media, make clear, there is a widespread sense that Catholics are less brainlessly credulous than Young Earth evangelicals.

So the idea would seem to be that the more a religious belief accords with generally accepted scientific views of the world and the universe, the more credible it is.  Let’s call this the Kinsella-Dawkins thesis.

According to this thesis, it is one thing to believe in an omniscient, omnipotent, and benevolent deity who, a few thousand years ago, impregnated a middle eastern woman named Mary with His only begotten Son, and then sacrificed that Son to atone for all the sins of Mankind (sins which were invented in the first place by said deity.) It is something else entirely to believe that 600 years before his son Jesus was born, that same deity led a people from Jerusalem to the Americas, where they grew and split into a pair of warring factions.

Or again: It is one thing to believe in the central doctrine of Christianity, namely, the literal resurrection of Jesus. It is something far stupider, though, to believe that the Earth is at most 10,000 years old, and that God put dinosaur bones and other artifacts in the historical record to test our faith (as many young-Earthers maintain.)

This thesis definitely has a lot of plausibility. It would help account for our folk hierarchy of belief, which seems to allow for degrees of respectability between childish fears of the supernatural, at one end, and the wisdom of millennia that we find in the more robust religious traditions, especially the monotheistic ones.

But for a committed atheist, the Kinsella-Dawkins thesis concedes too much. What it gives up in the name of superficial plausibility is the underlying principle at the heart of the atheistic worldview. To properly see why this is the case, it's useful to recall something that Dawkins himself wrote in his best book, the primer on evolutionary biology The Blind Watchmaker. As Dawkins points out, what we are trying to explain through religion is exactly how organized complexity came to exist in the universe. The theistic answer is: God created it.

The problem with this answer is that it presupposes exactly what we are trying to explain. Whatever else God may be, he is organized and complex. If we can simply posit organized complexity, then we haven't really explained anything.

That is why evolutionary theory is so unanswerably powerful. Only evolution by natural selection, or some similarly "blind" process, is capable of explaining how organized complexity came from disorganized chaos. Every explanation that relies on a consciousness, a higher power, or any sort of pre-existing organizing principle is simply assuming the problem away.

But if that's the game we're playing, then what difference is there between Catholicism and Mormonism, or Hinduism and Islam? It's all of a piece: an equally adolescent commitment to wishful thinking and to the supernatural. After all, once you have accepted that there are conscious, invisible and unknowable forces at work in the universe, does it really matter how many of them you buy into? If God can resurrect his son for a long weekend, surely he could also arrange things so that a 10,000-year-old planet appears to be billions of years older. If there is an omniscient power in the universe, is it less plausible that he lives on a planet a few thousand light years away than that he resides in an unknowable realm where he hears our prayers and grants salvation according to whim?

Arguing over religious belief is like playing tennis without a net: almost any hit counts as a good return. Under these circumstances, it is pointless to debate the question of who is the better player. The only issue is why anyone finds it useful to play at all.


The past is the future of paid content

With the news that pretty much every newspaper in Canada is going to some sort of paywall or "metered model," debate is raging once again over whether consumers will ever pay for content. They have and they will. The trick, as someone from the recording business taught me long ago, is to make it seem free without actually being free. The model is radio. Here's a column I wrote nearly five years ago on the subject. My belief in the soundness of the central argument hasn't changed.




Mon Feb 25 2008 
Page: 14 
Column: OPINION 


The lengths to which some people will go to avoid picking up the cheque. At the end of January, a 28-year-old Brit named Mark Boyle began what promises to be a 30-month trek from England to India, for which he is bringing some T-shirts, bandages, and an extra pair of sandals. Significantly, he is leaving his wallet behind, hoping to survive entirely off the kindness of strangers.

Mr. Boyle is walking to promote the values of the "freeconomy" movement, a group that claims 3,000 members in 54 countries. Advancing the bold and original thesis that money is the root of all alienation, freeconomicists believe we need to shift from a "money-based, community-less society" to a "community-based, moneyless society." And so Mark Boyle will strike a blow for community by spending the next 2 1/2 years cadging free meals from Bristol to Porbandar.

It comes as no great surprise then that Boyle is a former dot-com businessman. It is cyberculture, and its confluence with hippie values, that is helping drive the copyright wars, one of the most pointless economic conflicts in recent memory. Dedicated to the proposition that "information wants to be free," the Free Culture movement believes content such as news, books, film, games, but above all music, should be free in two senses: free as in speech (there should be no censorship or control over how culture is used); and free as in beer (the culture should be free for the taking).

This movement is opposed by music producers, film studios, and other content producers, who are lobbying for more stringent penalties for illegal downloading and for stricter controls on how content can be used and copied. Here in Canada, the Conservative government is preparing to introduce an updated copyright bill, but it is facing stiff resistance from "copyleft" activists who worry that the new legislation will give in to Big Copyright's most outrageous demands.

And so the two sides are locked in an increasingly polarized dance, with each advocating a perverse and unsustainable business model. It was left to Paul McGuinness, the long-time manager of U2, to try to knock some sense into them. At a conference in France last week, McGuinness gave a speech in which he blamed internet service providers (ISPs), fund managers, and the hippie culture of Silicon Valley for destroying the recording industry, and he went on to propose that a fee for legitimate downloading should be collected by ISPs and paid out to copyright holders.

For his efforts, McGuinness was flogged around the blogosphere, where he was variously accused of being greedy, hypocritical and -- worst -- "corporate." Except that he's right about the influence of hippie values on Internet culture, as well as his suggestion for how to bring the copyright wars to an end.

The profound influence of the counterculture on cyberculture is not remotely controversial. Scratch a file-sharing activist and, more often than not, you'll find someone who deep down just doesn't like the idea of paying for music.

But that is a bit of a cheap shot. After all, nobody likes paying for music, any more than they like paying for food or drink or shelter or anything else. People pay for things when there is stuff they want and shelling out is better than the alternatives of stealing it or going without. All the Internet has done is make theft the most palatable option of the three, while a halfway measure such as 99-cent downloads on iTunes only serves to foreground the main question, namely, why should you pay for something that other people are getting for free?

If you're trying to square the notion of free culture with how the economy works, a handy rule of thumb is this: in the end, the consumer pays for everything. So when it comes to seemingly free media like radio and television, they are funded for the most part by commercial advertising, which is in turn paid for at the cash register by consumers.

The trick to resolving the copyright wars once and for all is to come up with a scheme for making downloading a similar experience to listening to the radio or watching TV: it would seem free, while ensuring that copyright holders actually get paid.

So how can we make file sharing seem free without it actually being free? Canada currently has a levy on blank recording media (such as CDs) that is collected by the Copyright Board and passed on to copyright holders, but a plan to extend the levy to MP3 players was struck down in early January by the Federal Court of Appeal. The most promising idea is a version of McGuinness's tax-and-distribute model, in which the government charges a basic Internet access tax, collected by ISPs, that would give users an unlimited right to download songs, videos, books, games, and so on. The fee would then be paid out in royalties by the Copyright Board in much the same way it is currently done for radio.

Most importantly, it would allow artists to be paid, in a way that doesn't rely on draconian copyright controls on the one hand, or the kindness of strangers on the other. In the end, you get the culture you pay for, which is why the motto that everyone involved should be rallying around is "Free Lunch." As in, there's no such thing as a.



"the incoherent bleating of the Wasposphere elites"

Terry Glavin is a friend and a comrade, and man alive I hope it stays that way. If I ever find myself on his bad side, I hope he is a good long hike from the nearest computer. 

Terry took a summer break from columnizing for the Ottawa Citizen, but he's back today, weighing in on the hand-wringing over the closing of the Canadian embassy in Tehran and the expulsion of Iran's diplomats from Ottawa. His big target is Ottawa Centre MP Paul Dewar, who lamented the absence of more "robust diplomacy." After giving a short laundry list of the sorts of things Iran's fellow travelers get up to, Terry concludes:

It is by these instructive evidences that “robust diplomacy” betrays itself as something worse than mere war. It’s cannibalism with table manners, and nobody has any business calling themselves a socialist, a liberal, a progressive or a social democrat if they engage in anything of the kind.          


Two concepts of secularism and Canada's two solitudes: A limited defence of Pauline Marois

There is a lot that divides anglo and franco in Canada: Leafs vs Habs, Corner Gas vs Tout le Monde en Parle, Air Farce vs Juste Pour Rire. But nothing says “two solitudes” more than the distinct approaches to secularism you’ll find in Quebec and in the ROC.

The depth of the mutual incomprehension was revealed recently during the Quebec election campaign, when Parti Québécois leader Pauline Marois released a platform item called the Charter of Secularism, which would forbid public employees from wearing any religious symbols while at work. So, no turbans or hijabs or kirpans or that sort of thing. On the other hand, a crucifix necklace would be OK, as would the crucifix that hangs in the legislature in Quebec City.

According to Marois, the new charter (and its notable exceptions) would serve a dual purpose. First, it would assert the principle of the neutrality of the state. And second, it would affirm the particular place of Catholicism in Quebec’s history.

"Wanting to take a step toward ensuring the neutrality of the state doesn't mean we deny who we are," she said while campaigning. "It simply means we are at a different moment in our history."

For this she was given the standard moralizing treatment the Anglophone media traditionally reserves for Quebecers at home and Republicans abroad: From the Globe and Mail: “On tolerance of minorities, Pauline Marois is showing the opposite of leadership.” From the Toronto Star:  “PQ’s ‘secularism’ masks European-style intolerance”. (According to the Citizen’s Robert Sibley, Marois’ problem is not that she is intolerant, but that she isn’t intolerant enough. But that’s another argument).  The upshot is that when it comes to asserting both state neutrality and Quebec’s Catholic origins, Marois was seen in the ROC as a hypocrite at best, but more likely a rank xenophobe.

Here’s a more charitable interpretation of the Charter of Secularism: It expresses a philosophically legitimate approach to the question of the proper relationship between church and state, albeit an approach that may no longer be appropriate to the challenges that Quebec faces in dealing with minorities.

There are two broad theoretical versions of the secular state (taken from Charles Taylor’s essay, “How to Define Secularism”). Each affirms the idea of a neutral state, but the form that neutrality takes is shaped by the problem it is designed to solve.

On the first view, the goal of secularism is to control religion, to “define the place of religion in public life, and to keep it firmly in this location.” As Taylor points out, this doesn’t need to involve any overt repression, “provided various religious actors understand and respect these limits."

On the second view of secularism, the point is not to control religion narrowly understood, but to manage the entire spectrum of comprehensive worldviews. These include organized religious outlooks, but also encompass vague types of spiritualism, scientism, atheism, and competing philosophical doctrines such as utilitarianism and deontology. All of these have differing (and possibly incompatible) notions of the good, and will hence come into conflict in the public sphere. The point of the neutral state is to find a way of accommodating and mediating between all of these worldviews.

Let’s call the first secularism “French,” and the second, “English”. They correspond, more or less, to the forms that emerged out of post-Enlightenment France and England, and they are responses to the distinct challenges religion posed to each society.

You can see the downstream effects of both these challenges in the way France and the UK approach secularism today. France continues to treat it as a way of controlling religion by purging it from the public sphere – hence the 2004 ban on conspicuous religious symbols in schools (sound familiar?) and Sarkozy’s 2011 ban on the burqa and other face coverings.

For its part, the UK has fully embraced the second interpretation of secularism – usually called “multiculturalism” – as a device for allowing each person the maximum amount of freedom that is compatible with the same degree of freedom for others.

It doesn’t take a great imagination to see how these two conceptions of secularism have been transposed into the Canadian context. The ROC is a fairly typical multicultural state, again with a few tweaks and variations from the sort found in the USA, the UK, and Australia. For its part, Quebec has pretty much copied the French approach, with a slight difference: Quebec still sees value, and little harm, in permitting Catholic symbols in official and public spaces.

But (goes the objection), isn’t this “slight difference” a major problem? Isn’t Marois’ proposal to allow the crucifix to remain in the legislature a sign of her profound bad faith?

I’m not so sure. Montreal is one of the most secular and irreligious cities on Earth, but its residents go about their business in the shadow of a giant cross that glowers down at them like a stern bishop. But no one takes it seriously, any more than anyone takes seriously the crucifix in the national assembly. The reason rests in the big difference between the French Revolution and Quebec’s: The complete absence of violence here in the New World. Quebec’s was a Quiet Revolution; there were no beheadings, and there was no need to strangle the last king with the entrails of the last priest. All Quebecers had to do to shuck off the church was take control of their education and health care systems.

This is an important point: Quebecers don’t see any need to ban Catholic symbols from public space, not because they are hypocrites, but because those symbols are no longer a threat. That battle has been fought and won. But the symbols of other, foreign religions are seen as a threat to the secular order, hence the perceived need to control their use.

If this sounds like special pleading, consider a comparable case: the ongoing public funding of a separate school system for Catholics in Ontario. By any reasonable standard, the separate school system is an affront to liberalism, multiculturalism, anglo-secularism, whatever you want to call it. There is simply no rational justification for it, apart from the fact that it’s in the constitution. But when Conservative leader John Tory ran an election campaign a few years ago promising to even things up by allowing funding for all religious schools, he was mocked into political oblivion. Ontario Liberal premier Dalton McGuinty’s ongoing defence of the status quo is far more hypocritical than Pauline Marois’ charter when it comes to consistent secularism.

This is not to say that when it comes to French versus English secularism there is nothing to choose between them, that each is just a matter of local taste mixed with historical happenstance. Each form of secularism was an institutional response to a specific threat, and each served its respective society quite well. But looking to the future, we can’t say that each is equally suited to the challenges it faces. In particular, the French/Quebecois model is overly focused on the threat of religion narrowly understood.

Neither France nor Quebec is in any danger of being taken over by a single religion – they are not about to revert to theocracies. The real problem is the same one confronting every other major industrialized democracy, viz., the challenge of managing deep and irreconcilable diversity. Spending time and energy and capital trying to keep each new religious group in its place is political and cultural whack-a-mole. Instead, they should concentrate on putting in place programs and policies and institutions that will allow the fantastic diversity their societies have to offer to co-exist.

If the rest of Canada has better policies, and better practices, it is largely thanks to historical factors that are none of our doing. If we insist on taking credit for them while making invidious comparisons with Quebec, the least we could do is make sure that we’ve purged our institutions of all inherited biases.




Space: The abandoned frontier

I wrote this for Maclean's a few years ago. I have somewhat (but not completely) different views on this right now (see the previous blogpost). But the generally wistful sentiment remains. 


The news media reported last week that NASA's robot rover Spirit, stuck in the Martian equivalent of a ditch, is still spinning its wheels in the deep powder like some suburban doofus trying to free his SUV from a snowbank.

NASA scientists have been working hard trying to figure out some way of rocking the space buggy free, and they hope to give this a shot in a few weeks. But in the meantime, the trapped robot explorer serves as a perfect metaphor for humanity's entire extraterrestrial ambitions.

For space keeners, this should be a week of at least mild celebration. After six tries, the space shuttle Endeavour finally made it into orbit, on its mission to complete the construction of a Japanese-designed veranda that will house science experiments outside the pressurized space station. There are more humans in orbit than ever before, including two Canadians. Encouraging, no?

No. The mission comes framed against the attention given to the 40th anniversary of the Apollo 11 mission that saw humans bounce around for the first time on another world. And in light of what Armstrong and Aldrin accomplished, and the era of great exploration that everyone expected would follow, the baker's dozen of astronauts spinning around in low orbit, still caught in the clutches of the earth's gravitational pull, looks pretty pathetic. As Tom Wolfe, the prose-poet of America's quest for the stars, put it in a recent op-ed for the New York Times, "If anyone had told me in July 1969 that the sound of Neil Armstrong's small step plus mankind's big one was the shuffle of pallbearers at graveside, I would have averted my eyes and shaken my head in pity."

But here we are, four decades gone, and the spacefaring dreams of humanity are dead and buried. Not only have there been no manned missions to Mars and no permanent moon bases, no human has so much as ventured out of orbit since 1972. It's as if humanity, having learned to swim by being tossed right into the deep end, opted to spend the rest of the time by the pool clutching the edge.

For decades now, the "space program" has amounted to little more than strapping some humans to a tube, sending them roaring thuggishly up through the atmosphere, and -- once finally free of the cloying wetness of air -- stopping dead, only to whirl about the earth in the name of science. Imagine if Columbus, having brought the Nina, Pinta, and Santa Maria safely back from the new world, spent the rest of his career tacking back and forth in the harbour at Palos, studying seasickness or testing chronometers.

Of course there are loads of excuses for why we've spent the last four decades doing space doughnuts. It's expensive. It's hard. It's slow. It's cold. There's no air. No gravity. And when they aren't crashing, getting lost, forgetting to return phone calls, or getting stuck in space dust, robots can do whatever sciencey things we need done up there.

But we all know the real reason we abandoned space exploration: Communism failed, the Americans won, and history ended. John F. Kennedy did a good enough job wrapping the moon mission in a lot of "for all mankind" hokey-pokey, but that's not the UN flag stuck in the dirt in the Sea of Tranquility. As the Lyndon Johnson character in The Right Stuff put it, "I for one do not go to bed at night by the light of a Communist moon."

The space race, and all the hopes and fantasies it inspired, was always a creature of the Cold War, an exercise in superpower one-upmanship. That doesn't mean the ideals it inspired were false or not worth pursuing, only that it is on this field of striving, the prideful struggle for recognition, that courage, honour, and daring find their home.

There is nothing noble or honourable about our ambitions in space these days, no serious pride to be taken in what we're accomplishing. Putting together the space station is dangerous work, but big deal. So is working on an oil rig, and we don't build monuments or sing hymns to oil rig workers.

It would be nice if the Chinese got more aggressive in space, especially if they were to make a serious go at Mars. Perhaps the fear of the red planet becoming a Red planet would help shake the Americans out of their orbital slumber. But it is not America that is the real problem here, nor is it about "the West." It is the honour of all humanity that is on the line.

Because the odds are that some day, eventually, we're going to be visited by an alien civilization. It may be next week, it may be in the year 12009, but over the near-eternity of time this galaxy is surely going to fill up with a buzzing curiosity of life. Intelligent races will rise who will look to the spiral arms of the Milky Way, wonder what's around the next bend, and set out to take a look.

When they get here, what will they find? An intelligent but distracted species fussing with Facebooks and iPods and Xboxes while a great game unfolds over their heads. Indeed, we may have missed our window of opportunity to leave earth; with all the developments in information technology, the appeal of moving into outer space fades in comparison to the easy amusements of virtual space.

But the shame of it all. On their way here the aliens will see the Spirit rover, stuck for millennia in the Martian mud. They will look around and see our footprint on the moon, no bigger than a baseball field. And they'll point at us, galactic laughingstocks, the species that looked briefly to the stars and said, "no thanks."


Space: The impossible frontier

Neil Armstrong's death on Saturday has spurred the usual reminders of how scaled-back our collective ambitions for space exploration have become. There are lots of reasons why we don't have moon bases, some having to do with lack of ambition or lack of money. But I'm increasingly inclined to the view that the problem of space travel is simply intractable, for three main reasons:

First and most obvious is the problem of speed and distance. Space is too big, and we travel too slowly, to get anywhere beyond this suburban cul-de-sac of a solar system, in this already unfashionable arm of the Milky Way.

The second is lack of gravity. Spending months and even years in zero G is far too tough on the human body. The effects on bone density, muscle mass, and eyesight are big problems, and any plausible interplanetary spaceship needs to find a solution.

Finally, there is the problem of our ecological niche. This is the least-understood of the problems with space travel, but probably the most serious. We don't just need food and oxygen -- things that are easy enough to bring into space. We need, for long-term space travel, an entire ecosystem, from intestinal flora on up.

The upshot is that the human body isn't a sort of computer module, a plug-and-play device that can be severed from its connection to the entire ecology of planet earth and sent on its way to the stars. Any possibility for human space travel will require, I think, that we find some way of bringing earth with us. That is, any reasonable space ship will allow us to survive over millennia, and provide us with a sustainable ecosystem that we carry with us. It will also have to have some sort of gravitational field generator that will exert something close to 1-G of pull on our crappy little monkey spines.

Which is just another way of pointing out that we are already traveling through space, on the only plausible spaceship we can imagine. It's called Earth, and it is carrying us slowly through the cosmos, to wherever and whenever, only heaven knows.


A sentence that should be tattooed on the forehead of every libertarian

"Increases in the price of what the federal government buys relative to what the private sector buys will inevitably raise the cost of state involvement in the economy"

Lawrence Summers in the WaPo


A plea for no excuses: Why the ref didn't cost Canada soccer glory

Heartbreak for Canada as they lose to the Americans 4-3 in overtime.

UPDATED: I wrote a column-length version of the argument for the Ottawa Citizen.

Success in sports is a function of five components: preparation, strategy, tactics, execution, and chance. The relative importance of any one of these components varies considerably from one sport or event to another (for instance, strategy plays a bigger role in the 10 000 metres than in the 100 metres), but every competitor's outcomes are determined by the interplay of all five.

Preparation involves everything the athlete does to make sure that he or she is able to perform properly. Most obviously, this involves training, practice, rehearsal, and so on. But increasingly preparation also involves mental preparation -- techniques of visualization, of focus, of dealing with anxiety, and all the other ways athletes can undermine good training and proper execution through mental errors. It also involves other forms of preparation as well, such as nutrition. 

Strategy is your plan for achieving your desired objective or outcome. But it is more than just a plan in the sense of a fixed set of decisions. Rather, strategy is about shaping the terrain or circumstances of the competition so that they best suit your training and abilities, and so that any emerging possibilities or options can be exploited to maximum advantage. In a swimming race, strategy might involve prior decisions about pace -- go out hard and try to lead, or hang back and come from behind. In volleyball, a strategy might be to consistently serve and set blocks in such a manner as to force setters to favour one side of the court over another. In soccer, strategy is largely (but not exclusively) about shape -- 4-3-2-1, or 4-4-2, and so on.

Tactics are the techniques you use to gain advantage in a competition, in light of the options that arise within a given strategic context. To use a military analogy, if strategy involves choosing and shaping the field of battle, tactics are the weapons and manoeuvres you use in battle itself. How a cyclist responds to a breakaway by a handful of riders -- join the breakaway, or stay with the peloton -- is a tactical decision. A hitter's decision in a volleyball game over whether to try to hit over the block or to tip short is tactics, as is a midfielder's decision to try to hold possession, or push the ball upfield aggressively. Most people, when they talk about sports, talk about tactics. (Hockey analysts, in particular, are notorious for completely ignoring the strategic dimension of the game). 

Execution is the performance of a tactical decision. In the simplest events or sports, execution of a task or movement is what puts the competition in motion and, iteratively, drives it towards its conclusion. As H.A. Dorfman puts it in his book The Mental ABCs of Pitching, "the execution of pitches, one at a time, is the singular task that moves a baseball game from its beginning to its close." The pitcher's role consists entirely of selecting a pitch (fastball, slider, changeup) and a location (inside, outside), focusing on the target (the catcher's mitt), and delivering the pitch to the target. Sports like diving and gymnastics have a lot in common with pitching, in that they are almost zen-like in their simplicity: manoeuvre selection (the kind of dive, or vault), its mental visualization, then physical execution. But every sport ultimately comes down to execution, from stroke or stride quality and consistency in swimming or sprinting to serve delivery in volleyball to proper pace and accuracy of passing in soccer. 

Chance is the most difficult and morally fraught aspect of sports to come to grips with. Every game, sport, or event involves elements of chance: lane assignments in rowing or swimming; weather that can affect performance in cycling road races; lucky bounces in tennis or volleyball; bad refereeing in soccer; the politics of judging in gymnastics, diving, and other events.

For many athletes (and many fans and observers), success in sports boils down to the intersection of execution and chance. Athletes perform, and the result is determined by how their proper execution is compromised or amplified by chance. A lucky bounce can make up for poor execution and put you on the podium; a bad decision by a referee can undo the effects of proper execution, lead to a loss, and undermine years of hard work and preparation. 

For many -- indeed, for almost all -- athletes, chance is a security blanket. It is what provides the excuses for when things don't go the way the athlete hoped. The track was wet. I drew an outside lane. The referee jobbed us. Almost every athlete reaches for the excuses at one time or another, and for good reason: it deflects criticism away from poor execution, or from questions of whether the athlete trained properly, or if the coach chose the proper strategy. 

The best athletes - and by this I mean the athletes who are so elite they make mere Olympians look mortal -- have a different perspective. For these competitors, execution is all there is. As Dorfman puts it, what every pitcher has to realize is that "as he is competing, the execution is all that matters -- because it is what he can control". 

Here are two examples I have come across over the course of the London Olympics. The first, purest expression of execution as the entirety of the sport, comes from Usain Bolt after his victory in the 100 metres, in which he ran the second-fastest time in history:

"It wasn't a perfect start, so I had to execute from 50m and I knew I was going to do well after that," said Bolt. "I just ran. I'm not going to say it was a perfect race because I know my coach is going to say no."

At first glance, it sounds like he might be making a bit of an excuse for why, despite his victory, he didn't set a world record. But what Bolt is doing is simply explaining how the race went. Slow start, but instead of getting rattled or worried, he trusted his training and his ability, and stuck to the game plan: execute over the last half of the race and no one can beat him. Here's what he says was going through his mind as he began to execute:

"I never remembered I was running against the clock until it was 30 metres to go, then ‘world record’ popped into my head. I looked across at the clock but it was too late to do anything about it then."

This is an athlete who was completely in the zone. You can see it in the replays. Unlike most Bolt races, where he seems overly conscious of the crowd before the start and of the other sprinters during the race (he tends to look around more than any other sprinter), in the 100 metres at London he was executing from the very start. The other sprinters, the crowd, even the timing clock, might as well have not existed.

Consider then Clara Hughes, who competed for Canada in cycling. Hughes is a six-time Olympian who has won six Olympic medals: two bronze in cycling (road race and individual time trial) from the 1996 Games in Atlanta and a gold, silver and two bronze in long-track speed skating from the 2002, 2006 and 2010 Winter Games. Hughes finished what for many was a disappointing 32nd in the women's road race, in pretty harsh conditions. Here's what she said after the race:

“It was terrifying,” said Hughes. “It was like really technical, and the roads were pretty slippery. Crashes. I mean racing in the rain is not fun. This is like three out of three Olympic road races for me in the rain.”

Excuses? Not remotely. What Hughes was doing was giving a largely dispassionate analysis of the circumstances of the race. In fact, what many people had trouble understanding was why Hughes didn't seem even slightly upset with her position. Again, after the race: “It was epic. It was awesome, though,” she said, smiling. 

For Hughes, the only thing that mattered was her execution. Yes, she said she had got stuck behind another rider just as she was getting ready to go with a breakaway group. But as far as Hughes was concerned, the race went as well as she might have hoped. Here's how Jonathan Gatehouse of Maclean's reported it (and note the title of his piece):

“When I look at my placing, you can say that I’m disappointed,” she said. “But when you look at my effort and everything I put into this, I’m not. I felt good—in the sense that it felt like hell. But in terms of what my effort was I suffered, and that means it was good. I gave everything I had, but it just wasn’t good enough.” 

The refusal to make excuses is what distinguishes mere Olympic athletes from true legends. Again from Dorfman: Excuses engender weakness, rather than courage. Worse, they prevent the excuse-maker from making the adjustments or corrections they need to make. 

This is why I'm a bit concerned by the reaction coming from the Canadian women's soccer team after their epic loss to the USA in their semi-final match today. It was as close as can be, with the Americans scoring in the last minute of overtime to snatch a 4-3 victory. It was the only time in the game the Americans had led: despite being heavily favoured, they had fought back from 1-0, 2-1, and 3-2 deficits. The Canadians played very hard, and despite being outmatched at almost every position, it was a game they should have won.

Why didn't they? Well, to hear the Canadians tell it after the game, they were robbed by the referee. Ahead 3-2 with twelve minutes to go, the Canadian keeper was penalized for holding on to the ball for longer than the six seconds the rules allow. It's an infraction that is never called, as everyone, including the Americans, conceded. But the Americans were given an indirect free kick inside the box; the rocketed shot hit a Canadian player in the shoulder, and the ref awarded a penalty kick. Abby Wambach buried the kick for the Americans, tying the game.

Afterwards, the Canadian goalkeeper Erin McLeod said, “We feel like we got robbed in this game.” Team captain Christine Sinclair -- one of the greatest players in the world, who scored all three goals for Canada in the game -- was more blunt. “We feel like it was taken from us,” she said. “We feel cheated.”

Of course it was a bad call, but it did not hand the Americans the victory. Keep in mind a few things.

First, the Canadians had the lead twice before that in the game, and lost it both times. That is, they were not able to maintain the aggression needed to keep the more talented Americans off-balance. As any competitive athlete knows, sealing the deal is the most difficult thing to do. Why? Because it is about continuing to execute when the pressure is on, when there is more at stake. As my old volleyball coach used to berate us: "When you've got them by the nuts, you have to squeeze."  

Second, anyone watching the Canada-US game with a strategist's eye would have noticed something fairly obvious, which is that the Canadians were having a hard time all game with American attacks down the (American) right side. In particular, the American midfielder Megan Rapinoe caused endless chaos for the Canadian left backs, creating chance after chance. The Canadian defenders had a difficult time getting goalside of the Americans and closing down the attacks to the left of their goal. 

Mostly it was because Rapinoe is a beast of a player. But could the decision by the Canadians to play a 4-3-2-1 shape have had anything to do with it? Did the coaches fail to adjust their formation or players accordingly? Did the players fail to adapt their tactics to the chaos Rapinoe was causing? 

It's hard to say. But a few things are clear. First, it was not remotely surprising that the winning goal for the Americans came from a beautiful cross from -- you guessed it -- their right wing, after a Canadian defender made a poor tactical decision and left her post to chase down a ball wide to her left. She was way too late, the cross came back in and found Alex Morgan's head, and that was it.

This is not to take anything away from the Canadians. They played a magnificent game -- it was one of the greatest moments of this Olympics, and one of the greatest nights for Christine Sinclair. The refereeing was bad, but the Canadians didn't play perfectly either. (The first American goal in particular was brutal -- a short-side defensive breakdown that let Rapinoe score directly off a corner kick.)

If they are going to have a hope of winning the bronze medal on Thursday against France, they need to stop blaming the ref for their loss against the USA, and think about adjusting their strategy on defense.

In short, they need to focus on execution going forward, not excuses for what is past. That is the path to glory and greatness. 

Getting to Copenhagen

Is Denmark the world's perfect country? I spent a week there in March so I'm clearly an expert on the place. And the answer, increasingly, is yes. I mean, check this out: They are building bicycle superhighways -- wide, smoothly paved bikepaths to serve as commuter arteries for cyclists coming into Copenhagen from up to 14 miles outside the city. The first opened in April, with another 26 planned. 

Why are Danes such keen cyclists? One observer says it's purely about the convenience:

In Denmark, thanks to measures like the superhighway, commuters choose bicycles because they are the fastest and most convenient transportation option. “It’s not because the Danes are more environmentally friendly,” said Gil Penalosa, executive director of 8-80 Cities, a Canadian organization that works to make cities healthier. “It’s not because they eat something different at breakfast.”

But there has to be more to it than this. There is something in the water in a place like Copenhagen -- a combination of high levels of social trust, powerful network effects, and smart planning. But something like this can't be reduced to cost-benefit analysis:

Superhighway users can also look forward to some variation on the “karma campaign,” now under way in Copenhagen, in which city employees take to the streets with boxes of chocolate to reward cyclists who adhere to the five rules of cycling: be nice, signal, stay to the right, overtake carefully and, rather than let bicycle bells irritate you, do your best to appreciate them.

The church of organic

A number of people forwarded me an article by Stephanie Strom that was published in the New York Times over the weekend, about the ongoing battle between organic purists and the increasingly powerful forces of Big Organic. The main character in the story is a man named Michael J. Potter, the head of the independent organic foods producer and wholesaler Eden Foods. 

The spectre of Big Organic has been haunting the industry for most of the past decade, at least since Walmart started selling truckloads of the stuff eight years ago. The rise of industrial-scale organic is what spurred the locavore movement, just as the mass-marketing of local was the impetus for the artisanal craze. 

And so there's not much new to the story -- it's pretty much the boilerplate co-optation fable, where the energetic, ambitious, DIY upstarts have their scene taken over by corporate interests, who then sell a mass-marketed version with all the value-laden authenticity bleached out of it. In this case, the word "organic" merely takes the place of "punk". Think of Eden Foods as Henry Rollins, while Wholesome & Hearty is more like Blink-182.

At the core of the dispute here is the setting of standards for what constitutes organic food. The big food corporations like Heinz and Kellogg have come to dominate the board that approves non-organic ingredients and additives and other inputs: "At first, the list was largely made up of things like baking soda, which is nonorganic but essential to making things like organic bread. Today, more than 250 nonorganic substances are on the list, up from 77 in 2002."

What is interesting about the debate as it plays out in this article is that the question of whether these various "synthetics" should be allowed or not is entirely political. That is, Strom goes the entire article without ever confronting what should be the central issue, which is whether any of the controversial ingredients or inputs are healthy, or good for the environment, or contribute to the taste of the product. All of that is clearly seen as irrelevant to the debate: the term "sustainability" is never used in the article, which is sort of like writing about the Occupy movement without once using the term "inequality".

Instead, the argument over what is properly "organic" is over whether some ingredient meets some mythological standard of purity, fine-graining the ideology in the manner that was perfected by Marxists, and before them, the deeply religious. Indeed, substitute "kosher" for "organic" in this article and you get a fairly good sense of how the debate is playing out. The difference, of course, is that for orthopraxic religions like Judaism, following the rules for their own sake is the entire point. The rules surrounding organic are -- in theory anyway -- directed at a more practical end, like environmental sustainability or better health outcomes.

That's why the last line of the piece is so perfect:

“People keep telling me that all the work we’re doing with organic farming and agriculture and processing, some of that could be deemed charitable work,” Michael Potter says. “Maybe we should start a church.”

I submit that he already has.