You are currently browsing the monthly archive for December 2008.

A recent post by Jonah Lehrer brings to the fore the positive and negative aspects of our brains’ unconscious cognitive processes. He talks of how our dopamine neurons drive us to find patterns of regularity in the world around us. For a language-using, tool-using animal, the ability to detect patterns in things is obviously enormously important. It is by recognising patterns that we can employ words, both verbally and graphically, and it is by recognising social patterns of behaviour that we can engage in sophisticated forms of cooperation. Finally, it is by recognising causal patterns in nature that we can develop sophisticated future-directed methods and strategies for sustaining and protecting ourselves.

But there is a downside to this sub-personal tendency to find patterns in things. We see order in randomness – we see things that are not really there. Jonah uses the example of stock markets; traders employ all sorts of complicated models to predict them, but these are often next to useless. However, I don’t think this is a particularly good example, as the desire to find patterns in stock markets is probably driven by the desire to make money, rather than the feeling of reward that pattern-finding simpliciter yields.

What interests me here is the double-edged nature of decision-making that is informed by pattern recognition and that is driven by dopamine neurons. On the plus side we take a joy in capacities that are so important for us: locating patterns of regularity in an overwhelmingly complex natural world; locating patterns of meaning in an overwhelmingly complex social world. In my opinion, the joy we take in music, poetry and the arts is deeply rooted in our pattern-finding capacities. A perfectly crafted three-minute Motown song might lay down a pattern we find very easy to follow, but now with heightened emotional intensity. A Bach fugue might thrill us with the exquisite nature of its counterpoint variations on a theme – taking our natural ability to detect increasingly fine-grained patterns of similarity and difference to new heights. A Mondrian painting might distill down the essential elements of landscape and our subjective view of it. And even a conceptual work like Robert Rauschenberg’s telegram to the Galerie Iris Clert draws a pattern holding between things that we perhaps haven’t seen or thought about before.

Art is a wonderful example of the double-edged nature of our enjoyment in pattern-finding. On the one hand, we revel in tracking the patterns it traces out – that is, we revel in the familiarity of certain patterns like the human form, but also in testing our skill in tracing complex or unusual ones. On the other hand, some art actually disrupts our pattern-finding abilities – stops us from staying in familiar grooves, creates a dissonance that stirs us to try to understand anew. Adorno wrote about this latter kind of possibility for art, citing the compositions of Schoenberg and Berg as examples of art that disrupts our usual cognitive patterns and forces us to pay close attention to the precise material structure of the object we are engaging with.

I think there is a kind of joy in both these forms of aesthetic pleasure because they tap into three levels of pattern-finding that we excel at as human beings. At the first level, there is the unconscious pattern-finding that serves us so well as natural and social animals. The reward at this level is a kind of comfort in familiarity. At the second level, there is the joy in tracking complex unexpected patterns. The sense of reward here is not so simple – it is a reward that engages our conscious brains; we become aware of our abilities to track patterns because they are pushed to their limits and also because we notice unusual or unexpected ones. To a certain extent, this kind of feeling of reward helps us hone the unconscious level of pattern-finding that informs simple decision-making. But it also enables us to make decisions that engage far more complex data. Finally, at the third level we are forced to reflect on our pattern-finding capacities per se – we reach a meta-cognitive state where we realise that we are constantly imposing patterns and that there are inherent dangers in doing this. In other words, at this level we actually cognise the double-edged nature of our dopamine-driven pattern-finding capacities – we see that they enable us to track features of natural and social reality, yet we also see that in constantly employing this capacity we may in fact be simply imposing form rather than discovering it.

The lessons from this three-level view of pattern-finding and decision-making are the following. Our unconscious cognition can aid us greatly and in many situations should be trusted and not reduced to more self-conscious cognition. But this level of cognition needs to be trained and honed. It needs to be informed by experiences and learning that constantly calibrate its tendency to simplify – we need to be continually open to the new patterns around us. And the aesthetic pleasure taken in art can help us here. But lest we become self-satisfied about our honed capacities, we need to, if we can, reach the meta-cognitive level where we come to see that pattern-finding per se has a double nature – the wonder of our tracking patterns in the material world and the delusion of our imposing patterns that aren’t actually there.

I think there is a joy in exposing this delusional tendency. It is the joy of being – or aiming to be – both an embodied creature connected up reliably through perception to the world, and a self-critical and rational animal capable of reflective insight into its own cognitive predicament.

As reported in the Guardian, neuroscientists at the Karolinska Institute in Sweden have managed to fool people into believing they inhabit another physical body. The experiment involved an optical illusion which convinced subjects they were inhabiting the body of a dummy across the room. The illusion was so powerful that subjects flinched when the dummy was threatened with a knife and even felt they were ‘shaking hands with themselves’ (not quite sure what this really means!) when they walked across the room and shook hands with the dummy. The neuroscientists concluded that the proprioceptive sense of self (the sense of inhabiting one’s own body) is generated by the way the brain integrates multisensory perceptual signals (our sense of inhabiting our own bodies is a kind of ‘trick’ the brain plays and can be manipulated).


Jonah Lehrer reports in his blog that there is a strong correlation between lower IQ and a dietary lack of iodised salt. He also reports that there is a strong correlation between high levels of lead and lower IQ in children (lead is often found in old paint in the USA and lead-painted apartments are often occupied by the poor). These two findings suggest that disadvantage in the poor may be much more grounded in environmental factors than previously thought.


Jonah also reports on a study that claims that happiness is something not only possessed by individuals but networks of people as well. Apparently happiness clusters around groups of happy people and its waxing and waning is far more reliant on factors external to individuals than previously thought – it spreads like a contagion amongst groups and this probably has an evolutionary basis in securing social bonds.


Finally, the Neurophilosophy blog reports on a study which shows that the brain’s response to fear is fine-tuned by culture – that basic responses are hard-wired but that these are calibrated by cultural factors.


This apparent ragbag of studies illustrates in several key ways how brain science might affect the way we understand ourselves in the twenty-first century:


(1)   Our sense of ‘self’ is in part generated and sustained by multiple sub-personal physiological mechanisms, so that it no longer seems possible to think of ourselves as wholly self-directing and autonomous (as Descartes, Kant, Sartre et al thought we should).

(2)   Disadvantage, especially historically entrenched disadvantage, may well in part be grounded in environmental factors beyond the control of individuals (such as where they live, how much they have to eat, what they have available to eat). Is it still possible, in light of this, to expect individuals to drag themselves out of disadvantage by willpower alone? Shouldn’t we at least be focussed on alleviating environmental determinants of disadvantage first?

(3)   The whole ‘self-help’ approach to mental well-being that goes hand in hand with a certain form of individualist capitalism seems to misunderstand how people actually become happy. Happiness seems to be a largely socially embedded and constituted phenomenon, so that learning how to be happy involves learning how to maintain emotional connections to others (so that one’s happiness depends on relations, not intrinsic individual properties).

(4)   Despite what neurobiology might tell us, neuroscience does not fund determinism. There are certain hard(ish) limits on cognitive-emotional processes, but within these bounds, how we behave and perceive ourselves is up to us. Elizabeth Gould’s studies of neural plasticity also support this view of the brain as not only determining us, but as something whose functioning we can to a certain extent determine ourselves.


The overall conclusion then is that ‘we’ are far more determined by extrinsic factors (both social and natural) than previously thought. So where appropriate we need to learn to work with the hard(ish) constraints our brains place on us. But within these constraints, we have the ability to calibrate ourselves and our world to reflect the values and ideals we hold dear. And the hard constraints aren’t all bad – that we can only be properly happy by learning how to thrive in emotionally attuned networks is a constraint that’s just fine with me.

Karen Matthews is a terrible woman. There’s a sentence we’d all sign off on, wouldn’t we? Shouldn’t we? Well, it depends what is meant by ‘terrible’.


What, I hear you say, are you suggesting? That we feel sorry for her? To a certain extent and in a certain way, yes, I am going to suggest that.


When a neglected child commits some offence we say, quite rightly, that we shouldn’t blame her. We say that she is merely a victim of a certain causal history, one that dealt her a cruel fate. But when Karen Matthews does something heinous we forget what has happened to make her the way she is. It’s as if something magical happens at eighteen that allows us to disregard the causal history in the case of a ‘consenting’ adult.


In fact, nothing magical happens at eighteen. What might happen is that a person develops the cluster of abilities that allow her to make informed and responsible choices – that she becomes able to get enough distance, as it were, between her own desires and the decisions she makes. But that clearly didn’t happen with Karen Matthews, who went on living in an adult world with (at least in part) the mind of a selfish child.


That the vast majority of people do in fact develop the autonomy to rise above the egoism of childhood is why we back up laws with punishments. Part of being an autonomous adult is thinking ahead to the consequences of one’s actions, and punishments are there to sway those who teeter toward temptation yet still have the strength of will to resist. But it seems quite clear that Karen Matthews could only think ahead to the gratification of her own desires.


At the risk of reading too much into this case (a risk Chris Dillow suggests we resist), why someone acts as Ms Matthews did is, as Polly Toynbee has pointed out, usually not hard to discern – a broken home, unloving parents, a history of physical and sexual abuse, a lack of education, bad diet, drug and alcohol abuse, perhaps even just genetically inherited low intelligence (although I doubt this is a factor in many cases, and even where it is it can usually be overcome). It is almost always for these reasons that such a person sails past her eighteenth birthday without learning the abilities we unthinkingly ascribe to consenting adults.


This isn’t to suggest Ms Matthews can’t exercise choice simpliciter. It is to say that the choices that appear to her as possible and salient are limited (and in her case downright perverse) because of her underdevelopment as a person. If that’s so, why don’t we view her in the same light as the neglected child? Why do we say she is terrible for making the choices she did? It seems to me the factoring in of the relevant causal history in the one case and not the other is utterly arbitrary.


What usually happens when someone turns eighteen (or thereabouts) is simply that a certain causal history does occur – one which enables the autonomy requisite to live the moral life of a fully developed adult human being. When it has so occurred we take ourselves to be justified in blaming someone for lapses in judgement thereafter precisely because the right array of choices appears to her as possible and salient.


The villain of the piece here is the Kantian-Christian idea that each of us can act morally regardless of our causal histories. It seems to me that this is false. Moral action requires moral deliberation and that in turn requires possessing the cluster of abilities that allow the right choices to appear to one.


This is not just a philosophical position (although I happen to think the position is justified by philosophy alone), it is a position that is beginning to be backed up by science. Neuroscientists can produce brain scans that display the difference between normal and neglected children. What sets them apart is that the former and not the latter possess the neural pathways that fund various cognitive abilities, including the abilities essential for responsible behaviour. Neuroscientists can also produce brain scans that show the difference between adults who can and can’t delay their own gratification in order to exercise self-control and think ahead to consequences, and the difference is the same lack of relevant neural pathways. But it’s not all doom-and-gloom causal determinism: neuroscience also suggests that for most of our adult lives, neural pathways can be engendered anew.


So in a sense Karen Matthews is a terrible woman – she is terrible at being a woman because she still acts with the egoism of the selfish and confused child. An awful failure of socialisation afflicts her just as it afflicts the neglected child.


The bonus of seeing things this way is that the failure, as a social failure, concerns us. And we would do better to be thus concerned than to indulge in the self-congratulatory vindictiveness that seems to abound. After all, did we choose the causal histories that meant for us turning eighteen did actually mark passage into responsible adulthood?

Yesterday, Daniel Finkelstein took up two suggestions for educational reform. The first, from Malcolm Gladwell’s book Outliers, was that kids should spend a lot more time at school in order to improve skills in core subjects like mathematics. But also, given the book’s aim to give an account of the production of geniuses (the account is, roughly, that geniuses are produced not by individual exceptionalism but by sheer hard work and support), the suggestion was that kids spend more time at school in order that there be more geniuses. The second, from Matthew Taylor, Chief Executive of the RSA, in his blog, was that kids spend less time at school; that older kids in their final year of schooling have one day a week where they supervise themselves in independent study.

Finkelstein suggested that because these ideas contradict one another, there must be something at fault. That turns out to be ‘survivor bias’ – both Gladwell and Taylor are guilty of focussing on the cases that support their idea and ignoring the (perhaps numerous) cases where the idea has not been borne out by reality. So for every pupil that thrives on longer or shorter hours spent at school, there will perhaps be fifty that don’t. In other words, both ideas are vindicated by pointing to the cases of success and ignoring the cases of failure. We resolve the contradiction, then, by saying that one idea serves some people better, and the other some other people better.

Finkelstein goes on to suggest that, in light of this, we keep an open mind about what does and doesn’t work in education. I have no problem with that. But I feel that survivor bias is differently weighted with regard to Gladwell’s and Taylor’s ideas.

First, we need to take into account the degree of behavioural adjustment each idea requires. A school in New York with long hours might work if it draws on super-aspirational parents and their expectations. Or it might work culturally in China and Korea for various reasons I don’t feel competent to comment on. But it might be a disaster in many other (say) parts of America and the UK. Conversely, the suggestion that kids in their final year spend one day unsupervised requires far less behavioural adjustment. The wider culture it is being introduced to already values self-reliance, so the suggestion is not alien. But perhaps more important, it is only one day a week in the final year. If it doesn’t work for some kids, it only doesn’t work for them for one year of their schooling.

Second, the Gladwell idea presumes the idea of education is to produce really smart kids. Which of course it is. But long hours slogging in a classroom produces a particular kind of smartness – kids who are really good at maths. Are we sure that is a powerful enough reason to massively change the education system? Won’t there be cons as well as pros to such a change?

Finally, Taylor’s suggestion is in response to pressures such as demographic change and lack of resources to fund public services. Thus it has other reasons in its favour than mere educationalist dogma. Gladwell responds to pressures also: the apparent falling behind Asia of the North Atlantic world. I’m not going to comment on which pressure is more powerful, but in analysing the merits of each idea we should at least take the distinct kinds of pressure into account.

Timothy Garton-Ash recently wrote about the differences between China and ‘the West’ with regard to state intervention and free markets. He made the case that things are not as clear-cut as we might think. China sometimes has a very light touch in terms of Government regulation, and its spending on the public sector is meagre compared even to that bastion of free markets, the US. He also pointed out that there is a very strong entrepreneurial culture in China, perhaps too strong – one of the problems the Chinese communist party faces is tempering inequality and environmental degradation in the face of almost untrammelled private sector growth.

Of course, there are strong statist elements in the Chinese economy, especially in the banking sector (although the West has recently caught up here!). But the point of Garton-Ash’s article was to disabuse us of a tendency to think in terms of a crude dichotomy between West and East along free-market/statist lines.

I wonder whether this way of analysing things really works anymore – are the challenges we face to be met by getting the right blend of free-markets and state control? At a certain level yes. But what about thinking a bit harder about how social organisation in institutions and businesses reflects what kind of individuals we are, what kind of individuals we want to be? What about thinking about where we want forms of social organisation to take us – where do we want to go, is it really high-growth consumerism that we want?

Here’s the connection with motivation: it seems quite clear now that one reason for adopting neo-liberal capitalism is the claim that people won’t be motivated to work hard by anything other than self-interest. Or, that even if they can be otherwise motivated, the motivation of self-interest is so powerful, producing so much surplus wealth, that discounting other forms of motivation can be justified. Here in the UK, New Labour has bought into this motivational model wholesale.

But this claim has hardly any credibility any more. At the level of the brain, research is showing that altruistic and other-regarding concerns have their own distinct neural pathways. At the level of analysing individual behaviour, in game-theory, behavioural economics and social psychology, it has been shown that people are at least sometimes as motivated by such concerns as they are by self-interest. (The jury is out on whether altruism can be reduced to self-interest, but the point is that most theorists now accept that it is optimally rational to be as swayed by ‘pro-social’ concerns as self-interested ones – for example, game-theorists recognise that it is rational to maintain one’s social reputation as a ‘good person’ through altruistic acts.) And, as Chris Dillow suggested in his blog recently, at the level of social organisation, it appears that hierarchically oriented competition between individuals may actually harm innovation and efficiency in many settings (that some workplaces and institutions function far better if more ‘horizontal’ models of collaborative endeavour are adopted).

In light of all this, and in light of the spectacular recent seizing up of psychological facilitators of economic activity such as trust, seeing the role of Government merely as a corrector of the individualist excesses of markets, purely through the blunt instruments of taxation and regulation, looks increasingly unimaginative and crude.

So the global downturn is not, as Garton-Ash seems to suggest, just an opportunity to rethink the ratio of free-market to statist economic solutions. It is an opportunity to rethink how we organise institutions and civic society so that individuals can become more than self-interested consumers. What we need is a new approach to how we think about the aims of economic activity and the forms of social organisation that facilitate it. It is now time to start thinking seriously about incorporating social capital and indicators of environmental impact into the mainstream economy. The claim that made such moves appear pie in the sky – that individuals can only be motivated by self-interest, that only competition, regulated by Government, can deliver efficient economic activity – is being exposed as false by the day.

President-elect Obama’s reading list has been coming under close scrutiny recently. Some commentators, such as James Crabtree, have noted he is reading up on Lincoln’s presidency. I want to come to that via the theologian-cum-political-theorist Reinhold Niebuhr (also on Obama’s reading list), and land back in the topic of yesterday’s post on cultural theory.


Niebuhr is perhaps best known for his analysis of the dangers for the US of private unselfishness transmuted into national egoism through the conduit of moralistic patriotism. He also made a compelling case against post-war isolationism. But the book to which Obama adds a back-sleeve comment, The Irony of American History, is largely a sermon warning of the potential for slippage from vainglorious yet corrigible power, to vainglorious yet incorrigible power. The most interesting part of the book presents a nuanced theory of how a specific layering of ironies can keep the powerful humble enough to avoid such slippage.


The first level of irony that Niebuhr explores is exemplified by the Americans being saved from the excesses of individualism by, well, the excesses of individualism. The former kind of excess is the inveigling illusion that American style liberal-democracy is the perfect political system, to be spread everywhere, brooking no dissent; the second, the economic collapse of 1929 that brought about the social-democratic reforms of the New Deal. Soviet Russia for Niebuhr was not similarly saved, acting out the totalitarian tendency that also lies at the heart of the American polity (‘evils which were distilled from illusions, not generically different from our own,’ is how Niebuhr puts it). The lesson of this level of irony is that your enemy resembles very much yourself, and that America’s avoidance of totalitarianism was perhaps only a matter of luck.


The second level of irony is one where a hero is caught up in ‘pretensions which result in ironic refutations of his pride.’ This is the level at which individuals or nations become aware of the ironies of their own corruption by power. Niebuhr, mordantly parodying Kipling’s ‘If’, puts it thus: ‘If virtue becomes vice through some hidden defect in the virtue; if strength becomes weakness because of the vanity to which strength may prompt the mighty man or nation; if security is transmuted into insecurity because too much reliance is placed upon it [oh the prescience!]… in all such cases the situation is ironic.’ By highlighting this level of irony Niebuhr sought to wake America up to its complicity in producing these pretensions and so to cause an ‘abatement’ in them. The operative tenor that those who have learnt the lesson of irony should strive for is humility, in this case self-critical and reflexive. When such humility abides, the virtues that have reversed into ironic pretensions can be saved.


The third level of irony is where individuals or nations are brought to see that there is something constitutively faulty with a whole system that presents itself as good – that by being and intending to be good, the system, by that very fact, cannot be wholly good. The model here, as David Bromwich points out in his excellent LRB article, is that of Don Quixote. The reader, by the end of that book, has been moved to see that there is something wrong with the very idea of a noble knight – that is, a knight whose nobility convinces him he has an intrinsic grasp of the way the world is will, by definition, have no way of distinguishing his illusions from reality.


But the idea at this level of master irony (as it were) is not to mend the systemic faultiness. It is rather to accept it as inevitable and always to factor in its effect. In the case of America the master irony is that only the guiltless and good should wield power, embodied in the Founding Fathers. But by wielding power it becomes impossible for them to remain guiltless and good (because power, by its nature, corrupts). Moreover, and perhaps more important, if a nation takes itself to embody moral good without remainder, it will, by an inexorable logic of irony, actually do evil (its peremptory attitude to opposition will eventually lead to totalitarianism). Again, the way to head off this possibility is not only to try to correct the evil out there, but to be aware of the evil within oneself – to see the beam in one’s own eye as well as the mote in the other’s.


Niebuhr says that: ‘If we [America] should perish, the ruthlessness of the foe would be only the secondary cause of the disaster. The primary cause would be that the strength of a giant nation was directed by eyes too blind to see all the hazards of the struggle; and the blindness would be induced not by some accident of nature or history but by hatred and vainglory.’ Yet he is so sensitive to the subtleties of ironic pretension that he warns against taking the self-reflexive humility achieved by understanding the three levels of irony as itself a form of exceptionalism – that is, he warns against a form of vainglory which says: ‘our system is morally perfect because of its inbuilt humility.’


This brings me to Lincoln, whom Niebuhr praises – ‘chooses as his hero’, to paraphrase Heidegger. Lincoln waged war on the South with circumspect humility – with an awareness of how the waging of even a righteous war damages the one who wages it. After the war, Lincoln was not triumphal but humbly contrite about the North’s ironically reversed good intentions (the North’s oppression of the South was an inevitable and regrettable product of its pursuit of good through war for Lincoln). So Lincoln is a true hero for Niebuhr because he ascended to the master level of irony and never lost sight of the humility thereby entrained.


Obama writes on the sleeve notes of Niebuhr’s book: ‘There’s serious evil in the world, and hardship and pain. And we should be humble and modest in our belief we can eliminate those things.’ Does this quote and Obama’s (perhaps) choosing of Lincoln as his hero point to his having travelled the three levels of irony? The quote is revealing, it seems to say: ‘there is a limit to how much good we can do out there, history has taught us that.’ This makes it sound like Obama has only reached the first level of irony – the level where liberal capitalism is seen as imperfect, and contingent factors are reckoned into any journey of social progress.


If Obama is to be a truly great president, like perhaps Lincoln was, we would have to hope he has reached the third level of Niebuhrian irony – that he has realised that one constant brake on the doing of good is the need for a self-reflexive, circumspect humility about the beam in one’s own eye as well as the mote in the other’s.


Americans see Europeans as having internalised these levels of irony and, as a result, as having ended up nihilistic and decadent self-doubters. It is no accident that Niebuhr is a theologian as well as a political theorist. He sees the need for a divine judge who ‘laughs at human pretensions without being hostile to human aspirations.’ If Obama has reached the level of Niebuhrian master irony, then no doubt he wards off European-style nihilism through his faith.


In the end, it doesn’t really matter how one gets to this highest level of irony, it is the attendant attitude of humble self-criticism that is the goal. But can an atheist or agnostic stay afloat at this level? Here there is perhaps a deeper truth. Modernist anti-heroes epitomise the impossibility of achieving the apparent potential of Enlightenment rationality. But they still rely on the idea that it is at least intelligible that this potential could be achieved. What if we got rid of that idea? What if we thought of our rationality, as cultural theorists do, as a constant shifting of emphasis between different stances towards the world? On that view there is no perfect endpoint, just the constant challenge of re-jigging social reality in order to best serve our ends (whatever we decide they are).


Is there a humble yet striving pathos that can come of such a conception? Nietzsche, in his so-called ‘middle period’, the period of ‘the cheerful science’, thought that this was the pathos of the post-metaphysical age (the age where ‘God is dead’). But then Nietzsche couldn’t himself stay within the pathos, becoming shriller and more hyperbolic as he digested what it’s like to live amidst dizzying layers of irony. Can those of us who are non-religious keep hold of the gentler pathos of circumspect humility? Perhaps a first step would be to rid ourselves of the idea of a single perfect endpoint of Enlightenment rationality. And perhaps cultural theory can help us do that. In the meantime, we can always just choose Lincoln as our hero. Let’s hope Obama does.

In this post I am going to try to weave together cultural theory and Barack Obama’s future presidency.


Let’s start with cultural theory. I won’t go into too much detail, but the basic idea is that a person makes sense of her life in terms of five basic forms of rationality: the egalitarian, the hierarchical, the individualist, the fatalist and the hermit’s position (this latter is a withdrawal from the other four). Reflecting the first four forms of rationality are forms of social organisation or ‘solidarities’. For example, individualists identify with markets or networks of groups, hierarchists with rule-governed institutions or ordered groups of networks. But for cultural theorists the two domains are not separate: an individual just is the attitudes and stances she actualises through the forms of social solidarities she identifies with.


This gives us the central unit of analysis in cultural theory, the ‘dividual’ – the individual viewed as a node in a network of social structures. For cultural theorists, the four forms of rationality and corresponding social solidarities exhaust the possibilities for human action and behaviour. But moreover, the four forms require one another in order to exist: individualists can only define their attitudes and solidarities in opposition to egalitarians, and so on. Cultural theorists may take the further step of arguing that unless all four forms are in play, in reasonable proportion, solutions to problems will be too ‘neat’ – too biased toward one form of rationality. What we want are ‘clumsy’ solutions that are not biased by being neatly tapered down to a dominant rational monopoly or duopoly. With such solutions, everyone comes away happy, as it were, but also, the resultant solutions are better, because they draw on a richer array of possible forms of behaviour and social organisation.


A very obvious candidate for a far too ‘neat’ set of solutions to a nest of problems is the neo-conservative approach to foreign policy – so recently lauded, yet so recently crestfallen. Neo-cons undoubtedly systematically stacked solutions in terms of individualistic concerns (with, in places, the fig-leaf of egalitarianism). And everyone can agree that this has had disastrous consequences.


Now, Obama seems to me to be one of those human beings who is ‘well rounded’ – he has the ego-led charm of the individualist, as well as the self-confidence. Yet he displays strong egalitarian impulses – ‘So let us… look after not only ourselves, but each other.’ He is also not anti-hierarchist, as Bush was: he wants to work with multilateral institutions such as the UN (but he does not naively believe in their intrinsic goodness or effectiveness), and he sees a positive role for Federal institutions within the US. And he has something of the humility of the fatalist: he accepts that his personal journey has been somewhat fortuitous and that both the power of the United States and its erosion are to some extent dependent on contingent historical events.


So it looks, in cultural theory terms, like a good package: all four forms of rationality are well represented in Obama’s personality and character, and thus in the forms of social solidarity he seeks to forge. He is certainly less likely to be as one-sided as Bush. But what of his own model for his presidency? He is said to be avidly studying Roosevelt’s first hundred days in office. If that’s his model it bodes well and perhaps not so well in different contexts. On the home front, an even-handed approach to social progress: a humble fatalism that says you can’t control everything, including the economic situation you inherit; a balance between egalitarian concern and individualist energy and innovation; and this balance delivered through renewed but responsive hierarchies of expertise.


But what of foreign policy? Like Roosevelt, he may effect a split. Roosevelt’s version of the split was social progress at home, isolationism abroad. Obama is perhaps more likely to split between the former and focussed hard-power abroad (hence his call to arms over Afghanistan). The rest of the world must hope his egalitarian impulses and hierarchist sympathies overcome his individualist and fatalist tendencies here – that he tries to use the moral exemplariness of soft rather than hard power, wherever possible.
