Thursday, 23 August 2007

Locking Down the Left-Field

Locking down the left-field is a phrase that first surfaced among some neocon defence strategists in the period after 9/11. As with so many other fresh ideas and approaches, it fell into disuse after the invasion of Iraq as the emphasis of foreign policy strategy shifted.

Intriguingly, now that the Iraq War is winding down (at least in terms of UK and US involvement), the phrase “locking down the left-field” is once again being bandied about.

The original meaning of the phrase was along similar lines to that of thinking the unthinkable. Until 9/11 it hadn’t been unthinkable that terrorists would try to bring down the twin towers of the World Trade Center (indeed, they’d already tried) but it had been unthinkable that they would do so by flying passenger aircraft into them.

In the months following, the FBI and CIA attempted to ensure that every scenario, even the most unlikely, was not only anticipated, but that plans were in place to prevent it happening. To do this, they even asked writers and film-makers to produce prospective terrorist scenarios. This became known as locking down the left-field and it was at this point that neocon defence strategists started applying the phrase more widely.

Their argument was that, just as terrorists could act in such an unexpected way, so global events could develop in ways that were impossible to predict from current information and expectations. They argued that for the USA to protect its position in the future, it had to imagine every conceivable threat and ensure that a plan was in place to counteract it.

There are some similarities here to ring-fence theory (see below), but whereas that is primarily a defensive policy, locking down the left-field is proactive. The argument is that America should seek to neutralize, economically or militarily, all potential future threats. Critics respond that this is exactly the sort of imperialist policy that paved the way for the 9/11 attacks in the first place, though some of the people using the phrase now have clearly stated that you can only lock down the left-field by building bridges, not by trying to stamp out all those who disagree with you.

(coming next - unclickable extras)

Friday, 13 July 2007

Disengagement Theory

Disengagement Theory is designed specifically to tackle the perceived terrorist threat against the West from Islamic extremism. Intriguingly, it was developed not by a specialist in the Middle East or in Islam, but by Peter Samuels, a Cold War historian from Cambridge.

Samuels at first accepted the received wisdom that the battle against Al Qaeda was unlike any previous conflict, including the Cold War, in that it did not involve any state players. Then he began to consider the fact that both the current tension and that of the Cold War had an ideological foundation, and from there he quickly saw the similarities and a potential way forward.

One of the key elements in the emergence of Al Qaeda was the continuing presence of US troops on “holy” soil in Saudi Arabia after the first Gulf War. Although Saudi Arabia had invited this presence, Samuels suggests that it was no less an affront to the “Islamic Nation” than the presence of American troops in Poland during the Cold War would have been to the Soviet Union. Of course, you might argue that American troops would never have been welcomed into Poland in the first place and that if they’d invaded, war would have ensued, to which Samuels’ response would be – exactly!

Samuels has argued, as have many others, that from the perspective of Muslims, their culture and religion seem to be under siege from Western philosophy and popular culture. The secret to defusing the Islamic threat, then, is to disengage from the Islamic world, to avoid any military presence in Islamic countries, and to avoid pronouncements or policy decisions that might seem interventionist.

Of course, the Islamic world is still predominantly a geographical entity, so such a policy is easier to carry out than might at first be thought. However, Samuels is not blind to the fact that there are sizeable minority Muslim populations across the Western world, and it’s with regard to these that his message becomes quite uncompromising.

Firstly, Samuels suggests that, even with disengagement, the threat from “home grown” Islamic terrorists will persist for decades after it has been neutralized elsewhere. As a result, Muslims living in the West would have to be the focus of intelligence operations for years to come.

He also suggests that the flip side to disengagement from the Islamic world is that Islamic communities in the West should be left in no doubt that they are in the West and are therefore expected to adopt Western practices and assimilate. This might be Samuels’ Cold War thinking coming through, but he argues that the nature of disengagement theory is to contain any potential sources of conflict and that this can only be achieved by creating a clearly defined cultural border between the Islamic world and the West.

(coming next - locking down the left field)

Thursday, 21 June 2007

School's Out

There are many people who believe that any social, cultural or political structure which has remained in its present form long enough to attain ‘sacred cow’ status is almost certainly due for overhaul. Many of the proponents of pot theory (see below) have taken great pleasure in applying it to such structures.

So it is in the early 21st Century that various challenges are arising to the orthodoxy of universal education for children and the further orthodoxy of what precisely that education is expected to achieve.

The title of this post comes from a pamphlet written by Jan Huysmans of the University of Amsterdam. The pamphlet was actually called School’s Out Forever, somewhat bizarrely echoing the chorus of the Alice Cooper song ‘School’s Out’ from the 1970s.

I say ‘somewhat bizarrely’ because the pamphlet makes a very serious argument, quoting extensively from John Stuart Mill and other champions of liberty, that all formal education should be abolished. His argument was that compulsory education was a gross infringement of the personal liberties of children and counterproductive in the modern age. He proposed instead that the government should be responsible for ensuring that educational programming was shown on television during the day and that combined libraries and education centres would be open to people of all ages for additional support.

Huysmans is a colourful character and admitted subsequently that he didn’t envisage these changes coming about, nor did he even think they were entirely workable, but he wanted to start a wide-ranging discussion about the purpose of education and what it actually achieved. He also quite rightly pointed out that for a significant proportion of the school population, what education achieved was little more than crowd control.

These arguments have been taken up by many others, but perhaps the one we should consider is Eileen Kempson, who was working on her book The Idea of a School at the same time Huysmans was writing his pamphlet.

Kempson’s book was based on her experiences of running The Silver Mountain School which she established in her native Colorado after teaching in both the private and public education sectors on the East Coast of the US.

Her experience suggested that formal education was of benefit to very few people and thwarted many more. Children with problems suffered either academically or socially (or both) from day one. Children who were bright but not academic were made to feel like failures, soon lost faith, and were never allowed the opportunity to explore learning experiences that might have suited them and prepared them for life after school. The most able children were often bored and were once again expected to confine their intellectual development to very tightly prescribed studies.

At first, she too toyed with the idea of abolishing education, but she quickly saw that the single biggest problem, even more than a populace that was illiterate and innumerate, was one of providing childcare for children who would no longer have anywhere to go during the day.

Her first criterion, then, when setting up The Silver Mountain School was to make its day match the working day. The second was that children had to be literate and numerate by the age of eleven (this might seem late, but in Continental Europe, formal education often doesn’t begin until seven, yet children surpass their American and British counterparts in literacy and numeracy within a year or two). Beyond that, the aim was to allow the children as much freedom as possible to explore their own interests. Controversially, though Huysmans would approve, this even included watching TV all day if they so desired.

It’s impossible to judge the success of The Silver Mountain School by normal standards because not all children are expected to pass a given academic benchmark. It’s perhaps also difficult to judge it because the parents choosing to send their children there are generally from the higher socio-economic groups. Having said that, its record of very high employment and college admission rates among former students, and very low rates of criminality, suggests it’s doing something right. It’s not surprising that a relatively high percentage of former students go on to set up their own businesses. What’s most dramatic is a survey of four hundred former students, in which not one said they hadn’t enjoyed their time at Silver Mountain, and in which only two thought they might have received a better education elsewhere.

This raises a lot of questions about education. How much is it simply about childcare? How much is it about providing a skilled workforce (something the current education system is failing to achieve in most of the Western World)? Should education be more about socialization (again, for many people, the current education experience is socially tortuous and does nothing to prepare them for adult society)? Governments pay lip service to such questions but don’t seem much interested in the fundamentals – it’s reassuring then that communities and individuals like Eileen Kempson are exploring the answers for them.

(coming next - disengagement theory)

Sunday, 10 June 2007

The Black Flower

The Black Flower is unique among the concepts listed here in that it has no verifiable sources. It is a policy initiative which is rumoured to have been created by the CIA and further rumoured to be now commonplace among western intelligence agencies. The Black Flower is designed primarily to counteract the greater transparency which has resulted from the growth of the world wide web.

So why am I including a concept which might well be nothing more than the delusional creation of conspiracy theorists? Well, because The Black Flower is indeed an interesting concept, one which, even if it isn’t in use now, will surely be used at some point or in some form in the coming years.

The thinking is as follows. The power of governments in the past has been heavily dependent upon the restriction of information. If a government or, more specifically, a government agency such as the CIA, wanted to carry out operations without public interference, the relevant information could simply be withheld from the media and the population beyond. The emergence of the world wide web naturally threatened this status quo and suddenly it was almost impossible to control the flow of information.

As a result, The Black Flower was developed as a project which would search for anything indicating a leak and then swamp the web with spurious material on a similar theme. The truth would still be in plain sight, but rendered useless to anyone who lacked the specific knowledge to separate it from the white noise created by the intelligence agencies.

The theory of The Black Flower came into its own with 9/11. One conspiracy website, The Zero Commission, now defunct, went as far as to suggest that many of the conspiracy theories surrounding the 9/11 attacks had been part of the Black Flower operation, using a wall of chatter to cover up inconsistencies in the official accounts.

To give a fictional account of how that would work, let’s imagine that, shortly after the attacks, stories emerge pointing to the possibility that the WTC had been rigged with explosives. There could be a rational explanation for this – perhaps after the first attack on the WTC (the truck bomb) the authorities concluded that the buildings should be rigged to allow controlled demolition rather than a devastating “topple” in the event of another attack. On the morning itself, someone is forced to come to the devastating conclusion that the buildings have to be brought down rather than risk them toppling and taking out a swathe of Manhattan. Naturally, that hugely difficult decision cannot be made public, but rumours start circulating quickly that there were a series of controlled explosions.

It’s vital these stories are quashed, but the easiest way to do that is to surround them with white noise. Operation Black Flower swings into action, releasing stories on the net about the planes having no logos, about missiles, about the complete absence of a plane at the Pentagon, and soon enough the public has lost the original inconsistency in a sea of delusional fantasies.

Of course, it’s impossible to say how close The Black Flower comes to real operational policy, let alone whether there is a project going by that name. But what can be said categorically is that the web is already being used as an instrument of warfare (not least by Al Qaeda) and the western intelligence agencies will need to consider a Black Flower strategy at some point in the future, even if they aren’t using it now, because intelligence work is all about the control of information and the web weakens the levers of control.

And finally, it’s worth pointing out that some conspiracy theorists claim our inability to verify the existence of The Black Flower is in fact proof of the project’s greatest success so far – but then they would say that, wouldn’t they?

(coming next - school's out)

Monday, 4 June 2007

Demographic Optimization

Demographic Optimization is a rather dry term to describe an intriguing area of social and economic policy which is already important in some developed countries and is set to become even more so.

The phrase was coined by the Stanford academic Rainer Weiss to sum up a range of policies for tackling the problem of falling birth rates and ageing populations. This, of course, is an issue which is already current in Japan, set to become important in Europe and eventually in the USA (where the demographic profile continues to be skewed by immigration).

The problems of this demographic shift come in several guises. Older people generally require greater levels of care, and there are fewer people to provide that care. There are fewer people of working age to support them, and greater longevity has an impact on pension and insurance schemes which were not designed for the lengthy retirements currently being enjoyed. There is also a smaller proportion of the population which is productive and available to carry out essential jobs.
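One way to put a rough number on that pressure (this is the standard demographic measure rather than a term Weiss himself uses) is the old-age dependency ratio:

$$\text{old-age dependency ratio} = \frac{\text{population aged 65 and over}}{\text{population aged 15 to 64}} \times 100$$

A ratio of 30 means thirty people of retirement age for every hundred of working age; as birth rates fall and longevity rises, the ratio climbs, and every burden described above climbs with it.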

For Weiss, the solution is demographic optimization, or put rather more simply, making the most of the demographic you have. So, for example, Weiss suggested that less academic children should be allowed to leave high school at fourteen to engage in a mixture of work and skills-based education. At the other end of the spectrum, he suggested that retirement should be more flexible and that there should be tax incentives to encourage the elderly to work. He also argued that increasing the number of elderly people in work, even part-time or voluntary work, would maintain a healthier population and therefore reduce the burden of health and social care – he admitted, though, that this last point was conjecture rather than evidence-based.

At the same time, Weiss argued that governments and industry should be seeking to enhance non-human productivity (approachable language is clearly not Weiss’s strong suit) – in other words, as many of life’s mundane tasks as possible should be mechanized.

Interestingly, Weiss also condemns the simple-mindedness of those who suggest that greater immigration is the solution to the demographic malaise in the developed economies. As he has said, ‘This is a short-term and blinkered approach to a long-term problem. And remember, migrant workers get old, also.’

What he doesn’t say, but what perhaps needs saying, is that the developing economies of the world will all eventually face the same problem, and that, barring miracles of medical science, a demographic skewed towards the aged will become the norm. To that extent, the countries which will thrive in the next century or two will be those that tackle the underlying structure of the problem rather than those that simply try to use migration as a solution.

(coming next - the black flower)

Thursday, 31 May 2007

Start with a Village

(this is the second part of the post below - villages first)

Considering that Start with a Village is a derivative theory that arose out of Malcolm Coulton’s Villages First policy, it’s perhaps surprising that there’s some disagreement about the origins of the idea.

David Sergeant certainly wrote a paper titled Start with a Village for the Environmental Policy Review, but the small environmental group, Defending Devon’s Villages, produced a pamphlet of the same name at around the same time and claimed to have been using the phrase in its meetings for some time beforehand. There was also some confusion about whether or not Sergeant had ever attended any of DDV’s meetings.

But the issue of provenance is something of a distraction. The essence of the theory is that, just as in African development, so in western environmental planning, the village should be the starting point.

For the purposes of his paper, Sergeant took the village of Ampney Magna in Gloucestershire and invited the Parish Council to explore ways in which the village could become more environmentally friendly. To his surprise, their first observation was that the river passing through the village had two weirs and a disused mill, all of which could be used to generate power. They’d suggested several times to the district and county councils that the resource should be utilized but without success.

The eventual blueprint that Sergeant produced for Ampney Magna would have seen the village meeting nearly all of its energy needs from a variety of renewable sources within the parish boundaries. Extrapolating from this, Sergeant argued that if the power to make changes were placed at a “parish” level, the nation as a whole would make far greater moves towards environmental sustainability.

DDV made similar arguments, but interestingly, they also added a dimension relating to house-building, a source of some contention in rural England. They pointed out that the present top-down system (central government decides how many houses are required in each region, the region decides how many will go in each county, the county council decides how many will go in each borough and the borough decides where to put them) is destroying the countryside. They suggested it would be better for parishes to decide how many houses were needed locally and where it was possible to put them, acting as the first step in a bottom-up process. Naturally, the counter-argument from construction companies and some politicians was that this would lead to nimby culture (though equally, it could be seen as a way of exploring simby culture – see below).

Finally, it should also be pointed out that some business thinkers have looked at how Start with a Village could be applied in the organization of large companies. In business, however, this is not a truly original concept, as it has long been established wisdom that small, mutually supportive units work better and produce more than a large, flat workforce. In other words, most businesses dream of becoming cities, but cities function best when they are made up of distinct neighbourhoods.

(coming next - demographic optimization)

Monday, 28 May 2007

Villages First

Villages First was the policy initiative developed by Professor Malcolm Coulton of the University of Bristol for the charity Save the Village. The charity put the Villages First policy at the heart of its work in the Brong-Ahafo region of Ghana.

One of the consequences of development (and this is a pattern which has occurred throughout history) is rural flight. Young people in particular will leave the countryside, partly because mechanization means fewer people are needed to work the land, but partly because the city is seen to offer greater opportunities and greater access to education and healthcare.

The results of rural flight in developing nations are mixed. Of course, the migrants provide a workforce for the city, but they also put enormous strains on the resources and infrastructure of the city. Furthermore, many migrants not only fail to find work, but are also reduced to living in appalling conditions in squatter slums.

It’s also worth noting for western governments that this migratory flow from the countryside to the city provides the foundation of the subsequent migratory pressure that sends Africans into Europe and Mexicans into North America. Trying to stem the flow at the border or even in the large cities is an intervention too late.

Save the Village aimed to counter this process and simultaneously improve the lives of the rural poor. The logic was simple – to improve the quality of life for those living in rural villages to such an extent that they would be less inclined to move to the city.

Coulton wasn’t so naïve as to think that he could completely stem the flow from the country to the city, but he believed that by concentrating development efforts in the countryside it would be possible to avoid many of the negative impacts that come with industrialization.

Save the Village started its work in the Brong-Ahafo region with a cluster of villages which were gradually given clean water and improved power supplies, an impressively equipped health centre and school for the area, and a postal bus service linking the cluster to its central facilities. Efforts were also made to improve farming and business opportunities in the area.

The early feedback was encouraging, though the scheme didn’t run long enough to quantify whether such development would successfully stem the tide of migrants to the regional capital of Sunyani, let alone to Kumasi in the neighbouring Ashanti region or Accra, the capital.

Eventually, Coulton also had plans for much more extensive infrastructure and communications work but a fraud case involving an unrelated charity with a similar name badly impacted on Save the Village’s fundraising and it was forced to wind up its operations after two years.

No one has since explored the complete Villages First philosophy in a development context, but with some irony, the policy is being given increasing credence in both environmental and business circles in the west. And Start with a Village, one of the buzz phrases of Coulton’s original paper, has effectively become an offshoot of the original idea.

(coming next, the second part of this post - start with a village)

Tuesday, 22 May 2007

The New Eugenics

Eugenics emerged as a popular cultural force in the late nineteenth century, riding in on the coat-tails of the new science of genetics. Even though eugenics developed in very unscientific directions, it continued to influence people and governments later in the twentieth century than might be supposed.

If there were grains of truth in the obsessions of the eugenicists they were well and truly lost during the period of Nazi Government in Germany. Yet Sweden continued to practise eugenics until 1976, routinely sterilizing people of reduced intelligence or disability, those of mixed race, even those who had descended into anti-social behaviour.

For decades, understandably, eugenics was not considered seriously, consigned to the realm of scientific curios along with trepanning and leeches. It took a Danish economist, Carsten Steffensen, to bring the subject back into the public sphere.

In the late 90s, Steffensen asked a Swedish colleague if any studies had been made to verify the results of that country’s fifty-year experience with eugenics. His colleague reacted with horror, but like any true economist, Steffensen was interested in quantifiable data – he wasn’t suggesting the human cost was worth bearing, but he wanted to know if it had achieved its end.

Intrigued by the knee-jerk response to the term, Steffensen started to look at the underlying principles of eugenics, then looked at the incentives and disincentives to reproduction that currently existed in various western countries. His conclusion was startling.

Steffensen concluded that high-tax welfare-state economies had been inadvertently carrying out a form of negative eugenics since the end of the Second World War. Even worse, he believed the effect was becoming more amplified with each generation.

The first foundation of Steffensen’s argument was the lowering of barriers to education which had happened in many European countries mid-century. The percentage of students who were the first in their family to attend university fell steadily year on year as “the bubbles rose to the top” – in other words, the increased social mobility was increasing the correlation between intelligence and social position.

Controversial as it was, Steffensen pointed out that the net effect of universal education had been to ensure that the professional classes today are, on average, more intelligent than they were fifty years ago, and that the working classes are, on average, less intelligent because of what might be called “bright-flight”, the escape of more intelligent workers into the higher strata of society. Indeed, he cited as evidence the growing underclass in western societies and predicted that this class would become more entrenched in the coming decades.

Given that model, Steffensen looked at the way society treated its various classes with regard to reproduction. His conclusion was that the underclass, dependent on welfare benefits, often had little understanding of the responsibilities of parenthood, nor of the cost. With each new child, parents in the underclass were certain not only of receiving healthcare and schooling but also of a proportional increase in their benefits. The least intelligent and, more importantly, the least productive members of society were actually being given, relative to their circumstances, considerable incentives and no disincentives to have children at a younger age and in greater numbers than their middle-class counterparts.

Conversely, working people existed on a sliding scale, with few incentives to have children and ever greater disincentives the more educated they were. Take an average university-educated couple: they have already fallen behind their counterparts in the underclass, many of whom will have had children during the couple’s student years. But graduates then go to work, and many have little choice but to put off a family until well into their thirties. When they do have children, the professional couple have to weigh the consequences of each additional child very carefully, not least the cost of childcare or of one parent giving up work. The result, understandably, is that the more educated and productive people are, the fewer children they have and the later in life they have them.

In short, Steffensen argued that a system that was introduced for very good reasons in the middle of the last century is actually reducing the average intelligence of the population in European society at a time when intelligence, even down to the level of skilled workers, is becoming paramount.

Steffensen’s solution to this problem was the New Eugenics. Half of his model was aimed at removing incentives to reproduce among the underclass. Mothers who were dependent on benefits would receive assistance with a first child, but there would be no increases in benefits for subsequent children, and parents would be reminded of this fact – the idea was to instil a sense of individual responsibility. As people in this social class are often uneducated as well as being of lower intelligence, he also proposed that benefits be tied to a course of education in childcare, parental responsibility and birth control.

The other half of the model centred on those in the working population, and here Steffensen not only suggested a bolstering of existing legislation to support parental leave and childcare facilities within the workplace, but also suggested dramatic tax breaks for those with children. He also argued that those tax breaks should be transferable for couples in which one parent wanted to give up work to raise the children.

Steffensen predicted another knee-jerk reaction to the New Eugenics and to some extent he got one. Critics took issue with the underlying basis of his theory, claiming that there was no proof of an intellectual chasm opening across the classes, often citing school exam performance as evidence. Others argued that he was penalizing the poor whilst rewarding the rich.

But whatever the merits of his underlying arguments, fewer people were ready to criticize his policy recommendations, perhaps because it’s difficult to argue with the logic of them. Certainly, his policies with regard to easing the path to parenthood for professionals and working people chime perfectly with concerns over falling birth rates across the western world. And although there was less open endorsement for his suggestions regarding the underclass, there was little criticism of a system which sought to enforce a greater sense of individual responsibility.

Ultimately, whether or not the New Eugenics ever works its way into the policy framework, the lesson that Steffensen reinforces is a sound one. If you make changes to any complex system, the results will never be restricted solely to the sphere you seek to influence, and the ultimate consequences may even be worse than the problem you were trying to fix.

As Steffensen himself said, ‘People tell me it’s dangerous to try to engineer society – they don’t understand that welfare benefits are a form of engineering and so is tax. We’ve made so many mistakes because we didn’t understand that.’

(coming next - villages first)

Monday, 21 May 2007

The Destruction Window

First espoused by Englander and Jancar, The Destruction Window describes the margin of technological superiority one force requires in order to destroy its enemies with relative ease.

To take one example, by the time they set off on their global trading and raiding missions, the European Powers had inadvertently built up a destruction window over the civilizations they encountered. The imbalance was so great that colonization was a foregone conclusion.

Western powers are still more advanced in military terms than those in the developing world but the destruction window has all but closed, ensuring that conflicts in those regions can no longer be predicted quite so easily.

In one sense, the destruction window opened by the USA with its development of nuclear weaponry (briefly total, and still partial) was a false window, in that the weapon concerned is so catastrophic as to almost entirely rule out its use. For example, the USA had a destruction window over North Vietnam but was unable to use it.

Regardless of this leapfrogging, it is still imperative for all powers to race towards greater sophistication in weaponry and to engage in espionage to ensure that no other power is opening a destruction window against it.

Naturally, even if the planet became a unified political entity, it would still make sense to continue with the development of weapon systems to ensure that any extraterrestrial visitors do not automatically have a destruction window against Earth. After all, in space terms, we might be Pacific Islanders.

Opponents of The Destruction Window Theory argue that it is unnecessarily focused on military solutions and does not allow for the ability of advanced societies to engage in diplomacy. Englander and Jancar argue in response that diplomacy is a necessity which only ever takes place when no destruction window exists.

(coming next - the new eugenics)

Friday, 18 May 2007

The People You Know

The People You Know was extrapolated from the Six Degrees of Separation by the environmental activist Carl Bailey. His argument was that the Six Degrees was actually little more than a mathematical exercise with no real-world significance.

Perhaps the easiest way to explain Bailey’s dismissal of the earlier theory is with a personal example. Within three or four degrees – a surprisingly small number of steps – I can find a person to person link between myself and the President of the USA, the Prime Minister of Great Britain, the Queen, the Pope, the Dalai Lama, and the presidents or prime ministers of at least three other G8 countries.

Does that mean I can contact any of these people? No, partly because of the nature of human relationships: we adhere to surprisingly strict but unwritten rules as to what is and isn’t acceptable to ask of our acquaintances and friends. What’s more, given those rules, it only takes one of the links in the chain to be less than a very close friend for the negotiations to become impossible.

But even if I could use my chain of three or four degrees to contact all those people, could I utilize that contact to any significant ends? No. I might use it to obtain a signed photograph, perhaps an invite to tour the White House or attend a garden party at Buckingham Palace, but the mere existence of a connection doesn’t confer any serious consideration.

Bailey’s point is that the Six Degrees does not represent a global old boys’ network. In fact, it represents nothing more than a sort of randomly generated exponential growth.
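To see what that exponential growth looks like, here is a rough back-of-envelope calculation (my own illustration rather than Bailey’s, and it ignores the overlap between people’s circles of acquaintance): if each person knows roughly 45 others, the number of people reachable within three and within six degrees is roughly

$$45^3 \approx 9 \times 10^4 \qquad \text{and} \qquad 45^6 \approx 8.3 \times 10^9$$

Six steps is enough to reach almost everyone on the planet, but nothing in that arithmetic says any given link in the chain will actually do anything for you, which is precisely Bailey’s complaint.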

As a committed environmentalist, Bailey was interested in finding systems of cooperative action which could start at a very local level and expand to have a global impact.

The People You Know was the book that resulted, and although it enjoyed only modest sales at the time of publication (and in truth, it is a somewhat dense read), the theory itself gained increasing currency among environmental and local action groups.

The idea is simple, that with a realistic goal and a high enough level of collective commitment, the people you know will have the necessary skills, connections and resources between them to achieve their objective.

The first successful campaign attributed to the application of the theory was that against the building of the LVO Waste Processing Plant in Vermont in the US. In this case, a group of close friends mobilized and used small but vital spheres of influence to ensure that LVO eventually pulled out of the project.

The fact that LVO subsequently built the Waste Processing Plant in an impoverished area in Arkansas also points to one of the criticisms of this theory – people tend to be friends with like-minded people. The Vermont activists were university educated and included teachers, lawyers, a doctor and a local politician among their number.

No such group was available to oppose the plant in its new location, which opens the theory to the criticism that it’s merely glorifying an age-old middle class tendency to protect its own backyard at the expense of poorer or less educated communities.

But the concept of The People You Know still underpins a great deal of activism in the environmental movement and in other locally-based initiatives. And as Bailey is quoted as saying, ‘Anything that motivates people to join together to save what they value can never be said to have failed. It may not always succeed but the only failure is doing nothing.’

(coming next - the destruction window)

Thursday, 17 May 2007

Singularity

Singularity is the political philosophy of stripping the processes of government down to one layer. To some extent, whilst recognizing the need for the natural checks and balances of a bicameral system of government, it even supports the removal of extraneous layers of elected government (for example, in the UK, why have both borough and county councils?).

Where singularity really comes into its own is in the actual day to day workings of government. Singularity is based upon an assumption that government, and particularly the bureaucratic infrastructure of government, will naturally lean towards the proliferation and even outright duplication of its own workload. In the process, the activities of government are also obfuscated, and it’s the argument of singularity theorists that this is also in the interests of government at the expense of the population.

A typical beacon for singularity proponents is the flat rate tax system introduced in some of the Baltic States. They see this as a step towards pure singularity – one tax charged at one rate to all, with completely transparent allowances.

By contrast, the UK tax system is a typical example of rampant proliferation. Income Tax is charged at varying rates with various tax-free allowances, and considerable sums are collected in taxation only to be repaid as benefits to the same taxpayers. A similarly complex system of allowances and reliefs surrounds Corporation and Capital Gains Tax. In addition, both employees and employers have to contribute to National Insurance. Large numbers of people pay and reclaim VAT (sales tax), and various other punitive taxes are levied on certain goods and services, on buying houses and on death.

Singularity theory would scrap National Insurance and see the shortfall raised through Income and Corporation Tax. It would scrap benefits for all but the poorest working people and increase the threshold at which people pay tax in the first place (for example, child benefit would be scrapped, but the personal tax allowance of parents would be increased accordingly for one child and rather more for the second). Minor taxes would likewise be stripped out of the system and the shortfall recouped from an increase in universal taxation.
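To make the child benefit example concrete (the figures are purely illustrative, not the actual UK rates), suppose child benefit is worth £1,000 a year and income tax is charged at a basic rate of 20 per cent. Scrapping the benefit while raising a parent’s personal allowance by £5,000 leaves a basic-rate taxpayer exactly where they started, because

$$£5{,}000 \times 20\% = £1{,}000$$

The household ends up with the same money, but one payment, one claim form and one layer of administration disappear, which is the whole point of singularity.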

Singularity’s supporters are the first to admit that such changes require a strong stomach and a willingness on the part of politicians to harm their own vested interests and open up their activities to greater public scrutiny. They also admit that in the first instance, many in the public would react unfavourably to changes which might seem unfair (for example, the rate of Income Tax might well go up, even if the net take by the Government remained the same).

But the benefits are clear to see. The government savings secured by stripping away layers of bureaucracy would be matched by savings in the business world from the removal of the need to deal with that bureaucracy. The economy would be leaner, more efficient and more prosperous as a result.

Likewise, the population as a whole would benefit from a system which was transparent and easy to understand, and it would more easily be able to hold the government to account (there is no place in which to hide “stealth taxes” in a singularity economy).

Unfortunately, whilst most politicians pay lip service to singularity – and they have little choice because the logic is unquestionably sound – none has so far shown the willingness to campaign on the issue or to carry through the scale of reforms that are mooted.

Plenty of politicians talk about the need for small and accountable government but in the final analysis they can’t help bolstering a system that treats them very well indeed.

(coming next - the people you know)

Wednesday, 16 May 2007

Q & A Strategy

This is part two of the post below on groundhog thinking.

When the military historian Karl Maschler asked a set of simplified questions about the Vietnam War, he was surprised to find that his third-grade audience simply responded with more questions, and no matter how many times he answered, more questions came back at him.

Such inquisitiveness is, of course, natural in children, but as Maschler continued to answer the questions he realized that the solution was becoming clearer, to the extent that one little girl eventually provided the ultimate answer when she said, ‘I don’t understand why they’re fighting.’

To continue the third-grade analogy, Maschler came to the conclusion that most policy makers and strategists, both in government and business, are guilty of “show and tell thinking”. They are convinced that they know everything there is to know about a given subject and that no one else is better placed to make decisions.

What’s needed, Maschler argued, is a Question and Answer Strategy, one in which each answer, as it becomes available, is immediately met with another question. Taken to its logical conclusion, this process will get to the heart of whatever problem is being tackled and avoid the risk of falling into the trap of groundhog thinking.

It may sound deceptively simple, but the two alternative responses to 9/11 that Maschler obtained from different groups of students are instructive. The first group were asked simply to develop a response to 9/11. Even with the benefit of hindsight, this group decided on an immediate military response in Afghanistan and remained split on Iraq, though several members of the group suggested military strikes and economic sanctions throughout the Islamic world.

The second group were asked to employ the Q & A Strategy, the initial result of which went as follows –

Q: What happened?

A: Terrorists flew planes into the World Trade Center.

Q: Why?

A: They hate America.

Q: Why?

(A twenty minute discussion ensued)

A: Because they think we’re anti-Islamic.

Q: Why?

(Another ten minute discussion in which it was agreed that Israel/Palestine had not been a factor in the WTC attacks)

A: Because our troops are stationed in Saudi Arabia and because of our foreign policy against Muslim countries.

(A heated debate followed about US foreign policy and whether it was being misinterpreted in the Islamic world.)

Q: Even if they’re wrong, how can we change that perception?

The intriguing thing about Q & A Strategy is that it eschews knee-jerk responses and actually draws the strategists directly towards the underlying causes. It’s hardly surprising, then, that Q & A Strategy is gaining popularity in business. Political commentators, on the other hand, have argued that it is impossible for politicians to engage in such a strategy without also reflecting the wishes of the people.

(coming next - singularity)

Tuesday, 15 May 2007

Groundhog Thinking

Groundhog Thinking is a phrase that arose out of a series of experiments carried out by the military historian, Karl Maschler, in which he asked his students to revisit the Vietnam War at various stages of America’s involvement and develop strategies accordingly.

The first group of students were knowledgeable in the general field of military history but had not studied the Vietnam conflict specifically. Nevertheless, Maschler was somewhat surprised to see the students making many of the same strategic and policy mistakes that had been made during the original conflict.

The experiment was modified and the next batch of students studied the conflict for half a semester. They were given extensive reading lists, given access to RAND reports and Government documents from the time and also watched a number of films and documentaries on the social and cultural costs of the war as well as the financial and military costs.

In addition, the students were told about the experiment they would be taking part in, so they were forewarned to make note of the mistakes and miscalculations that had been made by policy-makers at the time. To Maschler’s astonishment, the students still managed to propose strategies and policies which were either strikingly similar to those which had actually been employed or sought to deviate from original policy mistakes whilst suffering from the same underlying mistaken assumptions.

Maschler tried using the same technique on various disastrous episodes, from the First World War to 9/11 and Iraq. Even with the knowledge of hindsight, and despite a concerted attempt to avoid the known outcome of the original events, the students inevitably repeated many of the mistakes of the key players in those events.

The key characteristic of these sessions was that students, in their determination to avoid the original outcome, were swept up in strategic details without ever stepping back to examine the underlying causes.

Maschler coined the term Groundhog Thinking to encapsulate this behaviour. The name comes from the hit film Groundhog Day, in which the central character repeatedly wakes up to face the same day, but Maschler’s hypothesis is more complex than a repackaging of the old saw that we are destined to repeat the mistakes of our forefathers.

Maschler realized that the students often made the same mistakes not because they had ignored all the evidence but because their knowledge of the Vietnam War was dwarfed in the decision making process by the socialization process to which they had been subject throughout their lifetimes.

In other words, you may know what has happened in the last ten or twenty years, you may know what is happening right now, but your decision making processes are dominated by what has happened in the last two thousand years, by the fabric of the society in which you live, by the moral universe of which you are an integral part.

To develop successful strategies, particularly when dealing with different cultures, Maschler suggested it was vital to escape from Groundhog Thinking and much of his subsequent work has focused on finding techniques for achieving that.

Among the early techniques he found to be successful was the empathy technique. He took a new batch of students and painted a scenario in which their own country was under attack from a much more powerful aggressor which claimed to be acting in their interests. Having encouraged the students to demonstrate how difficult it would be for this aggressor to achieve its objective, he turned the tables and told them that the country under attack was Vietnam. This group of students was the first to unanimously suggest that the US should have recognized Vietnamese independence in 1945.

Perhaps even more successful was the technique that arose out of his attempt to try the original experiment, albeit in much simpler form, on a group of third grade children. That technique was Q & A Strategy.

(coming next - the second part of this post on Q & A strategy)

Monday, 14 May 2007

Super-City State Theory

When Doctor Louise Egerton and her colleague Doctor Simon Fraser stopped over in Singapore on the way back from an economics forum in Sydney, Australia, they were struck by how successfully the city state was adapting to the challenges and opportunities presented by the global economy.

According to Egerton and Fraser, nearly all of the tenets of Super-City State Theory were established during that two day stopover and on the second leg of their flight to the UK.

They gave consideration to the dominance of city states throughout most of the history of human civilization and to the relatively brief dominance of nation states as the default political and economic unit. They also considered the growing field of research into world cities, seen by many to represent a de facto return to the dominance of city states in the global economy.

Their starting point, then, was one of accepting that most modern nation states will struggle to compete on their own terms in the new world economy. London, for example, is a dynamic city of globally significant proportions, but it is hampered by being bound to its UK hinterland. They also accepted that the trend was moving firmly in the direction of the city state and away from the nation state.

So what hope could there be for nation states? Egerton and Fraser used the UK as their model and quickly developed the concept of the Super-City State.

Their idea was to imagine the country as a constellation of city states, all centred around the hub of London, but with other cities such as Manchester, Leeds, Cardiff and Birmingham all operating as semi-independent players in the global economy.

For example, they imagined each city having the power to offer specific incentives for inward investment as well as having the power to introduce special zones in which tax incentives could apply. They also imagined the constellation cities having greater power over the way they developed, including swift and powerful tools for the compulsory purchase of derelict and brown-field sites, the right to insist on a locally-tailored design brief for developments and, again, the right to offer tax incentives to developers.

Egerton and Fraser imagined the role of central Government in this process as twofold. It would oversee the system of checks and balances that would govern the actions of all of the cities, including London. It would also ensure that the channels of communication between the cities provided as level a playing field as possible, both in an advanced and integrated transport system, and in electronic connectivity.

The gains, they argue, could be enormous. Firstly, development pressure would be reduced on London, and indeed on smaller towns and rural areas across the south of England. Secondly, parts of the country which have been left behind by the stratospheric success of London in the global economy would actually regenerate spontaneously whilst feeding off their proximity to the capital city. And the net result for Britain, operating as a Super-City State rather than as a global city (London) subsidising a struggling hinterland, would be to enhance its overall position as a global power.

Egerton and Fraser have subsequently started looking at how Super-City State Theory could be adapted to other geographical areas. For example, they are currently looking at ways in which the cities around the Baltic could form a latter-day Hanseatic Constellation.

As for America, they see two possible ways of applying Super-City State Theory. One is to imagine the country as a galaxy of separate constellations working together – California, the North East, etc. The other is to imagine the entire country as already being a Super-City State, and a successful one at that, but then making future policy decisions accordingly.

But as Egerton says, ‘We still think relatively small maritime trading nations are the ideal candidates to reap the greatest rewards from Super-City State applications – the Netherlands, Japan, and of course, Britain.’

(coming next - groundhog thinking)

Sunday, 13 May 2007

Simby Culture

Nimby, the acronym which stands for Not In My Back Yard, emerged in the 1980s and grew in popularity as a pejorative term for people who opposed development in their local area but were happy to insist that development needed to take place elsewhere.

The implication was that a nimby was selfish and narrow-minded. Even the Green movement was keen to distance itself from nimby culture because it wasn’t truly green – if anything, the nimby would often take a stance that was illogical and even damaging to the environment in order to protect their private domain.

Just as the term nimby started its life in the USA, so simby culture was coined by The Pauline Foundation, a radical green-conservative think tank based in Palo Alto, California. Simby stands for Start In My Back Yard.

But the acronym suggests a simpler approach than is actually the case. Far from suggesting individuals should welcome the developers, simby culture actually asks that, instead of simply denying the need for development point blank, protesters should strive to produce alternative solutions.

Take the divisive issue of housing developments. Simby culture acknowledges that there is a need for housing and accepts that building companies need to make a profit, but it then looks for solutions that will address both whilst arriving at a satisfactory conclusion from an environmental perspective.

The same approach can be applied to any business-centred environmental problem – do not simply criticize the business for being a business, proffer a solution that will satisfy both business and environment.

It has to be said that this approach has not made The Pauline Foundation popular with hard-line environmental groups, many of which have accused it of being in the pocket of big business.

The founder, Hal Rubin, who was himself arrested for environmental activism in the early 1990s, has dismissed the criticism, particularly from his former colleagues in the Red Earth Collective, as a failure to understand that more can be achieved through cooperation than conflict.

Rubin is vindicated to some extent by the increasing tendency, particularly in the Western States of the USA, for businesses to approach simby culture from the other direction. Businesses have found that by consulting local residents about their hopes and fears at an early stage, projects have faced considerably less opposition and have often benefited from tapping local knowledge.

Whether the true meeting of minds envisaged by simby culture can ever be realized is one thing – after all, no one wants to see the view from their window destroyed – but the idea of promoting greater and earlier dialogue between potentially opposing parties is certain to catch on as its benefits become more apparent.

(coming next - super-city state theory)

Saturday, 12 May 2007

Ring-fence Strategy

Ring-fencing is a long-term international relations strategy, the aim of which is to prevent nations being dragged into conflicts that do not directly impact upon their interests. It will play a key role in the nature of conflicts in the 21st Century and beyond and also has important business applications.

Ring-fencing will be of greater importance in the interconnected world of the global economy than it ever has been before, but the cost of failing to ring-fence is best demonstrated in the historical example of the United Kingdom in the 20th Century. At the beginning of that century, Britain was the dominant global power, but by the midpoint its standing had been significantly reduced and would have been more so had it not been for London’s continuing influence as a global city.

Of course, the rise and fall of great powers is dependent on a complex array of factors, and it’s arguable that the application of Ring-fence Strategy might merely have extended the decline of the British Empire over a much longer period. But there can be no doubt that a failure to ring-fence cost Britain dearly.

Ring-fencing involves a determination to act only in defence of direct national interests, but also requires a forward thinking policy of ensuring that a preventable situation is not left unattended and that non-essential diplomatic ties do not come to dictate foreign policy decision-making.

In the case of the First World War, Britain allowed itself to be tied into defending other European countries where the British national interest wasn’t under threat. This was also true of the Second World War but exacerbated by a failure to recognize and act upon the danger signals emerging from Europe from 1919 onwards.

These two wars undermined Britain’s global status, and yet the country still did not learn from its failure to ring-fence. The Falklands Conflict was a perfect example of a costly war which might easily have been prevented. And the Iraq Conflict continued the pattern when the UK entered a war based on its diplomatic ties with the USA, not on its own national interests.

It’s the future of those two countries, the UK and USA, that also best demonstrate the applications of ring-fencing, and the risks of failing to apply it, in the coming century. Ring-fence strategy naturally has to be employed at a global level, but for the purposes of this exercise, let’s explore it in the context of a future conflict in Asia.

Much attention has focused recently on the dynamism of the Asian economies, but dynamism often brings tension with it and in Asia there are many vehicles for that tension, from demographic anomalies in China and India to widespread territorial disputes.

Take the Spratly Islands in the South China Sea, a group of small but strategically significant islands which are claimed by China, Taiwan, Malaysia, The Philippines and Vietnam. Or Japan, which has territorial disputes with all of its neighbours – Taiwan, China, South Korea and Russia.

Add to that the status of Taiwan and the long-running disputes between Pakistan, India and China and the scope for conflict is clear. And notice, that’s without even mentioning North Korea.

At the moment, the USA is doing the opposite of ring-fencing in this region. It has stated its determination to defend the status quo in Taiwan and has a strategic commitment to both Japan and South Korea. There are indications that the US wants to extricate itself to some extent, by encouraging Taiwan to defend itself, for example, but any sudden movement towards such ring-fencing could precipitate conflict between Taipei and Beijing.

As things stand, any war in Asia in the next twenty years would inevitably involve the USA – at great cost but in no way defending America’s national interests. If the country does not begin to ring-fence in that region and elsewhere, the 21st Century could well do for the USA what the 20th did for Britain.

Not, of course, that Britain, or even Europe, is totally isolated from the threat of Asian conflict. European business is investing heavily in the region, and both businesses and governments need to examine possible impacts and contingency measures should an Asian conflict destabilize the global economy.

But Britain, whilst not directly involved militarily in the region, has a need to ring-fence that is almost as urgent as that of the USA. Should Britain make clear to the US that it will offer diplomatic support but will not become involved in any conflict in the region? Perhaps that wouldn’t be so difficult, but would Britain stand by if Japan were attacked, or if Australia became directly involved in a conflict?

These are the questions raised by Ring-fence Strategy and they demand urgent attention. The use of pragmatic or reactive foreign policy allowed Britain to sleepwalk into two World Wars and undermined its position as a great world power. Ring-fence Strategy is entirely aimed at insuring against such a calamity, and its application will play a big part in determining which nations look back upon this century as winners or losers.

Friday, 11 May 2007

The Ostalgie Paradox

Ostalgie is a familiar enough term, referring as it does to the nostalgia felt by former citizens of Communist East Germany for the country that disappeared after reunification.

This regime had been oppressive enough that many of its citizens died attempting to escape it. Yet its certainties (whether in music or childhood cartoons or familiar foodstuffs) became attractive as people struggled to adapt to the apparently precarious world of capitalism, with its endless choice, its volatility, its lack of guarantees.

It was social scientist Sarah McClusky of the University of Alberta in Edmonton, Canada, who first coined the broader term, the ostalgie paradox, to sum up similar behaviour in varying situations.

The ostalgie paradox manifests itself in people who are institutionalized but can also be found in many other circumstances – workers released from bonded labour, citizens of newly independent nations, and even those released from grinding poverty by sudden wealth (McClusky carried out an exhaustive survey of lottery winners in the USA).

In all cases, people become nostalgic for the very thing they chafed against and sought to overthrow. Provided they did not suffer any particular individual punishment, but only the widespread oppression shared by everyone around them, they will inevitably come to express longing for aspects of that past.

McClusky also carried out an elaborate experiment in which a group of people who believed passionately in the liberalization of drug laws were invited on a weekend retreat. During the weekend, the discussion group was interrupted by a news broadcast which announced that the Canadian government was planning to legalize all drugs. In the hour that elapsed before doubts started to arise among the group, McClusky was surprised to see many of them become deflated and demoralized, some even exhibiting signs of panic.

The conclusion? The ostalgie paradox can best be explained by the fact that even if people hate what they have now, they hate nothing more than sudden change. On the plus side, McClusky found that after a brief window of critical risk, ostalgie is usually benign and might even act as a pressure valve, in that it provides a coping mechanism for those who would otherwise be alienated by the new world in which they find themselves.

Thursday, 10 May 2007

Brave New Drugs

Brave New Drugs is the name of a pamphlet written by Al Hofmann (born Alistair Greene; he claims to be the “spiritual son” of LSD creator Albert Hofmann and changed his surname in 1984).

A long-time proponent of the legalization of all recreational drugs, Hofmann was influenced to adopt a different approach after reading a feature on pot theory. The result was Brave New Drugs, but before considering Hofmann’s new argument, it’s worth exploring his former beliefs.

The reason the feature on pot theory made such an impact on Hofmann was that he realized his existing manifesto fell within its parameters – this in turn made him see that, as with most pot theory thinking, the changes he was proposing would always be too much for politicians and the public to stomach.

Hofmann’s original manifesto had called for the legalization of all drugs. Whilst Hofmann is a somewhat colourful character who famously released short films of himself under the influence of various drugs, his arguments were methodical and clearly thought out.

Hofmann argued that the status quo bordered on the ridiculous. He pointed out that “controlled” drugs were not controlled at all, that decades of law enforcement had failed to stem the tide and had alienated several generations of drug consumers. Illegality had led to a lack of safety or quality-control measures, had been a causal factor in related crime (carried out by addicts in order to feed their addiction) and empowered criminal gangs. Governments also failed to see any revenue from an industry worth billions.

His solution dealt differently with different drugs. Hofmann rightly predicted that public smoking bans would eventually be commonplace and that this would ensure the smoking of cannabis was a private activity. He proposed that packs of cannabis cigarettes would be available from licensed premises and that “coffee shops” would exist for people who wanted to smoke in a social setting or consume edible cannabis treats.

In the next category he included LSD, Ecstasy, Cocaine and other drugs that might be enjoyed on a night out in the clubs. Again, he proposed that specialist shops run along the lines of a pharmacy would be able to sell these drugs directly to consumers. But he also proposed that nightclubs could be specially licensed to sell them (a proviso for holding such a licence would be that the club have chill-out rooms and staff qualified to ensure the safety of consumers and to intervene in the case of adverse reactions).

Finally, Hofmann suggested that Heroin could be made available free of charge in measured doses in clinics across the country (known as the English Method, this system is already used in Switzerland).

The economic and social arguments with which Hofmann supported these proposals were all impressive, but he was also honest about the possible drawbacks. For example, Hofmann suggested that, despite stringent controls, more children would probably get their hands on drugs than do so currently, though he argued that those drugs would at least be manufactured to safe standards.

The world envisaged by Hofmann wasn’t a stoned paradise – if anything, drugs would be considerably more controlled than they are at present – but he came to realize that getting from here to there would be a process that would always prove too controversial for government and electorate. What he feared most was that incremental steps would be made, resulting in some sort of decriminalization, which he considered the worst of both worlds.

The result of this epiphany was Brave New Drugs, the title of the pamphlet inspired by Aldous Huxley’s novel, Brave New World, in which the entire population is kept permanently content with a drug called Soma.

Put simply, his new proposal was for government to sanction pharmaceutical companies to experiment with the development of new recreational drugs. The logic was that the population might not accept the legalization of Heroin or Cannabis, but it might be more receptive to an enhanced form of Prozac.

Hofmann realized that it wasn’t so much the fear of mood-altering drugs that prejudiced the general public but the fear of unknown drugs with dark associations. He reasoned that any drug created and licensed by a pharmaceutical company and sold under strict conditions would be considered safe for consumption.

The drugs, even if they used some of the same compounds found in currently available recreational drugs, would be safer: possible side effects would be known in advance and treatment of adverse reactions would be more straightforward. Furthermore, the drugs would exist within the mainstream economy and contribute to government finances accordingly. Most drug-related crime would disappear as, eventually, the new drugs supplanted the old.

Perversely, and to Hofmann’s consternation, a number of politicians voiced cautious support for his pamphlet, while it drew considerable criticism from some of his former supporters (see the ostalgie paradox). They argued that he was playing into the hands of big business and intrusive government, and that consumers would never really know the impact of the drugs they were being given.

To some extent, Hofmann had probably brought this criticism upon himself with his choice of title. The fear was that the drugs created under his proposals would be too similar to Soma and would serve as a tool with which governments could subdue their peoples and eradicate dissent. (This view gained widespread support, and in America an indie band even called themselves The Stepford Hofmanns – they were briefly successful, releasing three albums before splitting.)

Hofmann answered his critics by citing voter apathy and the lack of political activism in western societies as evidence that governments hardly needed to introduce a drug to enforce compliance.

The only players not to have expressed an opinion are the pharmaceutical companies themselves, but it’s almost certain that they would be willing to rise to the challenge if sanctioned to do so. As ever with such policy decisions, though, the drug problem may have to get considerably worse before any politician is willing to consider something this radical.

In 2006, Hofmann founded his own political party and now plans to stand for election to the European Parliament.

Wednesday, 9 May 2007

Pot Theory

The origins of Pot Theory are unclear but it’s thought to have emerged out of a series of seminars at MIT in the late 1980s. Originally, it was used exclusively in relation to policy decisions on drugs, but despite the punning title, it doesn’t refer specifically to that debate and since the mid-90s has been applied to numerous other areas of social and foreign policy, as well as in the search for business solutions.

The “Pot” in Pot Theory refers to Pol Pot, who infamously returned Cambodia to the Year Zero when the Khmer Rouge seized control. Put simply, then, Pot Theory takes an intractable problem and looks at how it might be tackled if the clock were reset to the year zero.

The connection to drugs policy is immediately apparent. The policy on the use and control of drugs is driven primarily by various cultural and historical factors, rather than by a reasoned system for determining the relative danger of different products.

The application of Pot Theory would see tobacco classed as more dangerous than many currently illegal narcotics. It might also bring into question the right of government to determine how its citizens make use of their leisure hours.

The drawback of Pot Theory is perhaps obvious – unless policy-makers can actually introduce Year Zero Thinking (another buzz phrase of the theory) the solutions developed will still be unpalatable to the majority of voters, consumers or shareholders.

The theory also requires radical change, and that is rarely welcomed even by those who have a vested interest in seeing drug laws relaxed (for a further examination of this tendency, see a subsequent post on the ostalgie paradox).

Nevertheless, although Pot Theory in its purest form rarely provides solutions which are useful in a normal democratic framework, many political theorists now consider it an ideal method for exploring the range of paradigm shifts available in any given policy area.

Tuesday, 8 May 2007

Boxless Thinking?

Boxless is a term that applies to any technology, medium or area of knowledge so new or groundbreaking that there's no existing paradigm. In other words, there's no box outside of which to think... it's boxless.

Of course, the most obvious recent example is the Internet/web, but that also illustrates the importance of this term. Rather than adopt truly boxless thinking, most people tried to make the internet fit into existing boxes, and it's arguable that the first dotcom crash was partly caused by that lack of flexibility.

Note the word "arguable" in there - to my mind, Amazon and eBay are about as far away from boxless thinking as it's possible to be, and they seem to be doing okay. But the fact remains, it's important to spot when something is truly boxless and to act accordingly.